
A curve trait for general interoperation #80

Open · wants to merge 16 commits into base: main

Conversation

mweatherley

@mweatherley mweatherley commented Apr 11, 2024

RENDERED

This RFC describes a trait API for general curves within the Bevy ecosystem, abstracting over their low-level implementations.

It has a partial implementation in this draft PR.

Integration with bevy_animation has a proof-of-concept prototype in this draft PR.

Integration with bevy_math's cubic splines has a proof-of-concept prototype in this draft PR.

@jnhyatt

jnhyatt commented Apr 12, 2024

The proposal looks great so far. Some thoughts:

  1. Since these curves are general purpose rather than specifically targeting animation, I see a lot of value in being able to express curves over any arbitrary range, e.g. -1.0..=1.0, f32::NEG_INFINITY..=0.0, 100.0..=200.0. Instead of storing a duration, what about a domain: RangeInclusive<f32>?
  2. I may have missed it in the RFC, but how are ranges sampled outside of bounds? For a range with duration 1.0, what happens when you sample -1 or 2? Is it defined by extrapolating using the derivative or clamped to the nearest value? This is something that comes up a lot in animation curves.
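The first suggestion can be sketched in a few lines. This is a hypothetical illustration, not the RFC's actual API: a function curve stores an arbitrary `RangeInclusive<f32>` domain rather than a duration starting at zero, and (one possible answer to point 2) refuses to sample outside it.

```rust
use std::ops::RangeInclusive;

/// Hypothetical sketch of suggestion (1): the curve stores an arbitrary
/// inclusive domain rather than a duration starting at zero.
struct FunctionCurve<F> {
    domain: RangeInclusive<f32>,
    f: F,
}

impl<F: Fn(f32) -> f32> FunctionCurve<F> {
    /// Returns `None` outside the domain; see point (2) for the debate
    /// about whether out-of-bounds sampling should fail at all.
    fn sample(&self, t: f32) -> Option<f32> {
        self.domain.contains(&t).then(|| (self.f)(t))
    }
}
```

This supports all the domains mentioned above, including half-infinite ones like `f32::NEG_INFINITY..=0.0`, since `RangeInclusive<f32>::contains` works fine with infinite endpoints.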

@NthTensor

Might have to have option/result samples. I think the general domain is worth considering.

@NthTensor

Also do we want this to be general enough to capture parametric surfaces? Because I think we can do this if T is not Ord (a Vec2 for example).

@mweatherley
Author

mweatherley commented Apr 12, 2024

The proposal looks great so far. Some thoughts:

1. Since these curves are general purpose rather than specifically targeting animation, I see a lot of value in being able to express curves over any arbitrary range, e.g. `-1.0..=1.0`, `f32::NEG_INFINITY..=0.0`, `100.0..=200.0`. Instead of storing a duration, what about a `domain: RangeInclusive<f32>`?

2. I may have missed it in the RFC, but how are ranges sampled outside of bounds? For a range with duration 1.0, what happens when you sample -1 or 2? Is it defined by extrapolating using the derivative or clamped to the nearest value? This is something that comes up a lot in animation curves.

Ah, I'm glad you brought this up. The basic form of Curve<T> itself is actually based on this animation draft RFC, so it has inherited some properties which might not be totally appropriate in a general setting. I think that your suggestion in (1) is quite reasonable. My main qualm is actually that Rust's Range types are not Copy, which is insanely annoying in situations like this where we really have no interest in actually using them as iterators (and this is a burden we pass on to the consumer).

As for (2) — good question. Once again, the animation RFC I linked suggested that sampling would always return a value, but that the way that out-of-bounds samples might be formed would be left implementation-specific. Perhaps @james7132 might chime in with some insight there. I think that it's tempting to believe that returning an error or None when the sample point is out of range is just enforcing invariants, but I wouldn't be surprised if there is a compelling rationale to the contrary. For curves defined algebraically, extrapolating outside of the given bounds also generally makes sense anyway (e.g. Hermite interpolation between two points), so I might be in favor of that kind of thing regardless.

@mweatherley
Author

Also do we want this to be general enough to capture parametric surfaces? Because I think we can do this if T is not Ord (a Vec2 for example).

I guess I'm a little confused by this since there are no Ord constraints anywhere as far as I'm aware. I imagine that what you have in mind is encoding a two-parameter surface as a one-parameter family of curves; I would think that the main barrier there is the constraint that you need T: Curve<Point> where T itself is actually Interpolable in some reasonable fashion, but perhaps this is actually less challenging than I imagined when I started writing this response; e.g. a variant of FunctionCurve<T> (the thing holding a function as a wrapper to make a Curve<T>) could probably do the trick if you store a weight field (and use this to pass on the interpolation to the results of evaluation).

@jnhyatt

jnhyatt commented Apr 12, 2024

Rust's Range types are not Copy

I had no idea, that's pretty unfortunate.

What if sampling outside was done by means of a wrapper? For example a Curve that wraps another but has a domain over all reals? Sampling could look like: Clamp(my_curve).sample(...) or Hermite(my_curve).sample(...)
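The wrapper idea might look something like the following sketch, where `Curve`, `FunctionCurve`, and `Clamp` are all stand-ins rather than the RFC's actual types, and the domain is a plain `(min, max)` pair:

```rust
/// Stand-in for the RFC's curve trait, with a (min, max) domain.
pub trait Curve<T> {
    fn domain(&self) -> (f32, f32);
    /// Sampling is assumed valid only inside the domain.
    fn sample(&self, t: f32) -> T;
}

/// Wraps a curve, extending its domain to all reals by clamping the
/// sample parameter to the inner curve's domain.
pub struct Clamp<C>(pub C);

impl<T, C: Curve<T>> Curve<T> for Clamp<C> {
    fn domain(&self) -> (f32, f32) {
        (f32::NEG_INFINITY, f32::INFINITY)
    }
    fn sample(&self, t: f32) -> T {
        let (min, max) = self.0.domain();
        self.0.sample(t.clamp(min, max))
    }
}

/// A curve backed by a closure, for demonstration.
pub struct FunctionCurve<F> {
    pub domain: (f32, f32),
    pub f: F,
}

impl<T, F: Fn(f32) -> T> Curve<T> for FunctionCurve<F> {
    fn domain(&self) -> (f32, f32) {
        self.domain
    }
    fn sample(&self, t: f32) -> T {
        (self.f)(t)
    }
}
```

An extrapolating wrapper like the `Hermite(my_curve)` example would have the same shape, differing only in how `sample` handles out-of-domain parameters.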

@mweatherley
Author

A random thought on the range business: we could implement our own Copy version of ranges (say, T), and then use impl Into<T> or impl TryInto<T> together with a corresponding implementation for certain Bevy ranges in order to still allow for the use of the ..= syntax.

What if sampling outside was done by means of a wrapper? For example a Curve that wraps another but has a domain over all reals? Sampling could look like: Clamp(my_curve).sample(...) or Hermite(my_curve).sample(...)

I think something like that could work, but it would still be a little annoying to have to unwrap its sampling output all the time despite knowing it would always succeed.

@jnhyatt

jnhyatt commented Apr 12, 2024

I think something like that could work, but it would still be a little annoying to have to unwrap its sampling output all the time despite knowing it would always succeed.

I was assuming sample would just return a T, not an Option/Result. If it won't always succeed, we could embed information for extrapolating in the result's Err variant instead, then maybe provide some convenience method for extrapolating:

fn clamp<T>(x: Result<T, OutOfBounds<T>>) -> T {
    match x {
        Ok(x) => x,
        Err(out_of_bounds) => todo!(), // `out_of_bounds` has all the info we need to extrapolate out-of-bounds sampling
    }
}

let curve = function_curve((0.0..=1.0), |x| x);
let out_of_bounds_sample = clamp(curve.sample(2.0)); // returns 2.0

@alice-i-cecile
Member

Rust's Range types are not Copy

FYI, this is slated to change over either the 2024 or 2027 edition boundary: https://github.com/pitaj/rfcs/blob/new-range/text/3550-new-range.md

@mweatherley
Author

mweatherley commented Apr 13, 2024

Rust's Range types are not Copy

FYI, this is slated to change over either the 2024 or 2027 edition boundary: https://github.com/pitaj/rfcs/blob/new-range/text/3550-new-range.md

That's good to know! Honestly, I tinkered around and found that RangeInclusive is kind of poor anyway (it doesn't enforce any invariants and lacks virtually any useful methods other than contains...), so I'm leaning towards making our own Interval type with a TryFrom<RangeInclusive<f32>> implementation regardless.

(This wouldn't be very heavy, but would enforce non-emptiness and include methods like length, is_finite, and intersect, and it would be Copy.)
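A minimal sketch of such an `Interval`, assuming the invariants and method names mentioned above (non-emptiness, `length`, `is_finite`, `intersect`, `Copy`, and `TryFrom<RangeInclusive<f32>>`); the details here are guesses, not the final bevy_math API:

```rust
use std::ops::RangeInclusive;

/// Sketch of the proposed `Interval`: a non-empty closed interval,
/// possibly infinite in either direction.
#[derive(Clone, Copy, Debug, PartialEq)]
pub struct Interval {
    start: f32,
    end: f32,
}

#[derive(Debug)]
pub struct InvalidInterval;

impl Interval {
    /// Create an interval, enforcing non-emptiness (`start < end`) and
    /// rejecting NaN endpoints.
    pub fn new(start: f32, end: f32) -> Result<Self, InvalidInterval> {
        if start.is_nan() || end.is_nan() || start >= end {
            Err(InvalidInterval)
        } else {
            Ok(Self { start, end })
        }
    }

    pub fn length(self) -> f32 {
        self.end - self.start
    }

    pub fn is_finite(self) -> bool {
        self.length().is_finite()
    }

    /// The overlap of two intervals, if it is non-empty.
    pub fn intersect(self, other: Self) -> Result<Self, InvalidInterval> {
        Self::new(self.start.max(other.start), self.end.min(other.end))
    }

    pub fn contains(self, t: f32) -> bool {
        self.start <= t && t <= self.end
    }
}

impl TryFrom<RangeInclusive<f32>> for Interval {
    type Error = InvalidInterval;
    fn try_from(range: RangeInclusive<f32>) -> Result<Self, InvalidInterval> {
        Self::new(*range.start(), *range.end())
    }
}
```

Since the type is two `f32`s, deriving `Copy` is free, which resolves the earlier complaint about `RangeInclusive`.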

@mweatherley
Author

mweatherley commented Apr 13, 2024

Okay, here again to explain a couple changes:

Firstly, I added Interval, which is a lightweight type representing a non-empty closed interval that is possibly infinite in either direction. This has three main benefits:

  • Its invariants are enforced at the type level, which allows many cases of input sanitization to be removed from the API.
  • It allows the provision of a large number of useful methods; these turned out to be more numerous than I expected while refactoring, so there are a number of those now.
  • It is Copy.

Secondly, with the addition of sample_checked and sample_clamped, I now have a belief about the suite of sample functions that I am willing to defend: While we might alter what is named what here, I think that this is what the API should actually look like. My reasoning is mainly as follows:

  • The existence of zero-overhead (i.e. fully unchecked) sampling in the trait definition is non-negotiable, since sampling is one of the main paths in applications and some may be performance-sensitive.
  • I think that sample_checked and sample_clamped cover almost all use-cases to handle sampling outside of the domain.

Outside of this, I don't really care whether sample_checked gets renamed to sample and sample becomes sample_unchecked or something.
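Concretely, the suite described above might look like this sketch, with a plain `(min, max)` pair standing in for the interval type and only the unchecked sampler as a required method; the names are the ones under discussion, not a settled API:

```rust
/// Sketch of the sampling suite under discussion.
pub trait Curve<T> {
    fn domain(&self) -> (f32, f32);

    /// Zero-overhead sampling: no bounds check, so behavior outside the
    /// domain is implementation-defined.
    fn sample(&self, t: f32) -> T;

    /// Checked sampling: returns `None` outside the domain.
    fn sample_checked(&self, t: f32) -> Option<T> {
        let (min, max) = self.domain();
        (min <= t && t <= max).then(|| self.sample(t))
    }

    /// Clamped sampling: out-of-domain parameters snap to the nearest
    /// domain endpoint.
    fn sample_clamped(&self, t: f32) -> T {
        let (min, max) = self.domain();
        self.sample(t.clamp(min, max))
    }
}

/// A toy implementor: the line t -> 2t over [0, 1].
pub struct Line;

impl Curve<f32> for Line {
    fn domain(&self) -> (f32, f32) {
        (0.0, 1.0)
    }
    fn sample(&self, t: f32) -> f32 {
        2.0 * t
    }
}
```

Because `sample_checked` and `sample_clamped` are provided methods built on the unchecked sampler, implementers only write the fast path, and the renaming question (`sample` vs. `sample_unchecked`) is purely cosmetic.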

@NthTensor

Read through the updates. Looks really good.

The interval type makes sense, and I agree Range doesn't seem like quite the right fit. Among other things, allowing curves over arbitrary intervals could simplify thinking about segments of splines as curves in their own right. I think we may want to investigate functions for sticking/appending curves together based on their domains or splitting them apart.

I do worry that requiring an interval domain will cause problems if we want to work with derivatives. It would be nice to be able to represent the first and second derivatives as "functional curves" where they can be determined analytically (e.g. when working with splines), but most Bezier splines aren't generally C2 (and some aren't even C1). I suppose a tangent curve could be a Curve<Option<Vec3>>, but this seems a bit clunky and generally at odds with the direction of sample_unchecked. This is a problem for moving frames as well.

One possible solution might be to allow curves over arbitrary measurable sets (or otherwise generalize Interval) but I'm not sure what that would look like and I'd rather be able to work with derivatives without that sort of theoretical equipment.

Setting derivatives aside, I agree with your points about domain checking. My preference would be for sample_unchecked as the main implementation point, with the trait providing checked wrappers sample and sample_clamped. New users (who may be prone to bounds errors) will probably reach for sample first, and implementers will know not to write redundant bounds checking because of the function name.

By the way, I can think of two other traits that also provide interpolation: Mix, Animatable. You've suggested adding a blanket implementation for VectorSpace and we could do the same here, but I wonder if we might want to simply generalize Mix and then have all the others extend it. It seems like there is pretty much one correct way to interpolate any given type, so the only reason to have multiple versions in different traits is name ergonomics.


Also, please disregard my earlier comment about Ord. I tossed it off in a hurry and had confused it with something else I was thinking about alongside the RFC (I was playing with sample(t: D) -> R where D: Field, R: Interpolable).

@mweatherley
Author

The interval type makes sense, and I agree Range doesn't seem like quite the right fit. Among other things, allowing curves over arbitrary intervals could simplify thinking about segments of splines as curves in their own right. I think we may want to investigate functions for sticking/appending curves together based on their domains or splitting them apart.

Yeah, this could be a good idea; I am a bit wary of scope creep at this point, since this seems like something that can be adjudicated outside of the RFC proper, but I agree that it's definitely worth investigating and seriously considering things in this direction.

I do worry that requiring an interval domain will cause problems if we want to work with derivatives. It would be nice to be able to represent the first and second derivatives as "functional curves" where they can be determined analytically (e.g. when working with splines), but most Bezier splines aren't generally C2 (and some aren't even C1). I suppose a tangent curve could be a Curve<Option<Vec3>>, but this seems a bit clunky and generally at odds with the direction of sample_unchecked. This is a problem for moving frames as well.

My thoughts on this have yet to materialize into concrete implementation details, but as far as I can tell, we are mostly in the clear; the main thing I would like to do with our spline code in order to adapt it to this interface is to differentiate classes of curves a bit more at the level of types. All of the spline constructions that we currently support, for instance, are globally C1; in my ideal world, this would mean that they naturally produce something that looks like Curve<(Vec3, Vec3)>, packaging the points and derivatives together (or some struct does the same thing).

Then, for instance, for the B-spline construction, the produced type would actually be slightly different from the previous one; in addition to implementing the Curve trait for the C1 data, it would also implement Curve<C2Data> or whatever. (Both of these concrete return types could also provide access to the underlying Bézier segments as well, not through the Curve interface itself, and those would have as much information as you could wish for regardless.)

I am unsure on the finer points of what that type-level differentiation would look like (kind of next on my list to investigate), but I guess my angle here is this: most of the things you would want to do with spline output are only going to care about the Curve interface anyway and not the concrete return types; for instance, most things you would dream of doing with curve geometry really only care that you start with a Curve<C1Data> (positions and derivatives), and both of the concrete return types would get you there.

So, to be brief, my vision for actual implementation involves reifying the quality of spline constructions more strongly at the type level; I hope that makes sense.

@NthTensor

NthTensor commented Apr 14, 2024

I am a bit wary of scope creep at this point, since this seems like something that can be adjudicated outside of the RFC proper

Quite right. There's very little I would add to the proposal at this point, and I'm not pushing for any of this to be added to the RFC. But I do think it's worth noting future work items here as well as evaluating the sorts of things the RFC lets us build. Please let me know if you think I'm derailing the discussion, that's not my intention.

All of the spline constructions that we currently support, for instance, are globally C1.

Bezier splines are only C1 within curve segments, and may or may not have smooth transitions between segments depending on the position of the control points. I don't think we can or should assume all our curves will be C1.

Part of the problem is that "tangents" can mean like four different things depending on the underlying curve: "Functional curves" are generally going to be either C1 or piecewise C1, which is the difference between a T and an Option<T>. Sampled curves can have two sources of approximate tangents: Finite differences, or samples from a derivative.

As I see it:

  • We need to be able to specify continuity information as part of a parameter type bound,
  • Depending on the bound, the tangents/acceleration may or may not be guaranteed to exist,
  • And we need to be able to sample both position+tangent and position+tangent+acceleration as single functions for efficiency purposes.

Then, for instance, for the B-spline construction, the produced type would actually be slightly different from the previous one; in addition to implementing the Curve trait for the C1 data, it would also implement Curve<C2Data> or whatever.

The idea of multiple implementations of Curve is interesting, but I don't love the fully qualified syntax required to disambiguate between them.

fn foo<C>(curve: C)
where C: Curve<C1<Vec3>> + Curve<C2<Vec3>>
{
    let (pos, vel) = <C as Curve<C1<Vec3>>>::sample(&curve, t);
    let (pos, vel, acc) = <C as Curve<C2<Vec3>>>::sample(&curve, t);
}

Would something like the following work?

// These all have blanket implementations of curve and other Cn/Pn traits 
trait C2C1Curve<T> { ... } // C2 + C1: Curve<(T, T, T)>
trait P2C1Curve<T> { ... } // Piecewise C2 + C1: Curve<(T, T, Option<T>)>
trait P2P1Curve<T> { ... } // Piecewise C2 + Piecewise C1: Curve<(T, Option<T>, Option<T>)>
trait C1Curve<T> { ... } // C1: Curve<(T, T)>
trait P1Curve<T> { ... } // Piecewise C1: Curve<(T, Option<T>)>

fn foo(curve: impl P2C1Curve<Vec3>) {
    if let (pos, vel, Some(acc)) = curve.sample(t) {
        ...
    }
    let acc = curve.sample_acc(t);
    let pos = curve.sample_pos(t);
}

New curves which provide tangents/acceleration would implement the strongest trait they can. The types are a bit cumbersome but I think it would avoid the qualified syntax.

We don't have to spec anything out as part of the RFC, but I'd like a better idea of what this would look like to make sure we aren't locking ourselves out of anything in the future.

@mweatherley
Author

Please let me know if you think I'm derailing the discussion, that's not my intention.

You're quite fine. :)

Bezier splines are only C1 within curve segments, and may or may not have smooth transitions between segments depending on the position of the control points. I don't think we can or should assume all our curves will be C1.

It seems I actually misspoke, since the cubic NURBS construction is only C0 (in full generality). What I was trying to get at was really that only the CubicCurve type itself really says 'C0' for the Bézier constructions; I'm not saying we shouldn't support C0 data, by any means! What I'm really trying to say is that my view is that we should do some differentiation at the level of the curve constructors (the first thing that came to mind was genericity with respect to marker types of some kind, which it seems you have also thought about).

Part of the problem is that "tangents" can mean like four different things depending on the underlying curve: "Functional curves" are generally going to be either C1 or piecewise C1, which is the difference between a T and an Option<T>.

Yeah, I agree here. My main thing is that I would prefer for, e.g., Option<Vec3> not to appear as a curve output for a "maybe tangent" directly — rather, I would prefer a system where we give access to as much output as possible that we know is valid globally, while also providing type-level tools (rather than through Curve per se) for them to access data that might not be 'valid' (e.g. continuous across the curve) because of type constraints alone.

For instance, with how things are currently set up, something like CubicCurve might only have a Curve<Vec3> implementation on its own, but we could provide access to

  • the curve segments, which would be able to give the derivative data individually, and/or
  • a "promotion" procedure, wherein we give the derivative data globally, with the user-level understanding that it may not be globally continuous.

Sampled curves can have two sources of approximate tangents: Finite differences, or samples from a derivative.

This is true, but I suppose I see "finite differences" as a "promotion" procedure, something like:

fn numerical_derivative(position_curve: SampleCurve<Position>) -> SampleCurve<PositionAndVelocity>

hence impl Curve<Position> -> impl Curve<PositionAndVelocity>, if that makes sense; so they are at least the same kind of Curve.

When you start with a Curve<PositionAndVelocity> to begin with, you can map down to just a Curve<Position> and then resample and call numerical_derivative, and you should get something similar to what you would get if you just called resample to begin with (at least if numerical_derivative is any good), so this all sort of makes sense to me.
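As a sketch of that promotion procedure, here is a central-difference version operating on plain closures instead of the RFC's curve types (the name `numerical_derivative` and the `eps` parameter are illustrative):

```rust
/// Promotes a position-only sampler to a position-plus-velocity sampler
/// using central finite differences with step size `eps`.
fn numerical_derivative(
    position_curve: impl Fn(f32) -> f32,
    eps: f32,
) -> impl Fn(f32) -> (f32, f32) {
    move |t| {
        let pos = position_curve(t);
        // Central difference approximation of the derivative at t.
        let vel = (position_curve(t + eps) - position_curve(t - eps)) / (2.0 * eps);
        (pos, vel)
    }
}
```

In the Curve vocabulary this is exactly `impl Curve<Position> -> impl Curve<PositionAndVelocity>`; the output type is the same kind of curve you would have gotten by storing velocities to begin with, which is what makes it a "promotion" rather than a new abstraction.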

As I see it:

* We need to be able to specify continuity information as part of a parameter type bound,

* Depending on the bound, the tangents/acceleration may or may not be guaranteed to exist,

* And we need to be able to sample both position+tangent and position+tangent+acceleration as single functions for efficiency purposes.

Agreed on all counts.

The idea of multiple implementations of Curve is interesting, but I don't love the fully qualified syntax required to disambiguate between them.

Having sat with it a little, I don't like it much either; I think this is a situation where explicit methods producing Curve outputs for certain kinds of data would be better.

Would something like the following work?

// These all have blanket implementations of curve and other Cn/Pn traits 
trait C2C1Curve<T> { ... } // C2 + C1: Curve<(T, T, T)>
trait P2C1Curve<T> { ... } // Piecewise C2 + C1: Curve<(T, T, Option<T>)>
trait P2P1Curve<T> { ... } // Piecewise C2 + Piecewise C1: Curve<(T, Option<T>, Option<T>)>
trait C1Curve<T> { ... } // C1: Curve<(T, T)>
trait P1Curve<T> { ... } // Piecewise C1: Curve<(T, Option<T>)>

fn foo<C>(curve: C)
where C: P2C1Curve<Vec3>
{
    if let (pos, vel, Some(acc)) = curve.sample(t) {
        ...
    }
    let acc = curve.sample_acc(t);
    let pos = curve.sample_pos(t);
}

New curves which provide tangents/acceleration would implement the strongest trait they can. The types are a bit cumbersome but I think it would avoid the qualified syntax.

We don't have to spec anything out as part of the RFC, but I'd like a better idea of what this would look like to make sure we aren't locking ourselves out of anything in the future.

I think this is reasonable, although I am a little wary of actually putting Option types in curve return values, because I'm skeptical that we can make interpolation work well for them. (I wrote more on this in the earlier portion of this reply.)

I think what I'd like to do now is to sit down when I have time and actually prototype this (at least to the point where we can convince ourselves we won't be stepping on our own toes); I think we are mostly on the same page, though.

@NthTensor

NthTensor commented Apr 14, 2024

I think this is reasonable, although I am a little wary of actually putting Option types in curve return values, because I'm skeptical that we can make interpolation work well for them.

It seems to me like that is an issue with interpolation. We shouldn't try to interpolate across a discontinuity, and if we want to represent derivatives directly using curves we can only assume they are piecewise continuous. Maybe the concrete curve representations need to know about their discontinuities and treat them as boundaries for interpolation.

Does that make sense? I'll try to expand on this more when I have time.

@mweatherley
Author

mweatherley commented Apr 15, 2024

It seems to me like that is an issue with interpolation. We shouldn't try to interpolate across a discontinuity, and if we want to represent derivatives directly using curves we can only assume they are piecewise continuous. Maybe the concrete curve representations need to know about their discontinuities and treat them as boundaries for interpolation.

Does that make sense? I'll try to expand on this more when I have time.

It does make sense, and I am curious where this line of inquiry leads — especially in the matter of how such a thing would be distinct from a vector of curves.

Member

@alice-i-cecile alice-i-cecile left a comment


I really like the API, and see the need for this. I've left some further comments on areas that I feel could be improved.

While I fully agree with the need for precise mathematical language when talking about this domain, I think we can and should do a better job making this approachable, by sprinkling in more tangible examples, explaining concepts in simple language first and so on :)

Member

@alice-i-cecile alice-i-cecile left a comment


I'm happy with the state of this now :)

@NthTensor

NthTensor commented Apr 24, 2024

I highly approve of the technical and design aspects of this proposal. It should simplify learning and support, and encourage compatibility between ecosystem crates which I love. All in all, it's extremely strong and well-thought-out.

That said, I feel we are lacking a broader plan for integration with the existing codebase. I would love to see:

  1. A list of all of the ad-hoc curve/interpolation implementations already in the engine
  2. and a brief discussion of if/how each should be ported to support this api.

Off the top of my head, we currently have:

  • Animation curves: AnimationClip/AnimationCurves/VariableCurve
  • Spline curves: CubicCurve/RationalCurve
  • Color interpolation: Mix
  • Animation interpolation: Animatable
  • Vector space interpolation: VectorSpace

VariableCurves are keyframe-based, and failing to support them is the main thing I think would halt this proposal at this point.

@mweatherley
Author

mweatherley commented Apr 24, 2024

* Animation curves: AnimationClip/AnimationCurves/VariableCurve

Yeah, VariableCurve is currently the primary thing keeping me up at night (so to speak). As I see it, a VariableCurve is something that "wants to" be either:

  • A Curve<Transform>, although there are some barriers to making this literally a reality, because each individual component gets to declare its own interpolation mode, or
  • A Curve<MorphWeights> (or perhaps a collection of Curve<f32> for each morph target, really)

In the first case, there isn't really any getting around storing interpolation mode metadata for the translation, rotation, and scale components, which really introduces a number of different kinds of curves (although the translation and scale components can presumably share everything).

Actually, the main thing of interest in the first case is that cubic spline interpolation does not produce tangents (at least, in the GLTF spec they are not produced) but it does use them. This is an interesting wrinkle, because the keyframe data being stored is actually different from the data being interpolated, so it doesn't fit quite so neatly into the Interpolable shell. However, that doesn't mean that it can't use Curve; there are a few options available in this regard:

  • Since cubic spline interpolation does produce tangents mathematically speaking, we can put those into a type implementing Interpolable and then use all the machinery as one would expect (creating an UnevenSampleCurve directly) and throwing away the output tangent data at the end;
  • Implement Curve<Vec3> (for example) using an essentially irrelevant interpolation mode on Vec3 (or a wrapper thereof) but internally storing tangent data at keyframes and using it for interpolation when sampling;
  • Retool the definitions of Curve/Interpolable somehow to accommodate situations like this, where the data being sampled is actually different from (typically a projection of) the data that it is interpolated from.

My thoughts on these are as follows: The first is most closely in-line with the present vision of this RFC (but may have some negative performance implications worth investigating). The second is kind of sacrilegious, but only mildly so (after all, the way that a Curve computes its data is actually not really the API's business), mostly leading to negative consequences for any attempts at using resampling. The third is interesting and, I think, worth investigating, but also kind of risks overcomplicating things for little actual benefit; actually, as I'm writing this, it's kind of impossible to imagine an API for such a thing where something like resample is even possible — if you can't get the full data intermediately, it's something you simply cannot perform.

Now, for that MorphWeights business: actually, the tricky thing here is that the size of the set of morph targets is unknown statically (since it is basically determined by the length of a vector loaded through GLTF), which is why I mentioned a collection of Curves rather than a Curve<MorphWeights> (which, when used directly, would suffer issues with allocation — interpolating data for such a thing would require allocating), since, for instance, a Vec of Curve<f32>s reifies the fact that the set of morph weights has constant size. I think this can work, but it brings into scope the question of multiplexing over families of curves in a way which is efficient.
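One plausible shape for that multiplexing, sketched with boxed closures standing in for Curve<f32> and a caller-provided buffer so the per-frame path avoids allocation (all names here are hypothetical):

```rust
/// Stand-in for a morph-weight curve: one scalar curve per morph target.
type WeightCurve = Box<dyn Fn(f32) -> f32>;

struct MorphWeightCurves {
    targets: Vec<WeightCurve>,
}

impl MorphWeightCurves {
    /// Sample every target at `t` into a reusable buffer; after the
    /// first call the buffer's capacity is retained, so steady-state
    /// sampling performs no allocation.
    fn sample_into(&self, t: f32, out: &mut Vec<f32>) {
        out.clear();
        out.extend(self.targets.iter().map(|curve| curve(t)));
    }
}
```

This reifies the fact that the set of morph targets has a fixed (if statically unknown) size, while sidestepping the allocation that a direct Curve<MorphWeights> would incur on every interpolation.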

As a closing thought on this, (I should have some more concrete proposals in the near future), I don't think there's really anything at all sacrosanct about the way that VariableCurve is presently set up right now: it's essentially just a mirror in Bevy of how animations are encoded in GLTF, and changing it would mostly involve changing the code in AnimationTargetContext to function somewhat differently.

@NthTensor

NthTensor commented Apr 24, 2024

I don't think there's really anything at all sacrosanct about the way that VariableCurve

Yeah, that's absolutely true. I think our current animation system was the first thing merged for the 0.14 cycle, and VariableCurve wasn't even mutable until yesterday. So now is a really good time to go for further breaking changes.

I just want to make sure we articulate a concrete plan for animation without performance or support degradation.

@NthTensor

After reviewing this, the associated PRs, and the animation graph RFC: I'm happy with this. The trait itself is sound and user friendly. The prototype animation PR shows how this can be integrated across the wider engine, and the recent work on exposure curves demonstrates the need for curve abstractions. Everything remaining is just implementation details, and can be handed off to the dedicated working group.

Furthermore, a poor implementation would lead to additional maintenance burden and compile times with little benefit.

## Rationale and alternatives
Member


A key question that isn't addressed here is "why f32 everywhere". I know this has come up on Discord, but I would like to see a sentence or two of rationale.

Member

@alice-i-cecile alice-i-cecile left a comment


Remarkably clear and thought through. You've sold me on the value of this API, and the abstractions chosen seem natural and powerful. We will need so many examples to make this tangible and useful to non-mathematicians, but that's fine.

Before I approve, there are a couple of straightforward edits to be made. More importantly, I want to make sure that we record why f32 is used as the base numerical type for t, rather than making this generic. I agree with that decision, but it's an important design consideration that should be documented as it has come up repeatedly in other contexts.

@mweatherley
Author

This has now been substantially rewritten. The goals of this rewrite were as follows:

  1. Confine type-inferred interpolation to a more mathematical core for which it is well-scoped and well-behaved. The imposition of type-inferred interpolation was too opinionated, so the API has been updated to better support explicitly-provided interpolation instead. If bevy_animation wants to use general type-inferred interpolation in its internals, its interoperation with Curve can be built on top of SampleCurve and UnevenSampleCurve (which now use explicit interpolators) rather than using some intrinsic notion of interpolation owned by bevy_math.
  2. Absorb some changes that were already being made in early review of the code as part of the working group. Part of this was interpolation-related (it had already been confined to resampling methods rather than the trait itself), but there were some other small changes too; e.g. I no longer think it's a good idea to default to non-lazy map behavior for sample-interpolated curves — the non-laziness should just be explicit instead.

I think that this ends up pushing the Curve API to be more flexible and less closely married to the original machinations of bevy_animation (ironically, spinning off interpolation from Animatable was kind of what spurred this in the first place), but I think the changes are for the better.
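To make "explicitly-provided interpolation" concrete, here is a minimal standalone sketch of sampling keyframes with a caller-supplied interpolator; the `sample_with` helper and its signature are hypothetical illustrations, not the actual `SampleCurve`/`UnevenSampleCurve` API:

```rust
// Hypothetical sketch: sampling keyframes with an explicitly-provided
// interpolation function rather than one inferred from the sample type.

fn sample_with<T: Copy>(
    times: &[f32],
    values: &[T],
    t: f32,
    interp: impl Fn(&T, &T, f32) -> T,
) -> T {
    // Find the keyframe interval containing `t` (assumes `times` is sorted
    // and nonempty); clamp outside the domain.
    let i = match times.iter().position(|&k| k > t) {
        Some(0) => return values[0],
        Some(i) => i,
        None => return *values.last().unwrap(),
    };
    let s = (t - times[i - 1]) / (times[i] - times[i - 1]);
    interp(&values[i - 1], &values[i], s)
}

fn main() {
    let times = [0.0, 1.0, 2.0];
    let values = [10.0f32, 20.0, 40.0];
    // The caller supplies linear interpolation explicitly:
    let v = sample_with(&times, &values, 1.5, |a, b, s| a + (b - a) * s);
    println!("{v}"); // 30
}
```

The point is that the interpolation mode travels alongside the data as an ordinary value, so nothing about it needs to be baked into the sample type itself.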

github-merge-queue bot pushed a commit to bevyengine/bevy that referenced this pull request Jun 10, 2024
# Objective

Partially address #13408 

Rework of #13613

Unify the very nice forms of interpolation specifically present in
`bevy_math` under a shared trait upon which further behavior can be
based.

The ideas in this PR were prompted by [Lerp smoothing is broken by Freya
Holmer](https://www.youtube.com/watch?v=LSNQuFEDOyQ).

## Solution

There is a new trait `StableInterpolate` in `bevy_math::common_traits`
which enshrines a quite-specific notion of interpolation with a lot of
guarantees:
```rust
/// A type with a natural interpolation that provides strong subdivision guarantees.
///
/// Although the only required method is `interpolate_stable`, many things are expected of it:
///
/// 1. The notion of interpolation should follow naturally from the semantics of the type, so
///    that inferring the interpolation mode from the type alone is sensible.
///
/// 2. The interpolation recovers something equivalent to the starting value at `t = 0.0`
///    and likewise with the ending value at `t = 1.0`.
///
/// 3. Importantly, the interpolation must be *subdivision-stable*: for any interpolation curve
///    between two (unnamed) values and any parameter-value pairs `(t0, p)` and `(t1, q)`, the
///    interpolation curve between `p` and `q` must be the *linear* reparametrization of the original
///    interpolation curve restricted to the interval `[t0, t1]`.
///
/// The last of these conditions is very strong and indicates something like constant speed. It
/// is called "subdivision stability" because it guarantees that breaking up the interpolation
/// into segments and joining them back together has no effect.
///
/// Here is a diagram depicting it:
/// ```text
/// top curve = u.interpolate_stable(v, t)
///
///              t0 => p   t1 => q    
///   |-------------|---------|-------------|
/// 0 => u         /           \          1 => v
///              /               \
///            /                   \
///          /        linear         \
///        /     reparametrization     \
///      /   t = t0 * (1 - s) + t1 * s   \
///    /                                   \
///   |-------------------------------------|
/// 0 => p                                1 => q
///
/// bottom curve = p.interpolate_stable(q, s)
/// ```
///
/// Note that some common forms of interpolation do not satisfy this criterion. For example,
/// [`Quat::lerp`] and [`Rot2::nlerp`] are not subdivision-stable.
///
/// Furthermore, this is not to be used as a general trait for abstract interpolation.
/// Consumers rely on the strong guarantees in order for behavior based on this trait to be
/// well-behaved.
///
/// [`Quat::lerp`]: crate::Quat::lerp
/// [`Rot2::nlerp`]: crate::Rot2::nlerp
pub trait StableInterpolate: Clone {
    /// Interpolate between this value and the `other` given value using the parameter `t`.
    /// Note that the parameter `t` is not necessarily clamped to lie between `0` and `1`.
    /// When `t = 0.0`, `self` is recovered, while `other` is recovered at `t = 1.0`,
    /// with intermediate values lying between the two.
    fn interpolate_stable(&self, other: &Self, t: f32) -> Self;
}
```

This trait has a blanket implementation over `NormedVectorSpace`, where
`lerp` is used, along with implementations for `Rot2`, `Quat`, and the
direction types using variants of `slerp`. Other areas may choose to
implement this trait in order to hook into its functionality, but the
stringent requirements must actually be met.

This trait bears no direct relationship with `bevy_animation`'s
`Animatable` trait, although they may choose to use `interpolate_stable`
in their trait implementations if they wish, as both traits involve
type-inferred interpolations of the same kind. `StableInterpolate` is
not a supertrait of `Animatable` for a couple of reasons:
1. Notions of interpolation in animation are generally going to be much
more general than those allowed under these constraints.
2. Laying out these generalized interpolation notions is the domain of
`bevy_animation` rather than of `bevy_math`. (Consider also that
inferring interpolation from types is not universally desirable.)

Similarly, this is not implemented on `bevy_color`'s color types,
although their current mixing behavior does meet the conditions of the
trait.

As an aside, the subdivision-stability condition is of interest
specifically for the [Curve
RFC](bevyengine/rfcs#80), where it also ensures
a kind of stability for subsampling.

Importantly, this trait ensures that the "smooth following" behavior
defined in this PR behaves predictably:
```rust
    /// Smoothly nudge this value towards the `target` at a given decay rate. The `decay_rate`
    /// parameter controls how fast the distance between `self` and `target` decays relative to
    /// the units of `delta`; the intended usage is for `decay_rate` to generally remain fixed,
    /// while `delta` is something like `delta_time` from an updating system. This produces a
    /// smooth following of the target that is independent of framerate.
    ///
    /// More specifically, when this is called repeatedly, the result is that the distance between
    /// `self` and a fixed `target` attenuates exponentially, with the rate of this exponential
    /// decay given by `decay_rate`.
    ///
    /// For example, at `decay_rate = 0.0`, this has no effect.
    /// At `decay_rate = f32::INFINITY`, `self` immediately snaps to `target`.
    /// In general, higher rates mean that `self` moves more quickly towards `target`.
    ///
    /// # Example
    /// ```
    /// # use bevy_math::{Vec3, StableInterpolate};
    /// # let delta_time: f32 = 1.0 / 60.0;
    /// let mut object_position: Vec3 = Vec3::ZERO;
    /// let target_position: Vec3 = Vec3::new(2.0, 3.0, 5.0);
    /// // Decay rate of ln(10) => after 1 second, remaining distance is 1/10th
    /// let decay_rate = f32::ln(10.0);
    /// // Calling this repeatedly will move `object_position` towards `target_position`:
    /// object_position.smooth_nudge(&target_position, decay_rate, delta_time);
    /// ```
    fn smooth_nudge(&mut self, target: &Self, decay_rate: f32, delta: f32) {
        self.interpolate_stable_assign(target, 1.0 - f32::exp(-decay_rate * delta));
    }
```

As the documentation indicates, the intention is for this to be called
in game update systems, and `delta` would be something like
`Time::delta_seconds` in Bevy, allowing positions, orientations, and so
on to smoothly follow a target. A new example, `smooth_follow`,
demonstrates a basic implementation of this, with a sphere smoothly
following a sharply moving target:


https://github.com/bevyengine/bevy/assets/2975848/7124b28b-6361-47e3-acf7-d1578ebd0347
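The framerate-independence claim can also be checked in isolation. This standalone sketch (not Bevy code) reimplements the same decay formula, `x <- lerp(x, target, 1 - exp(-decay_rate * delta))`, for a plain `f32` and verifies that one large step and many small steps covering the same amount of time land in the same place:

```rust
// Standalone sketch of the decay used by `smooth_nudge`. The remaining
// distance to the target is multiplied by exp(-decay_rate * dt) each step,
// so the result depends only on total elapsed time, not on step size.

fn nudge(x: f32, target: f32, decay_rate: f32, dt: f32) -> f32 {
    x + (target - x) * (1.0 - f32::exp(-decay_rate * dt))
}

fn main() {
    let (start, target, rate) = (0.0f32, 1.0f32, f32::ln(10.0));
    // One big step covering a full second:
    let one = nudge(start, target, rate, 1.0);
    // Sixty small steps covering the same second:
    let mut many = start;
    for _ in 0..60 {
        many = nudge(many, target, rate, 1.0 / 60.0);
    }
    assert!((one - many).abs() < 1e-4);
    // After 1 second at rate ln(10), 1/10th of the distance remains:
    assert!((target - one - 0.1).abs() < 1e-4);
    println!("step size does not affect the result");
}
```

This is what distinguishes the exponential-decay form from the common `lerp(x, target, constant)` per-frame idiom, which converges at different speeds at different framerates.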


## Testing

Tested by running the example with various parameters.
@mweatherley
Author

Okay. After refactoring the draft library, I ended up with some shared interpolation interfaces which seem pretty useful for implementors. (I used them in the bevy_animation rewrite, for example.) I ended up adding a section to the RFC that describes them as a result; I think having something like that is pretty important to lower the implementation barrier if trait-dispatched interpolation is off the table.

I would say that now I'm reasonably happy with this in terms of completeness (with the new approach in mind), at least until someone changes my mind. :)
