A curve trait for general interoperation #80
Conversation
The proposal looks great so far. Some thoughts:
We might have to have `Option`/`Result` samples. I think the general domain is worth considering.
Also, do we want this to be general enough to capture parametric surfaces? I think we can do this if `T` is not required to be `Ord` (a `Vec2`, for example).
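To make the surface question concrete, here is a hypothetical sketch (none of these names come from the RFC) of a sampling trait that is generic over its parameter type, so that a scalar parameter gives a curve and a pair gives a parametric surface:

```rust
// Hypothetical sketch only: a sampling trait generic over its parameter
// type `P`. `P = f32` gives an ordinary curve, while `P = (f32, f32)`,
// which has no total order, gives a parametric surface.
trait Parametric<P, T> {
    fn sample(&self, p: P) -> T;
}

// A "curve": the parameter is a scalar.
struct Line;
impl Parametric<f32, (f32, f32)> for Line {
    fn sample(&self, t: f32) -> (f32, f32) {
        (t, 2.0 * t)
    }
}

// A "surface": the parameter is a pair (u, v).
struct Plane;
impl Parametric<(f32, f32), (f32, f32, f32)> for Plane {
    fn sample(&self, (u, v): (f32, f32)) -> (f32, f32, f32) {
        (u, v, u + v)
    }
}

fn main() {
    assert_eq!(Line.sample(1.5), (1.5, 3.0));
    assert_eq!(Plane.sample((1.0, 2.0)), (1.0, 2.0, 3.0));
    println!("ok");
}
```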
Ah, I'm glad you brought this up. The basic form of

As for (2) — good question. Once again, the animation RFC I linked suggested that sampling would always return a value, but that the way that out-of-bounds samples might be formed would be left implementation-specific. Perhaps @james7132 might chime in with some insight there. I think that it's tempting to believe that returning an error or
I guess I'm a little confused by this since there are no
I had no idea, that's pretty unfortunate. What if sampling outside was done by means of a wrapper? For example a
A random thought on the range business: we could implement our own
I think something like that could work, but it would still be a little annoying to have to unwrap its sampling output all the time despite knowing it would always succeed.
I was assuming:

```rust
fn clamp<T>(x: Result<T, OutOfBounds<T>>) -> T {
    match x {
        Ok(x) => x,
        Err(out_of_bounds) => todo!(), // `out_of_bounds` has all the info we need to extrapolate out-of-bounds sampling
    }
}

let curve = function_curve(0.0..=1.0, |x| x);
let out_of_bounds_sample = clamp(curve.sample(2.0)); // returns 2.0
```
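For illustration, here is a runnable version of that idea under stand-in names (`FunctionCurve`, `OutOfBounds`, and `clamp_sample` are hypothetical, not the RFC's API); this variant resolves out-of-bounds samples by clamping to the domain boundary rather than extrapolating:

```rust
use std::ops::RangeInclusive;

// Stand-in error type: carries the sample at the nearest domain endpoint.
struct OutOfBounds<T> {
    clamped: T,
}

// Stand-in for the hypothetical `function_curve` above.
struct FunctionCurve<F> {
    domain: RangeInclusive<f32>,
    f: F,
}

impl<F: Fn(f32) -> f32> FunctionCurve<F> {
    fn sample(&self, t: f32) -> Result<f32, OutOfBounds<f32>> {
        if self.domain.contains(&t) {
            Ok((self.f)(t))
        } else {
            // Clamp the parameter into the domain and record the boundary sample.
            let clamped_t = t.clamp(*self.domain.start(), *self.domain.end());
            Err(OutOfBounds { clamped: (self.f)(clamped_t) })
        }
    }
}

/// Resolve an out-of-bounds sample by clamping to the domain boundary.
fn clamp_sample<T>(x: Result<T, OutOfBounds<T>>) -> T {
    match x {
        Ok(v) => v,
        Err(out_of_bounds) => out_of_bounds.clamped,
    }
}

fn main() {
    let curve = FunctionCurve { domain: 0.0..=1.0, f: |x: f32| x };
    assert_eq!(clamp_sample(curve.sample(0.5)), 0.5); // in bounds
    assert_eq!(clamp_sample(curve.sample(2.0)), 1.0); // clamped to the domain end
    println!("ok");
}
```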
FYI, this is slated to change over either the 2024 or 2027 edition boundary: https://github.com/pitaj/rfcs/blob/new-range/text/3550-new-range.md
That's good to know! Honestly, I tinkered around and found that

(This wouldn't be very heavy, but would enforce non-emptiness and include methods like
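As a sketch of what such a bespoke interval type might look like (the names and methods here are illustrative guesses, not the final `bevy_math` design), with non-emptiness enforced at construction:

```rust
/// A nonempty closed interval of `f32`s; construction enforces `start <= end`,
/// so emptiness is unrepresentable. Illustrative sketch only.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Interval {
    start: f32,
    end: f32,
}

impl Interval {
    /// Returns `None` if the endpoints would not form a nonempty interval.
    fn new(start: f32, end: f32) -> Option<Self> {
        (start <= end).then_some(Interval { start, end })
    }

    fn contains(&self, t: f32) -> bool {
        self.start <= t && t <= self.end
    }

    fn clamp(&self, t: f32) -> f32 {
        t.clamp(self.start, self.end)
    }
}

fn main() {
    let unit = Interval::new(0.0, 1.0).unwrap();
    assert!(unit.contains(0.5));
    assert!(!unit.contains(2.0));
    assert_eq!(unit.clamp(2.0), 1.0);
    assert!(Interval::new(1.0, 0.0).is_none()); // reversed endpoints rejected
    println!("ok");
}
```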
Okay, here again to explain a couple changes: Firstly, I added
Secondly, with the addition of
Outside of this, I don't really care whether
Read through the updates. Looks really good. The interval type makes sense, and I agree `Range` doesn't seem like quite the right fit. Among other things, allowing curves over arbitrary intervals could simplify thinking about segments of splines as curves in their own right. I think we may want to investigate functions for sticking/appending curves together based on their domains or splitting them apart.

I do worry that requiring an interval domain will cause problems if we want to work with derivatives. It would be nice to be able to represent the first and second derivatives as "functional curves" where they can be determined analytically (e.g. when working with splines), but most Bezier splines aren't generally C2 (and some aren't even C1). I suppose a tangent curve could be a

One possible solution might be to allow curves over arbitrary measurable sets (or otherwise generalize

Setting derivatives aside, I agree with your points about domain checking. My preference would be for

By the way, I can think of two other traits that also provide interpolation:

Also, please disregard my earlier comment about
Yeah, this could be a good idea; I am a bit wary of scope creep at this point, since this seems like something that can be adjudicated outside of the RFC proper, but I agree that it's definitely worth investigating and seriously considering things in this direction.
My thoughts on this have yet to materialize into concrete implementation details, but as far as I can tell, we are mostly in the clear; the main thing I would like to do with our spline code in order to adapt it to this interface is to differentiate classes of curves a bit more at the level of types. All of the spline constructions that we currently support, for instance, are globally C1; in my ideal world, this would mean that they naturally produce something that looks like

Then, for instance, for the B-spline construction, the produced type would actually be slightly different from the previous one; in addition to implementing the

I am unsure on the finer points of what that type-level differentiation would look like (kind of next on my list to investigate), but I guess my angle here is this: most of the things you would want to do with spline output are only going to care about the

So, to be brief, my vision for actual implementation involves reifying the quality of spline constructions more strongly at the type level; I hope that makes sense.
Quite right. There's very little I would add to the proposal at this point, and I'm not pushing for any of this to be added to the RFC. But I do think it's worth noting future work items here as well as evaluating the sorts of things the RFC lets us build. Please let me know if you think I'm derailing the discussion, that's not my intention.
Bezier splines are only C1 within curve segments, and may or may not have smooth transitions between segments depending on the position of the control points. I don't think we can or should assume all our curves will be C1. Part of the problem is that "tangents" can mean like four different things depending on the underlying curve:

"Functional curves" are generally going to be either C1 or piecewise C1, which is the difference between a

As I see it:
The idea of multiple implementations of `Curve`:

```rust
fn foo<C>(curve: C)
where
    C: Curve<C1<Vec3>> + Curve<C2<Vec3>>,
{
    let (pos, acc) = <C as Curve<C1<Vec3>>>::sample(t);
    let vel = <C as Curve<C2<Vec3>>>::sample(t);
}
```

Would something like the following work?

```rust
// These all have blanket implementations of `Curve` and other Cn/Pn traits
trait C2C1Curve<T> { /* ... */ } // C2 + C1: Curve<(T, T, T)>
trait P2C1Curve<T> { /* ... */ } // Piecewise C2 + C1: Curve<(T, T, Option<T>)>
trait P2P1Curve<T> { /* ... */ } // Piecewise C2 + Piecewise C1: Curve<(T, Option<T>, Option<T>)>
trait C1Curve<T> { /* ... */ }   // C1: Curve<(T, T)>
trait P1Curve<T> { /* ... */ }   // Piecewise C1: Curve<(T, Option<T>)>

fn foo(curve: impl P2C1Curve<Vec3>) {
    if let (pos, vel, Some(acc)) = curve.sample(t) {
        // ...
    }
    let acc = curve.sample_acc(t);
    let pos = curve.sample_pos(t);
}
```

New curves which provide tangents/acceleration would implement the strongest trait they can. The types are a bit cumbersome, but I think it would avoid the qualified syntax. We don't have to spec anything out as part of the RFC, but I'd like a better idea of what this would look like to make sure we aren't locking ourselves out of anything in the future.
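To check that the blanket-implementation idea composes, here is a minimal runnable sketch (hypothetical names and simplified signatures, not the RFC's API) in which any `C1Curve` automatically provides a plain `Curve` via a blanket impl:

```rust
// Plain position curve.
trait Curve<T> {
    fn sample(&self, t: f32) -> T;
}

// A C1 curve exposes position and velocity together.
trait C1Curve<T> {
    fn sample_with_velocity(&self, t: f32) -> (T, T);
}

// Blanket impl: every C1 curve is automatically a plain position curve,
// discarding the derivative only when the caller asks for positions alone.
impl<T, C: C1Curve<T>> Curve<T> for C {
    fn sample(&self, t: f32) -> T {
        self.sample_with_velocity(t).0
    }
}

// A concrete C1 curve: f(t) = t^2, so f'(t) = 2t.
struct Parabola;

impl C1Curve<f32> for Parabola {
    fn sample_with_velocity(&self, t: f32) -> (f32, f32) {
        (t * t, 2.0 * t)
    }
}

fn main() {
    // `sample` comes from the blanket impl; no manual `Curve` impl was written.
    assert_eq!(Parabola.sample(3.0), 9.0);
    assert_eq!(Parabola.sample_with_velocity(3.0), (9.0, 6.0));
    println!("ok");
}
```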
You're quite fine. :)
It seems I actually misspoke, since the cubic NURBS construction is only C0 (in full generality). What I was trying to get at was really that only the
Yeah, I agree here. My main thing is that I would prefer for, e.g.,

For instance, with how things are currently set up, something like
This is true, but I suppose I see "finite differences" as a "promotion" procedure, something like:

```rust
fn numerical_derivative(position_curve: SampleCurve<Position>) -> SampleCurve<PositionAndVelocity>
```

hence

When you start with a
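A runnable sketch of this "promotion" picture, using a stand-in `SampleCurve` of uniformly spaced `f32` samples (not the actual type from the draft) and finite differences:

```rust
// Stand-in for a curve stored as uniformly spaced samples.
struct SampleCurve {
    step: f32,        // spacing between consecutive samples
    samples: Vec<f32>, // at least two samples
}

/// "Promote" a position curve to (position, velocity) samples using central
/// differences in the interior and one-sided differences at the endpoints.
fn numerical_derivative(curve: &SampleCurve) -> Vec<(f32, f32)> {
    let n = curve.samples.len();
    let h = curve.step;
    (0..n)
        .map(|i| {
            let p = curve.samples[i];
            let v = if i == 0 {
                (curve.samples[1] - curve.samples[0]) / h
            } else if i == n - 1 {
                (curve.samples[n - 1] - curve.samples[n - 2]) / h
            } else {
                (curve.samples[i + 1] - curve.samples[i - 1]) / (2.0 * h)
            };
            (p, v)
        })
        .collect()
}

fn main() {
    // f(t) = t^2 sampled at t = 0, 1, 2, 3; central differences recover
    // f'(t) = 2t exactly at interior points, since f is quadratic.
    let curve = SampleCurve { step: 1.0, samples: vec![0.0, 1.0, 4.0, 9.0] };
    let promoted = numerical_derivative(&curve);
    assert_eq!(promoted[1], (1.0, 2.0)); // (4 - 0) / 2 = 2
    assert_eq!(promoted[2], (4.0, 4.0)); // (9 - 1) / 2 = 4
    println!("ok");
}
```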
Agreed on all counts.
Having sat with it a little, I don't like it much either; I think this is a situation where explicit methods producing
I think this is reasonable, although I am a little wary of actually putting

I think what I'd like to do now is to sit down when I have time and actually prototype this (at least to the point where we can convince ourselves we won't be stepping on our own toes); I think we are mostly on the same page, though.
It seems to me like that is an issue with interpolation. We shouldn't try to interpolate across a discontinuity, and if we want to represent derivatives directly using curves we can only assume they are piecewise continuous. Maybe the concrete curve representations need to know about their discontinuities and treat them as boundaries for interpolation. Does that make sense? I'll try to expand on this more when I have time.
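One way to picture "discontinuities as boundaries for interpolation" is a curve that stores its breakpoints explicitly and never blends across them; here is a minimal piecewise-constant sketch (illustrative names only, not from the RFC):

```rust
// A piecewise-constant curve that knows its discontinuities: value `i`
// applies on [breaks[i], breaks[i + 1]), and sampling never blends
// across a breakpoint.
struct PiecewiseConstant {
    breaks: Vec<f32>, // breakpoints in increasing order, nonempty
    values: Vec<f32>, // one value per breakpoint
}

impl PiecewiseConstant {
    fn sample(&self, t: f32) -> f32 {
        // Find the last breakpoint <= t; no interpolation across the jump.
        let idx = self.breaks.iter().rposition(|&b| b <= t).unwrap_or(0);
        self.values[idx.min(self.values.len() - 1)]
    }
}

fn main() {
    let step = PiecewiseConstant {
        breaks: vec![0.0, 1.0],
        values: vec![-1.0, 1.0],
    };
    assert_eq!(step.sample(0.5), -1.0);
    assert_eq!(step.sample(1.0), 1.0); // right-continuous at the jump
    assert_eq!(step.sample(2.0), 1.0);
    println!("ok");
}
```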
It does make sense, and I am curious where this line of inquiry leads — especially in the matter of how such a thing would be distinct from a vector of curves.
I really like the API, and see the need for this. I've left some further comments on areas that I feel could be improved.
While I fully agree with the need for precise mathematical language when talking about this domain, I think we can and should do a better job making this approachable, by sprinkling in more tangible examples, explaining concepts in simple language first and so on :)
Co-authored-by: Alice Cecile <[email protected]>
I'm happy with the state of this now :)
I highly approve of the technical and design aspects of this proposal. It should simplify learning and support, and encourage compatibility between ecosystem crates which I love. All in all, it's extremely strong and well-thought-out. That said, I feel we are lacking a broader plan for integration with the existing codebase. I would love to see:
Off the top of my head, we currently have:
`VariableCurve`s are keyframe-based, and not supporting them is the main thing I think would halt this proposal at this point.
Yeah,
In the first case, there isn't really any getting around storing interpolation mode metadata for the translation, rotation, and scale components, which really introduces a number of different kinds of curves (although the translation and scale components can presumably share everything). Actually, the main thing of interest in the first case is that cubic spline interpolation does not produce tangents (at least, in the GLTF spec they are not produced) but it does use them. This is an interesting wrinkle, because the keyframe data being stored is actually different from the data being interpolated, so it doesn't fit quite so neatly into the
My thoughts on these are as follows: The first is most closely in line with the present vision of this RFC (but may have some negative performance implications worth investigating). The second is kind of sacrilegious, but only mildly so (after all, the way that a

Now, for that

As a closing thought on this (I should have some more concrete proposals in the near future), I don't think there's really anything at all sacrosanct about the way that
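For readers unfamiliar with the keyframe side of this discussion, here is a minimal sketch of keyframe sampling with linear interpolation (a simplified stand-in for illustration, not Bevy's actual `VariableCurve`):

```rust
// Simplified keyframe curve: parallel arrays of keyframe times and values,
// with linear interpolation between neighbors and clamping outside the range.
struct KeyframeCurve {
    times: Vec<f32>,  // strictly increasing, nonempty
    values: Vec<f32>, // same length as `times`
}

impl KeyframeCurve {
    fn sample(&self, t: f32) -> f32 {
        // Clamp outside the keyframe range.
        if t <= self.times[0] {
            return self.values[0];
        }
        if t >= *self.times.last().unwrap() {
            return *self.values.last().unwrap();
        }
        // Find the bracketing keyframes and lerp between them.
        let i = self.times.partition_point(|&time| time <= t) - 1;
        let (t0, t1) = (self.times[i], self.times[i + 1]);
        let s = (t - t0) / (t1 - t0);
        self.values[i] * (1.0 - s) + self.values[i + 1] * s
    }
}

fn main() {
    let curve = KeyframeCurve {
        times: vec![0.0, 1.0, 3.0],
        values: vec![0.0, 10.0, 20.0],
    };
    assert_eq!(curve.sample(-1.0), 0.0); // clamped below the first key
    assert_eq!(curve.sample(0.5), 5.0);  // halfway between the first two keys
    assert_eq!(curve.sample(2.0), 15.0); // halfway between the last two keys
    assert_eq!(curve.sample(9.0), 20.0); // clamped above the last key
    println!("ok");
}
```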
Yeah, that's absolutely true. I think our current animation system was the first thing merged for the 0.14 cycle, and I just want to make sure we articulate a concrete plan for animation without performance or support degradation. |
After reviewing this, the associated PRs, and the animation graph RFC: I'm happy with this. The trait itself is sound and user friendly. The prototype animation PR shows how this can be integrated across the wider engine, and the recent work on exposure curves demonstrates the need for curve abstractions. Everything remaining is just implementation details, and can be handed off to the dedicated working group.
> Furthermore, a poor implementation would lead to additional maintenance burden and compile times with little benefit.
>
> ## Rationale and alternatives
A key question that isn't addressed here is "why `f32` everywhere?". I know this has come up on Discord, but I would like to see a sentence or two of rationale.
Remarkably clear and thought through. You've sold me on the value of this API, and the abstractions chosen seem natural and powerful. We will need so many examples to make this tangible and useful to non-mathematicians, but that's fine.
Before I approve, there are a couple of straightforward edits to be made. More importantly, I want to make sure that we record why `f32` is used as the base numerical type for `t`, rather than making this generic. I agree with that decision, but it's an important design consideration that should be documented, as it has come up repeatedly in other contexts.
Co-authored-by: Alice Cecile <[email protected]>
This has now been substantially rewritten. The goals of this rewrite were as follows:
I think that this ends up pushing the `Curve` API to be more flexible and less closely married to the original machinations of
# Objective

Partially address #13408. Rework of #13613.

Unify the very nice forms of interpolation specifically present in `bevy_math` under a shared trait upon which further behavior can be based. The ideas in this PR were prompted by [Lerp smoothing is broken by Freya Holmer](https://www.youtube.com/watch?v=LSNQuFEDOyQ).

## Solution

There is a new trait `StableInterpolate` in `bevy_math::common_traits` which enshrines a quite-specific notion of interpolation with a lot of guarantees:

````rust
/// A type with a natural interpolation that provides strong subdivision guarantees.
///
/// Although the only required method is `interpolate_stable`, many things are expected of it:
///
/// 1. The notion of interpolation should follow naturally from the semantics of the type, so
///    that inferring the interpolation mode from the type alone is sensible.
///
/// 2. The interpolation recovers something equivalent to the starting value at `t = 0.0`
///    and likewise with the ending value at `t = 1.0`.
///
/// 3. Importantly, the interpolation must be *subdivision-stable*: for any interpolation curve
///    between two (unnamed) values and any parameter-value pairs `(t0, p)` and `(t1, q)`, the
///    interpolation curve between `p` and `q` must be the *linear* reparametrization of the
///    original interpolation curve restricted to the interval `[t0, t1]`.
///
/// The last of these conditions is very strong and indicates something like constant speed. It
/// is called "subdivision stability" because it guarantees that breaking up the interpolation
/// into segments and joining them back together has no effect.
///
/// Here is a diagram depicting it:
/// ```text
/// top curve = u.interpolate_stable(v, t)
///
///              t0 => p   t1 => q
///   |-------------|---------|-------------|
/// 0 => u         /           \          1 => v
///              /               \
///            /                   \
///          /        linear         \
///        /     reparametrization     \
///      /   t = t0 * (1 - s) + t1 * s   \
///    /                                   \
///   |-------------------------------------|
/// 0 => p                                1 => q
///
/// bottom curve = p.interpolate_stable(q, s)
/// ```
///
/// Note that some common forms of interpolation do not satisfy this criterion. For example,
/// [`Quat::lerp`] and [`Rot2::nlerp`] are not subdivision-stable.
///
/// Furthermore, this is not to be used as a general trait for abstract interpolation.
/// Consumers rely on the strong guarantees in order for behavior based on this trait to be
/// well-behaved.
///
/// [`Quat::lerp`]: crate::Quat::lerp
/// [`Rot2::nlerp`]: crate::Rot2::nlerp
pub trait StableInterpolate: Clone {
    /// Interpolate between this value and the `other` given value using the parameter `t`.
    /// Note that the parameter `t` is not necessarily clamped to lie between `0` and `1`.
    /// When `t = 0.0`, `self` is recovered, while `other` is recovered at `t = 1.0`,
    /// with intermediate values lying between the two.
    fn interpolate_stable(&self, other: &Self, t: f32) -> Self;
}
````

This trait has a blanket implementation over `NormedVectorSpace`, where `lerp` is used, along with implementations for `Rot2`, `Quat`, and the direction types using variants of `slerp`. Other areas may choose to implement this trait in order to hook into its functionality, but the stringent requirements must actually be met.

This trait bears no direct relationship with `bevy_animation`'s `Animatable` trait, although they may choose to use `interpolate_stable` in their trait implementations if they wish, as both traits involve type-inferred interpolations of the same kind. `StableInterpolate` is not a supertrait of `Animatable` for a couple of reasons:

1. Notions of interpolation in animation are generally going to be much more general than those allowed under these constraints.
2. Laying out these generalized interpolation notions is the domain of `bevy_animation` rather than of `bevy_math`. (Consider also that inferring interpolation from types is not universally desirable.)

Similarly, this is not implemented on `bevy_color`'s color types, although their current mixing behavior does meet the conditions of the trait.

As an aside, the subdivision-stability condition is of interest specifically for the [Curve RFC](bevyengine/rfcs#80), where it also ensures a kind of stability for subsampling.

Importantly, this trait ensures that the "smooth following" behavior defined in this PR behaves predictably:

````rust
/// Smoothly nudge this value towards the `target` at a given decay rate. The `decay_rate`
/// parameter controls how fast the distance between `self` and `target` decays relative to
/// the units of `delta`; the intended usage is for `decay_rate` to generally remain fixed,
/// while `delta` is something like `delta_time` from an updating system. This produces a
/// smooth following of the target that is independent of framerate.
///
/// More specifically, when this is called repeatedly, the result is that the distance between
/// `self` and a fixed `target` attenuates exponentially, with the rate of this exponential
/// decay given by `decay_rate`.
///
/// For example, at `decay_rate = 0.0`, this has no effect.
/// At `decay_rate = f32::INFINITY`, `self` immediately snaps to `target`.
/// In general, higher rates mean that `self` moves more quickly towards `target`.
///
/// # Example
/// ```
/// # use bevy_math::{Vec3, StableInterpolate};
/// # let delta_time: f32 = 1.0 / 60.0;
/// let mut object_position: Vec3 = Vec3::ZERO;
/// let target_position: Vec3 = Vec3::new(2.0, 3.0, 5.0);
/// // Decay rate of ln(10) => after 1 second, remaining distance is 1/10th
/// let decay_rate = f32::ln(10.0);
/// // Calling this repeatedly will move `object_position` towards `target_position`:
/// object_position.smooth_nudge(&target_position, decay_rate, delta_time);
/// ```
fn smooth_nudge(&mut self, target: &Self, decay_rate: f32, delta: f32) {
    self.interpolate_stable_assign(target, 1.0 - f32::exp(-decay_rate * delta));
}
````

As the documentation indicates, the intention is for this to be called in game update systems, and `delta` would be something like `Time::delta_seconds` in Bevy, allowing positions, orientations, and so on to smoothly follow a target. A new example, `smooth_follow`, demonstrates a basic implementation of this, with a sphere smoothly following a sharply moving target: https://github.com/bevyengine/bevy/assets/2975848/7124b28b-6361-47e3-acf7-d1578ebd0347

## Testing

Tested by running the example with various parameters.
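The framerate-independence claim in the quoted documentation can be checked directly; the following self-contained sketch specializes the `smooth_nudge` formula to `f32`, with plain functions standing in for the trait methods:

```rust
// Stand-in for `interpolate_stable` on f32: plain linear interpolation.
fn interpolate_stable(a: f32, b: f32, t: f32) -> f32 {
    a * (1.0 - t) + b * t
}

// The smooth_nudge formula from the quoted PR, specialized to f32:
// each call shrinks the remaining distance by exactly exp(-decay_rate * delta).
fn smooth_nudge(value: &mut f32, target: f32, decay_rate: f32, delta: f32) {
    *value = interpolate_stable(*value, target, 1.0 - f32::exp(-decay_rate * delta));
}

fn main() {
    // Stepping for a total of 1 second should give (approximately) the same
    // result whether we take 10 coarse steps or 1000 fine ones.
    let (target, decay_rate) = (1.0, f32::ln(10.0));

    let mut coarse = 0.0;
    for _ in 0..10 {
        smooth_nudge(&mut coarse, target, decay_rate, 0.1);
    }

    let mut fine = 0.0;
    for _ in 0..1000 {
        smooth_nudge(&mut fine, target, decay_rate, 0.001);
    }

    // After 1 second at decay_rate = ln(10), the remaining distance to the
    // target is 1/10 of the original, regardless of step size.
    assert!((coarse - 0.9).abs() < 1e-3);
    assert!((fine - 0.9).abs() < 1e-2);
    println!("coarse = {coarse}, fine = {fine}");
}
```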
Okay. After refactoring the draft library, I ended up with some shared interpolation interfaces which seem pretty useful for implementors. (I used them in the

I would say that now I'm reasonably happy with this in terms of completeness (with the new approach in mind), at least until someone changes my mind. :)
RENDERED
This RFC describes a trait API for general curves within the Bevy ecosystem, abstracting over their low-level implementations.
It has a partial implementation in this draft PR. Integration with `bevy_animation` has a proof-of-concept prototype in this draft PR. Integration with `bevy_math`'s cubic splines has a proof-of-concept prototype in this draft PR.