Understand what’s possible with the Windows UI Animation Engine


In November of 2015, the Visual Layer was introduced as a series of new APIs in the Windows.UI.Composition namespace.  These new APIs marked the first opportunity for app developers to get direct access to many of the capabilities that have underpinned the UI Frameworks since Windows 8 (e.g., IE/Edge, XAML, & the Windows Shell).  One key aspect of the new Visual Layer is its animation engine.  But after spending a bunch of time talking to developers at //build this year, it became clear that many devs are still unsure how the pieces of the animation system fit together.  To shed some light on what you can do with the animation system, let’s walk through two questions:

  • Who’s responsible for starting animations?
  • What drives the animation to change values?

Implicit vs. Explicit – Who’s Starting the Animation?

The key difference between explicit and implicit animations is who is responsible for triggering the animation.

Long story short: Explicit animations you trigger. Implicit animations you configure.

Explicit Animations

Explicit animations are what most people picture when they think of animations, so they are likely familiar to you.  With explicit animations, you set up the animation and then you, as the developer, trigger it.

For example, in XAML you typically create the animation in your markup and trigger the animation in your code behind.
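For instance, a Storyboard defined in markup might look like this (the names FadeInStoryboard and MyRectangle are illustrative):

```xml
<!-- A Storyboard that fades MyRectangle in over half a second -->
<Page.Resources>
    <Storyboard x:Name="FadeInStoryboard">
        <DoubleAnimation Storyboard.TargetName="MyRectangle"
                         Storyboard.TargetProperty="Opacity"
                         From="0" To="1" Duration="0:0:0.5" />
    </Storyboard>
</Page.Resources>
```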


Code Behind:
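Assuming a Storyboard named FadeInStoryboard is defined in the markup, triggering it is a single call:

```csharp
// Explicit trigger: nothing happens until you call Begin().
FadeInStoryboard.Begin();
```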

The animation system in the Visual Layer also supports explicit animations – though only in your code behind.  This should help you to take what you know and use it to get started right away.

Code Behind:
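A sketch of an explicit fade-in using a ScalarKeyFrameAnimation ('visual' is assumed to be a Visual obtained via ElementCompositionPreview.GetElementVisual on some XAML element):

```csharp
// Define a KeyFrameAnimation and explicitly start it.
var compositor = visual.Compositor;

var fadeAnimation = compositor.CreateScalarKeyFrameAnimation();
fadeAnimation.InsertKeyFrame(0.0f, 0.0f);   // start fully transparent
fadeAnimation.InsertKeyFrame(1.0f, 1.0f);   // end fully opaque
fadeAnimation.Duration = TimeSpan.FromMilliseconds(500);

// Explicit trigger: you decide when the animation runs.
visual.StartAnimation("Opacity", fadeAnimation);
```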

In both cases, the model is the same.  You define your animation (i.e., the duration, shape of the motion, the target property and the value) and then you explicitly trigger it with start/begin.

Implicit Animations

In contrast to explicit animations, an implicit animation is triggered by the platform.  For example, the following code illustrates how to wire up the EntranceThemeTransition to a button via XAML:
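A sketch of that wiring (the button content is illustrative):

```xml
<!-- The platform triggers the EntranceThemeTransition
     when the button first appears. -->
<Button Content="Click Me">
    <Button.Transitions>
        <TransitionCollection>
            <EntranceThemeTransition />
        </TransitionCollection>
    </Button.Transitions>
</Button>
```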

This is all the code that is needed. When the button first appears, it will trigger the EntranceThemeTransition to animate it into its location.  Prior to the Visual Layer, you only had a handful of implicit animations  to select from (namely, the XAML Transitions) and virtually no ability to configure them.  The Visual Layer also supports implicit animations, but you get way more control:
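A sketch of that configuration (assuming an existing Compositor named _compositor and a Visual named myVisual):

```csharp
// Implicit animation: the platform triggers _offsetKeyFrameAnimation
// whenever Offset is set on myVisual.
var _offsetKeyFrameAnimation = _compositor.CreateVector3KeyFrameAnimation();
// "this.FinalValue" evaluates to the value that triggered the animation.
_offsetKeyFrameAnimation.InsertExpressionKeyFrame(1.0f, "this.FinalValue");
_offsetKeyFrameAnimation.Duration = TimeSpan.FromSeconds(1);
_offsetKeyFrameAnimation.Target = "Offset";

var implicitAnimations = _compositor.CreateImplicitAnimationCollection();
implicitAnimations["Offset"] = _offsetKeyFrameAnimation;
myVisual.ImplicitAnimations = implicitAnimations;
```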

This code sets it up so that any time the Offset is changed on myVisual, the _offsetKeyFrameAnimation will be triggered by the platform.  Notice that an ExpressionKeyFrame was used in the definition of the implicit animation.  It allows you to set up a mathematical formula that the animation system evaluates on each frame, just like ExpressionAnimations.  In our case, we used the trivial expression “this.FinalValue”, which simply evaluates to the value that triggered the animation in the first place.  The animation is basic for the sake of illustration, but you can define any animation that you want.

The flexibility of the Visual Layer’s implicit animations lets you separate the logic of your app from the motion and provides a powerful way to customize your experience.  For example, one great way to use implicit animations is to set up a trigger on “Offset”.  By doing so, you can create animated transitions from one layout to another that are triggered automatically when XAML’s layout engine runs.

A good place to start learning more is by watching this //build talk on Implicit Animations.

Tools of the Trade – What Drives the Animation?


Time Driven Animations

These are the classical animations that devs know and love.  The previous code snippets showed XAML Storyboards and Composition KeyFrameAnimations, which are both time driven animations.  The idea behind KeyFrame animations (the de facto standard) is that you specify what the value of the animation should be at specific points in time and describe how to transition between those values (often referred to as interpolation or easing functions).  XAML provides a bunch of built-in easing functions to help you get aesthetically pleasing results fairly easily.  In the Visual Layer, the workhorse for easings is the CubicBezierEasingFunction, which takes two control points to shape the motion.  The control points give you fine-grained control and, since Beziers are so commonly used across animation engines, you can find pre-defined control points that look good.  I typically head over to Easings.net to get the control points for the standard Penner easing functions.

Reference Driven Animations (Math Driven)


In the 10586 November Update to Windows, ExpressionAnimations were introduced in the Visual Layer’s animation engine.  ExpressionAnimations allow you to create a mathematical relationship between properties in the animation system that gets updated frame over frame.  The canonical ExpressionAnimation is the parallax animation:
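A sketch of the parallax setup (the ratio value and the visuals foregroundVisual/backgroundVisual are illustrative):

```csharp
// Canonical parallax: the background moves at a fraction of the
// foreground's vertical offset.
var parallaxAnimation = _compositor.CreateExpressionAnimation(
    "MyForeground.Offset.Y * MyParallaxRatio");

// Parameter: evaluated once, before the animation is sent to the engine.
parallaxAnimation.SetScalarParameter("MyParallaxRatio", 0.5f);

// Reference: re-evaluated inside the animation engine on every frame.
parallaxAnimation.SetReferenceParameter("MyForeground", foregroundVisual);

backgroundVisual.StartAnimation("Offset.Y", parallaxAnimation);
```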

The first thing that this snippet does is create the mathematical expression used to describe the relationship between some inputs and the resulting output of the animation.  It does this by defining a few parameters and references that are then filled in later.  Parameters help you configure the relationship, but references are what make expressions interesting.  A parameter (e.g., “MyParallaxRatio”) is filled in by calling a type-specific function (e.g., SetScalarParameter).  This tells the animation engine to evaluate all instances of the parameter to the value that you pass in.  This evaluation occurs once, prior to marshalling the animation over to the engine, so it is a good way to specify constant values.  In contrast, a reference (e.g., “MyForeground”) is evaluated inside the animation engine on each frame; that per-frame evaluation is the magic that makes ExpressionAnimations so powerful.

There are a couple of other things to note.  First, you’ll notice that we are able to access the members of “MyForeground” and the “Y” sub-channel.  The expression syntax supports both member access as well as ‘swizzling’ or swapping components of a vector/matrix.  For example:
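For example, a hypothetical expression that swaps the X and Y components of another visual’s Offset (visualA and visualB are illustrative):

```csharp
// Member access plus 'swizzling': build a new Vector3 from
// rearranged components of visualB's Offset.
var swizzle = _compositor.CreateExpressionAnimation(
    "Vector3(B.Offset.Y, B.Offset.X, B.Offset.Z)");
swizzle.SetReferenceParameter("B", visualB);
visualA.StartAnimation("Offset", swizzle);
```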

The other thing to note is that all animations in the Visual Layer are actually templates.  This means that you can use them to set the exact same animation on multiple objects, or just reuse the structure of the animation and update parameters/references before starting the animation on the next object.  For example, if we wanted to extend the canonical parallax with multiple levels of depth, we could do it with a single definition of the animation:
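A sketch of that reuse pattern (the layers array, the ratios, and foregroundVisual are illustrative):

```csharp
// One animation definition, reused as a template across depth layers.
// Each StartAnimation call snapshots the current parameter values.
var parallax = _compositor.CreateExpressionAnimation(
    "MyForeground.Offset.Y * MyParallaxRatio");
parallax.SetReferenceParameter("MyForeground", foregroundVisual);

float ratio = 0.9f;
foreach (var layer in layers)
{
    parallax.SetScalarParameter("MyParallaxRatio", ratio);
    layer.StartAnimation("Offset.Y", parallax);
    ratio -= 0.2f; // deeper layers move more slowly
}
```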

ExpressionAnimations are a powerful new way to express how things should move relative to one another.  They take away the pain of having to set up a series of complex animations in order to get different objects and properties to move in concert.  Go check them out and learn more with this //build talk:

P486: Using Expression Animations to Create Engaging & Custom UI

Input Driven Animations


With touch going mainstream roughly five years ago, the need to create low-latency experiences has become prevalent.  With a finger or pen moving across the screen, the human eye now has an easy reference point to evaluate the latency and smoothness of interactions.  To make these interactions smooth, the major OS companies have been moving more processing to the system and to the GPU (e.g., Chrome and IE).  In Windows, this was done via DirectManipulation, which is more or less a purpose-built animation engine.  It solved the key latency challenge as well as how to naturally transition from input-driven to time-driven motion.  But with virtually no support for customizing the look and feel of inertia, the downside was that it was just like the Model T – “You can get it in any color you want so long as it is black.”

ElementCompositionPreview.GetScrollViewerManipulationPropertySet was the first step into letting you play with input driven motion.  It still didn’t allow you any additional flexibility to control the look and feel of the content being scrolled, but it did let you wire up secondary content via ExpressionAnimations.  For example, we can finally complete the canonical parallax code snippet:
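A sketch, assuming a XAML ScrollViewer named myScrollViewer and the illustrative visuals from before:

```csharp
// Drive the parallax from actual scrolling: the scroller's translation
// is exposed as a CompositionPropertySet we can reference per frame.
CompositionPropertySet scrollProperties =
    ElementCompositionPreview.GetScrollViewerManipulationPropertySet(myScrollViewer);

var parallaxAnimation = _compositor.CreateExpressionAnimation(
    // Translation.Y goes negative as content scrolls up, so the
    // background trails the foreground by the parallax ratio.
    "ScrollProps.Translation.Y * MyParallaxRatio");
parallaxAnimation.SetScalarParameter("MyParallaxRatio", 0.5f);
parallaxAnimation.SetReferenceParameter("ScrollProps", scrollProperties);

backgroundVisual.StartAnimation("Offset.Y", parallaxAnimation);
```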

With this technique, you can implement various forms of goodness: parallax, sticky headers, custom scrollbars, etc.  The only thing missing is customizing the look and feel of the interaction itself…

Enter the InteractionTracker.  It has been designed from the ground up to preserve the latency required for stick-to-the-finger experiences, while giving you the flexibility to control every aspect of the look and feel.  On the Windows UI Platform, we often talk about Easy vs. Possible.  Common UX and calling patterns are often wrapped up into easy-to-use higher-level controls and features.  This makes them really convenient to use, but at the cost of losing some flexibility and control.  At the other end of the spectrum are things like the Graphics Layer.  Here you can literally control the very pixels being put on screen, but it comes at the cost of being much more complex.  In the scheme of input handling, the InteractionTracker lives more towards the possible end of the spectrum.  For the first time in the Windows UI Platform, you declaratively map input to output to specify motion.

A simple illustration of the new flexibility is how you can modify where inertia ends.  Previously, you could modify the XAML ScrollViewer’s inertia using snap-points by specifying one of four options.  With the InteractionTracker, you use ExpressionAnimations to define where inertia will end, which opens up a much wider set of possibilities.  Here is an example of creating three different snap-points based on where the inertia would naturally come to rest:
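A sketch of the idea (the snap positions, the 100-pixel bands, and the tracker _tracker are illustrative):

```csharp
// Three snap points at y = 0, 200 and 400. Each resting value applies
// when natural inertia would land within 100 pixels of it.
var snapPoints = new List<InteractionTrackerInertiaModifier>();
float[] positions = { 0f, 200f, 400f };

foreach (float p in positions)
{
    var modifier = InteractionTrackerInertiaRestingValue.Create(_compositor);

    // Condition: would inertia naturally come to rest near this point?
    modifier.Condition = _compositor.CreateExpressionAnimation(
        $"Abs(this.Target.NaturalRestingPosition.Y - {p}) <= 100");

    // Resting value: where inertia should actually end.
    modifier.RestingValue = _compositor.CreateExpressionAnimation($"{p}");

    snapPoints.Add(modifier);
}

_tracker.ConfigurePositionYInertiaModifiers(snapPoints);
```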

Not only can you modify where inertia will end, as shown here, but you can also modify the shape of the inertia motion itself.  The InteractionTracker enables you to get the precise look and feel you want for your signature experiences.  To learn more about what’s possible and how to use it, check out:

P405: Adding Manipulations in the Visual Layer to Create Customized & Responsive Interactive Experiences

Where to go next?

If you haven’t found your way to the WindowsUIDevLabs, you should certainly go check it out.  Here is how it describes itself:

Welcome to the Windows UI Dev Labs repository for the latest code samples, demos, and developer feedback for building beautiful and engaging Universal Windows Platform apps using Windows UI.

As the next flights come out, it should be a really interesting place to get samples of what can be done with the platform and get the accompanying code.

I always love connecting and hearing about what creative people are able to do with these new tools, so leave a comment with what you hope to build or connect on Twitter.  Or better yet, share what new things you have been able to create.