
A peek under the hood part 2

#1

Post by jachin99 » Fri Mar 09, 2018 5:45 pm

A Look at Media Center's Rendering Engine



In Part One, we examined the high-level architecture of the Windows Media Center Presentation Layer, including the relationship between its User Experience Framework and Rendering Engine. In this installment, we’ll take a more detailed look at the Rendering Engine and its component parts.

The Rendering Engine is an internal component of Media Center. It is designed to be used exclusively via the Windows Media Center Presentation Layer’s Messaging System and requires a sophisticated client such as the User Experience Framework to drive it. It is written in C++ and places an extreme emphasis on simplicity and performance.

To understand how the Rendering Engine works, we need to start by examining its foundation and work upward.

Underpinnings

The lowest layers of the Rendering Engine aren’t directly concerned with media processing tasks at all.



The Rendering Engine’s Core Services form a foundation runtime for all rendering features.
• The Scheduling layer is a simple work-queue system for handling incoming requests to the Rendering Engine. It also contains custom logic for integrating time-critical periodic work that bypasses normal queue processing (a minimal sketch follows this list).
• The Memory Management layer provides heap support optimized to the Rendering Engine’s threading and allocation patterns.
• The Message Transport layer implements an endpoint of the Messaging System. Pluggable transport implementations allow for flexibility in how the User Experience Framework and Rendering Engine are connected and deployed.
• The Object Management layer defines an object-oriented model for presenting Rendering Engine functionality to Messaging System clients. It provides identity and lifetime management services and includes facilities for resource partitioning and graceful cleanup of per-application resources.
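
To make the queue-plus-bypass idea concrete, here is a minimal sketch of a scheduler in that spirit. The actual Rendering Engine source is internal to Media Center, so this is purely illustrative and every name is invented:

```cpp
// Illustrative sketch only -- not actual Media Center source code.
#include <deque>
#include <functional>
#include <mutex>

class Scheduler
{
public:
    using Work = std::function<void()>;

    // Normal requests queue up in FIFO order.
    void Post(Work item)
    {
        std::lock_guard<std::mutex> lock(m_lock);
        m_queue.push_back(std::move(item));
    }

    // Time-critical periodic work (e.g. frame pacing) bypasses the queue.
    void SetPeriodicCallback(Work callback)
    {
        std::lock_guard<std::mutex> lock(m_lock);
        m_periodic = std::move(callback);
    }

    // Called repeatedly by the engine's worker thread.
    void RunOnce()
    {
        Work periodic, next;
        {
            std::lock_guard<std::mutex> lock(m_lock);
            periodic = m_periodic;                 // runs on every pass
            if (!m_queue.empty())
            {
                next = std::move(m_queue.front()); // one queued request
                m_queue.pop_front();
            }
        }
        if (periodic) periodic();
        if (next) next();
    }

private:
    std::mutex m_lock;
    std::deque<Work> m_queue;
    Work m_periodic;
};
```

The periodic slot runs on every pass regardless of how deep the request queue gets, which is exactly the property time-critical work needs.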

Now that we’ve seen the foundation, we can start to look at layers where rendering actually happens.

Output Options

Pluggable Output Drivers allow for audio-visual rendering on a variety of technologies and hardware platforms.



Output Drivers do the “heavy lifting” to implement rendering functionality by providing peer implementations for many of the key objects in the Presentation Model. Current drivers include DirectX, Win32 and XBOX 360 implementations for graphics and sound.

Various driver implementations have existed in Media Center over the years. In addition to the obvious physical platform support (set-top boxes, PCs, game consoles), output drivers have been used to provide flexibility in the development process. For example, Windows XP Media Center Edition 2004 (a.k.a. Harmony) was based on DirectX 7. For Windows XP Media Center Edition 2005 (a.k.a. Symphony), we moved to DirectX 9 and significantly reworked some of our graphics algorithms. To keep this work from disrupting other parts of the project, we supported both the D3D7 and D3D9 graphics output drivers side-by-side for most of the project.

At startup time, the User Experience Framework communicates directly with Output Drivers in order to initialize and configure the Rendering Engine. Once things are up and running, however, the conversation moves up a layer in the stack.
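
As an illustration of what those peer implementations might plug into (the real interface is internal to Media Center; every name below is invented), an Output Driver could present roughly this shape to the layers above:

```cpp
// Illustrative sketch only -- the real driver interface is internal to
// Media Center; every name here is hypothetical.
class GraphicsDevice;   // peer objects from the Presentation Model
class SoundDevice;

class OutputDriver
{
public:
    virtual ~OutputDriver() {}

    // Startup-time configuration, driven by the User Experience Framework.
    virtual bool Initialize() = 0;

    // Peer implementations of key Presentation Model objects.
    virtual GraphicsDevice* CreateGraphicsDevice() = 0;
    virtual SoundDevice* CreateSoundDevice() = 0;

    // One rendering pass, invoked via the scheduling layer.
    virtual void RenderPass() = 0;
};

// Each platform supplies its own implementation behind the same interface.
class D3D9OutputDriver : public OutputDriver { /* DirectX 9 graphics */ };
class Win32OutputDriver : public OutputDriver { /* GDI / Win32 */ };
```

A deliberately narrow interface like this is what makes it practical to carry two graphics drivers side-by-side, as happened with D3D7 and D3D9 during Symphony.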

Painting a Picture

The Rendering Engine’s abstract Presentation Model defines building blocks that can be combined to create an audio-visual scene.



Once the Rendering Engine has been initialized, the User Experience Framework describes UI scenes using objects from the abstract Presentation Model.
• A Graphics Device exposes properties, capabilities and rendering configuration of a graphics technology (e.g. GDI, D3D…)
• A Render Operation implements an individual unit of work to be performed during a rendering pass. It can also track possible cleanup or error handling that may be required later in the pass.
• A Visual defines a unique coordinate space in the rendering hierarchy. Visuals are organized as a tree and expose UI-relevant states like transforms and constant alpha. These states are translated into rendering operations as needed during a rendering pass. Visuals may also contain rendering operations for drawing content as directed by the User Experience Framework (see the sketch after this list).
• A Clip Gradient is a hybrid primitive that performs color channel modulation according to a specified ramp, optionally clipping. It modulates the output of other render operations, making a variety of visual effects possible. The most visible example is the “edge fade” effect used when scrolling lists and galleries in Media Center.
• A Surface is a drawable piece of pixel-mapped visual data (like an image or video frame).
• A Surface Pool is physical storage for one or more surfaces. On technologies where texture allocation is expensive enough to cause glitches, a Surface Pool may be sub-allocated to hold multiple Surfaces. For video playback, a Surface Pool may hold multiple frames of video data.
• A Sound Device exposes properties, capabilities and rendering configuration of an audio technology (e.g. Win32, DSound, XAudio…)
• A Sound Buffer is physical storage for audio data.
• A Sound is a logical instance of playback from a Sound Buffer.
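
As a rough sketch of how a Visual tree might translate its states into render operations during a pass (hypothetical types, not the engine’s actual ones):

```cpp
// Illustrative sketch only -- hypothetical types, not the engine's own.
#include <memory>
#include <vector>

struct RenderOp
{
    virtual ~RenderOp() {}
    virtual void Execute(float effectiveAlpha) = 0;  // one unit of work
};

struct Visual
{
    float alpha = 1.0f;   // constant alpha state (transform omitted)
    std::vector<std::unique_ptr<RenderOp>> content;  // drawing operations
    std::vector<std::unique_ptr<Visual>> children;   // subtree

    // Depth-first traversal: each Visual contributes its own content
    // operations, then recurses into its children.
    void Render(float parentAlpha) const
    {
        float effective = parentAlpha * alpha;
        for (const auto& op : content)
            op->Execute(effective);
        for (const auto& child : children)
            child->Render(effective);
    }
};
```

In this sketch, constant alpha composes multiplicatively down the tree, so a single state change on a parent Visual fades its entire subtree.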

With these features, the User Experience Framework can compose a static scene and send it coarse updates. This is enough to produce UI that looks like Media Center, but not yet enough to build UI that feels like Media Center. For that, we need animation.

A Measure of Independence

An important goal of separating the Rendering Engine from the User Experience Framework is to allow for loosely-coupled timing between them. From a rendering perspective, a continuous stream of new frames needs to get to the screen without involving the User Experience Framework very often.



In addition to creating or modifying states via the Presentation Model, the User Experience Framework can direct the Animation System to update those states on a frame-by-frame basis according to a timeline. Any numeric property in the Presentation Model (single or composite) can be animated. Many effects are orchestrated from the User Experience Framework by creating a scene and adding animation to it.
• The Value Table is a set of individual values being computed for animation purposes.
• A Sequence is a keyframe-based timeline for modifying an individual value in the Value Table (sketched after this list).
• An Interpolation is a function that can be applied to produce intermediate values between two keyframes in a sequence. Examples include Linear, Sine, Square, Bezier, etc.
• A Property Connector collects one or more values from the Value Table and combines them to update an object property. It also supports sampling from the target property to initialize keyframes before a sequence is played.
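
Here is a minimal sketch of a Sequence evaluated through a pluggable Interpolation. Again, these are illustrative names, not the engine’s real types:

```cpp
// Illustrative sketch only -- invented names.
#include <cmath>
#include <cstddef>
#include <vector>

using Interpolation = float (*)(float);        // maps t in [0,1] to [0,1]

inline float Linear(float t) { return t; }
inline float Sine(float t) { return std::sin(t * 3.14159265f / 2.0f); }

struct Keyframe { float time; float value; };

struct Sequence
{
    std::vector<Keyframe> keys;                // sorted by time
    Interpolation interp = Linear;

    float Evaluate(float time) const
    {
        if (keys.empty()) return 0.0f;
        if (time <= keys.front().time) return keys.front().value;
        if (time >= keys.back().time) return keys.back().value;
        for (std::size_t i = 1; i < keys.size(); ++i)
        {
            if (time < keys[i].time)           // found the bracketing pair
            {
                const Keyframe& a = keys[i - 1];
                const Keyframe& b = keys[i];
                float t = (time - a.time) / (b.time - a.time);
                return a.value + (b.value - a.value) * interp(t);
            }
        }
        return keys.back().value;
    }
};

// Example: a one-second linear fade from 0 to 1 would be
//   Sequence fade{ { {0.0f, 0.0f}, {1.0f, 1.0f} }, Linear };
```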

The Animation System also has a special callback registered with the scheduling layer for monitoring the passage of scene time. This allows various output driver implementations to synchronize animation updates with other media processing. For example, the DirectX driver for both PC and XBOX may prepare frames in advance based on upcoming presentation timestamps from a video stream.
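
For instance, the prepare-ahead idea might look like the following, with hypothetical names standing in for the real plumbing:

```cpp
// Illustrative sketch only -- all names are hypothetical.
#include <vector>

// Toy stand-in for "evaluate the active sequences at this scene time":
// here, a one-second linear fade-in.
inline float SampleAnimatedAlpha(double sceneTime)
{
    return sceneTime < 1.0 ? static_cast<float>(sceneTime) : 1.0f;
}

struct PreparedFrame { double pts; float alpha; };

// Called from the scheduling-layer callback with the presentation
// timestamps of the next few video frames.
std::vector<PreparedFrame> PrepareFrames(const std::vector<double>& upcomingPts)
{
    std::vector<PreparedFrame> frames;
    frames.reserve(upcomingPts.size());
    for (double pts : upcomingPts)
    {
        // Each queued frame is composed with the animation state it will
        // have when it actually reaches the screen, not when it was built.
        frames.push_back({ pts, SampleAnimatedAlpha(pts) });
    }
    return frames;
}
```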

Renderer Wrap-Up

It is easy to see why the Rendering Engine requires a sophisticated client to drive it. The design includes few graphical primitives and follows a strict philosophy of keeping complexity out of the rendering path. Complex scenes are achieved by composing many simple elements together. Orchestration of interactive UI scenes and transitions is a task left to the User Experience Framework, the topic of our next installment.

Francis
