I've been toiling in the video player mines again, upgrading Animoto's player to a new design that supports HD and resolution switching. Our player was originally based on the popular Jeroen Wijering media player, but has been branched and rewritten 101 times since. Still, the architecture of that player was solid enough way back when that it's more or less still intact at some level.
Check out this animoto of our crew partying in the sun last summer, a nice pick-me-up for an ugly slushy February.
A fun little side project of mine has been to write a rendering utility for Flash called FilmStrip. It lets you process a Flash 2D or Papervision3D animated scene into a filmic-looking frame sequence with natural motion blur, which can then be converted into real video. The motion blur I'm talking about here is not a basic horizontal/vertical 'box' blur; it's created by drawing a series of subframes of the actual animation within each frame capture, so that blurs follow the trajectory and exact shape of the subjects very realistically. Filmic motion blur can be simulated in other ways, but this is the easiest and most common, and it can yield movie-camera-like results.
To achieve the most natural-looking blurs in complex scenes, FilmStrip blurs each object individually. By default the captureMode property of a FilmStrip is set to EACH_OBJECT. Under the hood this gets a lot more involved than the alternative mode WHOLE_SCENE, which recaptures the entire frame for each blur subframe. There are some very distinct differences and advantages to each that I'll describe here.
"Each Object" Mode
First, let's take a look at a frame generated using object-based blur:
You'll notice that the blurs of the two dice are totally independent of one another and actually overlap pretty nicely. You can even see the blur on the back one through the blur on the front one. I'm pretty amazed that Flash can produce such a high-quality result.
Aside: So why are there hard edges? Well, FilmStrip currently only animates blur subframes either before or after the primary frame, which often leaves portions of the primary frame's edges exposed as the blur pulls back across the object. A 'leading' or 'trailing' blur like this can look pretty good in motion, but I eventually hope to add a blur-both-ways or 'shutter angle' option.
In object mode, we can calculate a generalized delta value for each die's motion and then apply a different number of subframes to each one. This is great because processing power ends up allocated to the portions that need it the most in each frame. Fast-moving objects can draw many subframes, while ones with little motion can simply be captured once. In fact, I was surprised to find that in many cases object mode is actually quite a bit more efficient than frame mode, because of that ability to vary the number of captures per object.
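To make the idea concrete, here's a rough sketch of how an object-mode capture loop might look. This is illustrative only, not FilmStrip's actual source: the function name, the delta-to-subframe formula, and the fade curve are all my own assumptions standing in for the real implementation.

```actionscript
// Illustrative sketch of object-mode blur capture (NOT FilmStrip's
// actual code). The delta-based subframe count and fade curve here
// are hypothetical stand-ins for the real logic.
import flash.display.BitmapData;
import flash.display.DisplayObject;
import flash.geom.ColorTransform;
import flash.geom.Matrix;
import flash.geom.Point;

function captureObjectBlur(target:DisplayObject, frame:BitmapData,
                           prev:Point, curr:Point, peakAlpha:Number):void {
    // Size the subframe count to the distance moved this frame, so
    // fast-moving objects get many subframes and slow ones get few.
    var delta:Number = Point.distance(prev, curr);
    var subframes:int = Math.max(1, Math.ceil(delta / 2));

    // Draw the trail oldest-first so the primary frame lands on top.
    for (var i:int = subframes; i > 0; i--) {
        var t:Number = i / (subframes + 1); // closer to 1 = older
        var m:Matrix = new Matrix();
        m.translate(curr.x + (prev.x - curr.x) * t,
                    curr.y + (prev.y - curr.y) * t);
        // Older subframes are more transparent, fading the blur tail.
        frame.draw(target, m,
                   new ColorTransform(1, 1, 1, peakAlpha * (1 - t)));
    }
    // Finally, the primary frame at full opacity.
    var primary:Matrix = new Matrix();
    primary.translate(curr.x, curr.y);
    frame.draw(target, primary);
}
```

Because the subframe count is computed per object, a nearly stationary die costs a single draw() while a fast one might cost a dozen, which is where object mode earns back its extra bookkeeping.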
Frame or "Whole Scene" Mode
Now let's take a look at a sequence of frame-based blurs. For this sequence I set the capture to use a fixed 12 subframes, and I exaggerated their visibility to clarify the next point. Look closely (click each image to enlarge it), and you'll see some strange problems where the dice actually seem to be partially intersecting each other as well as the green table surface:
Why does this happen? It's easiest to understand if you visualize a stack of old-fashioned animation cels, with the dice painted on plastic transparencies. To build one frame in the object-based capture mode, we would paint a cel showing die 1, stack a number of additional die 1 cels on top of it to simulate the motion blur, then do the same thing for die 2 on top of that stack. (That is, blur subframes are localized to the z-depth of each object.) But in frame-based mode we've drawn both dice onto one cel, then laid another cel containing both dice over that, and so on. When the objects are moving in different directions, each successive cel reveals new areas of the dice edges where they overlap, resulting in a sort of interwoven pattern. (The z-depths of all objects in the scene are repeated cyclically.)
Keep in mind that for the sake of this example I've increased subframe opacity and spread using the FilmStrip settings peakAlpha and subframeDuration. When subframes are blended back more and the final video is in motion, this z-depth problem is normally not noticeable. So unless you're worried about stills from the final video looking completely correct, frame-based blur can look pretty good.
Erring toward quality, for now
Since this is just old-fashioned, single-threaded, non-GPU-capable ActionScript, all we have to work with is a sequential series of steps, including actually updating the animation many times per frame to simulate a blur. This results in object mode sometimes beating frame mode for efficiency, although full-frame mode is still a heck of a lot simpler.
I stuck with object-based blur to create a short video snippet of a classic John Grden X-Wing fighter being nailed by a laser beam, and I really like how cleanly it separates the content:
I decided to make object blur the default setting to err toward quality over speed, figuring that if FilmStrip were ever actually used, it would be to pre-render portions of a Flash scene to video, since it's nowhere near realtime rendering. I've made similar decisions at other points, such as deciding to make FilmStrip tween-engine-agnostic (you can plug it into TweenLite, Tweener, etc. pretty easily), whereas if it were being built for speed it would probably include its own custom animation system.
For the time being, FilmStrip provides a nice simple way to tinker with rendering and see some of the complexities involved in seemingly simple things like motion blur.
To get started, install Git (http://code.google.com/p/git-osx-installer/), open Terminal, and navigate to the parent folder where you'd like to put Papervision3D, e.g. your general ActionScript workspace folder. (Hint: you can type 'cd ' and then drag and drop the folder from a Finder window onto the Terminal window to instantly get the path.)
Now enter the clone URL listed on the project's GitHub page, followed by the name of the new folder you'd like it to appear in, like so:

git clone git://github.com/Papervision3D/Papervision3D.git Papervision3D3
Instantly, Git will create a directory called Papervision3D3, automatically initialize it, and pull the entire repository in a matter of seconds. That's right: with Git you get to work in the entire repository locally, creating branches and so forth. Unlike SVN, Git works from a single .git folder within the main folder instead of polluting all subfolders with .svn garbage -- you can even move the initialized folder wherever you like and it will still work fine.
You can't push any changes you make directly back to the official Papervision3D repository. To set up a public or private fork of the project, get a GitHub account and click the button on their project page that says "fork". Now you have a working copy at your own hosted page and can pull, commit, and push to your heart's content! Later it's easy to do diffs and merges with their project, and if you write something worth keeping they can do the same, selectively and easily bringing portions of code from around the community into their build.
They call this "social coding".
Unless you have ambitious plans to contribute to their project right away, though, just do a clone as described above to get started, which is more like checking something out of a public Google Code SVN. You can import the folder into Flash Builder, Flex or FDT and run the file called Main to see a wireframe 3D example scene that already works. Go team!
Sometimes you need a Bitmap to capture nested containers from a flat, top-down perspective, but you're really only interested in drawing one or more of the nested objects and not others. This utility works by quickly toggling the visibility of the other children off, then restoring it after draw(). Use a SelectiveBitmapDraw instance with the standard display list, or a SelectiveBitmapDraw3D instance to capture specific nested DisplayObject3Ds in a Papervision3D scene.
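The hide-draw-restore idea can be sketched in a few lines. To be clear, this is my own illustration of the technique, not SelectiveBitmapDraw's actual source; the function name and parameters are made up for the example.

```actionscript
// Sketch of the visibility-toggling technique (illustrative, not the
// library's real code). Hides every child not in `keep`, captures the
// container, then restores visibility so the stage looks untouched.
import flash.display.BitmapData;
import flash.display.DisplayObject;
import flash.display.DisplayObjectContainer;

function drawOnly(container:DisplayObjectContainer,
                  keep:Array, target:BitmapData):void {
    var hidden:Array = [];
    // Hide every child that isn't in the keep list.
    for (var i:int = 0; i < container.numChildren; i++) {
        var child:DisplayObject = container.getChildAt(i);
        if (keep.indexOf(child) == -1 && child.visible) {
            child.visible = false;
            hidden.push(child);
        }
    }
    // Capture the container with only the kept children showing.
    target.draw(container);
    // Restore visibility before the next screen update renders.
    for each (var h:DisplayObject in hidden) h.visible = true;
}
```

Since the toggling happens synchronously within one frame, the hidden children never actually flicker on screen.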
Like magic, transparent parts of a PNG in your MovieClip are ignored during mouse interactions. Check it out!
Normally the clear areas of a PNG are treated as solid, which can be especially frustrating when dealing with a lot of images that overlap each other because they tend to block mouse interactions on the clips below them.
This utility fixes that so that mouse events don't occur until you bump against a solid pixel, or a pixel of any transparency value besides totally clear. InteractivePNG lets you set an alphaTolerance level to determine what transparency level will register as a hit.
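Under the hood, an alphaTolerance-style check can lean on Flash's built-in pixel-level hit testing. Here's a rough sketch of the idea; this is an assumption of how such a check might be written, not InteractivePNG's actual implementation, and the function name is hypothetical.

```actionscript
// Hypothetical sketch of an alphaTolerance-style pixel test (not
// InteractivePNG's real source). Rasterizes the clip with its alpha
// channel intact, then asks BitmapData.hitTest whether the pixel under
// the given local coordinates meets the alpha threshold.
import flash.display.BitmapData;
import flash.display.DisplayObject;
import flash.geom.Point;

function hitsSolidPixel(clip:DisplayObject, localX:Number, localY:Number,
                        alphaTolerance:uint):Boolean {
    var w:int = Math.max(1, Math.ceil(clip.width));
    var h:int = Math.max(1, Math.ceil(clip.height));
    // Transparent bitmap, filled with fully clear pixels.
    var bmp:BitmapData = new BitmapData(w, h, true, 0x00000000);
    bmp.draw(clip);
    // hitTest returns true only where pixel alpha >= the threshold,
    // so fully transparent areas no longer count as hits.
    var hit:Boolean = bmp.hitTest(new Point(0, 0), alphaTolerance,
                                  new Point(localX, localY));
    bmp.dispose();
    return hit;
}
```

A low alphaTolerance lets even faint pixels register as hits, while a high value restricts interaction to nearly opaque areas.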
This was surprisingly tricky to write, so I'm releasing it open source in hopes that it helps someone out there.
I chose not to use a mask, because that would mean managing the display list outside the movie clip; I wanted this to work for any freestanding movie clip without any complicated management within the program. I've also heard of people creating an overlay bitmap with all the solid parts and running hit detection on that, but that's a little clunky: it adds file size and makes it hard to update your layout.
I know it looks extremely simple, but if you're curious, here's what goes into it. First I detect and suppress mouse interactions, toggling the clip's mouseEnabled flag off, then use an ENTER_FRAME event to detect when the mouse bumps into the edge of the image and re-enable the mouse, toggling it off again on roll out. The pixel check itself uses the native BitmapData.hitTest method. Finally, when the mouse leaves the bounds of the movie clip, the tracking is turned off and the system resets to listen for the mouse to knock again. It was particularly tricky to keep the hand cursor from flickering when the edge of the image is crossed with buttonMode turned on, which is handled by temporarily caching that property during the initial round of suppressed events. Like I said, it looks simple, but...!
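The core of that suppress/re-enable cycle can be sketched like this. Again, this is a simplified illustration of the pattern described above, not the utility's actual source; isSolidUnderMouse stands in for whatever pixel test the real code performs.

```actionscript
// Illustrative sketch of the ENTER_FRAME tracking loop (not the real
// InteractivePNG code). Each frame, the clip's mouse interactivity is
// switched on only while a solid pixel sits under the cursor.
import flash.display.Sprite;
import flash.events.Event;

function trackEdges(clip:Sprite, isSolidUnderMouse:Function):void {
    clip.addEventListener(Event.ENTER_FRAME, function(e:Event):void {
        // While the cursor is over clear pixels, the clip ignores the
        // mouse so clips below it can receive events; the moment a
        // solid pixel is under the cursor, interactivity comes back.
        clip.mouseEnabled = isSolidUnderMouse(clip.mouseX, clip.mouseY);
    });
}
```

The real utility also has to cache buttonMode during this cycle, as noted above, to keep the hand cursor from flickering at the image edges.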
FZip is a smashing new utility that allows you to load zipfiles into SWFs at runtime, accessing each file as the archive progressively loads. The official page is here but the latest versions are often found at Claus Wahlers' blog.
My trouble with it was that archives created by Mac OS X are incompatible with it and make FZip throw an error: "Data descriptors are not supported." The other trouble is that I'm not a command-line geek, so it took me a while to figure out how to zip a file and then apply the included Python-based patch. Hopefully someone with real OS X skills will create a little app that batches these processes; they're a real pain, but at least we Mac users can now stream ZIP files... Woohoo!
WordPress errors out every time I try to post this information, but it can be found in this comment.