Working with doctors at Kaiser Permanente, Vectorform developed a game-like experience to help caregivers of children on the autism spectrum gauge progress in verbal and occupational skills. This involved a range of activities, from multiple choice questions to walking, hopping, and stacking blocks. All of these activities were monitored using the Microsoft Kinect, producing statistical data that could be tracked over time. To guide kids and diagnosticians through the process, a character on screen would invite kids to participate in the activities, give positive reinforcement, and help coach them.
As producer and artist on the project, I created spreadsheets to track every item, from user interface button animations to voice acting. These documents ended up being more critical than I ever imagined, allowing me to delve into the details of production without losing sight of the overall picture, and ensuring the entire project stayed on track.
The backgrounds, user interface elements, and initial character designs were created by James Anderson, who defined a soft cartoony style that I needed to match with all of the character animation. Working in Lightwave, I created a custom shader system that integrated all of the necessary effects, allowing me to render directly to final files. No post work or compositing was needed; the raw renders were ready to use. The shaders utilised subsurface scattering as a render basis, then processed the resulting values to create variable falloffs and cel-shading thresholds within the object boundaries. Using raytracing, I also created edge-definition effects that added depth while retaining the 2D illustration style. Each material was then further customised based on the feeling that object needed to have, be it soft, fuzzy, hard, or shiny.
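The actual shaders were node setups inside Lightwave, but the core thresholding idea can be sketched in plain Python. This is a hypothetical illustration, not the production code: it takes a 0–1 shading intensity (e.g. the processed subsurface-scattering value), quantizes it into discrete cel bands, and blends across a small falloff window at each band boundary so the transitions aren't hard-edged. The band count and falloff width are assumed parameters.

```python
def cel_shade(intensity, bands=3, falloff=0.05):
    """Quantize a 0-1 shading intensity into discrete cel bands,
    easing across a small falloff window at each threshold."""
    step = 1.0 / bands
    band = min(int(intensity / step), bands - 1)
    lower = band * step
    # How far past the band boundary we are, normalised to the falloff width
    t = (intensity - lower) / falloff if falloff > 0 else 1.0
    blend = min(max(t, 0.0), 1.0)
    # Blend from the previous band's output value toward this band's value
    prev = max(band - 1, 0) / max(bands - 1, 1)
    curr = band / max(bands - 1, 1)
    return prev + (curr - prev) * blend
```

Widening `falloff` softens the cel boundary toward a smooth gradient; shrinking it toward zero gives a hard-edged toon look.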
With the wide variety of actions, I needed to develop a production pipeline that would allow for both efficiency and energy in the motion. On top of that, the character had to transition seamlessly between a wide range of actions and responses, which meant every animation needed to loop cleanly and flow into the next. To facilitate this I built an animation rig that interpolated between motion capture data and keyframe data. Every animation would start with keyframed animation, transition into mocap, then back to keyframed animation to return the character to its loop position. The keyframing controls were also used to correct egregious mocap data when necessary.
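The rig itself lived inside the animation package, but the hand-off it performed amounts to a crossfade between two pose streams. As a rough sketch, assuming poses are stored as per-joint rotation values, the interpolation might look like this, with a smoothstep easing so the transition into and out of mocap doesn't pop:

```python
def ease(t):
    """Smoothstep easing: zero velocity at both ends of the blend."""
    t = min(max(t, 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def blend_poses(key_pose, mocap_pose, t):
    """Linearly interpolate per-joint values between a keyframed pose
    and a mocap pose; t=0 is pure keyframe, t=1 is pure mocap."""
    return {joint: k + (mocap_pose[joint] - k) * t
            for joint, k in key_pose.items()}

def transition_frame(key_pose, mocap_pose, frame, blend_frames):
    """Pose for one frame of a keyframe-to-mocap transition window."""
    return blend_poses(key_pose, mocap_pose, ease(frame / blend_frames))
```

Running the same blend in reverse at the end of a clip returns the character to its loop position, which is what lets every action chain into any other.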
All of the voice acting was done in-house at Vectorform, utilising Reaper to manage effects processing and the organisation of audio files. Based on the documentation I created, every single asset (be it an image, video, or audio clip) was associated with a serial number. Reaper made it easy to process our raw recordings and output to individually serialised files that were compressed and ready to use in the final experience.
Since I was able to render finished frames directly out of Lightwave, no further processing was required except to prep the assets for the development team. This involved conversion of file formats, bit depth reduction, and scaling to the correct size. After evaluating the available options and running a good deal of downsampling interpolation tests, I was able to automate all of this using command line tools like ImageMagick, packaging the toolset into a simple OS X service. As renders were completed, I simply had to select the files, click process, and commit the results to SVN for implementation. It saved a huge amount of time, and kept the production running efficiently.
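The original toolset was an OS X service, so the exact scripts aren't shown here. As an illustration only, a wrapper for one render might assemble an ImageMagick invocation like the one below, covering the three prep steps: format conversion (by output extension), bit-depth reduction (`-depth`), and resizing with an explicit downsampling filter (`-filter`). The `magick` entry point is the modern ImageMagick 7 name; the era's tool would have been `convert`, and the Lanczos filter choice here is an assumption.

```python
import subprocess

def build_convert_cmd(src, dest, size, depth=8):
    """Assemble the ImageMagick arguments for one render:
    resize with a Lanczos filter, reduce bit depth, and convert
    format based on the destination file's extension."""
    return ["magick", src,
            "-filter", "Lanczos",
            "-resize", f"{size[0]}x{size[1]}",
            "-depth", str(depth),
            dest]

def process_render(src, dest, size, depth=8):
    """Run the conversion (requires ImageMagick on PATH)."""
    subprocess.run(build_convert_cmd(src, dest, size, depth), check=True)
```

Splitting command construction from execution keeps the pipeline easy to dry-run and to batch over a folder of completed frames.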
This was a hugely gratifying project to work on, both as an artist and as a human. Though I wasn't able to observe the clinical trials in person, the doctors and clinicians allowed the Vectorform team to call in on some of their sessions and see the project in use. The opportunity to do something both fun and fulfilling doesn't always present itself, and I'm grateful I could be a part of this!