Come together (Right now)
Well this week has been really really cool. Why? Because all the pieces are finally starting to come together, and what we’re seeing is super exciting. But rather than tell you about it, we thought we’d just show you:
As you can see, I (Chip) have been putting Danny’s “Director” script to good use, framing shots for our “scenes” and authoring the camera transitions.
Ali has made a lot of progress on networking this week. We are now able to send the data of a puppet being controlled on one computer over the network to be displayed on a second PC. Cool! Now comes the hard part, though: networking voice (VoIP) and recording both players and video all at the same time. We hope to make enough progress over the next few days to include this feature for the Coder Challenge.
Danny followed up his awesome work on the “Director” script last week by (finally!) hooking the puppet code up in our main project. We’ve been able to control puppets in our test projects for a while, but this is the first time we can control a puppet within our scene with full art, transitions, etc. The puppets are parented to the main camera, so as the scenes advance, the puppets move along with it. Essentially, the puppeteer controls his or her puppet relative to the camera.
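For anyone curious what “parented to the camera” means in practice, here’s a minimal sketch of the idea in Python. In the actual project the engine’s transform parenting does this for us (including rotation), so this is just the translation math, and all the names and numbers below are made up for illustration:

```python
# Sketch of "puppet parented to camera": the puppet stores an offset
# relative to the camera, so when the camera transitions to a new scene
# the puppet's world position follows along automatically.

def world_position(camera_pos, puppet_local_offset):
    """World position of a puppet parented to the camera (translation only)."""
    cx, cy, cz = camera_pos
    ox, oy, oz = puppet_local_offset
    return (cx + ox, cy + oy, cz + oz)

# The puppeteer controls this offset, i.e. the puppet relative to the camera:
offset = (0.5, -0.25, 2.0)            # puppet sits just in front of the lens

camera_at_scene_1 = (0.0, 1.0, 0.0)
camera_at_scene_2 = (10.0, 1.0, 5.0)  # camera transition to the next scene

print(world_position(camera_at_scene_1, offset))  # (0.5, 0.75, 2.0)
print(world_position(camera_at_scene_2, offset))  # (10.5, 0.75, 7.0)
```

Same offset, new camera position: the puppet stays framed exactly the same through the transition, which is the whole point.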
Danny also wired the voice recognition up to the menu system. You can open the menu with the voice command “open menu” and close it by saying “close menu” or “resume”. Once the menu has been activated, the system starts listening for the other voice commands used to navigate it. Danny actually started out navigating the menu the traditional way, using “up”, “down”, and “select” voice commands to move to and select individual options. At some point he realized this approach was not very elegant and took much too long to accomplish even the simplest adjustment. Changing the setup so that the user can simply say what they want to do not only brought the voice recognition to life, but also simplified the logic the code needs to handle. On a related note: between this project and the work we did with Portal 2 for CES, our experience with Nuance’s voice recognition software (integrated into the Perceptual Computing SDK) is that the system learns your voice and gets better at recognizing commands the more you use it.
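The “say what you want” structure is simpler than it sounds. Here’s a rough Python sketch of the dispatch logic (the command phrases and action names here are invented for illustration; the real project gets its phrases from Nuance’s recognizer, not typed strings):

```python
# Sketch of direct voice-command dispatch: instead of navigating with
# "up"/"down"/"select", each phrase maps straight to the thing the
# user wants to do.

class VoiceMenu:
    def __init__(self):
        self.open = False
        self.last_action = None
        # Hypothetical phrase -> action table for illustration only.
        self.commands = {
            "restart scene": "restart",
            "mute voice": "mute",
            "quit": "quit",
        }

    def hear(self, phrase):
        phrase = phrase.lower().strip()
        if not self.open:
            if phrase == "open menu":
                self.open = True          # start listening for menu commands
            return                        # everything else is ignored
        if phrase in ("close menu", "resume"):
            self.open = False
            return
        action = self.commands.get(phrase)
        if action:
            self.last_action = action     # execute directly, no navigation

menu = VoiceMenu()
menu.hear("mute voice")   # ignored: menu not open yet
menu.hear("open menu")
menu.hear("mute voice")   # runs immediately
menu.hear("resume")
print(menu.last_action, menu.open)  # mute False
```

One table lookup replaces the whole cursor-movement state machine, which is why the code got simpler at the same time the interaction got faster.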
Our wolf and pig puppets have come a long way too, and they are now controllable, as you can see in the video above. This has really made our project feel “alive”.
Dan has been populating the scene with some final art assets. He got the bulk of our forest filled in with trees he made in Tree Creator, and his plant planes have been performing well, layering without any z-sorting issues.
Because we are using scale to fake distance, getting the ocean tide to rise and fall on the beach was a bit of an issue. Moving the entire ocean plane up and down looked alright at the beach, but it also raised and lowered the horizon. To work around this, Dan locked the plane at the horizon and rotated it ever so slightly. This worked perfectly and had the additional, desirable effect of more water movement closer to the camera.
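The geometry behind the trick is worth spelling out: tilting the plane by a small angle about the horizon raises the water surface at a given spot by (distance from the horizon) × tan(angle), so the horizon itself never moves while the waterline nearest the camera moves the most. A quick Python sketch (the angle and distances here are illustrative, not our scene’s actual values):

```python
import math

# Rotating the ocean plane about the horizon: the surface height at a
# point d units from the horizon pivot is d * tan(angle). At the
# horizon (d = 0) the water never moves; close to the camera (large d)
# it moves the most -- exactly the behavior we wanted.

def tide_rise(dist_from_horizon, angle_degrees):
    return dist_from_horizon * math.tan(math.radians(angle_degrees))

angle = 0.5  # "ever so slightly"
for d in (0.0, 100.0, 500.0, 1000.0):  # larger d = closer to the camera
    print(f"d={d:6.0f}  rise={tide_rise(d, angle):.3f}")
```

Animating that tiny angle up and down over time gives the tide at the beach while the horizon stays pinned.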
We needed water shaders on the beach and on our river, both of which are curved surfaces. The Pro version water shader doesn’t like being on anything but a flat plane when calculating reflection and refraction. Applying the water script to a small hidden plane and the water material to the curved water surface gave us the effect we wanted.
Well that’s it for now, but we’re all really excited about the progress we’ve made this week. Perhaps next week we’ll have a playable demo, just in time to show at GDC...? We can hope...
Stay tuned, true believers! Look for us on the Intel stage at GDC, 11:30 AM on Wed, Mar 27th. Stop by and say hello!