Ultimate Coder Challenge: Sixense Studios - Week 2

Hey everyone, Chip here with Sixense Studios. 

I’m sitting down and getting ready to watch the Oscars.  Really, though, I’ll be watching the pre-show: the red carpet.  The part where everyone is judged on what they’re wearing, and the main event isn’t actually what’s on TV but what’s streaming on DListed’s live blog.  After that, a bunch of famous people pat each other on the back and hand out gold statues or something.  By then I’ll be back to working on some set design for our puppet platform.  Interestingly, the Oscars provided yet another example of the validity of our project.  Check out the following clip from the show:

This type of parody is the exact sort of thing we hope (and expect) our community will create with our puppet platform.

But it got me thinking: why is the red carpet such an event?  Why do we care so much about what other people are wearing, and how good (or bad) they look?  And how does that fit into our puppet platform?  The ability to interchange outfits, and even body parts, is one of the core elements of our platform.  But ultimately, what will users want to create?  Are they willing to put themselves out there and come up with something crazy?  Are they willing to play the fool?  Or would they rather play it safe?  Will they, essentially, be projecting themselves onto their puppet personas?  And like the red carpet, is it OK for others to laugh at them when they do?  The Oscars, it turns out, highlight some of the questions I’ve been struggling with this week.  What drives someone to want to be in the spotlight?  What will motivate someone to share their puppet show with the world?  And once they do, how will they (and we) deal with others who mock them? 

PlayStation 4

And speaking of live stage shows, did anyone catch the PlayStation 4 announcement this week?  We were all casually streaming it in our office on Wednesday while we worked, watching off and on as something grabbed our interest, but suddenly all of us were 100% focused when Media Molecule took the stage.  They showed the following clip, which is also on Media Molecule's homepage:

I have to admit, it freaked us out.  Here we are planning our little project that could, and then boom, there it is, live and onstage, being shown by one of the big publishers at a press event.  Initially we were crushed.  This aired towards the end of the day, so we all went home in bad moods and slept on it, telling ourselves we’d regroup the next day.  After letting it sit for a while, I came to realize that it’s not necessarily about who does it first, but who does it best.  We’re building for an entirely different platform, one we feel will ultimately reach a lot more users.  And the fact that Sony put a virtual puppet show front and center at their press event only validates our project and tells us we’re definitely on the right track.  It’s a bit of a bummer that we won’t be the ones to unveil a virtual puppeteering platform to the world, but I’m now more motivated than ever to build something awesome. 

After analyzing the Media Molecule footage more in-depth the next day (once it was posted in a non-live streaming form), we noticed that at its core, it seems to be animation driven.  In other words, slight movements you make cause the puppet to play pre-made animations.  This, in our opinion, doesn’t let the puppeteer feel very in control of his or her puppet.  If you’ve played Double Fine’s “Once Upon a Monster,” for instance, it’s not very engaging when small initial movements trigger animations that keep playing regardless of the motions you follow up with.  We’re planning a much more simulated experience, one that makes the puppeteer truly feel in control in a way that only Perceptual Computing can provide. 
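
To make the distinction concrete, here’s a minimal, self-contained sketch of the two philosophies.  Everything in it (the struct names, the thresholds, the one-second clip) is a hypothetical illustration on our part, not Media Molecule’s code or our actual implementation: in the animation-driven version a brief flick of input trips a canned clip that plays out on its own, while in the simulation-driven version the pose tracks the live input every frame.

```cpp
#include <cmath>
#include <cstdio>

// Animation-driven: a small input spike trips a canned one-second clip that
// then plays out on its own, ignoring the puppeteer's follow-up motion.
struct AnimationDriven {
    float clipTime = -1.0f;                       // < 0 means no clip playing
    float update(float input, float dt) {
        if (clipTime < 0.0f && input > 0.2f)
            clipTime = 0.0f;                      // trigger the pre-made clip
        if (clipTime >= 0.0f) {
            clipTime += dt;
            if (clipTime > 1.0f) clipTime = -1.0f;
            return std::sin(clipTime * 3.14159f); // pose comes from the clip
        }
        return 0.0f;                              // idle
    }
};

// Simulation-driven: the pose tracks the live input every frame, so the
// puppeteer stays in control from start to finish.
struct SimulationDriven {
    float pose = 0.0f;
    float update(float input, float dt) {
        pose += (input - pose) * 10.0f * dt;      // smoothed continuous mapping
        return pose;
    }
};

int main() {
    AnimationDriven anim;
    SimulationDriven sim;
    const float dt = 1.0f / 30.0f;
    for (int frame = 0; frame < 60; ++frame) {
        float input = (frame < 5) ? 1.0f : 0.0f;  // brief flick, then stillness
        // The animation keeps playing after the hand stops; the simulated
        // pose settles back down as soon as the input does.
        std::printf("anim=%.2f sim=%.2f\n",
                    anim.update(input, dt), sim.update(input, dt));
    }
    return 0;
}
```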

Adding Realism

Danny’s been experimenting with ragdoll limbs and with driving the character’s position using forces on rigid bodies.  He found that the benefit of the physics approach over manually adjusting the puppet’s 3D position is that we get to use the physics collision system.  Movements of the controller also affect the other child rigid bodies: the legs and arms trail behind the puppet’s movement, giving it a more lifelike feel.  He’s currently struggling to get the ragdoll limbs and the overall collision system to play nicely together.  Ideally you’d be able to seat a puppet in a prop and lock its position and rotation for a seated pose.  We’ll keep you posted on how this progresses. 
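
For anyone curious what “driving position with forces” looks like, here’s a toy 1D sketch of the idea.  The constants, masses, and names are all made up for illustration; this isn’t Danny’s code or the physics engine we’re using.  A damped spring force pulls a root rigid body toward the controller target, and a child body is coupled to the root by a second spring, so it lags and trails the way an arm or leg would.

```cpp
#include <cstdio>

// A body integrated with semi-implicit Euler, the usual cheap-and-stable
// choice for game physics sketches like this one.
struct Body {
    float pos = 0.0f, vel = 0.0f;
    void integrate(float force, float mass, float dt) {
        vel += (force / mass) * dt;
        pos += vel * dt;
    }
};

int main() {
    Body root, arm;
    const float dt = 1.0f / 60.0f;
    for (int frame = 0; frame < 120; ++frame) {
        // The controller moves to 1.0, holds, then snaps back to 0.0.
        float target = (frame < 60) ? 1.0f : 0.0f;

        // Drive the root toward the controller with a damped spring force
        // instead of setting root.pos directly; this keeps it a proper
        // rigid body that the collision system can still act on.
        float rootForce = 40.0f * (target - root.pos) - 8.0f * root.vel;

        // The arm is only coupled to the root, so it trails behind the
        // root's motion, which is what gives the puppet its lifelike lag.
        float armForce = 25.0f * (root.pos - arm.pos) - 4.0f * arm.vel;

        root.integrate(rootForce, 1.0f, dt);
        arm.integrate(armForce, 0.5f, dt);
        std::printf("root=%.2f arm=%.2f\n", root.pos, arm.pos);
    }
    return 0;
}
```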

 

Perceptual Input

Ali has been hooking up our test puppet to the Perceptual Computing SDK.  His first step was simply to let the user control the puppet’s look direction and mouth openness with one hand.  He did this using the hand normal and openness values provided by the SDK, similar to how we controlled cubes in the Portal 2 CES demo.  The initial difficulties, however, are the same ones we hit with Portal 2.  First, the hand normal is only reliable when your hand is fairly flat.  If you simultaneously drive openness by having the user open and close their hand like a puppet mouth, that motion changes the normal, so the mouth control ends up steering the puppet’s look direction in unintended ways.  One way around this is to change the openness control to spreading your fingers apart or holding them together, while always keeping your fingers extended and your hand flat.  That might not be how you’d work a puppet in the “real” world, but under these conditions it performs much better.  We will certainly keep exploring a traditional puppet-mouth gesture for controlling the puppet, but this is interesting: we also want to explore new gestures that might give us better control than motions tied to how one would “normally” do it.
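
Here’s a rough sketch of the mapping itself, with all the SDK setup omitted.  We’re assuming the SDK hands us a per-frame hand normal and an openness value in [0, 1]; the struct and function names below are ours for illustration, not the SDK’s API.

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

struct PuppetPose {
    float yaw, pitch;   // look direction, radians
    float mouthOpen;    // 0 = closed, 1 = fully open
};

// Map a (roughly unit-length) hand normal and an openness value onto the
// puppet. Names and conventions here are hypothetical, not the SDK's.
PuppetPose mapHandToPuppet(const Vec3& normal, float openness) {
    PuppetPose p;
    // Look direction from the hand normal. This is only stable while the
    // hand stays fairly flat; curling the fingers like a puppet mouth tilts
    // the normal and makes the gaze wander, which is exactly the coupling
    // problem described above.
    p.yaw   = std::atan2(normal.x, -normal.z);
    p.pitch = std::asin(normal.y);

    // Mouth from openness. With the finger-spread gesture (fingers extended,
    // hand flat, spread vs. together), the hand plane stays flat, so this
    // channel no longer bleeds into the look direction.
    p.mouthOpen = openness;
    return p;
}

int main() {
    Vec3 flatHand{0.05f, 0.02f, -1.0f};  // palm roughly facing the camera
    PuppetPose pose = mapHandToPuppet(flatHand, 0.6f);
    std::printf("yaw=%.2f pitch=%.2f mouth=%.2f\n",
                pose.yaw, pose.pitch, pose.mouthOpen);
    return 0;
}
```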

This Week

This week is a big week for us.  For the first time since the project started, we’ll have all of us in the same office in San Francisco.  Danny is coming down from Seattle, and Dan is joining us from our Los Gatos office.  I’ll be working on the set design aspect; it would be great to be able to swap in different backgrounds and foregrounds by the end of the week, at which point we want to have our first real demonstration put together.  Our first internal milestone!

We’ll have more for you next week!

 
