Thursday, April 3, 2014

My First GDC after 9 Years of Siggraph

I attended my first GDC this past March. Coming from Pixar, Siggraph has been my career conference for the past 9 years. In many ways they are similar: they have similar types of talks, bootcamps, and tutorials, and it's often difficult to choose between the many interesting sessions available; the huge exhibition is packed with big industry marketing, job seekers, industry veterans, and small up-and-coming tech vendors; and, unfortunately, the men's restrooms are 10 times busier than the ladies'.

But I especially enjoyed the differences. For many computer graphics academics, presenting at Siggraph is the culmination of their research, so their presentations tend to reflect the polish of many months' dedicated work. At GDC, most of the presenters are from industry, and their presentations feel like they were slapped together between aggressive game-making milestones. As a result, the GDC talks had an air of casualness and yet a gravity, as if you were sharing a beer with a veteran telling you war stories from the front lines.

In the past few Siggraphs, I've felt a malaise in the proceedings, as if the rot and anxiety afflicting the visual effects film industry is beginning to erode the remaining shine on what used to be a tempting apple. Big game studios are suffering from similar anxiety: there are only so many $200-million-plus budget games you can make within the trappings of what trigger-happy males aged 14 to 32 want. Yet at GDC, there's the indie game development culture and a redeeming feeling that there are still unexplored frontiers. I can't remember a recent Siggraph where you could see something in the evolving tech exhibit with an impact like the Oculus Rift or the Sony Morpheus. Furthermore, there are still games coming out that play with technology, mechanics, and how you tell a story through interaction. And I think this feeling that we haven't figured it all out yet is what makes GDC more exciting than Siggraph.

Now, I recognize that I saw all this through excitable-newbie-tinted glasses, and perhaps my disillusionment with the film industry colors my opinion. Take it all with that disclaimer.

Here are three personal highlights from my GDC conference:
  1. Game rendering engines are gorgeous and impressively fast.  The next-gen stuff coming out of Unity and Unreal, especially their global illumination and physically based shading models, is seriously closing the quality gap with the software renderers of film.  At Pixar, a single frame of lit and shaded animation could easily take 20+ hours to render.  Compared to what games are doing in 1/60th of a second, filmmaking can't justify the *6* orders of magnitude difference (20 hours is 72,000 seconds, and 72,000 × 60 is roughly 4.3 million).  Considering this next-gen tech, the only technical gaps I can see separating Pixar-quality frames from game-rendered cut scenes are quality motion blur, better pixel sampling, depth of field, and better animation - solvable problems if you relax the need for realtime.  Even a Pixar-quality frame that takes 1 second to render is still game changing.  The reason I think this will take longer than closing the tech gap is that the filmmaking pipeline is used to its way of doing things.  Many modeling and shading artists in film have gotten used to the forgiving nature of software renderers like RenderMan, so enforcing game engine requirements will feel like tying their hands.  It will require a political effort as much as an engineering one.

    Look what Unity 5 can do ... on your iPad!

  2. The buzz around the Oculus Rift and the Sony Morpheus reaffirms that VR and AR are the next mediums in which we will play and tell our stories.  I think this is a cool new space not only because it could be a rich environment for better games, but because there are so many unknowns with VR.  We'll need to pull from different art forms to figure it out.  Just as the art of film evolved out of experiments by artists and engineers skilled in theater and photography, the art of VR will evolve from game makers and film/TV makers playing to find what works and what doesn't.

    I ordered my DK2 the moment I heard the announcement at GDC (I had to hit the "Buy" button a few times because the servers were getting slammed with requests):

  3. The revelation that Object Oriented Design has made programmers lose sight of how to best use our computing architecture blew my mind.  I knew that heavy use of instances and inheritance hurts cache coherency, but the numbers Mike Acton threw up on screen felt like they smacked the coding keyboard from my hands.  Treating programming as a way to transform data, an approach that has gone by the name Data Oriented Design since around 2009, can lead to code with order-of-magnitude better performance than object-heavy designs; see the sketch after the links below.

    A video describing Data Oriented Design vs. Object Oriented Design (it's a bit heavy on text-based slides, and I recommend listening at at least 1.5x speed):



    I hope to talk more about this in an upcoming post, but for now I recommend these two links if you're interested in diving deeper down the rabbit hole:

    A surprisingly self-explanatory slide deck explaining the benefits of data-oriented design over object-oriented design: http://dice.se/wp-content/uploads/Introduction_to_Data-Oriented_Design.pdf
    The definitive book on the subject: http://www.dataorienteddesign.com/dodmain/node3.html
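
To make the idea concrete, here's a minimal sketch of the memory-layout difference at the heart of the argument. This is my own illustration with made-up particle fields, not code from Acton's talk: the first version uses the familiar object-style array-of-structs layout, the second the data-oriented struct-of-arrays layout.

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Object-style layout: an array of structs (AoS). Updating positions
// drags every particle's cold fields (mass, materialId) through the
// cache too, since they share cache lines with the hot data.
struct Particle {
    float px, py, pz;   // position (hot)
    float vx, vy, vz;   // velocity (hot)
    float mass;         // cold
    int   materialId;   // cold
};

void update_aos(std::vector<Particle>& ps, float dt) {
    for (Particle& p : ps) {
        p.px += p.vx * dt;
        p.py += p.vy * dt;
        p.pz += p.vz * dt;
    }
}

// Data-oriented layout: a struct of arrays (SoA). The hot fields are
// packed contiguously, so every byte fetched into cache gets used,
// and the loop is trivially vectorizable.
struct Particles {
    std::vector<float> px, py, pz;
    std::vector<float> vx, vy, vz;
    std::vector<float> mass;        // cold data lives in its own arrays
    std::vector<int>   materialId;
};

void update_soa(Particles& ps, float dt) {
    for (std::size_t i = 0; i < ps.px.size(); ++i) {
        ps.px[i] += ps.vx[i] * dt;
        ps.py[i] += ps.vy[i] * dt;
        ps.pz[i] += ps.vz[i] * dt;
    }
}

int main() {
    Particles ps;
    for (int i = 0; i < 1000; ++i) {
        ps.px.push_back(0); ps.py.push_back(0); ps.pz.push_back(0);
        ps.vx.push_back(1); ps.vy.push_back(2); ps.vz.push_back(3);
        ps.mass.push_back(1); ps.materialId.push_back(0);
    }
    update_soa(ps, 1.0f / 60.0f);
    std::printf("particle 0 position: %f %f %f\n", ps.px[0], ps.py[0], ps.pz[0]);
}
```

The arithmetic is identical in both loops; the only change is how the data sits in memory. That shift is the whole point: think first about how the data flows through the machine, and let the code follow.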