Intel® Developer Zone:
SIGGRAPH 2013

Own Your Pixels!

Intel’s platforms, tools and technologies allow you to own your pixels! Whether it’s in the cloud, at your workstation or on the go with an Ultrabook™, Intel is your partner of choice for high fidelity visualization. We will be showcasing software from Adobe, Autodesk, Codemasters and more on our latest hardware. By working with Intel, you can own your rendering destiny.

Intel Sponsored Technical Sessions – Room 201A

We will have three days of technical presentations from Intel product experts as well as our partners. These sessions will cover topics including Perceptual Computing, how Intel helped Codemasters integrate advanced rendering features into GRID 2, the latest from the Intel Science and Technology Center for Visual Computing and the Intel Visual Computing Institute, and many more.

Intel Studio Lounge – Room 201B

Join us off the show floor for engaging interviews with industry luminaries and fantastic demonstrations, and get your score on the Intel “Own Your Pixels” Quiz posted to our leaderboard to be eligible to win some very cool prizes. The presentation and interview schedule will be available in the Intel booth on the show floor, in the Intel Studio Lounge, and online ahead of the show at http://waskul.tv/siggraph-2013-schedule/.

Intel Booth – Exhibit Hall C

Stop by to learn more about how companies like Solidworks, Autodesk and others use our latest hardware and tools for their applications. Play the Intel “Own Your Pixels” Quiz for the chance to win some very cool prizes*.

2013 SIGGRAPH – Intel Sponsored Technical Sessions

Join Intel experts and our partners in room 201A for 3 days of cutting-edge graphics research deep-dives, technical discussions and exciting demos.

Tuesday, July 23 Room 201A

From Research to Production: How AVSM and AOIT Made Their Way into GRID 2
Richard Kettlewell - Codemasters Software Co. Ltd, Axel Mamode - Intel
Tuesday 9:00 AM - 10:00 AM

In games, volumetric shadows and proper transparency have long been difficult to implement when rendering naturally occurring features like smoke, foliage, and fences. Intel Research contributed to the field with Adaptive Order Independent Transparency and Adaptive Volumetric Shadow Maps. Join us to see how these technologies were implemented in actual games; how they leveraged Intel extensions for Iris Graphics; and how we solved the integration issues that were found along the way.
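
For a flavor of how AOIT works under the hood, here is a minimal CPU-side sketch of its central idea: each pixel keeps a small, fixed-size approximation of its transmittance-versus-depth curve, and when a new transparent fragment overflows that budget, the node whose removal changes the area under the curve the least is dropped. This is an illustrative reconstruction under our own naming (node count, normalized depths), not the shipped GRID 2 or Intel shader code, which runs per pixel on the GPU using Intel's pixel-synchronization extensions.

```cpp
// Illustrative AOIT-style visibility-function compression (CPU sketch, not
// the production GPU shader). Depths are assumed normalized to [0, 1].
#include <array>
#include <cstddef>

constexpr std::size_t kNodes = 4;            // fixed per-pixel storage budget

struct VisNode {
    float depth;                             // fragment depth
    float trans;                             // accumulated transmittance beyond this depth
};

struct VisibilityFunction {
    std::array<VisNode, kNodes + 1> nodes{}; // one slot of slack during insertion
    std::size_t count = 0;

    // Insert a fragment at 'depth' with per-fragment transmittance t = 1 - alpha.
    void insert(float depth, float t) {
        std::size_t i = 0;                   // find slot, keeping nodes depth-sorted
        while (i < count && nodes[i].depth < depth) ++i;
        for (std::size_t j = count; j > i; --j) nodes[j] = nodes[j - 1];
        float before = (i == 0) ? 1.0f : nodes[i - 1].trans;
        nodes[i] = {depth, before * t};
        ++count;
        for (std::size_t j = i + 1; j < count; ++j)
            nodes[j].trans *= t;             // everything behind is further attenuated
        if (count > kNodes) compress();
    }

private:
    // Drop the node whose removal changes the area under the step function least.
    void compress() {
        std::size_t best = 1;
        float bestArea = 1e30f;
        for (std::size_t i = 1; i + 1 < count; ++i) {   // keep first and last nodes
            float area = (nodes[i - 1].trans - nodes[i].trans) *
                         (nodes[i + 1].depth - nodes[i].depth);
            if (area < bestArea) { bestArea = area; best = i; }
        }
        for (std::size_t j = best; j + 1 < count; ++j) nodes[j] = nodes[j + 1];
        --count;
    }
};
```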

Presentation Material

Fast Volumetric Shadows Using Epipolar Sampling and 1D Min-Max Binary Trees
Egor Yusov - Intel
Tuesday 10:45 AM - 11:45 AM

Volumetric shadows greatly enhance the realism of virtual scenes and are desirable for many applications, from interactive simulations to computer games. Precisely computing the effect requires evaluation of a complex light transport integral in a participating medium. To achieve high performance, we combine two recent approaches, namely epipolar sampling and 1D min/max binary trees, and use a simple and efficient semi-analytical solution to the scattering integral due to a point light source. Our technique has a number of parameters that allow trading quality for performance, which makes it suitable for a wide range of hardware, from Intel HD graphics to high-end discrete GPUs.
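
As a sketch of one ingredient of the technique, the code below builds a 1D min/max binary tree over a slice of shadow-map depths and shows the conservative block test used while ray marching; this is a standalone CPU rendition with our own names, whereas the talk's implementation constructs such trees on the GPU, one per epipolar slice.

```cpp
// Sketch of a 1D min/max binary tree over shadow-map depths along one slice.
#include <algorithm>
#include <cstddef>
#include <vector>

struct MinMax { float mn, mx; };

// Build levels bottom-up: level 0 holds the raw depths; each level above
// stores the min/max of two children, halving the sample count.
std::vector<std::vector<MinMax>> buildMinMaxTree(const std::vector<float>& depth) {
    std::vector<std::vector<MinMax>> levels(1);
    for (float d : depth) levels[0].push_back({d, d});
    while (levels.back().size() > 1) {
        const std::vector<MinMax>& prev = levels.back();
        std::vector<MinMax> next((prev.size() + 1) / 2);
        for (std::size_t i = 0; i < next.size(); ++i) {
            const MinMax& a = prev[2 * i];
            const MinMax& b = prev[std::min(2 * i + 1, prev.size() - 1)];
            next[i] = {std::min(a.mn, b.mn), std::max(a.mx, b.mx)};
        }
        levels.push_back(std::move(next));
    }
    return levels;
}

enum class Block { Lit, Shadowed, Mixed };

// While marching a camera ray projected onto the slice, whole aligned blocks
// of samples are classified at once: fully lit if the ray depth is in front
// of every occluder in the block, fully shadowed if behind all of them; only
// 'Mixed' blocks require descending a level to test the two children.
Block classify(const MinMax& node, float rayDepth) {
    if (rayDepth <= node.mn) return Block::Lit;
    if (rayDepth >= node.mx) return Block::Shadowed;
    return Block::Mixed;
}
```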

Presentation Material

Maximize Application Performance On the Go and In the Cloud with OpenCL* on Intel® Architecture
Ben Ashbaugh — Intel, Dave Helmly — Adobe, Arnon Peleg — Intel
Tuesday 12:15 PM - 1:45 PM

Do you need faster, better pixels on the go or pixels in the cloud? With Intel platforms you can render big data from the cloud, on your workstation, and even render on the go with Ultrabook™ devices. And now you can accelerate your performance on those platforms by accessing the power of each device in a standard manner using OpenCL.

In this session you’ll learn what OpenCL is, how Intel supports it, and which developer tools are available; then, in an informative demo session, you’ll learn how to use OpenCL to match the right device to the right task on Intel® Xeon® and Intel® Xeon Phi™ processor-based servers and workstations, and on Ultrabook devices with Intel® Core™ processors and Intel® Iris™ Graphics and Intel® HD Graphics.
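
As a taste of the "right device for the right task" idea, the sketch below enumerates OpenCL platforms and queries CPU and GPU devices using only the standard OpenCL 1.2 C API; error handling is trimmed, and the selection policy is left to the reader.

```cpp
// Hedged sketch: enumerate OpenCL platforms and query a CPU and a GPU device,
// so a host application can route each workload to the device that suits it.
// Production code must check every cl* return value.
#include <CL/cl.h>
#include <cstdio>

int main() {
    cl_platform_id platforms[8];
    cl_uint numPlatforms = 0;
    clGetPlatformIDs(8, platforms, &numPlatforms);
    if (numPlatforms > 8) numPlatforms = 8;

    for (cl_uint p = 0; p < numPlatforms; ++p) {
        char name[256] = {};
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME, sizeof(name), name, nullptr);
        std::printf("Platform: %s\n", name);

        // Query CPU and GPU devices separately: latency-sensitive tasks can
        // target the CPU while throughput-heavy kernels go to processor graphics.
        const cl_device_type types[] = {CL_DEVICE_TYPE_CPU, CL_DEVICE_TYPE_GPU};
        for (cl_device_type type : types) {
            cl_device_id dev;
            cl_uint n = 0;
            if (clGetDeviceIDs(platforms[p], type, 1, &dev, &n) != CL_SUCCESS || n == 0)
                continue;
            char devName[256] = {};
            cl_uint units = 0;
            clGetDeviceInfo(dev, CL_DEVICE_NAME, sizeof(devName), devName, nullptr);
            clGetDeviceInfo(dev, CL_DEVICE_MAX_COMPUTE_UNITS,
                            sizeof(units), &units, nullptr);
            std::printf("  %s device: %s (%u compute units)\n",
                        type == CL_DEVICE_TYPE_CPU ? "CPU" : "GPU", devName, units);
        }
    }
    return 0;
}
```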

During this session, Adobe Systems will discuss how Adobe applies OpenCL* technology in its software and will demonstrate a technology preview of Adobe Premiere Pro CC* accelerated with Intel Iris Graphics.

Presentation Material

Faster Content Creation with Higher Productivity using Intel® Developer Tools and OpenCL*
Arnon Peleg — Intel, Raghu Muthyalampalli — Intel
Tuesday 2:00 PM – 3:00 PM

This session will demonstrate the power and performance of 4th generation Intel® Core™ processors that developers can utilize for video content-creation workloads. We will demonstrate a highly optimized video decode and real-time video processing pipeline that is accelerated on the Intel® Iris™ Graphics engine. The solution not only brings better performance, but also improves developer productivity by combining programmable OpenCL* with accelerated libraries like the Intel® Integrated Performance Primitives (IPP) and the Intel® Media SDK. We will discuss how these tools work seamlessly together to deliver media workloads faster.

Presentation Material

Journey of Pixels in Adobe Photoshop on Intel HD Graphics
Murali Madhanagopal (Intel), Jerry Harris (Adobe), Yuyan Song (Adobe), Joseph Hsieh (Adobe)
Tuesday 3:15 PM – 4:15 PM

With the introduction of Intel HD Graphics in 2010 and graphics moving into the CPU, processor graphics has seen a 50X performance improvement on GPU workloads.

In this session, Adobe and Intel engineers will cover some of the optimizations made as they worked together to improve the performance of Photoshop using OpenGL and OpenCL on Intel HD Graphics, covering Intel platforms from entry-level workstations and desktops to laptops, Ultrabooks, and tablets. As pixels now move to the cloud with the advent of Creative Cloud, Intel platforms provide an end-to-end solution.

Presentation Material

Performance tuning applications for Intel GEN Graphics for Linux* and Google* Chrome OS
Ian Romanick — Intel
Tuesday 4:30 PM – 5:30 PM

Most application developers come from one of two backgrounds. They either come from a discrete graphics background where the CPU and GPU do not share memory, or from a mobile graphics background dominated by tile-based renderers. Intel HD Graphics architecture differs from these models. It is a unified memory architecture that is not a tile-based renderer.

After reviewing Intel’s unique architecture, we will discuss a number of application optimization strategies. We will focus on techniques for dynamic buffer management on a unified memory architecture, strategies to avoid (or predict) shader recompiles due to non-orthogonal state changes, and ways to optimize shaders. We will demonstrate a number of software technologies, both inside the driver and in external tools. The tools available for finding and resolving performance issues on Linux* and Google* Chrome OS for Intel graphics differ from the tools for, say, a discrete GPU on Windows. We will shed some light on what tools are available and how they can be integrated into a developer's workflow.
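
To make the dynamic-buffer topic concrete, here is a hedged sketch of buffer "orphaning", a standard OpenGL streaming idiom that avoids CPU/GPU synchronization stalls; the talk covers Intel-specific considerations beyond this, and the function name below is our own.

```cpp
// Sketch: stream per-frame vertex data without stalling. Re-specifying the
// buffer store ("orphaning") lets the driver hand out fresh memory while the
// GPU may still be reading the old contents; on a unified memory architecture
// the upload is then a plain CPU write into GPU-visible memory, not a bus copy.
#include <GL/glew.h>  // any loader that resolves OpenGL 1.5+ entry points works

void streamVertices(GLuint vbo, const void* data, GLsizeiptr bytes) {
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    // Orphan the previous storage instead of synchronizing with the GPU.
    glBufferData(GL_ARRAY_BUFFER, bytes, nullptr, GL_STREAM_DRAW);
    // Upload into the freshly allocated store.
    glBufferSubData(GL_ARRAY_BUFFER, 0, bytes, data);
}
```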

Presentation Material

Wednesday, July 24 Room 201A

Perceptual Computing
Wednesday 9:00 AM - 11:45 AM

Join Intel’s Seth Gibson, Robert Cooksey and team to learn about Intel’s Perceptual Computing initiative and how it is bringing natural, intuitive and immersive modes of interactivity to modern computing devices. You’ll learn why graphics and game developers all over the world are entering Intel’s $1M Perceptual Computing Challenge to create amazing new applications that defy the boundaries of the mouse and keyboard. By attending this class, you can too: you’ll learn to create magic by exploiting the new “perceptual” technologies you’ll have at your fingertips, including 2D and 3D position tracking, gesture detection, facial tracking and recognition, speech recognition and augmented reality libraries. During informative, in-depth technical demo sessions, you’ll learn how to get started creating perceptual applications using C, C++, C#, Java, openFrameworks, Cinder, Unity, Havok, Bullet and other technologies that take advantage of this new game-changing interface to the PC. So don’t miss this great opportunity to see why Intel Perceptual Computing puts the pixels in your hands.

Presentation Material

New visual services on distributed displays and the internet
Oliver Grau, Thorsten Herfet, Philipp Slusallek
Wednesday 12:15 PM – 1:00 PM

This presentation gives an overview of recent research projects carried out by the Intel Visual Computing Institute on new distributed-display and internet services. The Institute focuses on Visual Computing research, meaning the acquisition, modeling, processing, transmission, rendering and display of visual and associated data. The presentation focuses on techniques that enable new forms of rendering on distributed, heterogeneous platforms and new rich 3D services for the future internet.

Presentation Material

The Intel Science and Technology Center (ISTC) for Visual Computing
Oliver Grau, Thorsten Herfet, Philipp Slusallek
Wednesday 1:00 PM - 1:45 PM

The ISTC - Visual Computing engages in open, collaborative, and exploratory research in visual computing to bring modern trends in computing (the cloud, crowdsourcing, hand-held computing) to bear on hybrids of computer graphics, animation, image understanding, and large-scale gaming. We will give a quick summary of the projects underway in the center, followed by a survey of recent results, with a focus on papers being presented at SIGGRAPH 2013.

Presentation Material

High Definition Volume Rendering: Advantages of Multi-core CPUs vs. GPUs for Volumetric Ray Casting
George Buyanovsky — Fovia, Inc., Ken Fineman — Fovia, Inc., David Wilkins — Fovia, Inc.
Wednesday 2:00 PM - 3:00 PM

Image acquisition technologies have improved dramatically in recent decades, enabling exponentially increasing amounts of data to be captured. During the same period, the advancements required for efficient visualization and distribution of such data have been slower to materialize, presenting significant challenges to traditional workflows. Fovia’s innovative, CPU-based High Definition Volume Rendering® solution, an advanced technology for real-time visualization and analysis of large, three-dimensional datasets, addresses this disparity. HDVR® leverages the scalability and flexibility of Intel’s multi-core processors, creating a combination that exceeds the capabilities of GPU-based imaging systems. By using CPU-based volumetric ray casting to minimize computational costs and maximize quality and performance, HDVR overcomes many limitations of currently available imaging technologies. Fovia’s HDVR uses sub-voxel super-sampling to achieve superior, high fidelity pixel output that can be deployed locally, enterprise-wide and via the cloud, including on mobile devices.
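
For readers new to the area, the sketch below shows the basic front-to-back volumetric ray-casting loop with early ray termination that any such renderer is built around; it is our illustration, not Fovia's HDVR code, and the toy transfer function is an assumption.

```cpp
// Illustrative front-to-back volumetric ray caster with early ray termination.
// A real renderer adds trilinear/sub-voxel sampling, a user-defined transfer
// function, SIMD, and multithreading.
#include <cstdint>

struct Volume {
    const std::uint8_t* voxels;  // scalar field, x-fastest memory layout
    int nx, ny, nz;

    float sample(float x, float y, float z) const {  // nearest-neighbor fetch
        int xi = (int)x, yi = (int)y, zi = (int)z;
        if (xi < 0 || yi < 0 || zi < 0 || xi >= nx || yi >= ny || zi >= nz)
            return 0.0f;
        return voxels[(zi * ny + yi) * nx + xi] / 255.0f;
    }
};

// March from 'origin' along unit-length 'dir', compositing front to back.
float castRay(const Volume& v, const float origin[3], const float dir[3],
              float tMax, float dt) {
    float color = 0.0f, alpha = 0.0f;
    for (float t = 0.0f; t < tMax && alpha < 0.99f; t += dt) {  // early termination
        float s = v.sample(origin[0] + t * dir[0],
                           origin[1] + t * dir[1],
                           origin[2] + t * dir[2]);
        float a = 0.1f * s;               // toy transfer function: density -> opacity
        color += (1.0f - alpha) * a * s;  // s doubles as a grayscale "color"
        alpha += (1.0f - alpha) * a;
    }
    return color;                         // final pixel intensity
}
```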

Presentation Material

Visualization on Stampede
Paul A. Navrátil
Wednesday 3:15 PM – 4:15 PM

The age of petaflop computing enables science at unprecedented scales, but with such capabilities come additional complexities. These complexities must be managed to run simulations efficiently and to analyze and visualize the voluminous output they produce. This talk will describe how the Texas Advanced Computing Center addresses these challenges with Stampede, a 6,400-node Intel Xeon and Xeon Phi cluster that provides world-class simulation, analysis and visualization capabilities to the national open-science community. We will detail the Stampede architecture, present initial results generated with Stampede, and describe ground-breaking graphics and visualization work, achieved in partnership between Intel and TACC, that enables interactive photo-realistic visualization and rendering on both Xeon and Xeon Phi architectures. We will also discuss how interested researchers in the public and private sectors can obtain access to Stampede to support their own work.

Presentation Material

From Virtual To Reality – how high fidelity visualization based on Autodesk Rapid RT technology is accelerating product design decisions
Peter Rundberg — Autodesk, Mehmet Adalier — Intel
Wednesday 4:30 PM – 5:30 PM

Experience tremendous workflow optimizations and productivity gains when you use high fidelity, interactive Rapid RT visualization tools with leading-edge Autodesk products running on Intel® Xeon® and Xeon Phi™ based workstations and servers. Through innovative use of advanced ray tracing technologies coupled with the rich instruction set of Xeon and Xeon Phi, transform your digital prototypes and architectural designs into compelling imagery, immersive presentations for interactive design reviews, and stunning marketing materials.

Presentation Material

Thursday, July 25 Room 201A

Intel® Iris™ Graphics Quick Sync Video
Kao Wen-fu — Intel, Ryan Lei — Intel
Thursday 9:00 AM - 10:00 AM

The 4th Generation Intel Core Processor introduces new Intel Iris/Iris Pro and HD Graphics, including the new Quick Sync Video technology. Its predecessors set a milestone in performance, delivering a blazingly fast transcode experience. The new Quick Sync Video brings up to 12x real-time transcode speed in best-performance mode: converting a 10-minute edited video into an Internet-sharing-ready format takes less than a minute (10 minutes at 12x real time is about 50 seconds). The Iris/Iris Pro series easily enables multi-session transcode and real-time 4K video editing. This talk provides a high-level overview of the Intel media architecture, video quality improvements, and the energy-efficient, high-performance solution, with demonstrations of Intel Quick Sync Video technology performance and quality.

Presentation Material

Bring out the Best in Pixels: Video Pipe in Intel Graphics Processor
Victor H. S. Ha (Intel), Yi-Jen Chiu (Intel)
Thursday 10:45 AM - 11:45 AM

The graphics processor in the 4th generation Intel Core Processors is equipped with a full set of video processing modules in its video processing pipe. In this talk, we introduce the video pipe and highlight its pixel-enhancement capability. To bring out the best in pixels from various input sources under different network connectivity and mobility environments, the video pipe is designed to adapt to input format, content, and quality. We demonstrate how the individual HW modules can share information with one another and work together in an optimized fashion within the pipe. We discuss possibilities for SW applications to take advantage of the HW statistics collected from the input images to accelerate performance and improve the quality of the visual experiences delivered to Intel customers and end users.

Presentation Material

ISPC: An SPMD Compiler for Xeon Phi & CPU – Tutorial
Matt Walsh - Intel
Thursday 12:15 PM - 1:45 PM

ispc - the Intel SPMD Program Compiler - provides robust vectorization and correspondingly high SIMD utilization compared to traditional methods (e.g., autovectorized C++), while preserving efficiency in invocation and data binding compared to approaches such as OpenCL. This talk will illustrate, from a semantics and language standpoint, why ispc's approach can produce compelling SIMD code vs. C++-based solutions. Next, a brief tutorial uses example ispc code to introduce usage patterns and features. Lastly, ispc performance capabilities are demonstrated with more examples and highlights from the ispc-based Intel Embree 2.0 ray-tracing library.
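
To ground the tutorial's subject, here is the flavor of the programming model: an ispc kernel (shown as ispc source in the block comment) is compiled into an object file plus a C/C++ header and then called like an ordinary function; the file name and 'scale' kernel are our own illustration.

```cpp
// The kernel below (in the block comment) is ispc source: 'foreach' expresses
// a parallel loop that the compiler maps onto SIMD lanes.
//
//   // simple.ispc -- compile with: ispc simple.ispc -o simple.o -h simple_ispc.h
//   export void scale(uniform float vin[], uniform float vout[],
//                     uniform int count, uniform float k) {
//       foreach (i = 0 ... count) {   // iterations spread across SIMD lanes
//           vout[i] = k * vin[i];
//       }
//   }

#include "simple_ispc.h"  // header generated by the ispc compiler
#include <vector>

int main() {
    std::vector<float> in(1024, 2.0f), out(1024, 0.0f);
    // The exported ispc function is called like any C/C++ function; the loop
    // body runs vectorized across the CPU's SIMD lanes.
    ispc::scale(in.data(), out.data(), (int)in.size(), 0.5f);
    return out[0] == 1.0f ? 0 : 1;
}
```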

Presentation Material

Intel and the ISTC-VC researchers will be giving a number of peer-reviewed talks during the conference. We hope you will join us! Details below.

Monday, July 22

Color & Compositing
User-Assisted Image Compositing for Photographic Lighting

Ivaylo Boyadzhiev - Cornell University, Sylvain Paris - Adobe Research, Kavita Bala - Cornell University
9:00 AM - 10:30 AM

A new approach that assists photographers in composing images of static scenes viewed under dynamic lighting. The approach introduces a set of basis lights that combine several of the input images and provides controls to achieve effects photographers typically need; for example, accentuating the color or edges of objects.

Color & Compositing
Probabilistic Color-by-Numbers: Suggesting Pattern Colorizations Using Factor Graphs

Sharon Lin - Stanford University, Daniel Ritchie - Stanford University, Matthew Fisher - Stanford University, Pat Hanrahan - Stanford University
9:00 AM - 10:30 AM

A probabilistic factor graph model for automatically coloring patterns. The model is trained on example patterns and can be sampled to generate diverse colorings for a target pattern. Results are demonstrated on a variety of coloring tasks. In a study, participants preferred sampled colorings to other automatic baselines.

Color & Compositing
Example-Based Video Color Grading

Nicolas Bonneel - Harvard University, Kalyan Sunkavalli - Adobe Systems Incorporated, Sylvain Paris - Adobe Systems Incorporated, Hanspeter Pfister - Harvard University
9:00 AM - 10:30 AM

Color palettes of production movies are often adjusted by skilled colorists through a color-grading process that is difficult for amateurs to reproduce. This paper addresses this problem with an example-based method. It solves temporal inconsistencies with a novel differential-geometry-based scheme akin to curvature flow.

Geometry & Topology
MeshGit: Diffing and Merging Meshes for Polygonal Modeling

Jonathan Denning - Dartmouth College, Fabio Pellacini - Sapienza-Università Di Roma, Dartmouth College
9:00 AM - 10:30 AM

MeshGit is a practical algorithm for diffing and merging polygonal meshes typically used in subdivision and low-polygon modeling workflows.

Rods & Shells
Folding and Crumpling Adaptive Sheets

Rahul Narain - University of California, Berkeley, Tobias Pfaff - University of California, Berkeley, James O'Brien - University of California, Berkeley
3:45 PM - 5:35 PM

A technique for simulating plastic deformation in sheets of thin materials such as crumpled paper, dented metal, and wrinkled cloth. The simulation uses adaptive mesh refinement to dynamically align mesh edges with folds and creases, allowing efficient modeling of sharp features and avoiding bend-locking artifacts.

Tuesday, July 23

Perception
Exposing Photo Manipulation With Inconsistent Shadows

Eric Kee - Dartmouth College, James O’Brien - University of California, Berkeley, Hany Farid - Dartmouth College
9:00 AM - 10:30 AM

A forensic technique for determining if cast and attached shadows in an image are physically consistent. When inconsistencies are detected, the technique provides objective evidence of photo tampering.

Perception
Gloss Perception in Painterly and Cartoon Rendering

Adrien Bousseau - INRIA, James P. O'Shea - University of California, Berkeley, Frédo Durand - Massachusetts Institute of Technology CSAIL, Ravi Ramamoorthi - University of California, Berkeley, Maneesh Agrawala - University of California, Berkeley
9:00 AM - 10:30 AM

This first study of material perception in stylized images (specifically painting and cartoon) uses non-photorealistic rendering algorithms to evaluate how such stylization alters the perception of gloss. The study estimates the function that maps realistic gloss parameters to their perception in a stylized rendering.

Perception
Understanding the Role of Phase Function in Translucent Appearance

Ioannis Gkioulekas - Harvard School of Engineering and Applied Sciences, Bei Xiao - Massachusetts Institute of Technology, Shuang Zhao - Cornell University, Edward H. Adelson - Massachusetts Institute of Technology, Todd Zickler - Harvard School of Engineering and Applied Sciences, Kavita Bala - Cornell University
9:00 AM - 10:30 AM

This generalization of scattering phase-function models demonstrates an expanded translucent appearance space and discovers perceptually meaningful translucency controls by analyzing thousands of images with computation and psychophysics.

Fluid Grids & Meshes
A New Grid Structure for Domain Extension

Bo Zhu - Stanford University, Wenlong Lu - Stanford University, Matthew Cong - Stanford University, Byungmoon Kim - Adobe Systems Incorporated, Ronald Fedkiw - Stanford University
10:45 AM - 12:15 PM

An efficient grid structure that extends a uniform grid to create a significantly larger far-field grid and allows simulation of significantly larger domains than a uniform grid, thus supporting capture of far-field boundary conditions while maintaining the same resolution in regions of interest.

Fluid Grids & Meshes
Simulating Liquids and Solid-Liquid Interactions With Lagrangian Meshes

Pascal Clausen - University of California, Berkeley, Martin Wicke - University of California, Berkeley, Jonathan R. Shewchuk - University of California, Berkeley, James F. O'Brien - University of California, Berkeley
10:45 AM - 12:15 PM

This Lagrangian finite-element method simulates liquids and solids in a unified framework. Local mesh improvement operations maintain a high-quality tetrahedral discretization as the mesh is advected by fluid flow.

Image-Based Reconstruction
Structure-Aware Hair Capture

Linjie Luo - Princeton University, Hao Li - University of Southern California, Industrial Light & Magic, Szymon Rusinkiewicz - Princeton University
2:00 PM - 3:30 PM

A system that reconstructs coherent and plausible wisps, with awareness of the underlying hair structures, from a set of images of a complex hairstyle. The reconstructed wisps can synthesize hair strands that are plausible for hair simulation and animation.

Shape Analysis
Learning Part-Based Templates From Large Collections of 3D Shapes

Vladimir Kim - Princeton University, Wilmot Li - Adobe Systems Incorporated, Niloy Mitra - University College London, Siddhartha Chaudhuri - Princeton University, Stephen DiVerdi - Adobe Systems Incorporated, Google Inc., Thomas Funkhouser - Princeton University
2:00 PM - 3:30 PM

An analysis framework to derive structure from large, unorganized, diverse collections of 3D shapes. The automatic algorithm starts with an initial template model and jointly optimizes for part segmentation, point-to-point surface correspondence, and a compact deformation model to best explain the input model collection.

Data-Driven Animation
Near-Exhaustive Precomputation of Secondary Cloth Effects

Doyub Kim - Carnegie Mellon University, Woojong Koh - University of California, Berkeley, Rahul Narain - University of California, Berkeley, Kayvon Fatahalian - Carnegie Mellon University, Adrien Treuille - Carnegie Mellon University, James O'Brien - University of California, Berkeley
3:45 PM - 5:35 PM

Using several thousand hours of computation to perform a massive exploration of the space of secondary clothing effects on a character animated through a large motion graph, this method successfully samples the complex dynamical space to low visual error and compresses the resulting dataset to enable real-time animation.

Interactive Authoring of Simulation-Ready Plants
Yili Zhao, Jernej Barbič - University of Southern California
3:45 PM

A system for converting botanical triangle meshes into a format suitable for simulations using domain decomposition and model reduction. The method scales to complex realistic geometry of adult trees. A simulator supports large-deformation dynamics, fracture with pre-specified patterns, and inverse kinematics for plant posing.

Data-Driven Animation
Flow Reconstruction for Data-Driven Traffic Animation

David Wilkie - University of North Carolina at Chapel Hill, Jason Sewall - Intel Corporation, Ming Lin - University of North Carolina at Chapel Hill
3:45 PM - 5:35 PM

A novel algorithm to reconstruct and display continuous virtual traffic flows based on discrete spatio-temporal traffic-sensor data for a dynamic virtual environment.

Design & Authoring
Parsing Sewing Patterns Into 3D Garments

Floraine Berthouzoz - University of California, Berkeley, Akash Garg - Columbia University, Danny Kaufman - Columbia University, Eitan Grinspun - Columbia University, Maneesh Agrawala - University of California, Berkeley
3:45 PM - 5:35 PM

Techniques for automatically parsing existing sewing patterns and converting them into 3D garment models, and two applications that take advantage of the collection of parsed sewing patterns and allow users to explore the space of garment designs.

Video & Warping
Phase-Based Video Motion Processing

Neal Wadhwa - Massachusetts Institute of Technology CSAIL, Michael Rubinstein - Massachusetts Institute of Technology CSAIL, Frédo Durand - Massachusetts Institute of Technology CSAIL, William T. Freeman - Massachusetts Institute of Technology CSAIL
3:45 PM - 5:35 PM

An approach for processing motion in videos based on analyzing local phase variations in complex-valued steerable pyramids, which significantly improves motion-magnification results over the authors' previous Eulerian method and introduces new capabilities.

Wednesday, July 24

Global Illumination
Gradient-Domain Metropolis Light Transport

Jaakko Lehtinen - NVIDIA Research, Tero Karras - NVIDIA Research, Samuli Laine - NVIDIA Research, Miika Aittala - Aalto University, NVIDIA Research, Frédo Durand - Massachusetts Institute of Technology, Timo Aila - NVIDIA Research
9:00 AM - 10:30 AM

This novel Metropolis rendering algorithm first computes image gradients along with a low-fidelity approximation of the image, and then reconstructs the final image by solving a Poisson equation. As an extension of path-space Metropolis light transport, the algorithm is well suited for difficult transport scenarios.
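
To illustrate the reconstruction step, here is a toy screened-Poisson solver: given estimated image gradients and a low-fidelity base image, plain Jacobi iterations recover the final image. The paper solves the same kind of problem, but with a solver engineered for quality and speed; the names and the weighting parameter below are our illustrative choices.

```cpp
// Toy screened-Poisson reconstruction: find image I minimizing
//   lambda * |I - base|^2 + |grad(I) - g|^2
// via Jacobi iterations. Images are w*h, row-major; gx(x,y) ~ I(x+1,y)-I(x,y).
#include <algorithm>
#include <vector>

std::vector<float> poissonReconstruct(const std::vector<float>& base,
                                      const std::vector<float>& gx,
                                      const std::vector<float>& gy,
                                      int w, int h,
                                      float lambda = 0.2f, int iters = 200) {
    std::vector<float> img = base, next = base;
    auto at = [&](const std::vector<float>& a, int x, int y) {
        x = std::max(0, std::min(w - 1, x));       // clamp at image borders
        y = std::max(0, std::min(h - 1, y));
        return a[y * w + x];
    };
    for (int it = 0; it < iters; ++it) {
        for (int y = 0; y < h; ++y) {
            for (int x = 0; x < w; ++x) {
                // Discrete divergence of the target gradient field.
                float div = at(gx, x, y) - at(gx, x - 1, y)
                          + at(gy, x, y) - at(gy, x, y - 1);
                float nbrs = at(img, x + 1, y) + at(img, x - 1, y)
                           + at(img, x, y + 1) + at(img, x, y - 1);
                // Jacobi update from the normal equations: (lambda + 4) I = ...
                next[y * w + x] =
                    (lambda * base[y * w + x] + nbrs - div) / (lambda + 4.0f);
            }
        }
        img.swap(next);
    }
    return img;
}
```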

Global Illumination
Axis-Aligned Filtering for Interactive Physically Based Diffuse Indirect Lighting

Soham Uday Mehta - University of California, Berkeley, Brandon Wang - University of California, Berkeley, Ravi Ramamoorthi - University of California, Berkeley, Frédo Durand - Massachusetts Institute of Technology
9:00 AM - 10:30 AM

This new algorithm for interactive rendering of physically based global illumination, based on a novel frequency analysis of indirect lighting, combines adaptive sampling by Monte Carlo path tracing with real-time reconstruction of the resulting noisy images. Analysis assumes diffuse indirect lighting, with general receiver BRDF.

Advanced Rendering
Asynchronous Adaptive Anti-Aliasing Using Shared Memory

Rasmus Barringer - Lund University, Tomas Akenine-Möller - Lund University and Intel Corporation
10:45 AM - 12:15 PM

Edge aliasing continues to be one of the most prominent problems in real-time graphics (for example, in games). This paper presents a novel algorithm that uses memory shared between the GPU and the CPU so that these two units can work in concert to solve the edge-aliasing problem rapidly.

Advanced Rendering
5D Covariance Tracing for Efficient Depth of Field and Motion Blur

Laurent Belcour - Grenoble Université, Cyril Soler - INRIA Rhône-Alpes, Kartic Subr - University College London, Nicolas Holzschuch - INRIA Rhône-Alpes, Frédo Durand - Massachusetts Institute of Technology CSAIL
10:45 AM - 12:15 PM

Covariance tracing is a generic solution to analyze the frequency content of the temporal local light field along light paths. In this method, this information is used to determine sampling rates and reconstruction filters for efficient image reconstruction of costly effects such as defocus and motion blur.

Materials
OpenSurfaces: A Richly Annotated Catalog of Surface Appearance

Sean Bell - Cornell University, Paul Upchurch - Cornell University, Noah Snavely - Cornell University, Kavita Bala - Cornell University
2:00 PM - 3:30 PM

OpenSurfaces is a large database of thousands of annotated surfaces created from real-world consumer photographs. Its annotation pipeline draws on crowdsourcing to segment surfaces from photos, and then annotates them with rich surface-appearance properties, including material, texture, and contextual information.

Surface Reconstruction
Dense Scene Reconstruction With Points of Interest

Qian-Yi Zhou - Stanford University, Vladlen Koltun - Stanford University
2:00 PM - 3:30 PM

An approach to detailed reconstruction of complex real-world scenes with a hand-held range camera. The user moves the camera through the environment. A registration and integration pipeline produces a detailed scene model. The results demonstrate that detailed reconstructions of complex scenes can be obtained with a commodity camera.

Artistic Rendering & Stylization
RealBrush: Painting with Examples of Physical Media

Jingwan Lu - Princeton University, Connelly Barnes - Adobe Systems Incorporated, Stephen DiVerdi - Google Inc., Adobe Systems Incorporated, Adam Finkelstein - Princeton University
3:45 PM - 5:35 PM

Conventional digital painting systems rely on procedural rules and physical simulation to render paint strokes. This paper presents an interactive, data-driven painting system that uses scanned images of real natural media to synthesize both new strokes and complex stroke interactions, obviating the need for physical simulation.

Sounds & Solids
Wave-Based Sound Propagation in Large Open Scenes Using an Equivalent-Source Formulation

Ravish Mehra - University of North Carolina at Chapel Hill, Nikunj Raghuvanshi - Microsoft Research, Lakulish Antani - University of North Carolina at Chapel Hill, Anish Chandak - University of North Carolina at Chapel Hill, Sean Curtis - University of North Carolina at Chapel Hill, Dinesh Manocha - University of North Carolina at Chapel Hill
3:45 PM - 5:35 PM

A novel technique that accurately models realistic acoustic effects such as diffraction, scattering, focusing, and echoes in large, open scenes at real-time rates.

Sounds & Solids
Example-Guided Physically Based Modal Sound Synthesis

Zhimin Ren - University of North Carolina at Chapel Hill, Hengchin Yeh - University of North Carolina at Chapel Hill, Ming C. Lin - University of North Carolina at Chapel Hill
3:45 PM - 5:35 PM

A physically based sound synthesis framework that takes one example recording and automatically determines the material parameters for modal synthesis, capturing the inherent material quality of the recording. Automatically adding sound effects to virtual environment applications is simplified!

Thursday, July 25

Implicit Skinning: Real-Time Skin Deformation With Contact Modeling
Rodolphe Vaillant - Université de Toulouse, Loïc Barthe - Université de Toulouse, Gael Guennebaud - INRIA, Marie-Paule Cani - Grenoble Universités, INRIA Grenoble, Brian Wyvill - University of Bath, Damien Rohmer - École supérieure de chimie physique électronique de Lyon, INRIA, Olivier Gourmel - Université de Toulouse, Mathias Paulin - Université de Toulouse
9:00 AM

A geometric skinning method handling elbow collapse and skin contact effects in real time. Starting from a geometric skinning, the method exploits the advanced composition ability of volumetric representations to adjust and control the skin deformation at bone joints while naturally handling contact and preventing any loss of detail.

Precomputed Rendering
Interactive Albedo Editing in Path Traced Volumetric Materials

Milos Hasan - Autodesk Inc., Ravi Ramamoorthi - University of California, Berkeley
10:45 AM - 12:15 PM

This editing algorithm interactively sets the local (single-scattering) albedo coefficients of a dense volume and produces an immediate update of the emergent appearance in the image.

Precomputed Rendering
Modular Flux Transfer: Efficient Rendering of High-Resolution Volumes With Repeated Structures

Shuang Zhao - Cornell University, Milos Hasan - Autodesk Inc., Ravi Ramamoorthi - University of California, Berkeley, Kavita Bala - Cornell University
10:45 AM - 12:15 PM

A precomputation-based approach to accelerate renderings of very complex and highly scattering materials built from a small set of exemplars. The algorithm separates low-order and high-order scatterings and approximates the latter, which is usually smooth but expensive to compute, using a modular flux-transfer framework.

Hardware Rendering
A Sort-Based Deferred Shading Architecture for Decoupled Sampling

Petrik Clarberg - Intel Corporation, Robert Toth - Intel Corporation, Jacob Munkberg - Intel Corporation
2:00 PM - 3:30 PM

This paper presents a novel hardware architecture for efficient stochastic rendering and shading with decoupled sampling of shading and visibility in future GPUs. The architecture reduces off-chip memory bandwidth to less than 50% of previous solutions, while avoiding overdraw and rasterizing the scene only once.

Laplacians, Light Field & Layouts
Synthesis of Tiled Patterns Using Factor Graphs

Yi-Ting Yeh - Stanford University, Katherine Breeden - Stanford University, Lingfeng Yang - Stanford University, Matthew Fisher - Stanford University, Pat Hanrahan - Stanford University
2:00 PM - 3:30 PM

A method for synthesizing tilings that are similar in appearance to a set of example tilings. The algorithm, BLOCKSS, can efficiently generate tilings with hard and soft constraints.

Appearance Fabrication
Fabricating BRDFs at High Spatial Resolution Using Wave Optics

Anat Levin - The Weizmann Institute of Science, Daniel Glazner - The Weizmann Institute of Science, Ying Xiong - Harvard University, Frédo Durand - Massachusetts Institute of Technology CSAIL, William Freeman - Massachusetts Institute of Technology CSAIL, Wojciech Matusik - Massachusetts Institute of Technology CSAIL, Todd Zickler - Harvard University
3:45 PM - 5:15 PM

Fabrication of surfaces with spatially varying BRDFs at a high spatial resolution of 220 dpi. Existing geometric optics models fail at this small scale, so this analysis uses wave optics instead. Based on this analysis, the paper introduces surface designs that can be realized using current photolithographic techniques.

Appearance Fabrication
Bi-Scale Appearance Fabrication

Yanxiang Lan - Tsinghua University, Yue Dong - Microsoft Research Asia, Fabio Pellacini - Sapienza Università Di Roma, Dartmouth College, Xin Tong - Microsoft Research Asia
3:45 PM - 5:15 PM

A system for fabricating surfaces with desired spatially varying reflectance, including anisotropic ones, and local shading frames.

Appearance Fabrication
Fabricating Translucent Materials Using Continuous Pigment Mixtures

Marios Papas - ETH Zürich, Disney Research Zürich, Christian Regg - Disney Research Zürich, Wojciech Jarosz - Disney Research Zürich, Bernd Bickel - Disney Research Zürich, Steve Marschner - Cornell University, Philip Jackson - Walt Disney Imagineering, Wojciech Matusik - Massachusetts Institute of Technology CSAIL, Markus Gross - ETH Zürich, Disney Research Zürich
3:45 PM - 5:15 PM

A method for practical physical reproduction and design of homogeneous materials with desired subsurface scattering.

Booth Demos

Join us on the Expo show floor at Booth 201, right near the front door, to see the latest Intel tools and technologies as well as software and hardware from our partners.

Studio Lounge Demos

Come upstairs to Room 201B to see additional demos as well as live recordings from the Intel Studio Lounge, broadcast via www.waskul.tv.


Graphics... What's the connection to software opportunities at Intel?

As software applications continue to become more visual in nature, we at Intel think it's important to deliver compelling graphics solutions. The graphics industry consistently delivers content that exploits cutting-edge technology advances and best showcases visual computing and affiliated technologies. We're very interested in that. And we think you'll find a career at Intel just as interesting. Explore your options; you'll be glad you did.

Ready to solve some of the most complex software challenges? Our developers and software engineers work across multiple operating systems and a broad range of platforms to enable cutting edge features and functions for everything from smartphones, tablets and ultrabooks to smart TV, cloud computing and other top-secret new products we have up our sleeve.

Find out more about Intel Careers in Software, join our Software Talent Network or click on any of the featured positions below to review and apply directly.

Featured Graphics Software Career Opportunities

Senior Software Debug Engineer (#705863) Hillsboro, OR
Architecture Validation Engineer (#713527) Santa Clara, CA; Hillsboro, OR
Senior Graphics Software Engineer (#711612) Folsom, CA
Graphics Hardware Engineer (#711818) Folsom, CA; Santa Clara, CA; Hillsboro, OR
Senior Staff Media Architect (#712785) Santa Clara, CA; Folsom, CA
Senior Software Engineer (#706502) Santa Clara, CA
Systems Engineer (#713169) Santa Clara, CA

More into hardware than software? Check out our hardware job opportunities at our Jobs At Intel website.

If you are at SIGGRAPH, stop by the Intel booth to join our Software Talent Network! Meet with our recruiters and hiring managers and get updates on current job opportunities.

Terms and conditions apply. Details available during SIGGRAPH 2013 in the Intel booth 201 in Exhibit Hall C.