Guide to Porting an OpenGL* ES 2.0 Application from iOS* to Windows* 8

Downloads

Download Guide to Porting an OpenGL* ES 2.0 Application from iOS* to Windows* 8 [PDF 697KB]
Download ios-to-windows-8-sample-app-release.zip [ZIP 131KB]

Introduction

iOS continues to be a popular platform for application developers, and many iOS applications use OpenGL ES to handle their 3D graphics chores. OpenGL ES, or OpenGL for Embedded Systems, is a subset of the OpenGL 3D graphics API designed for embedded devices such as mobile phones. OpenGL is also available on Windows 8. But just how easy is it to move an OpenGL ES application, written in Objective-C* for the iOS platform, to the ever-popular Windows 8, where the dominant language for implementing native applications is C#?

This document walks through a simple OpenGL ES 2.0 application and discusses the ins and outs of porting an app running on iOS to the Windows 8 desktop.

The Demonstration Application

To show the basic structure and components a typical OpenGL ES application utilizes, we provide a simple application that uses OpenGL ES 2.0 to draw a three-dimensional cube with a texture image mapped onto each surface. The demo app functionality also incorporates a simple single source lighting model for the cube.


Figure 1. iOS* version of simple OpenGL* ES application

Users can manipulate the cube using common gestures:

  • Pinch to zoom in on the cube
  • Stretch to zoom out of the cube
  • Use a single finger (or mouse) to manipulate the cube using a virtual track ball.[1]

We will use this application to highlight the differences between working with OpenGL ES 2.0 on iOS vs. Windows 8.

Demonstrated OpenGL ES concepts

The application demonstrates the following OpenGL features:

  • WPF and OpenGL interoperability (Windows 8 version)
    • Creating an OpenGL context inside a WPF application
    • Rendering to the WPF-provided window surface
  • How to manage the OpenGL viewport and projection matrix inside WPF
  • Geometry definition and vertex specification
    • How to prepare vertex data for rendering, including vertices, surface normals, and texture coordinates
    • Setting up vertex parameters for rendering
  • Working with the programmable pipeline
    • Compiling and linking shader source into shader program
    • Setting up shader input attributes and uniforms for rendering
    • Working with textures
  • Basic ambient and diffuse components from the ADS light model
  • Supporting touch manipulation
    • Pinch object scaling
    • Manipulating the object’s rotation using an arc ball implementation

Development Environments

The iOS application described in this document was developed using the standard iOS development environment from Apple, Xcode*. The application was written entirely in Objective-C and uses the native iOS OpenGL implementation and supporting frameworks included with the iOS SDK.

Our Windows 8 development was done using Visual Studio* Express 2012 for desktop apps. The application was written in C# using Windows Presentation Foundation and the OpenTK library (http://www.opentk.com/). The OpenTK toolkit wraps the OpenGL, OpenCL[2]*, and OpenAL APIs for the C# language, providing a convenient way to use them from .NET and from applications written with WPF.

OpenGL ES

Our OpenGL ES application consists of the following basic steps:

  1. Context and window initialization
  2. Viewport setup
  3. Setting up vertex and fragment shaders
  4. Creating geometry buffers
  5. Draw call

Initializing the OpenGL window and context

iOS

The portion responsible for window handling in iOS is provided by the GLK framework. The view class responsible for window presentation is GLKView, and its backend operations are implemented by extending GLKViewController.

Following the typical iOS Model-View-Controller pattern, an instance of GLKView is defined on a single window inside a storyboard and supported by the ViewController class, which extends GLKViewController. Our controller, therefore, must implement the following:

ViewController.m:

#import "ViewController.h"
#import "Cube.h"


@interface ViewController () { 
...
}
@property (strong, nonatomic) EAGLContext *context;


- (void)setupGL;
...
@end


@implementation ViewController


- (void)viewDidLoad
{
   [super viewDidLoad];
   
   self.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
   if (!self.context) {
       NSLog(@"Failed to create ES context");
   }
   
   GLKView *view = (GLKView *)self.view;
   view.context = self.context;
   view.drawableDepthFormat = GLKViewDrawableDepthFormat24;
   
   [self setupGL];
}


- (void)setupGL 
{
   [EAGLContext setCurrentContext:self.context];
   _cube = [[Cube alloc] init];
   [_cube initialize];
   
   glEnable(GL_DEPTH_TEST);
   self.preferredFramesPerSecond = 60;
...
}
...
@end

After the runtime finishes loading the view, the viewDidLoad function is called, which we re-implemented as part of our ViewController. Here the OpenGL context is initialized by instantiating an EAGLContext object and selecting the desired OpenGL API version, in this case kEAGLRenderingAPIOpenGLES2 for OpenGL ES 2.0. We also created an additional convenience function, setupGL, which is not part of the original GLKViewController API, to contain the scene initialization code.

Finally, in iOS it is important to explicitly set the desired frame rate for the application. By default the window redraw is triggered by an event, but setting this property forces a redraw at a constant frame rate.

Windows 8

To properly initialize an OpenGL context on Windows 8, which we can then use for rendering inside a WPF window control, we need to register a Window_Loaded event handler and use it to get the window’s handle.

//Window loaded handler
       private void Window_Loaded(object sender, RoutedEventArgs e)
       {
           //Get the window's HWND
           HwndSource hwndSource = PresentationSource.FromVisual(this) as HwndSource;
           HwndTarget hwndTarget = hwndSource.CompositionTarget;


           //Turn off DX rendering for WPF 
           hwndTarget.RenderMode = RenderMode.SoftwareOnly;


           //Get the window information object
           m_windowInfo = Utilities.CreateWindowsWindowInfo(hwndSource.Handle);
           //Create a new OpenGL context and make it current
           m_context = new OpenTK.Graphics.GraphicsContext(OpenTK.Graphics.GraphicsMode.Default, m_windowInfo);
           m_context.MakeCurrent(m_windowInfo);
           //Load all OpenGL entry points
           (m_context as OpenTK.Graphics.IGraphicsContextInternal).LoadAll();
		
	...
       }

Since we don’t want to use DirectX* for WPF hardware-accelerated rendering, we need to turn it off; after all, we want to use OpenGL to render the application’s contents.

Next, we create a Window Information object and use that to create a GraphicsContext instance. The last thing we need to do is make sure to load all OpenGL entry points; otherwise, we will not be able to call any of the newer OpenGL functions, for example, to create shader programs.

Application porting guidelines

While the initialization portions of the iOS and WPF applications use platform-dependent functionality, the steps themselves are very similar. Both Window_Loaded and viewDidLoad are event handlers called when the respective window system finishes loading the window we will use for our OpenGL scene. Within the event handlers, an OpenGL context is created and stored for later use. Note that on Windows 8 the OpenTK library provides convenience functions equivalent to those iOS supplies natively.

Viewport handling

iOS

Our iOS example application operates in full-screen mode; therefore, the viewport is only resized by a change in device orientation. This is handled nicely by the iOS framework: GLKView, derived from UIView, sets the new OpenGL viewport automatically. However, developers can still implement functions such as didRotateFromInterfaceOrientation if it is necessary to adjust any internal application state, such as calculations related to perspective.

Windows 8

Unlike on a device running iOS, on Windows 8 the user can change the window's size, so we need to handle this accordingly; otherwise, our scene will just get clipped.

        //Size changed handler
       protected override void OnRenderSizeChanged(SizeChangedInfo sizeInfo)
       {
           base.OnRenderSizeChanged(sizeInfo);
           //Resize the virtual arc ball 
           m_arcBall.setSize(sizeInfo.NewSize.Width, sizeInfo.NewSize.Height);


           //Check if we have an active OpenGL context
           if (m_context != null)
           {
               //Produce a bounding rectangle
               Rect bounds = new Rect(new Point(0, 0), new Point(this.ActualWidth, this.ActualHeight));
               //Set the OpenGL viewport
                GL.Viewport((int)(bounds.X + 0.5), (int)(bounds.Y + 0.5), (int)(bounds.Width + 0.5), (int)(bounds.Height + 0.5));
               //Update projection
               UpdateProjection();
           }
       }

After calling the base implementation, we first have our arc ball helper class adjust its size. We then check that we have a valid OpenGL context and use the window's new size to produce a bounding rectangle.

When looking at the call that sets the new viewport, notice that, instead of having the familiar glFunctionName format, all OpenGL calls made through OpenTK take the form GL.FunctionName. Finally, we update our scene's projection using a convenience function from OpenTK's 4×4 matrix implementation that calculates a view frustum:

//A helper used to update the scene's projection matrix
       private void UpdateProjection()
       {
            //Create a new projection matrix using the window's size.
           m_projection = OpenTK.Matrix4.CreatePerspectiveFieldOfView((float)Math.PI / 4, (float)(this.Width) / (float)(this.Height), 0.1f, 100.0f);
       }

Application porting guidelines

Because it is possible to resize windows on a Windows-based platform, you must account for changes in the viewport and projection.
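
One small detail worth calling out: WPF reports layout sizes as doubles, while GL.Viewport takes integers. The (int)(value + 0.5) casts in OnRenderSizeChanged implement round-to-nearest for the non-negative sizes WPF produces, sketched here in C:

```c
/* Round a non-negative layout size to the nearest whole pixel,
 * matching the (int)(v + 0.5) pattern used for GL.Viewport. */
static int round_half_up(double v)
{
    return (int)(v + 0.5);
}
```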

Working with the programmable pipeline

To simplify things a bit, all shader handling code was moved into a helper class called ShaderProgram. This means setting up a shader program is as simple as calling the class’s constructor, which takes care of abstracting all the OpenGL calls necessary to compile and link a shader program.

iOS

The iOS version of the code responsible for preparing the shader program lives inside the init method, the overridden initializer of the ShaderProgram class.

-(ShaderProgram*) init {
   
   if (!(self = [super init]))
       return nil;
   
   GLuint vertShader =0;
   GLuint fragShader = 0;
   NSString *vertShaderPathname;
   NSString *fragShaderPathname;
   
   // Create shader program.
   _programHandler = glCreateProgram();
   
   // Create and compile vertex shader.
   vertShaderPathname = [[NSBundle mainBundle] pathForResource:@"Shader" ofType:@"vsh"];
   if (![self compileShader:&vertShader type:GL_VERTEX_SHADER file:vertShaderPathname])
       @throw @"Failed to compile vertex shader";
   
   // Create and compile fragment shader.
   fragShaderPathname = [[NSBundle mainBundle] pathForResource:@"Shader" ofType:@"fsh"];
   if (![self compileShader:&fragShader type:GL_FRAGMENT_SHADER file:fragShaderPathname])
       @throw @"Failed to compile fragment shader";

   
   // Attach shader to program.
   glAttachShader(_programHandler, vertShader);
   glAttachShader(_programHandler, fragShader);
   
   // Bind attribute locations.
   // This needs to be done prior to linking.
   glBindAttribLocation(_programHandler, GLKVertexAttribPosition, "position");
   glBindAttribLocation(_programHandler, GLKVertexAttribNormal, "normal");
   glBindAttribLocation(_programHandler, GLKVertexAttribTexCoord0, "texCoord");
   
   // Link program.
   if (![self linkProgram:_programHandler]) {
       if (vertShader) glDeleteShader(vertShader);
       if (fragShader) glDeleteShader(fragShader);
       if (_programHandler) glDeleteProgram(_programHandler);

       @throw @"Failed to link program";
   }
   
   [self validateProgram:_programHandler];

   
   // Release vertex and fragment shaders.
   if (vertShader) {
       glDetachShader(_programHandler, vertShader);
       glDeleteShader(vertShader);
   }
   if (fragShader) {
       glDetachShader(_programHandler, fragShader);
       glDeleteShader(fragShader);
   }
   
   // Get uniform locations.
   _matParam.textureUniformLocation = glGetUniformLocation(_programHandler, "texture");
   _matParam.ambientColor4fvUniformLocation = glGetUniformLocation(_programHandler, "ambientColor");
   _matParam.diffuseColor4fvUniformLocation = glGetUniformLocation(_programHandler, "diffuseColor");
   _matParam.specularColor4fvUniformLocation = glGetUniformLocation(_programHandler, "specularColor");
   _matParam.emissiveColor4fvUniformLocation = glGetUniformLocation(_programHandler, "emissiveColor");
   _matParam.shininess1fUniformLocation = glGetUniformLocation(_programHandler, "shininess");
   
   _transfParam.modelMatrixUniformLocation = glGetUniformLocation(_programHandler, "modelMatrix");
   _transfParam.worldMatrixUniformLocation = glGetUniformLocation(_programHandler, "worldMatrix");
   _transfParam.viewMatrixUniformLoaction = glGetUniformLocation(_programHandler, "viewMatrix");
   _transfParam.perspectiveProjectionMatrixUniformLocation = glGetUniformLocation(_programHandler, "projMatrix");

   return self;
}

- (BOOL)compileShader:(GLuint *)shader type:(GLenum)type file:(NSString *)file {
   GLint status;
   const GLchar *source;
   
   source = (GLchar *)[[NSString stringWithContentsOfFile:file encoding:NSUTF8StringEncoding error:nil] UTF8String];
   if (!source) {
       NSLog(@"Failed to load shader");
       return NO;
   }
   
   *shader = glCreateShader(type);
   glShaderSource(*shader, 1, &source, NULL);
   glCompileShader(*shader);
   
//#if defined(DEBUG)
   GLint logLength;
   glGetShaderiv(*shader, GL_INFO_LOG_LENGTH, &logLength);
   if (logLength > 0) {
       GLchar *log = (GLchar *)malloc(logLength);
       glGetShaderInfoLog(*shader, logLength, &logLength, log);
       NSLog(@"Shader compile log:\n%s", log);
       free(log);
   }
//#endif
   
   glGetShaderiv(*shader, GL_COMPILE_STATUS, &status);
   if (status == 0) {
       glDeleteShader(*shader);
       return NO;
   }
   
   return YES;
}

- (BOOL)linkProgram:(GLuint)prog
{
   GLint status;
   glLinkProgram(prog);
   
   GLint logLength;
   glGetProgramiv(prog, GL_INFO_LOG_LENGTH, &logLength);
   if (logLength > 0) {
       GLchar *log = (GLchar *)malloc(logLength);
       glGetProgramInfoLog(prog, logLength, &logLength, log);
       NSLog(@"Program link log:\n%s", log);
       free(log);
   }
   
   glGetProgramiv(prog, GL_LINK_STATUS, &status);
   if (status == 0) {
       return NO;
   }
   
   return YES;
}

- (BOOL)validateProgram:(GLuint)prog
{
   GLint logLength, status;
   
   glValidateProgram(prog);
   glGetProgramiv(prog, GL_INFO_LOG_LENGTH, &logLength);
   if (logLength > 0) {
       GLchar *log = (GLchar *)malloc(logLength);
       glGetProgramInfoLog(prog, logLength, &logLength, log);
       NSLog(@"Program validate log:\n%s", log);
       free(log);
   }
   
   glGetProgramiv(prog, GL_VALIDATE_STATUS, &status);
   if (status == 0) {
       return NO;
   }
   
   return YES;
}

In this implementation the object is bound to a specific shader source and exposes the input parameters used in the program. The vertex and fragment shader sources are compiled from the Shader.vsh and Shader.fsh files attached to the project. Before the program is linked, we assign predefined attribute indexes for convenience; the GLK framework provides the handy constants GLKVertexAttribPosition, GLKVertexAttribNormal, and GLKVertexAttribTexCoord0. Once the program is successfully linked, we extract all uniform locations and expose them via two structures: ShaderMaterialParam and ShaderSpaceTransformationsParam.

Windows 8

Setting up shaders is done by calling the class constructor, passing the shader’s source code.

   public ShaderProgram(string vertexSource, string fragmentSource)
   {
       //A variable used to store shader program state check results
       int result;
        ...
        //Create shaders
       m_vertexShader = GL.CreateShader(ShaderType.VertexShader);
       m_fragmentShader = GL.CreateShader(ShaderType.FragmentShader);
       //Upload vertex shader code
       GL.ShaderSource(m_vertexShader, vertexSource);
       //Upload fragment shader code
       GL.ShaderSource(m_fragmentShader, fragmentSource);
       //Compile vertex shader
       GL.CompileShader(m_vertexShader);
       GL.GetShader(m_vertexShader, ShaderParameter.CompileStatus, out result);
      if (result == 0)
       {
               System.Diagnostics.Debug.WriteLine("Vertex shader compile error:");
               System.Diagnostics.Debug.WriteLine(GL.GetShaderInfoLog(m_vertexShader));
       }

       //Compile fragment shader
       GL.CompileShader(m_fragmentShader);
       GL.GetShader(m_fragmentShader, ShaderParameter.CompileStatus, out result);
       if (result == 0)
       {
            System.Diagnostics.Debug.WriteLine("Fragment shader compile error:");
           System.Diagnostics.Debug.WriteLine(GL.GetShaderInfoLog(m_fragmentShader));
       }

       //Create a shader program
       m_program = GL.CreateProgram();
       //Attach shaders
       GL.AttachShader(m_program, m_vertexShader);
       GL.AttachShader(m_program, m_fragmentShader);
       //Link shader program
       GL.LinkProgram(m_program);
       GL.GetProgram(m_program, ProgramParameter.LinkStatus, out result);
       if (result == 0)
       {
            System.Diagnostics.Debug.WriteLine("Failed to link shader program!");
            System.Diagnostics.Debug.WriteLine(GL.GetProgramInfoLog(m_program));
       }
   }

The first order of business is to create our two shader stages by calling GL.CreateShader with ShaderType.VertexShader and ShaderType.FragmentShader, respectively. With the two empty shader objects created, it is now time to upload their source code using a GL.ShaderSource call and compile them using GL.CompileShader. We also make sure to retrieve the shaders’ compilation state for debugging purposes.

Once the shader stages are compiled and in place, we can create a shader program object using a GL.CreateProgram call. Attach the shader stages using GL.AttachShader and call GL.LinkProgram to link the program. If there are no shader compile or link errors, we now have ourselves a shiny new shader program instance that we can use to render our scene.

Application porting guidelines

The procedure is essentially the same for both versions. The differences lie in resource loading and in the fact that OpenTK wraps the OpenGL C API in C# calls.

Geometry definition, vertex specification, and textures

iOS

The geometry in the example is a cube represented by the Cube object. Internally, the data is stored in a vertex buffer object as a one-dimensional array in the following format:

{{posX posY posZ normalX normalY normalZ texCoordX texCoordY}, ...}

All elements are of type GLfloat.
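
With eight GLfloats per vertex, the layout fixes the stride and the per-attribute byte offsets; those are exactly the literals 32, 12, and 24 that appear in the glVertexAttribPointer calls later in this section. A small C sketch of that arithmetic, assuming 4-byte floats:

```c
/* Interleaved layout: position(3) | normal(3) | texcoord(2), all GLfloat. */
enum { POS_COMPONENTS = 3, NRM_COMPONENTS = 3, TEX_COMPONENTS = 2 };

#define FLOATS_PER_VERTEX (POS_COMPONENTS + NRM_COMPONENTS + TEX_COMPONENTS)
/* Byte distance between consecutive vertices. */
#define VERTEX_STRIDE     (FLOATS_PER_VERTEX * sizeof(float))
/* Byte offset of the normal: skip the position. */
#define NORMAL_OFFSET     (POS_COMPONENTS * sizeof(float))
/* Byte offset of the texture coordinate: skip position and normal. */
#define TEXCOORD_OFFSET   ((POS_COMPONENTS + NRM_COMPONENTS) * sizeof(float))
```

Deriving the constants this way, rather than hard-coding them, keeps the attribute setup in sync if the vertex format ever changes.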

To speed up the rendering, we store the whole vertex definition state in the Vertex Array Object (VAO). This greatly improves rendering performance by eliminating the need for multiple binding calls for every element switch. It also reduces the vertex setup during the render call to just selecting the proper VAO. This translates to a single function call on the CPU side.

-(void)initialize {


   _shaderProgram = [[ShaderProgram alloc] init];
   
   //define  arbitrary material properties
   GLKEffectPropertyMaterial *materialProperties = [[GLKEffectPropertyMaterial alloc] init];
   materialProperties.ambientColor = GLKVector4Make(0.2f, 0.2f, 0.2f, 1.0f);
   materialProperties.diffuseColor = GLKVector4Make(0.7f, 0.7f, 0.7f, 1.0f);
   
   
   _material = [[Material alloc] initMaterialProperties: materialProperties andTextureFile:@"texture" ofType:@"png"];
   
   glGenVertexArraysOES(1, &_vertexArray);
   glBindVertexArrayOES(_vertexArray);
   
   glGenBuffers(1, &_vertexBuffer);
   glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);
   glBufferData(GL_ARRAY_BUFFER, sizeof(gCubeVertexData), gCubeVertexData, GL_STATIC_DRAW);
   
   glEnableVertexAttribArray(GLKVertexAttribPosition);
   glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 32, (GLvoid*)(0));
   glEnableVertexAttribArray(GLKVertexAttribNormal);
   glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_FALSE, 32, (GLvoid*)(12));
   glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
   glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, 32, (GLvoid*)(24));
   
   glBindVertexArrayOES(0);
   
   glUseProgram([_shaderProgram getProgramHandler]);
   
   _lightPosUniformLocation = glGetUniformLocation([_shaderProgram getProgramHandler], "lightPosition");
   _lightColorUniformLocation = glGetUniformLocation([_shaderProgram getProgramHandler], "lightColor");
   
   ShaderSpaceTransformationsParam transf = [_shaderProgram getSpaceTransformationParam];
   glUniformMatrix4fv(transf.modelMatrixUniformLocation, 1, GL_FALSE, GLKMatrix4Identity.m);
   glUniformMatrix4fv(transf.perspectiveProjectionMatrixUniformLocation, 1, GL_FALSE, GLKMatrix4Identity.m);
   glUniformMatrix4fv(transf.viewMatrixUniformLoaction, 1, GL_FALSE, GLKMatrix4Identity.m);
   glUniformMatrix4fv(transf.worldMatrixUniformLocation, 1, GL_FALSE, GLKMatrix4Identity.m);
   
   [_material bindToProgram:_shaderProgram];
   
   _initialized = 1;
   
}

Here is the OpenGL workflow:

  1. We create a vertex array object and bind it; this causes all of the following calls to be saved as a state in the VAO.
  2. We create a vertex buffer object and bind it to the context, making it current. All subsequent buffer-related operations are performed on the currently bound buffer.
  3. The vertex data is transferred from main memory into graphics RAM using the glBufferData function.
  4. We enable and set the vertex parameters, which are per-vertex chunks of data consumed by each shader invocation. For example, we enable GLKVertexAttribPosition by calling glEnableVertexAttribArray and point it at the data using glVertexAttribPointer. In our case the call looks like:
     glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 32, (GLvoid*)(0));
    

    Here is a description of the parameters:

    • Attribute location id
    • How many values of the given type are passed in this single chunk
    • The type of the value
    • Whether fixed-point data values should be normalized
    • Stride: the spacing, in bytes, between consecutive vertices in the data array. In our case the data is interleaved, meaning each vertex position is followed by the normal and texture coordinates, which this attribute should skip. By specifying the gap between consecutive vertex parameters, the API can take this into account when parsing the data.
    • The last parameter is the offset from the beginning of the buffer to the first element of this attribute. For the vertex position the offset is 0 because the position comes first. If we were passing in the normals, we would need an offset of (3 * sizeof(GLfloat)) bytes to skip past the position data.
  5. After the vertex data is properly set up, we unbind the VAO. From now on it can be reused when needed to restore the vertex state.

Textures are one of the biggest and most complex topics in OpenGL, but iOS provides a set of classes that abstract most of the required steps, such as loading the image into main memory and uploading it to graphics memory. In our implementation we load the texture from a .png file and get back an instance of the GLKTextureInfo class, which reduces the task to a single line of code.

-(Material*) initMaterialProperties:(GLKEffectPropertyMaterial*)materialProperties andTextureFile:(NSString*) texFileName ofType:(NSString*)type {
   
   if(!(self = [super init]))
       return nil;
   
   _materialProperties = materialProperties;
   
   NSError* err = nil;
   NSString *path = [[NSBundle mainBundle] pathForResource:texFileName ofType:type];


   _texture = [GLKTextureLoader textureWithContentsOfFile:path options:nil error:&err];
   if(err) {
       NSLog(@"Problem with loading texture: %@", [err localizedDescription]);
       return nil;
   }
   
   return self;
}

The texture is now ready to bind to a texture unit during rendering. We will cover that topic in the rendering call section.

Windows 8

Geometry management is just plain old OpenGL code. To keep things simple, we use separate buffers for the vertices, normals, texture coordinates, and indices.

//Generate vertex buffer objects
GL.GenBuffers(4, m_vbos);
//check if we had any errors while creating vbos
if (GL.GetError() == ErrorCode.NoError)
{
      //Set the initialized flag to true
      m_initialized = true;
      //Setup Vertex buffer
      float[] vertices = {
                          ...
                         };
      GL.BindBuffer(BufferTarget.ArrayBuffer, m_vbos[0]);
      GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(vertices.Length * sizeof(float)),
	 vertices, BufferUsageHint.StaticDraw);

      //Setup Normal buffer
      float[] normals = {
                         ...
                        };
      GL.BindBuffer(BufferTarget.ArrayBuffer, m_vbos[2]);
      GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(normals.Length * sizeof(float)), 
	normals, BufferUsageHint.StaticDraw);

      //Setup Texture coordinate buffer
      float[] texCoords = {
                           ...
                          };
       GL.BindBuffer(BufferTarget.ArrayBuffer, m_vbos[3]);
       GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(texCoords.Length * sizeof(float)),
	 texCoords, BufferUsageHint.StaticDraw);

       //Setup Index buffer
       ushort[] indices = {
                          ...
                          };

       GL.BindBuffer(BufferTarget.ElementArrayBuffer, m_vbos[1]);
       GL.BufferData(BufferTarget.ElementArrayBuffer, (IntPtr)(indices.Length * sizeof(ushort)),
	 indices, BufferUsageHint.StaticDraw);

       //Unbind buffers
       GL.BindBuffer(BufferTarget.ArrayBuffer, 0);
       GL.BindBuffer(BufferTarget.ElementArrayBuffer, 0);

First, we create new buffer objects using the GL.GenBuffers call and go through all the buffers, binding them and setting their data using GL.BindBuffer and GL.BufferData. Finally, we simply clear any buffer bindings.

We also use a vertex array object to make the draw simpler by reducing repetitive GL function calls. Here is the code we use to generate and set up a vertex array object.

//The object's vao creation method
       private void makeVao()
       {
           if (m_vao == 0)
           {
               GL.GenVertexArrays(1, out m_vao);
               if (m_vao != 0)
               {
                   GL.BindVertexArray(m_vao);
                   //Get the vbo attribute locations
                   int normalAttrLocation = m_program.GetAttributeLocation(m_program.NormalAttrName);
                   int vertexAttrLocation = m_program.GetAttributeLocation(m_program.VertexAttrName);
                   int textureCoordAttrLocation = m_program.GetAttributeLocation(m_program.TexCoordsAttrName);

                   //Bind vertex buffer
                   if (vertexAttrLocation >= 0)
                   {
                       GL.BindBuffer(BufferTarget.ArrayBuffer, m_vbos[0]);
                       GL.EnableVertexAttribArray(vertexAttrLocation);
                       GL.VertexAttribPointer(vertexAttrLocation, 3, VertexAttribPointerType.Float, false, 0, 0);
                   }
                   else
                   {
                       System.Diagnostics.Debug.WriteLine("No vertex attribute found.");
                       m_program.Release();
                       return;
                   }

                   //Bind normals buffer
                   if (normalAttrLocation >= 0)
                   {
                       GL.BindBuffer(BufferTarget.ArrayBuffer, m_vbos[2]);
                       GL.EnableVertexAttribArray(normalAttrLocation);
                       GL.VertexAttribPointer(normalAttrLocation, 3, VertexAttribPointerType.Float, false, 0, 0);
                   }
                   else
                   {
                       System.Diagnostics.Debug.WriteLine("No normals attribute found.");
                   }

                   //Bind texture coordinate buffer
                   if (textureCoordAttrLocation >= 0)
                   {
                       GL.BindBuffer(BufferTarget.ArrayBuffer, m_vbos[3]);
                       GL.EnableVertexAttribArray(textureCoordAttrLocation);
                       GL.VertexAttribPointer(textureCoordAttrLocation, 2, VertexAttribPointerType.Float, false, 0, 0);
                   }
                   else
                   {
                       System.Diagnostics.Debug.WriteLine("No texture coordinates attribute found.");
                   }

                   //Bind index buffer
                   GL.BindBuffer(BufferTarget.ElementArrayBuffer, m_vbos[1]);
                   GL.BindVertexArray(0);
               }
           }
       }

We first create a new vertex array object by calling GL.GenVertexArrays and then bind the new vertex array object using the GL.BindVertexArray function, proceeding to set up the rendering pipeline as we would without a vertex array object.

We then retrieve the shader program’s attribute locations using the wrapper’s GetAttributeLocation method. Once known, we can bind the respective buffers, enabling their attribute locations using a GL.EnableVertexAttribArray function call and setting that attribute’s data using a GL.VertexAttribPointer function call. When complete, we have a vertex array object we can use to restore our object’s rendering state with just a single OpenGL function call.

The material definition, including textures, for the Windows 8 version is encapsulated in the material class. Unlike the iOS version, the texture is loaded using the wrappers of the native OpenGL function calls, except for the Bitmap and BitmapData classes, which abstract loading the image from the hard drive into main memory.

private static int LoadTexture(string filename)
       {
           //A variable used to store the texture's id
           int id = -1;
           //Check if we have a file name
           if (filename != "")
           {
               //Generate a new texture
               id = GL.GenTexture();
               //Check if we got a valid texture id
               if (id >= 0)
               {
                   //Bind the new texture
                   GL.BindTexture(TextureTarget.Texture2D, id);

                   //Open the image file using the Bitmap class
                   Bitmap bmp = new Bitmap(filename);
                   //Get the bitmaps data
                   BitmapData data = bmp.LockBits(new Rectangle(0, 0, bmp.Width, bmp.Height), ImageLockMode.ReadOnly, 
		System.Drawing.Imaging.PixelFormat.Format32bppArgb);

                   //Upload the image data to the texture
                   GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba, data.Width,
                       data.Height, 0, OpenTK.Graphics.OpenGL.PixelFormat.Bgra, PixelType.UnsignedByte, data.Scan0);
                    //Unlock the bitmap
                   bmp.UnlockBits(data);
                   //Generate mipmaps for the texture
                   GL.GenerateMipmap(GenerateMipmapTarget.Texture2D);
               }
           }
           return id; 
       }
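
A side note on the GL.GenerateMipmap call that closes LoadTexture: it builds the complete chain of progressively halved images, and the number of levels for a texture is floor(log2(max(width, height))) + 1. The helper below is an illustrative sketch of that count, not part of the sample:

```c
/* Number of mipmap levels GenerateMipmap produces for a texture,
 * counting the base level down to the final 1x1 image. */
static int mip_levels(int width, int height)
{
    int max = width > height ? width : height;
    int levels = 1;
    while (max > 1) {
        max >>= 1;
        ++levels;
    }
    return levels;
}
```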

The draw call

iOS

The drawing procedure in iOS is divided into two main parts that originate from the GLKViewController:

  • The update function, which is called before the draw call and is used to update any scene data and for non-visual tasks.
  • The glkView:drawInRect: method, which handles the drawing.

In the iOS implementation, the update function gathers the cube position parameters generated by user interaction events and produces a world transformation matrix. This matrix is then passed to our shader program as a uniform in the subsequent draw call.

- (void)update
{
   GLKMatrix4 scaleMatrix = GLKMatrix4MakeScale(scaleFactor, scaleFactor, scaleFactor);
   _worldMatrix = GLKMatrix4MakeTranslation(0.0, 0.0, 0.0);
   GLKMatrix4 rotationMatrix = GLKMatrix4MakeWithQuaternion(arcCurrentRotation);
   _worldMatrix = GLKMatrix4Multiply(scaleMatrix, _worldMatrix);
   _worldMatrix = GLKMatrix4Multiply(_worldMatrix, rotationMatrix);
   
}
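The composition performed by update (a uniform scale combined with the arc-ball rotation; the translation here is the identity) can be sketched outside of GLKit like this. The helper names are ours, not part of the sample, and the quaternion convention assumed is (x, y, z, w):

```python
def quat_to_matrix(q):
    """3x3 rotation matrix from a unit quaternion (x, y, z, w),
    analogous to what GLKMatrix4MakeWithQuaternion produces."""
    x, y, z, w = q
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ]

def world_matrix(scale_factor, q):
    """Mirror of the update method above: uniform scale composed with
    the current arc-ball rotation (uniform scale commutes with rotation)."""
    r = quat_to_matrix(q)
    return [[scale_factor * r[i][j] for j in range(3)] for i in range(3)]
```

With the identity quaternion (0, 0, 0, 1), the result is simply the scale matrix.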

Once the update function has completed, the rendering function is called. In this implementation, we render a single object in the scene, lit by a single light source. The object itself contains the functionality to render itself using the information passed to it: four transformation matrices (model, world, view, and perspective) plus the light position and color vectors.

- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
   glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
   glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
   
   [_cube renderWithLightPosition: &_lightPos lightColor: &_lightColor modelMatrix: &_modeMatrix worldMatrix: &_worldMatrix viewMatrix: &_viewMatrix perspectiveMatrix: &_projectionMatrix];
}

-(void)renderWithLightPosition:(GLKVector3*)lightPos
           lightColor:(GLKVector3*)lightColor
          modelMatrix:(GLKMatrix4*)modelMatrix
          worldMatrix:(GLKMatrix4*)worldMatrix
           viewMatrix:(GLKMatrix4*)viewMatrix
    perspectiveMatrix:(GLKMatrix4*)perspectiveMatrix {


   glUseProgram([_shaderProgram getProgramHandler]);


   ShaderSpaceTransformationsParam transf = [_shaderProgram getSpaceTransformationParam];
   
   glBindVertexArrayOES(_vertexArray);
   [_material activateTextureForProgram:_shaderProgram andTextureUnitId:GL_TEXTURE0];
   
   glUniform3fv(_lightPosUniformLocation, 1, lightPos->v);
   glUniform3fv(_lightColorUniformLocation, 1, lightColor->v);
   
   glUniformMatrix4fv(transf.modelMatrixUniformLocation, 1, GL_FALSE, modelMatrix->m);
   glUniformMatrix4fv(transf.perspectiveProjectionMatrixUniformLocation, 1, GL_FALSE, perspectiveMatrix->m);
   glUniformMatrix4fv(transf.viewMatrixUniformLoaction, 1, GL_FALSE, viewMatrix->m);
   glUniformMatrix4fv(transf.worldMatrixUniformLocation, 1, GL_FALSE, worldMatrix->m);

   glDrawArrays(GL_TRIANGLES, 0, 36);
}

Thanks to the VAO, the draw function is quite simple. It boils down to:

  1. Bind the VAO, which has been set up with saved vertex state
  2. Bind texture to the current shader program
  3. Upload the uniform data
  4. Call the draw function. In this case, the primitives drawn are triangles (GL_TRIANGLES). To construct those triangles, we pass information for each of the 36 vertices as a single collection of positions, normals, and texture coordinates passed in the buffer object for each vertex.

Windows 8

To render our scene, we simply call the render method on every object present, providing it with the current projection matrix, light location, and light intensity vectors. The cube object’s rendering method looks like this.

 //The object's render method
       public override void Render(OpenTK.Matrix4 projection, OpenTK.Vector3 lightPosition, OpenTK.Vector3 lightIntensity)
       {
           base.Render(projection, lightPosition, lightIntensity);
           //Check if the object has been initialized
           if (m_initialized) 
           {
               //Activate the shader program
               m_program.Bind();

               makeVao();
               if (m_vao != 0)
               {
                   GL.BindVertexArray(m_vao);
                   //Set uniforms
                   m_program.SetUniformMatrix4(m_program.ViewMatrixAttrName, false, m_viewMatrix);
                   m_program.SetUniformMatrix4(m_program.ProjectionMatrixAttrName, false, projection);
                   m_program.SetUniformMatrix4(m_program.NormalMatrixAttrName, false, m_normalMatrix);
                   m_program.SetUniformVector3(m_program.LightPositionAttrName, lightPosition);
                   m_program.SetUniformVector3(m_program.LightIntensityAttrName, lightIntensity);

                   //check if we have a valid material instance and set it up
                   if (m_material != null) m_material.Setup();

                   //Draw elements
                   GL.DrawElements(BeginMode.Triangles, m_indexCount, DrawElementsType.UnsignedShort, 0);

                   GL.BindVertexArray(0);
               }
               //Deactivate the shader program
               m_program.Release();
           }
           else
           {
               System.Diagnostics.Debug.WriteLine("Cube not initialized");
           }
       }

We start by checking whether the object has been initialized (its vertex buffers set up, and so on). If so, we use the shader program wrapper object to bind the object’s shader program instance. We then ensure the object’s vertex array object is in place by calling the makeVao helper method, verify the vertex array object’s handle, and bind it.

Next we set the shader program’s uniforms using the SetUniform* methods and set up the material instance. The material object is responsible for setting up the object’s texture and any other uniform values needed to define a specific material.

The last thing we need to do is make the OpenGL draw call to have the geometry rendered; we do this with a simple call to GL.DrawElements. Finally, we need to make sure to unbind the vertex array object and release the shader program.
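For reference, the m_indexCount of 36 passed to GL.DrawElements corresponds to a cube’s 6 faces × 2 triangles × 3 vertices. A hypothetical index layout (assuming each face’s 4 unique vertices are stored consecutively in the vertex buffer; the sample’s actual buffer layout may differ) looks like this:

```python
def cube_indices():
    """Index buffer for a cube: 6 faces, 2 triangles per face,
    assuming each face's 4 vertices occupy slots 4*face .. 4*face+3."""
    idx = []
    for face in range(6):
        base = face * 4
        # Two triangles sharing the face's diagonal: (0,1,2) and (0,2,3)
        idx += [base, base + 1, base + 2, base, base + 2, base + 3]
    return idx
```

This yields 36 indices over 24 vertices (each corner is duplicated per face so that normals and texture coordinates can differ).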

Touch Input

iOS

In the iOS version we have two possible ways for the user to interact with the application: pinch to adjust the zoom of the cube and drag to rotate it.

The pinch is implemented using the pinch gesture recognizer. The gesture recognizer is placed on top of the view in the xib file, and the callback function is implemented in the ViewController and bound to the recognizer’s selector action.

- (IBAction)onPinch:(UIPinchGestureRecognizer *)sender {
   if(scalePreviousIteration == 0) {
       scalePreviousIteration = sender.scale;
   } else {
       float dScale = sender.scale - scalePreviousIteration;
       scaleFactor += dScale;
       if(scaleFactor > __SCALE_MAX_)
           scaleFactor = __SCALE_MAX_;
       else if(scaleFactor < __SCALE_MIN_)
           scaleFactor = __SCALE_MIN_;
       
       scalePreviousIteration = sender.scale;
   }
}

This adjusts the scale and places it in the scaleFactor variable, which is later read by the update function.
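The incremental clamping logic above can be distilled into a small platform-neutral sketch. The minimum and maximum values here are assumptions; the source clamps against the __SCALE_MIN_ and __SCALE_MAX_ constants, whose values are not shown:

```python
SCALE_MIN, SCALE_MAX = 0.5, 3.0  # assumed limits; the sample's constants are not shown

def apply_pinch(scale_factor, prev_gesture_scale, gesture_scale):
    """Sketch of the onPinch handler: accumulate the delta of the
    recognizer's reported scale and clamp the result. Returns the
    updated (scale_factor, prev_gesture_scale) pair."""
    if prev_gesture_scale == 0:
        # First iteration: just record the baseline, as in the handler
        return scale_factor, gesture_scale
    scale_factor += gesture_scale - prev_gesture_scale
    scale_factor = max(SCALE_MIN, min(SCALE_MAX, scale_factor))
    return scale_factor, gesture_scale
```

Tracking the delta rather than the absolute recognizer scale keeps the zoom continuous across successive pinch gestures.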

The second interaction method, which is drag to rotate, is handled by the ViewController’s built-in functions.

-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
   UITouch *touch = [[touches allObjects] objectAtIndex:0];
   currentPoint = [self maptoSphereCoords:[touch locationInView:self.view]];
   scalePreviousIteration =0.0f;
}
-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
   UITouch *touch = [[touches allObjects] objectAtIndex:0];
   GLKVector3 touchPointMapped = [self maptoSphereCoords:[touch locationInView:self.view]];
   GLKQuaternion rotationQuaternion = [self calculateRotationQuaternionWithOrigin:currentPoint andDestination:touchPointMapped];
   
   arcCurrentRotation = GLKQuaternionMultiply(rotationQuaternion, arcCurrentRotation);
   currentPoint = touchPointMapped;
}

The touchesBegan function is always called first and is used to initialize all touch-related data. The second function, touchesMoved, is called on every finger movement, and we use it to perform the rotation calculations. It computes a quaternion that describes the current object rotation; the quaternion is later used in the update function to construct the rotation matrix.
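The two building blocks of the virtual trackball, mapping a screen point onto the hemisphere and accumulating rotation quaternions, can be sketched as follows. This is an illustrative approximation of maptoSphereCoords and GLKQuaternionMultiply, not the sample’s exact code:

```python
import math

def map_to_sphere(x, y, width, height, radius=1.0):
    """Map a screen point onto a virtual trackball hemisphere centered
    on the viewport (Shoemake-style, with a hyperbolic sheet fallback
    for points outside the sphere)."""
    # Normalize to [-1, 1]; screen y grows downward, so flip it
    px = (2.0 * x - width) / width
    py = (height - 2.0 * y) / height
    d2 = px * px + py * py
    if d2 <= radius * radius / 2.0:
        pz = math.sqrt(radius * radius - d2)          # on the sphere
    else:
        pz = (radius * radius / 2.0) / math.sqrt(d2)  # on the sheet
    return (px, py, pz)

def quat_multiply(a, b):
    """Hamilton product of two quaternions in (x, y, z, w) convention,
    analogous to GLKQuaternionMultiply."""
    ax, ay, az, aw = a
    bx, by, bz, bw = b
    return (aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw,
            aw*bw - ax*bx - ay*by - az*bz)
```

Left-multiplying the accumulated rotation by each frame’s incremental quaternion, as touchesMoved does, applies the new rotation in view space.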

Windows 8

For touch input we simply use the built-in WPF touch events, TouchDown, TouchMove, and TouchUp.

//Touch down event handler
       void MainWindow_TouchDown(object sender, TouchEventArgs e)
       {
           //Get the touch point relative to the main window
           TouchPoint p = e.GetTouchPoint(this);
            //Set the arc ball start location
            m_arcBall.startPoint(p.Position.X, p.Position.Y);
            //Increment the touch point count
            ++pointCount;
           //Check the used touch point count
           if (pointCount == 1)
           {
               //Set the 1st touch device id
               p1InputId = p.TouchDevice.Id;
               //Clear the pinch manipulation distance
               pointDistance = -1.0;
               p1 = new Point(-1.0, -1.0);
               p1Valid = false;
           }
           else if (pointCount == 2)
           {
               //Set the 2nd touch device id
               p2InputId = p.TouchDevice.Id;
               //Clear the pinch manipulation distance
               pointDistance = -1.0;
               p2 = new Point(-1.0, -1.0);
               p2Valid = false;
           }
       }

        //Touch move event handler
       void MainWindow_TouchMove(object sender, TouchEventArgs e)
       {
           //Get the touch point relative to the main window
           TouchPoint p = e.GetTouchPoint(this);

           //Check the used touch point count
           if (pointCount == 1)
           {
               //Move the arc ball to the new position
               m_arcBall.movePoint(p.Position.X, p.Position.Y);
               //Check if we have a scene object to work with
               //Update the object's rotation using the arc ball provided rotation matrix
               if (m_object != null) m_object.SetRotation(m_arcBall.RotationMatrix);
           }
           else
           {
               //Check the device id and update the correct pinch touch point
               if (p1InputId == p.TouchDevice.Id)
               {
                   p1 = p.Position;
                   p1Valid = true;
               }
               else if (p2InputId == p.TouchDevice.Id)
               {
                   p2 = p.Position;
                   p2Valid = true;
               }
               if (p1Valid && p2Valid)
               {
                   //update the pinch distance
                   double newDistance = PointDistance(p1, p2);
                   //check if we have a pinch distance set
                   if (pointDistance != -1.0)
                   {
                       //Calculate the manipulation scale change
                       double scale = newDistance / pointDistance;
                       //Check if we have a scene object to work with
                       //set the object's scale
                       if (m_object != null) m_object.Scale((float)scale);
                   }
                   //Store the updated pinch distance
                   pointDistance = newDistance;
               }
           }
       }

        //Touch up event handler
       void MainWindow_TouchUp(object sender, TouchEventArgs e)
       {
           //Get the touch point relative to the main window
           TouchPoint p = e.GetTouchPoint(this);
           //decrement the used touch point counter
           --pointCount;
           //Check the touch device id
           if (p1InputId == p.TouchDevice.Id)
           {
               //Clear the 1st touch point id
               p1InputId = -1;
               //Clear the last pinch distance
               pointDistance = -1.0;
               p1 = new Point(-1.0, -1.0);
               p1Valid = false;
           }
           else if (p2InputId == p.TouchDevice.Id)
           {
               //Clear the 2nd touch point id
               p2InputId = -1;
               //Clear the last pinch distance
               pointDistance = -1.0;
               p2 = new Point(-1.0, -1.0);
               p2Valid = false;
           }
       }
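The PointDistance helper used in the move handler is not shown above; a plausible stand-in, together with the incremental scale ratio the handler derives from it, might look like this (hypothetical names, same -1.0 sentinel convention as the source):

```python
import math

def point_distance(p1, p2):
    """Euclidean distance between two touch points, a hypothetical
    stand-in for the PointDistance helper."""
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1])

def pinch_scale(prev_distance, p1, p2):
    """Return (scale_change, new_distance). scale_change is None until
    a baseline distance exists, mirroring the -1.0 sentinel check."""
    new_distance = point_distance(p1, p2)
    if prev_distance < 0:
        return None, new_distance
    return new_distance / prev_distance, new_distance
```

Using the ratio of successive distances (rather than the distance itself) makes the zoom independent of how far apart the fingers started.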

Vertex, Fragment Shaders, and Scene Lighting

The actual shader code is written entirely in GLSL, which is fully portable between platforms running OpenGL. For the interested reader, and for completeness, we briefly describe our pipeline implementation. For detailed information, refer to reference volumes such as the “OpenGL Programming Guide, 4th edition” (aka the Red Book).

The OpenGL ES 2.0 programmable pipeline contains two types of shader programs: vertex and fragment shaders.

  • Vertex shaders operate on a single vertex and handle tasks like transforming the vertex position in space
  • Fragment shaders are invoked right after the rasterizer stage, which follows the vertex shader stage. A fragment shader produces the output color of a single pixel, usually accounting for texturing, lighting, and so forth.

Vertex Shader:

attribute vec4 position;
attribute vec3 normal;
attribute vec2 texCoord;

uniform mat4 modelMatrix;
uniform mat4 worldMatrix;
uniform mat4 viewMatrix;
uniform mat4 projMatrix;

varying vec3 vs_normal;
varying vec2 vs_texCoord;

void main()
{
   vs_normal = normalize((worldMatrix * modelMatrix * vec4(normal, 0.0)).xyz); // w = 0: normals are directions, so translation must not affect them
   vs_texCoord = texCoord;
   
   gl_Position = projMatrix * viewMatrix * worldMatrix * modelMatrix *position;
}

The vertex shader takes as its input the vertex position, normal vector, and texture coordinates. It transforms the normal into world space and passes the result on to the fragment shader; texture coordinates are simply handed through. Finally, the model, world, and view matrices are combined with the current projection matrix to compute the transformed vertex position.
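The reason positions carry w = 1 while normals should carry w = 0 is that translation lives in the matrix’s fourth column: it moves points but must leave directions unchanged. A minimal sketch (helper names are ours):

```python
def translate(tx, ty, tz):
    """4x4 translation matrix in the usual math convention:
    the offset sits in the fourth column."""
    return [[1.0, 0.0, 0.0, tx],
            [0.0, 1.0, 0.0, ty],
            [0.0, 0.0, 1.0, tz],
            [0.0, 0.0, 0.0, 1.0]]

def transform(m, v):
    """Multiply a 4x4 matrix by a 4-component column vector,
    as mat4 * vec4 does in GLSL."""
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

# A position (w = 1) is moved by the translation;
# a direction such as a normal (w = 0) is not.
```

This is why vec4(normal, 0.0) is the correct way to push a normal through the world and model matrices.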

Fragment shader:

varying highp vec3 vs_normal;
varying highp vec2 vs_texCoord;

//material color properties
uniform highp vec4 ambientColor;
uniform highp vec4 diffuseColor;
uniform sampler2D texture;
uniform highp vec3 lightPosition;
uniform highp vec3 lightColor;

void main()
{
   highp vec4 surfaceColor = texture2D(texture, vs_texCoord.xy);
   highp vec4 ambientColorCoefficient = vec4(lightColor, 1.0) * ambientColor;
   highp vec4 diffuseColorCoefficient = vec4(0,0,0,0);

   // For simplicity, we assume the reversed light vector points from the origin toward the light position
   highp vec3 lightDirection = normalize(lightPosition);
   highp float diffuseFactor = clamp(dot(vs_normal, lightDirection),0.0,1.0);
   if(diffuseFactor > 0.0) {
       diffuseColorCoefficient = vec4(lightColor,1.0) * diffuseColor * diffuseFactor;
   }
   gl_FragColor = surfaceColor * (ambientColorCoefficient + diffuseColorCoefficient);
}

The fragment shader fetches the pixel color from the texture and calculates the ambient and diffuse lighting terms. It uses parameters passed from the vertex shader, now interpolated across the triangle’s three vertices.
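The shader’s lighting equation, color = texture × (lightColor·ambient + lightColor·diffuse·clamp(N·L, 0, 1)), can be mirrored on the CPU for testing. This is an illustrative re-implementation, not code from the sample:

```python
import math

def shade(surface_rgb, ambient_rgb, diffuse_rgb, light_rgb, normal, light_dir):
    """CPU sketch of the fragment shader's lighting model:
    per-channel texture color times ambient plus clamped Lambertian diffuse."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    def norm(v):
        length = math.sqrt(dot(v, v))
        return tuple(x / length for x in v)
    n, l = norm(normal), norm(light_dir)
    k = max(0.0, min(1.0, dot(n, l)))  # clamped Lambert factor
    return tuple(s * (lc * a + lc * d * k)
                 for s, lc, a, d in zip(surface_rgb, light_rgb, ambient_rgb, diffuse_rgb))
```

A surface facing the light receives the full ambient-plus-diffuse contribution; one facing away falls back to the ambient term alone, exactly as the diffuseFactor branch in the shader does.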

Closing

OpenGL ES 2.0 is a powerful tool for rendering 2- and 3-dimensional objects on a variety of computing platforms, and it is widely used on the iOS platform. Fortunately, this common graphics platform forms a bridge for developers looking to move OpenGL-based applications from iOS to Windows 8.

While the application used in this white paper is fairly simple, it illustrates the key concepts in working with OpenGL ES 2.0 and how those concepts are implemented on iOS and Windows 8. As demonstrated, the concepts are consistent between iOS and Windows 8, and moving from one to the other can be done in a straightforward manner.

[1] A virtual hemisphere, centered in the middle of the screen. See http://www.opengl.org/wiki/Trackball

[2] OpenCL and the OpenCL logo are trademarks of Apple Inc. used by permission by Khronos.

Intel, the Intel logo, Atom, and Core are trademarks of Intel Corporation in the U.S. and/or other countries.
Copyright © 2013 Intel Corporation. All rights reserved.
*Other names and brands may be claimed as the property of others.



For more complete information about compiler optimizations, see our Optimization Notice.