Google Chrome OS*

Software vs. GPU rasterization in Chromium*

This article is a general overview of the ways web browsers can rasterize website information into the actual pixels you see. When a web browser downloads a page, it parses the source code and creates the DOM. It then needs to figure out which images, text, and frames to show, and where. This information is represented internally as layer trees.
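As a rough illustration of that last idea, the sketch below builds a toy layer tree in Python and walks it the way a rasterizer visits layers. The Layer class, its fields, and the page/header/content layers are invented for this example and are not Chromium's actual data structures.

```python
# Minimal sketch (not Chromium's real classes) of a compositor-style layer
# tree: each layer owns a rectangle of content plus child layers, and
# rasterization walks the tree to produce pixels for each layer.

class Layer:
    def __init__(self, name, x, y, width, height):
        self.name = name
        self.bounds = (x, y, width, height)
        self.children = []

    def add_child(self, child):
        self.children.append(child)

    def rasterize(self, depth=0):
        # A real rasterizer would paint into a bitmap (software) or issue
        # GPU draw calls; here we only report what would be drawn.
        indent = "  " * depth
        x, y, w, h = self.bounds
        print(f"{indent}raster {self.name}: {w}x{h} at ({x}, {y})")
        for child in self.children:
            child.rasterize(depth + 1)

# Example: a page with a fixed header and a scrolling content layer.
root = Layer("page", 0, 0, 1280, 720)
root.add_child(Layer("fixed-header", 0, 0, 1280, 80))
root.add_child(Layer("scrolling-content", 0, 80, 1280, 2000))
root.rasterize()
```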

Coloring with Beignet: Performing Color Management on Intel® HD Graphics with OpenCL

    Co-authored by Alina Chera

    Brief introduction to color management

    This article presents a proof-of-concept implementation that accelerates the computation of color profile transformations using OpenCL on Intel® HD Graphics. To follow this document, the reader should first be familiar with some basic concepts about colors, color profiles, and their importance to color management.
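For a feel of the kind of work being offloaded, here is a minimal Python sketch of one common building block of a color profile transformation, a per-pixel 3x3 matrix multiply. The function name and the placeholder matrix are ours for illustration, not part of the article's OpenCL implementation.

```python
# Illustrative sketch only: one common step of a color profile transform
# is a per-pixel 3x3 matrix multiply (e.g. converting linear RGB between
# two device color spaces). The matrix below is a placeholder, not a real
# ICC profile.

def transform_pixel(rgb, matrix):
    """Apply a 3x3 color transformation matrix to one (r, g, b) pixel."""
    r, g, b = rgb
    return tuple(
        matrix[row][0] * r + matrix[row][1] * g + matrix[row][2] * b
        for row in range(3)
    )

# The same operation runs independently on every pixel of an image, which
# is why offloading it to the GPU with OpenCL is attractive.
identity = [[1.0, 0.0, 0.0],
            [0.0, 1.0, 0.0],
            [0.0, 0.0, 1.0]]
print(transform_pixel((0.25, 0.5, 0.75), identity))  # -> (0.25, 0.5, 0.75)
```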

Call for Papers for Developer Week 2016 on Cross-Platform Development

    The big Developer Week conference takes place again in Nürnberg from June 20 to 23, 2016. Talks on a wide range of developer topics are currently being sought. I was selected as the program chair for the cross-platform development track, and I am therefore looking for top speakers with talks drawn from real-world practice.

    Community Roadshow: Top 5 JavaScript Tools and Best Practices – with Microsoft, but without Internet Explorer

    The last Community Roadshow wrapped up just a few months ago. It covered Apache Cordova and the Intel XDK and drew over 900 participants. I have put together a retrospective here: Rückblick zur Roadshow: Einstieg in die Hybrid App Entwicklung. Now I am packing my bags for the new 2015/2016 tour, for which I have put together quite a bit on the topic of JavaScript development:

    Zero-copy texture uploads in Chrome OS*

    Synopsis

    This whitepaper introduces the web and Chrome* graphics rendering pipeline, discusses how to take advantage of Intel® architecture, and describes the work we have done to solve the texture upload problem and the benefits we have found in doing so. It is intended for a broad, enthusiastic technical audience.

Accelerating texture compression with Intel® Streaming SIMD Extensions

    Improving ETC1 and ETC2 texture compression

    What is texture compression?

    Texture compression has been used for some time in computer graphics to reduce memory consumption and save bandwidth in the graphics pipeline. It is supported by modern graphics APIs such as OpenGL* ES and DirectX*. Compressing a texture is a lossy process, so existing algorithms must not only be fast but also preserve as much of the original information as possible.
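To make the memory savings concrete, here is a small back-of-the-envelope calculation in Python, assuming the usual ETC1 layout of 64 bits per 4x4 block of RGB pixels; the function names are ours.

```python
# Back-of-the-envelope illustration of why texture compression matters.
# ETC1 stores each 4x4 block of RGB pixels in 64 bits (4 bits per pixel),
# so the compressed texture is a fixed fraction of the uncompressed size.

def uncompressed_bytes(width, height, bytes_per_pixel=4):
    """Size of a raw RGBA8888 texture."""
    return width * height * bytes_per_pixel

def etc1_bytes(width, height):
    """Size of the same texture in ETC1: 8 bytes per 4x4 block."""
    blocks_x = (width + 3) // 4
    blocks_y = (height + 3) // 4
    return blocks_x * blocks_y * 8

w, h = 1024, 1024
raw = uncompressed_bytes(w, h)   # 4,194,304 bytes (4 MiB)
etc1 = etc1_bytes(w, h)          # 524,288 bytes (512 KiB)
print(f"RGBA8888: {raw} bytes, ETC1: {etc1} bytes, ratio {raw / etc1:.0f}:1")
```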

WebGL* in Chromium*: Behind the scenes

    Chromium uses a multi-process architecture. Each webpage has its own rendering process, which runs in a sandbox and is very restricted in what it can access. This makes it much harder for malicious web content to mess with your computer. However, it is bad news for GPU acceleration, since the renderer doesn't even have access to the GPU. Chromium solves this by adding an extra process just for GPU commands. That sounds horrible at first, because it introduces a lot of interprocess communication and textures have to be copied between processes, but it's not as bad as you'd imagine. For example, textures usually only have to be copied once at initialization, and modern OpenGL is designed to minimize the number of commands that have to be sent to the GPU. This separation actually improves performance, because WebGL can execute independently of all the other rendering and parsing.
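The sketch below is a conceptual Python illustration (not Chromium's real command buffer or IPC machinery) of why the extra GPU process is cheaper than it sounds: commands are recorded locally and shipped across the process boundary in batches, so one message carries many commands.

```python
# Conceptual sketch only: the renderer queues GPU commands locally and
# ships them to the GPU process in batches, amortizing the per-command
# IPC cost.

class CommandBuffer:
    def __init__(self, send_batch):
        self._pending = []
        self._send_batch = send_batch  # stand-in for the IPC channel

    def issue(self, command, *args):
        # Record the command instead of crossing the process boundary now.
        self._pending.append((command, args))

    def flush(self):
        # One round trip carries the whole batch.
        if self._pending:
            self._send_batch(self._pending)
            self._pending = []

def fake_gpu_process(batch):
    print(f"GPU process received {len(batch)} commands in one message")

cb = CommandBuffer(fake_gpu_process)
cb.issue("glBindTexture", "TEXTURE_2D", 1)
cb.issue("glDrawArrays", "TRIANGLES", 0, 3)
cb.flush()  # -> "GPU process received 2 commands in one message"
```
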
The JITter Conundrum - Just in Time for Your Traffic Jam

    In interpreted languages, it just takes longer to get stuff done. I earlier gave the example where the Python source code a = b + c results in a BINARY_ADD bytecode, which takes 78 machine instructions to do the add, but it's a single native ADD instruction when run in a compiled language like C or C++. How can we speed this up? Or, as the performance expert would say, how do I decrease path length while keeping CPI in check? There is one common shortcut around this traffic jam of interpreted languages: if you are going to run this line of code repeatedly, say in a loop, why not translate it into the super-efficient machine code version?
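You can see the bytecode the author refers to with Python's standard dis module; note that the exact opcode names depend on the CPython version (older releases show BINARY_ADD, while 3.11 and later fold it into a generic BINARY_OP).

```python
import dis

def add(b, c):
    a = b + c
    return a

# Disassemble the function to see the interpreted steps behind "a = b + c".
dis.dis(add)
# On CPython 3.10 the output includes lines like:
#   LOAD_FAST    0 (b)
#   LOAD_FAST    1 (c)
#   BINARY_ADD
#   STORE_FAST   2 (a)
```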

    Porting Chrome* apps to mobile platforms with Crosswalk and Chrome Apps for Mobile

    Chrome apps run on every platform for which there is a current version of Chrome, which covers nearly all desktop platforms; that is not the case for mobile platforms, which often use a different application hosting model and have their own app stores. Because Chrome apps are built from browser-based technologies, we could technically repackage them for mobile with Apache Cordova* and submit them to those stores.
