I want to install Intel INDE 2015 Update 2 on a computer running Windows 8.1 with Intel SDK for OpenCL Applications 2014.
Intel SDK for OpenCL Applications 2014 is installed on my computer. I tried to uninstall it from the Windows Control Panel, but the uninstallation hung; after waiting more than 10 hours I turned the computer off (the Cancel button did not help).
I then launched the Intel INDE 2015 Update 2 installer, but it hung in the removal phase of Intel SDK for OpenCL Applications 2014. The same behavior occurred on two different computers.
I've installed Visual Studio 2012 Express edition under Windows 7 64-bit and installed Intel INDE Starter Edition with code_builder_22.214.171.124.
Next I opened a Visual Studio x64 command prompt, which initially points to "C:\Program Files (x86)\Microsoft Visual Studio 11.0\VC", changed
directory to "C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC", and ran VCVARSALL.BAT. Without this step the compiler could not find <iostream> etc.
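The steps above, as a setup fragment (paths as given in the post; note the post runs VCVARSALL.BAT with no argument, an architecture argument such as "amd64" is an assumption, not something the post states):

```bat
rem Setup fragment reproducing the steps above (not run here).
cd "C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC"
rem The post runs VCVARSALL.BAT plain; an explicit architecture
rem argument (e.g. "amd64" for an x64 toolchain) is an assumption.
call VCVARSALL.BAT
```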
Is there an OpenCL 1.2 CPU driver for an "Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz" that supports cl_khr_gl_sharing with the HD 2000 GPU?
If I install the OpenCL™ Runtime 15.1 I get an OpenCL 1.2 CPU driver, but it has no cl_khr_gl_sharing support.
The OpenCL 1.1 CPU driver that comes with the graphics driver does support cl_khr_gl_sharing.
Does Intel support cl_khr_gl_sharing on the CPU with older Intel GMA graphics chipsets (for OpenCL 1.1? OpenCL 1.2?)
What about non-Intel graphics chipsets (NVIDIA, AMD, etc.)?
Is there a table of what is supported on what?
For a while now I keep finding around me things related to Makers, quadcopters, and algorithms. At first I thought it was just chance... that IoT is nice, Makers are having fun, algorithms are just another way of saying parallel programming, and so on... But apparently there is something very specific that connects all these seemingly unrelated areas. You know, it takes a while to realize it, but: if everyone at work speaks Martian, and your barman speaks Martian, and you come home and your wife speaks Martian, then you probably live on Mars!
Managing a fleet of IoT devices and deploying code is no easy task. Resin.io changes the workflow by leveraging Git and Docker technology!
How It Works
When you have new code for your end devices, all you need to do is perform a "git push". Resin.io builds your code into a Docker container and deploys it to the device if/when it's online. Below is an image describing the process, found on Resin.io's website:
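From the developer's side, the workflow above boils down to adding a deploy remote to an ordinary Git repository and pushing to it. A minimal sketch (the remote URL format, user name, and app name here are assumptions for illustration, not taken from Resin.io's documentation):

```shell
# Create a hypothetical application repository.
git init -q myapp

# Add Resin.io's build server as a Git remote named "resin".
# The URL below is a placeholder, not a real deploy endpoint.
git -C myapp remote add resin git@git.resin.io:username/myapp.git

# The deploy target now appears alongside any other remotes.
git -C myapp remote -v

# Deploying new code is then just (not run here; needs a real account):
# git -C myapp push resin master
```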
I am working on a Decode-OpenCL-Encode pipeline on an Intel processor. There is sample code provided by Intel for media interop, which is attached.
I am integrating the encoder into the same sample.
If we look at the DecodeOneFrame() function below:
mfxStatus CDecodingPipeline::DecodeOneFrame(int Width, int Height, IDirect3DSurface9 *pDstSurface, IDirect3DDevice9* pd3dDevice)
{
    mfxStatus stsOut = MFX_ERR_NONE;
    // ...
    if (m_Tasks[m_TaskIndex].m_DecodeSync || m_Tasks[m_TaskIndex].m_OCLSync || m_Tasks[m_TaskIndex].m_EncodeSync)
    // ...
}
I'm curious whether there are any circumstances that result in an implicit increase in a kernel workgroup's shared local memory (SLM) requirements.
For example, do the workgroup (or subgroup) functions like scan or reduce quietly "reserve" SLM?
If there are any circumstances where this might happen on SB, IVB, HSW, or BDW, could you list them?