My computer has only one CPU, with 4 cores (8 threads). But when I call err = clGetPlatformIDs(0, NULL, &numPlatforms);, I get 2 platforms.
One platform contains 1 CPU and 1 GPU; the other contains 1 CPU. The two CPU entries refer to the same physical CPU.
I don't know why.
The platform number: 2
PlatformId=0 deviceNums=2 vendor: Intel(R) Corporation
PlatformId=1 deviceNums=1 vendor: Intel(R) Corporation
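Seeing two platforms usually means two separate Intel OpenCL runtimes are installed: the standalone CPU runtime and the runtime bundled with the graphics driver. Both enumerate the same physical CPU, so it shows up twice. A minimal sketch (assuming an OpenCL SDK with CL/cl.h is installed, linked with -lOpenCL) that prints each platform's name and version so the two runtimes can be told apart:

```c
// Enumerate OpenCL platforms and count their devices, printing the
// platform name and version string to identify which runtime each is.
#include <CL/cl.h>
#include <stdio.h>

int main(void) {
    cl_uint numPlatforms = 0;
    clGetPlatformIDs(0, NULL, &numPlatforms);

    cl_platform_id platforms[8];
    if (numPlatforms > 8) numPlatforms = 8;
    clGetPlatformIDs(numPlatforms, platforms, NULL);

    for (cl_uint i = 0; i < numPlatforms; ++i) {
        char name[256], version[256];
        clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME,
                          sizeof name, name, NULL);
        clGetPlatformInfo(platforms[i], CL_PLATFORM_VERSION,
                          sizeof version, version, NULL);

        cl_uint numDevices = 0;
        clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_ALL, 0, NULL, &numDevices);
        printf("Platform %u: %s (%s), %u device(s)\n",
               i, name, version, numDevices);
    }
    return 0;
}
```

Comparing the CL_PLATFORM_VERSION strings of the two platforms typically shows one CPU-only runtime and one graphics-driver runtime.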
Hi to everyone,
I have a problem with the SDK plugin for Visual Studio 2010. In my kernel I define several macros using the -D flag in the options argument of clBuildProgram. However, the Intel OpenCL SDK plugin does not pick up these options, so it reports several "use of undeclared identifier" errors and I am not able to run my program.
About this Document
This guide provides quick steps to create, build, debug, and analyze OpenCL™ applications with the OpenCL™ Code Builder, a part of Intel® Integrated Native Development Environment (Intel® INDE).
I have some code that was developed for CUDA and it relies heavily on 32-wide warps.
Is there a way to force the Intel GPU compiler to compile for SIMD32?
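Intel GPUs expose a sub-group extension, cl_intel_required_subgroup_size, that lets a kernel request a specific SIMD width at compile time; whether 32 is accepted depends on the device (the supported sizes can be queried via CL_DEVICE_SUB_GROUP_SIZES_INTEL). A hedged sketch of the kernel-side attribute, assuming the extension is present and 32 is among the supported sizes:

```c
// OpenCL C kernel source: request a 32-wide sub-group (SIMD32) compile.
// Requires the cl_intel_required_subgroup_size extension; the build
// fails if the device cannot honor the requested width.
__attribute__((intel_reqd_sub_group_size(32)))
__kernel void warp32_kernel(__global const float *in, __global float *out) {
    size_t gid = get_global_id(0);
    out[gid] = in[gid] * 2.0f;
}
```

With a fixed 32-wide sub-group, CUDA-style warp assumptions (e.g. shuffle patterns across 32 lanes) map onto OpenCL sub-group operations more directly.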
Hi, I'm new to Intel Iris Graphics 5100 and the Intel tool set. If my question is a duplicate, please point me to the original ;-)
Q1: How do I generate kernel statistics? Code Builder exposes the II and ASM, but I haven't found stats such as the number of full-ALU/half-ALU ops, NOPs, or global/local stores. These would be helpful for my kernel tuning. I would assume Intel exposes them, since the ASM is exposed.
I have Dell Venue 11 Pro 7140 tablet with Intel Core M (Intel HD Graphics 5300). I have installed OpenCL SDK 2014 Release 2.
When I run the sample code intel_ocl_svmbasic\SVMBasicFineGrained from https://software.intel.com/en-us/articles/opencl-20-shared-virtual-memory-code-sample, it reports that my Graphics 5300 doesn't support fine-grained SVM. The reference manual says it does support it. Is anything wrong?
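The device's actual SVM support can be confirmed at run time with clGetDeviceInfo and CL_DEVICE_SVM_CAPABILITIES (an OpenCL 2.0 query). A sketch, assuming a cl_device_id named device was obtained earlier via clGetDeviceIDs:

```c
// Query the SVM capability bits of an OpenCL 2.0 device and report
// which SVM tiers the runtime actually advertises.
cl_device_svm_capabilities caps = 0;
cl_int err = clGetDeviceInfo(device, CL_DEVICE_SVM_CAPABILITIES,
                             sizeof caps, &caps, NULL);
if (err == CL_SUCCESS) {
    printf("Coarse-grain buffer: %s\n",
           (caps & CL_DEVICE_SVM_COARSE_GRAIN_BUFFER) ? "yes" : "no");
    printf("Fine-grain buffer:   %s\n",
           (caps & CL_DEVICE_SVM_FINE_GRAIN_BUFFER) ? "yes" : "no");
    printf("Fine-grain system:   %s\n",
           (caps & CL_DEVICE_SVM_FINE_GRAIN_SYSTEM) ? "yes" : "no");
    printf("SVM atomics:         %s\n",
           (caps & CL_DEVICE_SVM_ATOMICS) ? "yes" : "no");
}
```

If the fine-grain bits are absent here, the sample's refusal reflects the installed runtime rather than a bug in the sample; a driver update may change the reported capabilities.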
Does Intel have an equivalent to NVIDIA's jitPTX?
I have a very simple kernel with printf statement like:
printf("id: %d\n", id);
I am using the latest Intel SDK, on a 64-bit Windows system.
When I run the kernel on the CPU, the "\n" results in a single 0x0A byte at the end of the printed string.
When I run the kernel on the GPU (also Intel), the "\n" results in the two bytes 0x0A 0x0D.
Is this the expected behavior? Is there any way to make both devices produce the same output?