Intel® Advanced Vector Extensions

Speedup with bulk/burst/coupled streaming write?

  Hello everyone,

I have a very simple question; I hope it really is simple.

From what I've read and tried already, bulk (coupled) streaming reads/writes should give a noticeable, sometimes significant, speedup.

After some more profiling, I found one very small, older method in our software that in my opinion takes too much time. Most of the time is spent on the last instruction - writing the data. To pre-empt the obvious question: by design there is no guarantee that the destination memory fits in any cache, and, moreover, the cache may have been overwritten in the meantime - so there really are some access penalties.
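Since the destination is not expected to be cache-resident anyway, one common remedy is non-temporal (streaming) stores, which write around the cache instead of pulling destination lines in first. A minimal sketch (the function name `stream_copy` is mine, not from the original post), using the SSE2 `_mm_stream_si128` intrinsic:

```c
#include <emmintrin.h>  /* SSE2: _mm_stream_si128, _mm_loadu_si128, _mm_sfence */
#include <stddef.h>

/* Copy n_blocks 16-byte blocks using non-temporal stores.
   dst must be 16-byte aligned; the stores bypass the cache, which helps
   when the destination will not be re-read soon. */
static void stream_copy(void *dst, const void *src, size_t n_blocks)
{
    __m128i *d = (__m128i *)dst;
    const __m128i *s = (const __m128i *)src;
    for (size_t i = 0; i < n_blocks; ++i)
        _mm_stream_si128(&d[i], _mm_loadu_si128(&s[i]));
    _mm_sfence();  /* make the weakly-ordered streaming stores globally visible */
}
```

Whether this actually wins depends on the write size: streaming stores pay off for large writes that would otherwise evict useful cache lines, but can lose for small writes that would have stayed cached.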

PCIe Root Complex and the PCH

Hello All,

First of all, sorry this is not in the appropriate forum but I was directed to post this here.

I have a question that's been bugging me regarding the PCIe Root Complex and the PCH and I'm hoping someone will be able to help clear things up a bit.

I've always presumed that the PCIe Root Complex was a combination of the CPU and the PCH as they both contain PCIe Root Ports, thereby connecting PCIe devices to CPU/memory. 

Early indicators of AVX512 performance on Skylake?

Hi all,

Looking ahead, what can we expect from the first generation of AVX512 on the desktop - or when should we expect an announcement?

In the past:

- The first generations of SSE CPUs didn't have a full-width engine; they broke 128-bit SSE operations into two 64-bit uOps

- The first AVX CPUs (Sandy Bridge / Ivy Bridge) needed two cycles for an AVX store - the L1 cache didn't have the bandwidth to perform a store in one cycle

So what I'd like to know is:

- Will the AVX512 desktop CPUs be able to handle a full-width L1 load and store per cycle?

pmovzxbd using memory operands

Is there a way to use pmovzxbd with a memory operand from intrinsics? Currently I have either

_mm_cvtepu8_epi32(_mm_cvtsi32_si128(ptr[offset])); // (movd)

_mm_cvtepu8_epi32(_mm_insert_epi32(_mm_setzero_si128(), ptr[offset], 0)); // (pinsrd)

The movd or pinsrd should not be needed; in assembly I can write something like


pmovzxbd xmm0,[rax+rdx*4]


Is there a way I can make this call using intrinsics instead of assembly?

Benefits of SSE/AVX processing when an integrated GPU is missing?

Some Intel processors have an on-chip GPU (e.g. the Intel Core i7-4770K with HD Graphics 4600) whilst others don't (e.g. the Intel Core i7-3930K). I'm wondering what implications this has for SSE/AVX SIMD processing when such an integrated GPU is missing from the CPU. Even though many processors without the embedded GPU support SSE/AVX, does its absence significantly reduce the benefit of using SSE/AVX compared to CPUs with an embedded GPU?

Does VPMASKMOV require an aligned address?

The online intrinsics guide entry for VPMASKMOV says that "mem_addr must be aligned on a 32-byte boundary or a general-protection exception may be generated." But the documentation in the Intel Instruction Set Reference Guide does not mention an alignment requirement, and seems to imply that it is not required: "Faults occur only due to mask-bit required memory accesses that caused the faults."
