Simple question about single and double float terminology

On a 32-bit system, a program loop using double-precision floating-point arithmetic takes the same time as one using single precision. The double-precision calculations are done in hardware, as opposed to some sort of software emulation, as on most GPUs. The GPU takes more than twice as long to process a loop of doubles as it does a loop of singles.

Please exclude all thought of SSE or AVX registers or calculations for the moment.

I understand how the calculation of single-precision (32-bit) floating-point values is performed. How is it that using double-precision (64-bit) values does not take more time on the same hardware? Must the processor ALU be based on a 64-bit architecture to achieve this, despite running a 32-bit operating system?

What hardware mechanism is used to achieve this? Does anyone have a good description?


On Intel processors there are the following floating-point instruction sets: FPU (x87, descended from the 8087), SSE, and AVX. All three have access to an internal, very fast floating-point engine. The FPU supports 4-byte, 8-byte, and 10-byte floating-point formats as single elements (scalars). SSE and AVX support 4-byte and 8-byte floating-point formats as scalars (single variables) or small vectors (2 or more elements). Ignoring the multiple-element formats in SSE and AVX, the latency of a floating-point multiply is on the order of 4 clock cycles (longer for memory references). Throughput can be as little as 1-2 clock cycles.

When the problem involves a large degree of RAM reads and writes, the program is waiting for the memory as opposed to waiting for the floating point operations.

Note that when small vectors can be used, the computation time can be significantly reduced (to 1/2, 1/4, or 1/8), and memory-subsystem overhead per floating-point operation can be reduced, but the total demands on the memory subsystem may increase.

Jim Dempsey

www.quickthreadprogramming.com

>>>The double float calculations are done in hardware, as opposed to some sort of software emulation, as on most GPUs. The GPU takes more than twice as long to process a loop of doubles as it does a loop of singles.>>>

IIRC the Nvidia Kepler architecture has support for double-precision calculations. Not sure about the Fermi design.

I have a question regarding that statement:

>>...How is it that the use of double precision values(64 bit) does not use more time on the same hardware...

Do you have a test case which demonstrates that performance is the same for SP and DP floating-point types?

>>>How is it that the use of double precision values(64 bit) does not use more time on the same hardware. Must the processor ALU be based on 64 bit architecture to achieve this>>>

I suppose that recent Intel processors use one or two execution ports for scalar integer ALU operations and vector ALU operations, and this data can be vectorized and sent to the SIMD execution engine. In the case of the vector ALU, up to four 32-bit integer scalar components are processed at once.

When using vectorized SIMD instructions, single-precision throughput is roughly double that of double precision, just as on your GPU. This is because twice as many operations, using the same total number of bytes of data, may be performed per cycle.

When considering a single scalar operation, the performance of single and double precision may be similar. This may be true of the GPU as well. Some of the advertisements compare vector-parallel operation on a GPU against serial host-CPU operation. This is in line with your idea that SIMD parallelism should not be considered on the host, even though you are discussing the equivalent on the GPU.

I think that there cannot be a direct comparison between CPU peak floating-point performance and GPU peak performance. I suppose that over a short interval (more than one CPU cycle) the peak performance will be a function of the floating-point code scheduled to execute, the interdependencies in that code, and the execution units available to that code per core. The GPU has many more resources available, albeit operating at a lower clock speed.

Thanks for all of those neat comments. Attached is single- and double-precision sample code with a builder in VS2010 for Sergey.

This question originated because someone asked me why the single- and double-precision computational performance of a program on an i5 processor was the same, whereas on the GTX 480 GPU this is not the case. I glibly answered that the double and single times were the same because the i5 does the double scalar arithmetic in hardware. I thought about this afterwards and realised that I did not really understand how the processor hardware did this so efficiently. Thanks for the answer Jim.

This question is not about SSE or AVX. I get very good performance with most of my code using those instruction sets: SSE typically x2.5, AVX typically x5, all single-precision implementations of course.

 

The focus of the question is how contemporary CISC processors handle double precision computation. The answer to this is that the FP engine circuitry does the computation.

 

Regards.

Attachments:

sample.zip (28.3 KB)

Hi, thanks for all the answers. Sample programs attached for Sergey.

I am satisfied with the answer that the maths is done by the fp engine.

 

Regards.

Attachments:

sample.zip (28.3 KB)

>>>I am satisfied with answer that the maths is done by the fp engine>>>

Do you mean integer math?

>>>I am satisfied with answer that the maths is done by the fp engine>>>

>>>>Do you mean integer math?

By fp engine I meant floating point engine.

Hi,

>>...Sample programs attached...

I'll take a look at what it does. Thank you!

>>>This question originated because someone asked me why the single and double computational performance of a program on an i5 processor was the same, whereas on the GTX 480 GPU this is not the case>>>

Probably because of either a lack of double-precision support or double-precision support being locked down on non-Tesla cards.

Quote:

iliyapolak wrote:

>>>I am satisfied with answer that the maths is done by the fp engine>>>

Do you mean integer math?

I do not know if the same engine is processing integer math.

>>>>...Sample programs attached...
>>
>>I'll take a look at what it does.

I'll be able to do verifications on three systems, that is, Ivy Bridge, Atom and Pentium 4, and I'll report my results.

>>>>...I'll be able to do verifications on three systems, that is, Ivy Bridge, Atom and Pentium 4, and I'll report my results...

Hi Bob, I will follow up by Monday and sorry for the delay.

Further to this discussion: I found that the performance of single-precision floating point was the same as double-precision performance for computationally intense programs running on the i5 4440. No SIMD, no AVX, no SSE, etc.; a plain x87 implementation as far as I know.

Some time ago I tried the same on a variety of other processors and found that this was indeed the case, i.e. SPARC and other Intel desktop CPUs all process single and double precision with the same performance.

I am now building for the Xeon E5-2640 and X7350 and find that the identical programs using double precision now take twice as long to complete. I repeat: no AVX, SSE, or optimization flags used with gcc and icc. What is causing this behaviour? With the FPU functionality, shouldn't the processor be using the same number of instructions to complete? I would have thought that the use of either float or double would make very little difference in performance.

Also, how can one use VTune to determine where the difference in float/double performance arises? Is this possible?

>>>Further to this discussion where I found that the performance of the single floating point was the same as the double floating point performance for computationally intense programs running on the i5 4440. No SIMD, no AVX, SSE, etc. Plain x87 implementation as far as I know>>>

Which x87 instructions are you talking about? Can you provide more details?

To use VTune to troubleshoot the lack of performance, you can code two versions of the same program, one with single-precision FP and a second with double-precision FP, and start the analysis by looking at pipeline front-end and back-end stalls.

 

From what I understand, if you're compiling for 32-bit, then you are probably using x87 floating-point instructions, in which case all floating-point numbers (floats and doubles) are expanded to 80 bits internally anyway. So that's why they take the same amount of time to process.

The only time you will see a performance difference is when you're processing large data sets and become limited by RAM bandwidth. At that point you will process through the smaller floats at twice the speed of doubles.

There is no connection between whether you are compiling for 32-bit or 64-bit and whether you're using the original x87 floating-point instructions or newer instructions like SSE, SSE2, or AVX. Whether you're using SSE2 or not is determined by compiler flags. To use SSE2, you need to tell the compiler that you are OK with 64-bit double precision (which is what SSE2 uses) as opposed to 80-bit (which is what x87 uses). 

As to the speed of single precision vs. double precision multiplies and adds, they are the same, and have been for a long time (see Agner Fog's tables of instruction latencies and throughput). The FPU hardware is natively double precision. As mentioned earlier, however, you can get double the throughput with single precision because you can operate on twice as many of them at the same time using the SSE instruction set. Of course, the compiler has to recognize that vectorization is possible to achieve this speedup.

You might want to actually determine whether the compiler is producing x87 or SSE object code by looking at the disassembly. This will give you a much better idea of what is going on. It's possible that the compiler is vectorizing the code, in which case it would make sense that the throughput for floats was higher than for doubles.

tomk, I don't believe that is true, at least not for MSVC. When you compile for 64-bit, x87 instructions are no longer generated; the floating-point model uses SSE in scalar mode instead (as all 64-bit CPUs support at least SSE).

"The x64 Application Binary Interface " ... "All floating point operations are done using the 16 XMM registers."

http://msdn.microsoft.com/en-us/library/ms235286.aspx

Actually the only "advantage" of the x87 ISA is the extended 80-bit precision. The question is whether your code really benefits from this.

IIRC _MSC_VER == 1800 does not emit x87 code even in 32-bit builds. To be sure, I would need to check the .cod file.

Hi Richard, you're right, and thanks for fixing my error. In 32-bit mode you can choose whether to use SSE or not, based on compiler flags. As iliyapolak points out, there isn't really a reason to use x87 anymore unless you really need the extra precision (which is unlikely). x87 is scalar only. All 64-bit CPUs have SSE2 support.

Another situation in which 32-bit arithmetic provides twice the performance of 64-bit arithmetic for scalar code is provided by memory-access-limited codes operating on contiguous data.  One cache line transfer provides 16 32-bit values, but only 8 64-bit values, so as long as the rate of cache line transfers is the same, the rate at which computations can be performed will be twice as fast for the 32-bit version.

The STREAM benchmark version 5.10 (http://www.cs.virginia.edu/stream/FTP/Code/stream.c) can easily be configured to use 32-bit arithmetic rather than 64-bit arithmetic -- simply add "-DSTREAM_TYPE=float" to the compiler options to override the default type of "double".  There are few recent published submissions with 32-bit arithmetic, but it is trivial to demonstrate for yourself that bandwidth in GB/s is almost the same whether using 32-bit or 64-bit data types on most recent computers, so the arithmetic rate using 32-bit data types is twice that obtained when using 64-bit data types.

John D. McCalpin, PhD
"Dr. Bandwidth"

 

One reason for using extended precision can be, for example, a more precise approximation of special or trigonometric functions.
