I am a bit lost. I am trying to find information on how quad precision (REAL*16) is implemented on newer Intel CPUs (like the Xeon 54xx series) with the Intel 10.x compilers...
Is it hardware-supported, software-only, or a mix of the two?
What accuracy (in digits) can we expect? (See the small test program at the end of this post.)
What kind of performance can we expect compared to typical double precision (Linpack, for example)?
How do Intel CPUs compare in quad precision with the IBM POWER6 architecture?
For example, here is a quote from a POWER6 description:
"[On Power6 ... ]The unit is effectively quad precision, offering up to 36 digit
accuracy in 144 bits, although results are compressed to 128 bits to
fit in two floating point registers and then decompressed before
consumption. Basic operations are somewhat slower than ALU operations,
with single cycle throughput, but 2 cycle latency."
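To make the accuracy and performance questions concrete, here is the small Fortran sketch I have in mind (only a minimal probe, assuming a compiler where selected_real_kind(30) maps to REAL*16, as ifort does; the dependent multiply-add loop is just a crude throughput test, not Linpack):

program quad_check
  implicit none
  ! Assumption: selected_real_kind(30) resolves to REAL*16 here;
  ! it returns -1 (a compile-time error below) if no such kind exists.
  integer, parameter :: dp = selected_real_kind(15)
  integer, parameter :: qp = selected_real_kind(30)
  integer, parameter :: n = 10000000
  real(dp) :: xd
  real(qp) :: xq
  real     :: t0, t1, t2
  integer  :: i

  ! 33 decimal digits suggests an IEEE-style binary128 format;
  ! 31 would suggest a double-double representation.
  print *, 'quad decimal digits:', precision(xq)
  print *, 'quad epsilon       :', epsilon(1.0_qp)

  ! Crude throughput probe: the same dependent multiply-add chain
  ! in double and then in quad precision.
  call cpu_time(t0)
  xd = 1.0_dp
  do i = 1, n
     xd = xd * 1.0000001_dp + 1.0e-7_dp
  end do
  call cpu_time(t1)
  xq = 1.0_qp
  do i = 1, n
     xq = xq * 1.0000001_qp + 1.0e-7_qp
  end do
  call cpu_time(t2)

  ! Print the results so the compiler cannot optimize the loops away.
  print *, 'double loop (s):', t1 - t0, '  result:', xd
  print *, 'quad   loop (s):', t2 - t1, '  result:', real(xq, dp)
end program quad_check

If I understand correctly, PRECISION() should report 33 digits for an IEEE-style 128-bit format, while a double-double scheme (two 64-bit doubles, as some PowerPC compilers use) reports 31, and the double-to-quad timing ratio should hint at whether the arithmetic runs in hardware or in a software library.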