I am a bit lost. I am trying to find information on how "quad precision" (REAL*16) is implemented on newer Intel CPUs (such as the Xeon 54xx) with the Intel 10.x compilers...
Is it hardware-supported, software-only, or a mix of the two?
What accuracy (in decimal digits) can we expect?
What kind of performance can we expect compared to typical double precision (Linpack, for example)?
How do Intel CPUs compare in quad precision with the IBM POWER6 architecture?
For example, a quote from the POWER6 description: