# Multiprecision needed?


Do we have to deal with arbitrary-precision numbers?


I don't see anything in the PDF that would indicate that. It indicates that we should map the parser types to standard types such as signed integer and signed double (not that there's an unsigned double ;-)).

Hi,
We don't need you to support multiprecision math.

Thanks
-Rama

Quoting Rama Kishan Malladi (Intel):
Hi,
We don't need you to support multiprecision math.

Thanks
-Rama

Rama,
See my other post.

Your .PDF rule states:
"For example, output(50.323467) should send 50.3234670000 to the output"

50.323467 happens to NOT be a number that is exactly representable by IEEE 754 double precision. Therefore, the above rule would imply (require) decimal math. Please comment.

Jim Dempsey

@Jim despite being not exactly representable, rounding at the precision level will (mostly) solve the issue: add +/- 0.00000000005 to the value before formatting and truncate at the 10th decimal. However, long-enough numbers will cause issues, like: 12345678901234567890.1000000000

Are there shorter numbers where explicit rounding does not overcome the inexactness?

John

Quoting jimdempseyatthecove:
Quoting Rama Kishan Malladi (Intel):
Hi,
We don't need you to support multiprecision math.

Thanks
-Rama

Rama,
See my other post.

Your .PDF rule states:
"For example, output(50.323467) should send 50.3234670000 to the output"

50.323467 happens to NOT be a number that is exactly representable by IEEE 754 double precision. Therefore, the above rule would imply (require) decimal math. Please comment.

Jim Dempsey

I think you are splitting the hairs a bit fine here. The value given will round-trip (string -> double -> string) and in most cases provide an accurate decimal representation. When used in an expression that causes the result (or sub-result) to exceed the precision of a double (~16 digits), significant rounding can occur. No matter what representation is chosen, precision loss can occur (or memory can be exceeded). Since almost all real numbers are irrational, it's reasonable to assume that rounding is a fact of life. The difference between binary and decimal arithmetic isn't a factor. Granted, any base (other than e) is going to accumulate small rounding differences that can lead to differing results. For this application, differences due to error accumulation are very likely to occur (regardless of base) because the handling of the partial products could easily lead to variation over the serial version. For that matter, the results of two different serial implementations could vary due to partial product handling.

Jim,

Think about the output function and the example:

output(50.323467)

You receive a string: 50.323467

You convert that string and you store it in a double
variable. The variable will have the following value: 50.323467000000001

You have to convert it back to a string but you have to
truncate it and you will return:

50.3234670000

Thus, you won't return the last 5 characters:

50.3234670000 | 00001 (dropped)

If we don't simplify data types, each participant might choose a wrong data type.

Cheers,
Gaston

I believe this is covered in this post: Input clarification

6. We are not looking for IEEE support in the mathematical expressions. Output correctness is required but precision of the output will be relaxed to keep it simple.

Exactly! That's the idea.

>>Output correctness is required but precision of the output will be relaxed to keep it simple.

Good.

Gaston,

On the flip side, the double could be down by 1/2 bit too.

You receive a string: 50.323467

You convert that string and you store it in a double variable. The variable will have the following value: 50.323466999999999

You have to convert it back to a string but you have to truncate it and you will return:

50.3234669999

Now the output doesn't match the "should output 50.3234670000"

The rule as stated implies the correct answer is an exact text match rather than a numerical value approximately equal to the result as if calculated to infinite precision. I do not think it unreasonable to ask if approximately equivalent values will be acceptable.

Note, you cannot simply add .00000000005 to the internal double value because the numeric value may have many digits to the left of the ".". And in this case you cannot round up the 1/2-bit undervalued variable.

Jim

Quoting jimdempseyatthecove:

Gaston,

On the flip side, the double could be down by 1/2 bit too.

You receive a string: 50.323467

You convert that string and you store it in a double variable. The variable will have the following value: 50.323466999999999

You have to convert it back to a string but you have to truncate it and you will return:

50.3234669999

Now the output doesn't match the "should output 50.3234670000"

Pre-round by adding +/- 0.00000000005

Jim and John,

There is no need to either pre-round or post-round. The variables have to be represented by a C double; the conversion to string should truncate and add 0s as specified in the problem spec. If you use a different internal representation, you'll have different results, and we don't want that to happen. We want the interpreter to produce the results without any rounding, just as the problem spec says. In fact, if you add either a pre-round or a post-round, you are running unnecessary additional instructions and you'll lose performance.

Hope that clarifies.

Cheers,
Gaston

This seems to me like it's oversimplifying. I'm coding in Java, which makes a C double inaccessible - Java does have a "double" type, but with semantics defined by the language/VM, not by the underlying hardware. Other languages will also face similar problems of how floating-point ops are mapped to the hardware and the level of precision used.

Clearly a straight text comparison of the output cannot be used. If the judges will take these differences into account when comparing output, then there is no issue here, but if the judges are planning to use text comparison to compare results, then some smoothing of the output is needed.

Gaston, I don't think you've clarified matters :-)

There are many numbers representable exactly in decimal but which have a downward error in binary floating point. I don't know which right now, but suppose that 3.14 is really represented as 3.139999999999999. Output with truncation will most naturally produce 3.1399999999, not 3.1400000000. It is easy to get either one, but which is the correct output? Or will either be accepted?

```c
double dVal1 = 3.14;
double dVal2 = 3000000.14;

printf("%.10f\n", dVal1);
printf("%.10f\n", dVal2);
```

Out:

```
3.1400000000
3000000.1400000001
```

When the value exceeds about 3,000,000, the double no longer has enough precision to carry all 10 decimal places: it holds roughly 16 significant decimal digits in total, and 7 of them are already spent left of the decimal point.