Checking for undesired single precision constants

On some legacy code I have been using compiler settings to promote single precision floating point constants to double and to make the default real kind double.

I want to make the code standard-conforming by declaring the constants as double precision, but there are so many hidden away that it is difficult to know whether I have found them all.

What I would like is a compiler setting to warn me of any constants that are single precision. I use so few single precision constants on purpose that a global project setting would be fine.

Does anyone else see this idea as useful?


Short of using a "find in files" search for "." and going through the resulting list by hand, I'm not sure it is possible.
It won't catch the "X = 3" type of statement for a start, nor will it catch "Root = (-B + sqrt(B**2 - 4*A*C))/(2*A)".

Aside from potential performance issues, integer values will promote to double precision without error, provided the magnitude of the integer lies within the precision of the mantissa. Therefore, you can restrict your search to:

{separator}{zero or more digits}.{one or more digits}

You will have some false positives, but this may be a good starting filter.
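As a rough sketch of that filter, here is what it might look like in Python; the exact regular expression and the sample lines are my own illustrative assumptions, not something given in the thread.

```python
import re

# Sketch of the proposed filter: a separator, zero or more digits,
# a decimal point, then one or more digits. The lookarounds reject
# matches embedded in identifiers or kind-suffixed constants, so a
# double precision constant like 2.0d0 is deliberately not reported.
pattern = re.compile(r'(?<![\w.])(\d*\.\d+)(?![\w.])')

def find_real_literals(line):
    """Return decimal literals on a line that look single precision."""
    return pattern.findall(line)

# Example usage on a few Fortran-like lines:
for src in ['X = 1.5*Y', 'A = 2.0d0', 'B = .25 + C']:
    print(src, '->', find_real_literals(src))
```

As noted above, this will still produce some false positives (and it misses integer-valued and exponent-form constants), but it narrows the list considerably compared with searching for "." alone.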

I am not sure grep (regular expressions) is comprehensive enough. Look at AWK or other text-mashing tools.

I personally use TECO for this type of task.

Jim Dempsey

What you say may be true of Intel compilers (though I have seen differences in results due to some single precision constants), but I like to make sure they are all declared as double precision constants in case other compilers are used.


You stated: "It won't catch the "X = 3" type of statement for a start, nor will it catch " Root =( -B+sqrt(B**2 -4*A*C)/(2*A)"."
Neither of the examples you have highlighted needs catching, as the values 2, 3 and 4 will all be converted to full precision 64-bit values by the compiler.
I think the main problem is a 32-bit real constant passed as a subroutine, function or intrinsic argument. A static analyser should find these problems, although it might not support the use of a /dreal compiler option. Examples are 1.1 (which can be found with a "." search), but also 75e6, which might not be found.
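Exponent-form constants such as 75e6 can be caught with a slightly different pattern; the expression below is my own illustrative sketch, not something given in the thread, and it deliberately skips d-exponent (double precision) forms.

```python
import re

# Illustrative pattern for exponent-form single precision constants
# such as 75e6 or 1.5E-3. The d-exponent form (75d6) is double
# precision and is intentionally not matched; kind-suffixed
# constants like 2.5e10_8 are also skipped by the trailing check.
exp_pattern = re.compile(r'(?<![\w.])(\d+\.?\d*|\.\d+)[eE][+-]?\d+(?!\w)')

def find_exp_literals(line):
    """Return e-exponent real literals found on a line."""
    return [m.group(0) for m in exp_pattern.finditer(line)]

print(find_exp_literals('X = 75e6'))
print(find_exp_literals('Y = 75d6'))
```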
Another consideration is whether a value such as 1.1 is accurate to 64 bits. Chasing greater precision highlights the question of what precision can be obtained from the numerical model being used; often this precision masks the approximation in the modelling approach.
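The point about 1.1 can be illustrated by round-tripping values through 32-bit storage; this is a hedged sketch, and the helper name as_single is my own, not from the thread.

```python
import struct

def as_single(x):
    """Round-trip a Python float (64-bit) through 32-bit storage,
    mimicking what a single precision Fortran constant holds."""
    return struct.unpack('f', struct.pack('f', x))[0]

# 1.1 is not exactly representable in binary, and the single
# precision value differs from the double precision one in the
# low bits -- this is the error a /dreal-style promotion hides.
print(repr(as_single(1.1)))        # close to, but not equal to, 1.1
print(as_single(1.1) == 1.1)

# Integers within the mantissa's range convert exactly, which is
# why statements like "X = 3" need no catching.
print(as_single(3.0) == 3.0)
```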

Tools such as grep, awk and text editors can help, but few of them can perform the kind of context analysis that is needed. One really should use a tool that understands the syntax of Fortran.

I urge the OP to consider using the free software ftnchek to list instances of single precision constants where double precision versions should have been given. This tool performs static analysis and can help to catch many other types of error besides wrong precision in constants. It is, however, limited to Fortran 77 source code.

Unfortunately all my code is Fortran 90, otherwise I might have tried ftnchek.

I do agree that a tool that understands the syntax is needed, hence my request for an enhancement to the compiler. Maybe Intel Fortran static analysis would be a better place.

>>FTNCHECK... performs static analysis

This would be a good time to request a diagnostic feature (option), e.g. -warn:promotion_demotion, whereby a diagnostic warning is issued whenever the compiler generates a promotion or demotion. This should be a relatively simple feature to implement.

Jim Dempsey

ftnchek does have an experimental Fortran 90 version, but as I understood from the developers it is not ready for prime time.


