Variable initialization in 6.6a


Has anyone else noticed that uninitialized variables now appear to take on the largest value for a given type? If this is really the case, it is a significant improvement over what the compiler used to do. Initializing to the largest possible value generally causes over/underflows at runtime, which can then be caught in the debugger. This would allow CVF to approach the capabilities of Lahey and Salford for catching uninitialized variables at runtime. If this was indeed implemented in 6.6a, it is a very clever way of letting the compiler find uninitialized variables during runtime. If it was not implemented, it would be a rather easy thing to do and would give the compiler much improved runtime diagnostics for debugging.

Tom


Sorry, if you're seeing this, it's just a coincidence. Such initialization is not a good way to catch uninitialized variables.

Steve

Steve - Intel Developer Support

Steve,

I don't understand why you would say that, as it works better than what is currently done in CVF. The fortuitous initialization to a number large enough to cause over/underflows in my program allowed me to catch several variables that were not initialized. John Appleyard makes a very convincing argument that current FORTRAN compilers can do a whole lot more to make life much easier for the programmer. I simply don't understand this reluctance on CVF's part to have the compiler do more to catch programming errors and thus make people using CVF more productive.

I understand that you pride yourselves on producing the fastest executables, but computer speeds are doubling about every year now. A decade ago, being able to say that a compiler produced the fastest executable might have meant something, but the fact that CVF produces executables that are 10-30% faster than other compilers' is trivial in the overall scheme of things nowadays. What compiler designers should be striving for nowadays is being able to claim that "compiler x allows you to write the same program in half the time". As a consequence, anything the compiler can do to find programming errors is, as Martha likes to put it, "a very good thing". I don't mean to open this can of worms again, but I would like to see CVF focus more on development that allows one to write "correct" programs faster. Does anyone else feel this way? The only way for CVF to take notice is to say something about it. If no one else feels this way, I won't raise this issue again.

Tom,

Don't get me wrong - we are all in favor of making the compiler as helpful to the programmer as possible. Everything has a cost, however, in development resources if nothing else. Additional run-time checks of various sorts are on our list of things we want to do, but it is not a goal for VF to rate 100% on the Polyhedron error tests, as the cost to get there is just too high for us.

Look for steady improvements over coming releases, but I can't promise anything specific.

Steve

Steve - Intel Developer Support

Steve,

Understood, but I don't follow your statement that initializing floating-point and integer variables to the largest possible value is not a good way to find uninitialized variables at runtime. It's not as complete as what Lahey and Salford do, but it would seem to take minimal development effort, and I know for certain that it would be helpful. As I stated previously, the fortuitous case where variables were initialized to a value that guaranteed over/underflow helped me find several uninitialized variables that had been floating around in the program for some time. Or am I missing something here?

Tom
