Has anyone else noticed that uninitialized variables now appear to take on the largest value for their type? If that is really the case, it is a significant improvement over what the compiler used to do. Because the variables start out at the largest possible value, almost any arithmetic on them causes an overflow or underflow at runtime, which can then be caught in the debugger. This would allow CVF to approach the capabilities of Lahey and Salford for catching uninitialized variables at runtime. If this was indeed implemented in 6.6a, it is a very clever way of letting the compiler find uninitialized variables during execution. If it was not, it would be a fairly easy thing to add and would give the compiler much better runtime diagnostics for debugging.
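
Just to illustrate the idea (this is only a sketch of the general technique, not a claim about what CVF actually does internally), here is what filling a variable with the largest representable value would look like if you did it by hand; the first use of it in arithmetic overflows right away, and with floating-point exception trapping turned on the debugger stops at the offending line:

    program uninit_demo
    implicit none
    real :: x, y
    ! Stand-in for what the compiler would do: pre-fill the
    ! "uninitialized" variable with the largest REAL value.
    x = huge(x)
    ! Any further arithmetic overflows immediately, producing
    ! +Infinity or, if overflow trapping is enabled, a runtime
    ! exception that the debugger can catch at this line.
    y = x * 2.0
    print *, y
    end program uninit_demo
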