"insufficient virtual memory" at runtime - command line environment for compiling/linking

"insufficient virtual memory" at runtime - command line environment for compiling/linking

Hello, and good afternoon!

Concerning: Compiler migration to your product / program worked well with another compiler (Lahey) for many years - here "insufficient virtual memory" at runtime -

To introduce ourselves: We are a team of 3 developers, in the process of testing Intel Fortran for migration of existing code to it (using the evaluation version of the compiler). As is often the case with Fortran, a lot of legacy (F77) code is included.

We are planning to purchase 2 or 3 licenses of the Intel Fortran compiler as soon as the main issues, which I'll outline below, are resolved, possibly with your help.

Platform: Windows, currently testing on both x32 and x64 -- and it is NOT from within Visual Studio but essentially command line -- so far we work with Eclipse, so command-line invocation from Eclipse appears to be a quick way to do it. Setup with the prerequisites (free Microsoft Visual Studio and SDKs installed as needed, especially for linking) and usage of the scripts as provided (ifortvars.bat calling a cascade of others) is in place --
as is evidenced by the fact that it compiles (small changes to the code itself were necessary, such as how to process command-line arguments etc.).

If there is any great way for setup in Eclipse, setting the environment there, we'd certainly appreciate that also.


The current plan is to see whether we will be able to manage the conversion -- as we have a runtime error that looks like a memory leak of sorts (I don't know whether it actually is, but it looks like it). That was never observed with an exe compiled with the old compiler.
This appears every time the program runs *long enough* - not right after start but after a few minutes of running (numerical calculations).
If you just start the gui (see details below on what we use) and use it or let it sit there open, nothing bad happens with the memory (there are window redraw issues, but I'll save that for another time or get it worked out - unless you tell me this may give some clues for the crash problem at hand).
So, no program crash just from the gui. Also, if you start a "calculations" job (number crunching job) and it is short enough to finish, there will also be no program crash.
I can observe that the "RAM" used by the process grows rapidly, up until the breakdown. For comparison: a "healthy" version of the application uses about 18MB RAM when just the gui is started, and 170MB RAM when doing a "calculations" job, as per taskmgr. Used RAM with the Intel-compiled version now grows gradually up to 1.4GB (on my machine) before the crash. The GUI alone is just as small as it should be. This happens with both x32 and x64 versions. If a "calculations" job is short enough, the program does not crash. If you do not close the application (gui) and start another job, it will add memory and then crash, i.e. the same job as before (that went through) may now crash very soon.

Some code details that I want to mention: from the "old days" there are still quite a number of common blocks, which generate warnings at compile time. I may send it to you; see below. Also, I studied the forums, and it is believed that ALLOCATE statements can at times cause problems. We do have them in the code, but I believe they are not executed the way I test right now.

I have prepared additional information:

Please find below the error message. Also find the output of the "set" command in a cmd window and a compiler log.

We use Intel Composer XE 2013; also Microsoft Visual Studio 10.0 and Windows SDK 7.1, as visible in the environment variables output if you need it.
The environment for the information I am forwarding to you is x32. In the end, we plan to do an x64 application though.
We also use a toolkit called Winteracter for graphical user interface programming of our application. The linked libraries are recognizable in the compile messages.
No other libraries are linked into the executable, other than what the compiler asks us to link. We had "unresolved externals" initially, and I researched the libraries that need to be linked. I added them manually to the makefile, and they are visible in the compile protocol.
No DLLs are created or called at this stage (That is planned and was done before but we first want to get the basics right).

If you could get back to us and possibly direct us in the right direction, suggest tests we should perform, compiler flags and such, please let us know.

I did read through the forums but so far found no answer to my issues. If this is Steve who will answer I have read up on how you've helped others as well.
It seems to me that our setup here is slightly different from what many people use (Eclipse rather than Visual Studio).

Also, I would like to say thank you in advance for your help.

If we get our program converted successfully, we intend to become loyal customers; we were with Lahey for 17 years; they seem to have come to some sort of end with compiler development, especially on Windows.
By the way, we are also considering Linux. We are regular users of Linux (for other tasks at the moment), but it may also be an option for the development work in question. It seems to be far less complicated to set up the environment there (not using something like Visual Studio). I think I played with a trial version of the compiler once. We would consider switching if, for example, you told us that this avoids the issues that cause the present problem on Windows that I described.

Best Regards,

Olaf Lieser


Here is how the error is announced - two versions of the message. I am curious why there are two ways for the system to report it. I have not yet nailed down which of the two messages appears when. Are there even two separate problems present?

forrtl: severe (41): insufficient virtual memory
Image PC Routine Line Source
adcos_0162.exe 004F4934 Unknown Unknown Unknown
adcos_0162.exe 004C2E06 Unknown Unknown Unknown
adcos_0162.exe 004AB1B2 Unknown Unknown Unknown
adcos_0162.exe 004772A8 Unknown Unknown Unknown
adcos_0162.exe 001786D1 Unknown Unknown Unknown
adcos_0162.exe 0017374F Unknown Unknown Unknown
adcos_0162.exe 00113715 Unknown Unknown Unknown
adcos_0162.exe 002E2C1C Unknown Unknown Unknown
adcos_0162.exe 0027CCBF Unknown Unknown Unknown
adcos_0162.exe 004A86F3 Unknown Unknown Unknown
adcos_0162.exe 0046A832 Unknown Unknown Unknown
kernel32.dll 752F33AA Unknown Unknown Unknown
ntdll.dll 77609EF2 Unknown Unknown Unknown
ntdll.dll 77609EC5 Unknown Unknown Unknown


forrtl: severe (41): insufficient virtual memory

Stack trace terminated abnormally.

UPDATE: I noticed that the makelog.txt file gives only standard output; warnings appear elsewhere (standard error?), therefore - here is a snippet of it. This repeats as the common blocks are included from several routines. This snippet may all be unimportant - but here it is, just in case.

G:\daten\ae\ADCoS_j162_intelWinter>make adcos
ifort -Qvec-report0 -include:"c:\winteval\lib.if8" /heap-arrays /assume:byterecl /c fed_0264_1.f90
Intel(R) Visual Fortran Compiler XE for applications running on IA-32, Version Build 20120731
Copyright (C) 1985-2012 Intel Corporation. All rights reserved.

fed.inc(16): remark #6375: Because of COMMON, the alignment of object is inconsistent with its type - potential performance impact [VOLALL]
real*8 volAll ! volume (displacement) of each element (based only on outer cross-section)
fed.inc(93): remark #6375: Because of COMMON, the alignment of object is inconsistent with its type - potential performance impact [FXAERONABE]
real*8 fxAeroNabe,fyAeroNabe
fed.inc(93): remark #6375: Because of COMMON, the alignment of object is inconsistent with its type - potential performance impact [FYAERONABE]
real*8 fxAeroNabe,fyAeroNabe
fed.inc(94): remark #6375: Because of COMMON, the alignment of object is inconsistent with its type - potential performance impact [FXAERONABE_OLD]
real*8 fxAeroNabe_old,fyAeroNabe_old
fed.inc(94): remark #6375: Because of COMMON, the alignment of object is inconsistent with its type - potential performance impact [FYAERONABE_OLD]
real*8 fxAeroNabe_old,fyAeroNabe_old
fed.inc(91): remark #6375: Because of COMMON, the alignment of object is inconsistent with its type - potential performance impact [T_H_EIN]
real*8 cw_turm,cw_turm_lokal,t_h_ein


Attachments: environment.txt (4.93 KB), makelog.txt (8.48 KB)

Your project is rather complex, but here is an immediate suggestion: add the flag /traceback to the compiler options, rebuild and run again. When you do that, the traceback on abort will contain subprogram names and line numbers instead of just program-counter values. With a linker map output, one could convert PC values to routine names (but not line numbers), but human beings stopped doing such work long ago.

mecej4's suggestion is good. Yes, if you are compiling from the command line, some diagnostics go to standard error. If you are redirecting compiler output, use 2>&1 to merge the streams. My guess is that your program has exceeded the 2GB address space limit of 32-bit Windows. You'll need to identify the place where the error occurs, as mecej4 suggests, for more details.
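To illustrate the redirection with stand-in echo commands (not a real ifort run, so the messages here are made up): the `2>&1` must come after the stdout redirection, and the idea is the same in cmd.exe and in Unix shells (the grouping syntax below is Unix-shell specific).

```shell
# stand-in for a compiler that writes to both streams (echo instead of ifort)
{ echo "fed_0264_1.f90 compiled"; echo "remark #6375: alignment warning" 1>&2; } > build.log 2>&1
# both lines now end up in build.log
cat build.log
```

In cmd.exe the equivalent for a real build would be `ifort ... > makelog.txt 2>&1`.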

We do not support using Eclipse but you can use Eclipse and the PHOTRAN FDT if you want.  Visual Studio will probably give you a more robust environment, especially for debugging.

My thinking is that Linux is not going to solve any problems you're having on Windows, and we don't offer integration with any kind of IDE on Linux, though again you can use Eclipse with PHOTRAN on your own.

Please let us know if we can be of further assistance.

Retired 12/31/2016

Hello again and thank you both for your quick answer, I do appreciate it.

To give you the latest updates:
In short
/traceback gives no additional info on crash

The Winteracter library has been removed so as to narrow down the search for the cause - no change in effect.
x64 and x86 both compile and run and essentially crash -- with the exception that x64 just keeps adding memory, anything it
can get including pagefile, until it has - so to speak - the machine on its knees.

Now this: I ran on Linux/x64 (no gui) - it is healthy and runs almost twice as fast as both our old compiler and Intel/Windows.

Recompiling everything with /traceback still gives no better message on the runtime error. It just says

forrtl: severe (41): insufficient virtual memory

Stack trace terminated abnormally.

x86 or x64 does not generally make any difference -- as for the possible memory leak --
except that x64 can address far more memory than my machine has available -- so it uses up everything it can get,
including pagefile; then I either get to kill it or complete paralysis results.


I have now compiled a version that does NOT use any of the GUI(Winteracter) libraries; serves the purpose
of narrowing down the trace to "culprit", to be sure whether I need to look there or not
(as I have no trial version for this gui library in x64 so that is gui code free anyway)
I also went through the environment variables (as per "set" command) to make sure no reference to intel64 is included for the ia32 compile.
Just in case. The only thing it now shows as x64 is the processor architecture, but I assume that is the machine I am running on, is it not? Like so:
PROCESSOR_IDENTIFIER=AMD64 Family 15 Model 107 Stepping 2, AuthenticAMD

So the answer is that it (the gui) is not the problem. I have included a full make output; maybe you can take a look at it.
Have you ever had issues with common blocks like that (all these warnings)? They have worked for literally decades, but who knows whether that was maybe just luck ....


Regarding the environment we develop with: we are actually using Eclipse/Photran on Windows. But this is merely for the "Fortran view"; project management including make targets,
source code editing and management, as well as git/version management.
Environment variables are not controlled by Eclipse (in our setup, anyway, just yet).
It essentially invokes a shell/command line.


I managed to compile and run our code in intel64/linux compiler. The adjustments necessary were not too many:
Leaving out all the GUI libraries; path adjustment; makefile adjustment -- not much more than that.
Result: The same job only needs about 55% of the time compared to our old compiler and IntelWindows
(though I am sure the latter would have potential for improvement via setting compile flags best suited for the case - something we worry about later).
MOST IMPORTANTLY: It is healthy! If you can believe this. The memory usage stays at the same level, does not increase, and it runs right through.

Essentially you said there is no obvious reason why that should be the case - I guess there is still not - but the situation is there.
I just tried this out of curiosity - just the fact that
code conversion was very quick made me give it a try.
Of course we are now left to try to find a reason why - and how we get it running under Windows.

As I said, we would consider Linux, but Windows remains our first priority, due to some of our customers' concerns.

If I may, I would once more like to ask you to take a look maybe at the makefile and logfile (Windows);
possibly make suggestions as to where we should look next.


I am sending the Linux makefile and log alongside the Windows ones; I have cleaned up the Windows makefile and removed all references to the Windows SDK and other resources,
which are needed for the gui. Also, just for your information: the INTELENV and ARCH in the Windows makefile normally are for Eclipse
to call the environment scripts (e.g. ifortvars ia32), because it invokes a new shell. For these trials I call directly from the command line,
which has been "intelized" once at the beginning; therefore it is empty.

Both versions compile and run like that - up to their respective end of runs -;)
Remark: Both makefiles needed to be renamed because your forum interface needs a valid file extension
(at least that's what it told me when I uploaded).

As we are still somewhat puzzled as to what is going on, we would very much appreciate any advice on what to approach next.
Maybe other ways of compiling (with new flags) to try and get some traceback (which the intended flag did not provide).
Especially in light of the fact that the code is seemingly OK - not only because of another vendor's compiler,
but because it does compile and run well with a compiler out of your house (the Linux one).

Thank you indeed for your help and Best Regards,



This is not likely to have anything to do with how the program is compiled. On Linux, are you using the 32-bit or 64-bit compiler?

If you can come up with an example program that demonstrates the problem, please attach a ZIP of it here or report it through Intel Premier Support and we'll be glad to take a look.

Retired 12/31/2016

The build log files that you attached are, despite being voluminous, of little utility -- they are almost entirely filled with warnings about misaligned variables in COMMON. You can disable the warning (using -diag-disable 6375) or, better yet, rearrange the items in COMMON such that small items such as INTEGER, LOGICAL and CHARACTER items (which are not multiples of 8 or 16 bytes in length) come after items such as COMPLEX and REAL*8 arrays. If you then recompile, we shall be able to see the needles in the haystack -- the warning messages that may offer a clue to the memory consumption problem.
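To sketch the rearrangement mecej4 describes (a hedged illustration only - the names are hypothetical, not the actual fed.inc contents):

```fortran
      subroutine layout_before
      integer nElem                  ! 4-byte item first ...
      real*8  volAll(1000)           ! ... pushes the REAL*8 array to offset 4
      common /fedA/ nElem, volAll    ! -> remark #6375 (misaligned array)
      end

      subroutine layout_after
      real*8  volAll(1000)           ! 8-byte items first, aligned at offset 0
      integer nElem                  ! smaller items last - no misalignment
      common /fedB/ volAll, nElem
      end
```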

On some architectures such as IA64, misalignment can cause quite a performance hit, so your fixing this issue is well worth the small effort required.

Which is it on Linux (x86 or x64)? Guess what - it was a "quick try" and I do not even know. I basically just called ifort, as in the makefile. I don't know whether the installed version is only x86 or x64 or both combined and it would then take the default (x86?).... ifort --version gives: ifort (IFORT) 12.1.3 20120212

The output of env (on linux) also gives no clue. It does not seem to be set there either.

Regarding the voluminous build log: I did know that, and the warnings repeat on each inclusion of the common block. I did not expect these to be the core of the problem, but I thought you never know. What you told us regarding performance enhancement is copied and will be useful in any event. Regarding other log details, we will ignore the formatted output warnings (can you tell us the "-diag-disable" number for those?), and then there is the "result is in the denormalized range" warning left....

Actually, now here is a new output (Attachment) with the 6375 flag

Commons will all be changed (this is risky if I miss one that may not be in the include files - that is my experience - so I will check that it is done consistently)

I will take a look and see what could be done about sending you a program. I will probably get back to you in the next few days on that. In the meantime we are open for new ideas....

Thank you again for your help



Attachment: makelinux.log (4.46 KB)

Add the -V flag to ifort to get it to tell you what it is using.


fed_0264_2.f90(10227): remark #7920: The value was too small when converting to REAL(KIND=4); the result is in the denormalized range.   [1.0E-40]
      REAL,parameter :: EPS = 1.0E-40

This isn't going to go well for you....

However, none of this tells us anything about your runtime issue.  We'd need to see a buildable and runnable test case.

Retired 12/31/2016

On linux, "ifort -V" will tell you the version number as well as the target architecture. Likewise, "file adcosrun_0264.x" will tell you whether the executable produced is 32-bit or 64-bit, and "size adcosrun_0264.x" will give some information on the memory requirements of the program.

The build log is now readable, and I agree that you may ignore the warnings about some formats Ew.dEe not satisfying W>=D+E+5.

However, the other messages, regarding 1E-40, are troublesome, since this number is less than TINY(1.0). Depending upon circumstances, it may be treated as zero. The zero is then promoted from REAL*4 to REAL*8 for assignment, which still sets EPS to zero. If you really want the non-zero value, specify 1D-40, and declare EPS to be REAL*8 even in the first instance.
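A minimal illustration of that fix (the key point being that 1.0E-40 is below the single-precision TINY(1.0), roughly 1.18E-38, while 1.0D-40 is comfortably inside the REAL*8 range):

```fortran
      ! before: underflows single precision and may silently become zero
      ! REAL, parameter :: EPS = 1.0E-40

      ! after: the value is representable, and EPS stays double precision
      real*8, parameter :: EPS = 1.0D-40
```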

However, there is nothing in the build log to shed any light on the program running out of VM, so one has to look into the details of the program.

If your code is proprietary, please do not post it to this public forum. I am a user just as you are, and you should use a secure channel for sending the code to Steve Lionel, who is an Intel employee, or someone at Intel Premier Support.

Once again, thank you for your quick reply. 

Now, what is actually required - source code, or an executable with input data that makes it runnable, or both (to see if it builds faulty *here*)?

Regarding source code: yes, it is proprietary code indeed; I will see tomorrow how to proceed as far as that is concerned.

As it is 8:30pm local time I'll say "talk to you later", likely tomorrow or the day after. 


Actually, the version: ...Intel(R) Fortran Intel(R) 64 Compiler XE for applications running on Intel(R) 64, Version 12.1 Build 20120212

There we have it.

Ok. So you're using a 64-bit compiler on Linux and a 32-bit compiler on Windows.  That will limit your virtual memory usage on Windows to 2GB.

Retired 12/31/2016

We have now tested in Linux 64bit and in Windows, as specified in original email tested in both 32 bit and 64 bit:

Win32bit crashes as it exceeds some 2GB (growing there gradually, starting from 150MB)

Win64bit gradually takes all the memory it can get, starting from 150MB, until paralysis (my machine has 10GB plus 3GB swap) or till I kill it

Linux64bit runs healthy; mem usage in the test case = constant = some 150MB

At this point I think a test case we can look at would be the next step.

Retired 12/31/2016

Hi Olaf L.,

just a guess: have you tried compiling with /heap-arrays- ? That way the heap will not be used for automatic arrays... though you may then stress your stack instead. :-( But maybe it helps.

Also, I'm a little bit confused by the different versions of ifort: on Win32 you use " IA-32, Version" and on Linux x64 ifort 12.1.3? Maybe give ifort ( or the latest 12.x.x) a try on Windows x64?

Kind regards,


I would be astonished if the different compiler versions made a difference. /heap-arrays would not address this particular issue - it would help if Olaf was seeing stack overflow errors.

Retired 12/31/2016

I have always used heap-arrays on Windows and Linux. See the makefiles and logs uploaded earlier.

When I did not, the program almost did not make it out of the gate, i.e. it closed quickly. Just to post here in the forum what it says on program exit, I quickly made a recompile with the option left out. So here you have your stack overflow.

Below compiled as Windows/intel64  (/heap-arrays not set)

forrtl: severe (170): Program Exception - stack overflow
Image              PC                Routine            Line        Source             
adcosrun_0264.exe  000000013FD533A7  Unknown               Unknown  Unknown
adcosrun_0264.exe  000000013FBC4191  CMKEE                    2284  fed_0264_2.f90
adcosrun_0264.exe  000000013FBBDA43  EINSTRUKT                1835  fed_0264_2.f90
adcosrun_0264.exe  000000013FB547AC  ADCOSOLV                 1158  fed_0264_1.f90
adcosrun_0264.exe  000000013FB513AD  MAIN__                    118  adcosrun.f90
adcosrun_0264.exe  000000013FD7928C  Unknown               Unknown  Unknown
adcosrun_0264.exe  000000013FD5383F  Unknown               Unknown  Unknown
kernel32.dll       0000000076BA652D  Unknown               Unknown  Unknown
ntdll.dll          0000000076CDC521  Unknown               Unknown  Unknown

..... and here it is for ia32 (/heap-arrays not set) .....

forrtl: severe (170): Program Exception - stack overflow
Image              PC        Routine            Line        Source             
adcosrun_0264.exe  015A43F7  Unknown               Unknown  Unknown
adcosrun_0264.exe  0143EA19  _EINSTRUKT               1835  fed_0264_2.f90
adcosrun_0264.exe  013E3D17  _ADCOSOLV                1158  fed_0264_1.f90
adcosrun_0264.exe  013E12F2  _MAIN__                   118  adcosrun.f90
adcosrun_0264.exe  015C7783  Unknown               Unknown  Unknown
adcosrun_0264.exe  015A4895  Unknown               Unknown  Unknown
kernel32.dll       75B333AA  Unknown               Unknown  Unknown
ntdll.dll          76EC9EF2  Unknown               Unknown  Unknown
ntdll.dll          76EC9EC5  Unknown               Unknown  Unknown

On Linux (-heap-arrays not set) it actually runs through - again.

Please remember that this happens right at the beginning, versus after several minutes with the actual issue in question.

The routines that appear here (stack overflow when NOT using /heap-arrays) are called only once at the beginning. Think of a finite element calculation: cmkee, for example, is a node renumbering scheme that is done only once per run - whereas our main loop in routine adcosolv may run 1e6 times.
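For reference, the kind of construct that lands on the stack without /heap-arrays is an automatic array; a purely illustrative sketch (not the actual cmkee code):

```fortran
      subroutine renumber(n)
      integer n
      real*8 work(n, n)   ! automatic array: lives on the stack unless
                          ! /heap-arrays is given; n = 1000 already needs
                          ! about 7.6 MB, well past the default 1 MB stack
      work = 0.0d0
      end
```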

===> So is the problem that /heap-arrays can avoid (stack overflow at the beginning) related to the permanent memory problem that is present (and results in excessively increasing mem usage and finally a crash on ia32)? Steve, you said you don't think so -- but as, again, the program runs through (healthy and fast) on Linux, could that mean we have two issues here, i.e. the stack overflow you asked about (which I could now give you without the heap-arrays option set ;-) is not related to the main issue?

- ---

As for the code: I cannot actually give out the source code at this time. I do know that you at Intel would probably be by far the most knowledgeable people to assess this problem, but for now we have to do without. If I could isolate the part of the code, that would be a different matter. Modern coding often means more modularized code, meaning you can treat different components separately. Unfortunately that is not the case here; I cannot take something out easily. This includes legacy code that has been added to for quite a long time. I will probably proceed to make tests cutting down essential components in the code to try to narrow down the problem area. If I find anything I can forward separately, I will. Also, if I find a solution or workaround in the process, I will post that here too.


You could always do it "old school" style: adding PRINT statements to show the progress through the program and moving them around to narrow down where the error is occurring. Unfortunately, getting the insufficient virtual memory error tends to prevent other diagnostic tools such as traceback from working.

Retired 12/31/2016

I will try something like this. I am wondering, though, whether the program actually crashes at the time of the error or whether it stumbles a bit further before actually crashing. It is my experience that this can happen if you exceed your array bounds (and the program was not compiled for explicit range checking at runtime).

Which check flags, if any, could I set for the compiler so that as many relevant (memory) areas of interest as possible are checked at run time? It does not matter if the program runs 20x slower (sometimes a result of such check options, in my experience); whereas it crashes now after some minutes, such test runs may then run a few hours (or even stop outright, because the mem "leak" occurs all the time, of course) - and if it gave valuable information during such a controlled crash, so to speak, so much the better.

You should at least build with /warn:interface and /check. You may also want to download a trial of Intel Inspector XE and run the program under its memory analysis - it may find something interesting.
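In the makefile, a diagnostic build along those lines could look roughly like this (a sketch only; the source file name is taken from the earlier logs, and /check:all is the spelling that enables all runtime checks):

```makefile
# diagnostic build: interface checking, runtime checks, traceback
FFLAGS = /warn:interface /check:all /traceback /heap-arrays
adcos:
	ifort $(FFLAGS) /c fed_0264_1.f90
```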

Retired 12/31/2016

OK I will try both these things and post what comes out of it.


Hi Olaf,

sorry for the wrong tip on stack instead of heap.

If you use /check:bounds, be aware that if you have something like this

CHARACTER somechar*(*)

in your code, you will get an error there every time, even though the code works quite well.

CHARACTER(len=*),DIMENSION(*) :: somechar

will work instead with array bounds checking. I have old code with a lot of 'old' character definitions in parameter lists, and I don't want to change the 2000 subroutines for it. As a drawback, I currently can't use array bounds checking at runtime. But I am a lucky one; my program runs without memory issues.
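Side by side, the two forms look like this (dummy names are illustrative; the first is legal Fortran, but per the experience described above it trips /check:bounds):

```fortran
      subroutine old_form(somechar)
      character somechar*(*)                       ! F77 assumed-length form
      end

      subroutine new_form(somechar)
      character(len=*), dimension(*) :: somechar   ! F90 form; compatible
      end                                          ! with bounds checking
```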

You must have a lot of degrees of freedom if your Cuthill-McKee runs into stack problems...

However good luck in finding and fixing the problem.

Kind regards,


Regarding the character statement: yes, we do have some of the old kind; I will see how many there are and whether manual conversion would be quick. And yes, range checking is part of our routine tests when the code has changed; I will need this. The old compiler accepted it as it was and did a proper array check, subroutine header variable mismatch check, and such.

Regarding the Cuthill-McKee: I actually do not trust that where the "death knell" occurs is necessarily the location of the problem. The program may have been "mortally hurt" before and stumbled on for a while before crashing. I have experienced such things before.

We do not actually have that many degrees of freedom. We do have to carry quite a lot of variables, mostly set up as arrays. Certainly old style. The core of the program was created in the late 90s (sooo 90s, so to speak) with a conservative attitude on top of that -- meaning F95 or F2003 features or anything like that are largely absent.

Essentially it is an FEA process that recurs many times (order 1e6 times) with a really small number of DOFs compared to, say, today's commercial 3D FEA models.


Next thing will be the tests that Steve suggested. I did not get to it yet -- but I will report back and post here in the forum what comes out of it.




You stated earlier in this thread: "if you are compiling from the command lines, some diagnostics go to standard error. If you are redirecting compiler output, use 2>&1 to merge the streams"
I have been looking for how to do this for a long time. (DOS help is hard to find).
Thanks for the advice, as it has solved a problem I have had with a batch file for a simple compile and run. The following simple batch file now works well for my needs. Others might find it helpful.
set options=/Tf %1.f95 /free /O3 /QxHost

del %1.obj
del %1.exe

now >> %1.tce
echo ifort %options% >> %1.tce
ifort %options% 1>>%1.tce 2>&1
type %1.tce

dir %1.* >> %1.tce

%1 >> %1.tce

notepad %1.tce

.... actually to give an update on this after some weeks:

(1) I also tried another vendor's compiler in the meantime; NO SUCH mem problems occur, so it is only with the Intel compiler. Still, I kept investigating with the Intel compiler as well.

(2) The usage of Intel Inspector pointed to the problem that the Intel compilation had: locally defined subroutines with no access to global variables whatsoever; arrays plus the array dimensions are passed into the subroutine (the latter defined in the main program or a higher-level subroutine). In our view completely legal, even in the old days (F77 -- and F9x should be downward compatible, should it not?), and used for a long time with no issues, as mentioned. There were some 7 such locations causing problems. The program as a whole has many more than that, most of which had no such issues.

Intel Inspector then just pointed to the subroutine header line as the bad one. I proceeded to take out one variable after the other from the header and handed them over via extra module statements. 

It turns out that -- and I consider this a workaround -- I need to pass the array *dimensioning* variable outside the subroutine header, i.e. via a module. So I proceeded to define a "parameter module" containing these dimensions and shared this module in any subroutine that uses the array(s) in question. The arrays themselves are still passed via the header.

Once again, I consider this a workaround, especially since (A) I think the code before was completely legal and never had issues, (B) I do not completely understand WHY this helps, and (C) the arrays and the dimensions are now passed via different routes, and I do not generally think that is a good idea.

Remarks on (C): passing the array in the module as well would require major changes to the code. Remember, some of this is old code, and many variables are handled via common blocks. I would therefore need to restructure all of this, or create duplicate variables. At this stage I am hesitant to do any of this. The alternative of reverting to many more global variables rather than local ones does not seem like a good idea to me. I think local programming is in many cases just easier concerning the safe handling of variables, namespaces and such.

With these changes, no more memory leaks are reported by Intel Inspector, and no increase of mem usage is indicated by the OS; the program (just "solver", no gui library linked so far) runs just fine.
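A minimal sketch of the workaround described above (all names hypothetical): the dimensioning variable moves into a module, while the array itself stays an explicit-shape dummy argument in the header.

```fortran
      module dims_mod                  ! the "parameter module"
         integer :: nmax               ! array dimension, set by the caller
      end module dims_mod

      subroutine solve_step(a)         ! before: subroutine solve_step(a, nmax)
         use dims_mod                  ! dimension now arrives via the module
         real*8 :: a(nmax)             ! array still passed through the header
         a = 0.0d0
      end subroutine solve_step
```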

What did Inspector actually say the problem was?  From your description I think you are talking about explicit shape dummy arguments.  They are still very much in the language.

>>Win32bit crashes as it exceeds some 2GB (growing to there gradually, starting from 150MB)

It is a well-known limitation for 32-bit applications that they can't allocate more than 2GB of memory, and in real life, for heavy applications, the crash can happen at significantly lower amounts of allocated memory. Take into account that many dependent DLLs are mapped into the address space of the application.

>>Win64bit gradually takes all mem it can get, starting from 150MB until paralysis (my machine has 10GB plus 3GB swap )

3GB is not enough; try 16GB or 24GB values. Also, there are two values that control the size of the virtual memory file, the Min and Max sizes.

Here is an example: if your application tries to allocate 8GB of memory, then the Min value for the virtual memory file has to be set to 8GB or higher, and the Max value to 16GB or higher ( use Min x 2 as a rule ).

I recently did a test with some Fortran code ( matrix multiplication / 16Kx16K / REAL(8) data type ), and without the option heap-arrays:1024 ( already mentioned in an earlier post ) it didn't work, even though my computer has 32GB of physical memory and 64GB of virtual memory ( Min value ).

@ IanH

Intel Inspector simply says "Memory Leak"


I appreciate your efforts to try to find a solution.

You see, the problem is not that the program demands more memory than a 32-bit environment can provide. Kindly read the problem description and the subsequent correspondence: in a healthy state the program consumes a mere 150MB, constantly. It also does in its latest state (with the workaround described in my last post 1 day ago).

The cause I am after is the described memory leak of a compiled "unhealthy executable", which leads to ever-increasing mem usage (and therefore ultimately a crash on Win32 or paralysis on Win64). Now, what creates a "healthy executable" in this context: (1) compilation of the code with other vendors' compilers, or (2) compilation with Intel on Linux, or (3) applying the workaround described in my last post, after being pointed there by Intel Inspector.

So, only with intel on Windows (compiling the original Fortran code without the workaround -- the same that has run for years with other compilers) do I have these issues, i.e.; an "unhealthy executable".

So I do not have a problem due to a memory limitation, but one due to memory leaks.

In any case: Again, thank you for your comment.


You could create a subroutine entry/exit report (to file) that reports the value of key dimension arguments and uses "sizeof" to report the size of key arrays. The report could include the variable name, nominal dimensions, SIZEOF and LOC. The use of LOC might be useful to indicate how the memory leak is shifting. (Try converting LOC to a string such as 130,000,000 to make the address more readable.)
I'm not sure how extensive the call structure is, but developing a report on entering key routines might give you a better picture.
You indicated you use few F2003+ extensions. Do you use ALLOCATE or POINTER? Allocatable variables should be automatically released on exit, but pointer allocations can become lost and become a source of memory leakage, especially where an allocated pointer is linked to a new memory allocation.
As you are not getting a stack overflow, the problem looks more like allocatable arrays, rather than automatic arrays.
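A sketch of such an entry report (SIZEOF and LOC are vendor extensions that ifort supports; the names, and the assumption that unit 99 is already open, are made up for illustration):

```fortran
      subroutine trace_entry(name, n, arr)
      character(*) name
      integer n
      real*8 arr(n)
      ! log routine name, nominal dimension, byte size, and base address
      write(99,'(a," n=",i0," bytes=",i0," addr=",i0)') name, n, sizeof(arr), loc(arr)
      end
```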

I hope some of these ideas might help.


A code example (as a starting point - declarations of the actual arguments, the call statement itself, the salient bits of the specification part of the called subroutine) would be nice to see.

So there's a memory leak associated with explicit-shape dummy arguments? I could imagine memory allocation happening if the actual argument was an expression, or perhaps a non-contiguous (or not-known-contiguous) array. Is this the case?

Missing an explicit interface when one was needed could also create some fun.  How did compiling and running with the diagnostic and runtime check flags go?
