We've been having difficulty with memory leaks, and we suspect the problem is in the compiler. The appended code (test_leak.f90) is similar to ours and reproduces the problem.
After executing the following commands, we see the memory usage of a.out grow approximately linearly over time:

ifort test_leak.f90
./a.out & top
We don't see what's wrong with the code.
We noticed that the program barely uses any memory when the FINAL procedures are uncommented. But shouldn't they be redundant? In "Modern Fortran Explained", Metcalf et al. say that "when a variable of derived type is deallocated, any ultimate allocatable component that is currently allocated is also deallocated, as if by a deallocate statement".
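For reference, a minimal sketch of the pattern we mean (illustrative only; the type and procedure names here are placeholders, not our actual test_leak.f90): a derived type with an allocatable component is allocated and deallocated in a loop. Per the passage quoted above, deallocating the variable should free the component automatically, so the commented-out FINAL procedure should be redundant.

```fortran
module leak_mod
  implicit none
  type :: container
    real, allocatable :: data(:)
  ! contains
  !   final :: cleanup   ! uncommenting this (and the subroutine below)
                         ! makes the memory growth disappear
  end type
contains
  ! subroutine cleanup(self)
  !   type(container), intent(inout) :: self
  !   if (allocated(self%data)) deallocate(self%data)
  ! end subroutine
end module

program test_leak
  use leak_mod
  implicit none
  type(container), allocatable :: x
  integer :: i
  do i = 1, 100000000
    allocate(x)
    allocate(x%data(1000))
    deallocate(x)   ! should also deallocate x%data, per the standard
  end do
end program
```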
- OS: CentOS Linux release 7.3.1611 (Core)
- ifort version: 17.0.4
The command lscpu prints the following CPU info:
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
On-line CPU(s) list: 0-11
Thread(s) per core: 1
Core(s) per socket: 6
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model name: Intel(R) Xeon(R) CPU E5-2630 v2 @ 2.60GHz
CPU MHz: 2899.914
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 15360K
NUMA node0 CPU(s): 0-5
NUMA node1 CPU(s): 6-11