Memory leak with MPI + h5py state_phys
Created originally on Bitbucket by avmo (Ashwin Vishnu)
Earlier I had mentioned in the fluiddyn-dev chatroom that I was getting out-of-memory errors on my jobs.
Symptoms
I am running a series of simulations with nh=2880 using the sw1l.onlywaves solver, writing a state_phys file every t=1. When a simulation reaches t=10, it runs out of memory; when I resume, it fails again at t=20. Since the failures happen at regular output intervals for all jobs, irrespective of the number of iterations, they have to be related to the output files.
A fresh simulation launched with nh=1920 then ran out of memory at t=17. This delayed failure makes me suspect the h5py + MPI output implementation.
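To test whether the periodic output pattern alone leaks memory, a minimal standalone sketch (outside fluidsim, so the grid size, dataset name and file names below are placeholders) could repeat the collective h5py writes and print the resident memory of each rank after every file. It assumes a parallel h5py build with the "mpio" driver and reads the RSS from /proc, so Linux only:

```python
# Run e.g.: mpirun -np 4 python check_h5py_mpi_leak.py
import os

import numpy as np
import h5py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
nb_proc = comm.Get_size()

nh = 1920               # placeholder global grid size (as in the failing run)
nh_loc = nh // nb_proc  # slab decomposition along the first axis


def rss_mib():
    """Current resident set size of this process in MiB (Linux /proc)."""
    with open("/proc/self/statm") as f:
        pages_resident = int(f.read().split()[1])
    return pages_resident * os.sysconf("SC_PAGE_SIZE") / 1024**2


for it in range(50):  # one new file per pseudo "output time"
    field = np.random.rand(nh_loc, nh)
    # Collective file open and dataset creation, as parallel h5py requires
    with h5py.File(f"state_phys_test_{it:03d}.h5", "w",
                   driver="mpio", comm=comm) as f:
        dset = f.create_dataset("ux", (nh, nh), dtype="f8")
        dset[rank * nh_loc:(rank + 1) * nh_loc, :] = field
    print(f"rank {rank}, output {it}: RSS = {rss_mib():.1f} MiB")
```

If the RSS per rank grows roughly in proportion to the number of files written, the leak is tied to the output path rather than to the time stepping.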
What I have tried so far
- Ensured that the jit extensions are compiled on the cluster.
- Recompiled h5py and ran its tests successfully (a quick build check is sketched below).
Yet the out-of-memory errors continue.
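As the build check referenced above, something like this confirms that the h5py picked up by the jobs is actually built against an MPI-enabled HDF5 on the cluster:

```python
import h5py
import mpi4py

# Versions actually picked up by the jobs on the cluster
print("h5py   :", h5py.version.version)
print("HDF5   :", h5py.version.hdf5_version)
print("mpi4py :", mpi4py.__version__)
# Must be True for the parallel ("mpio") driver to be available
print("MPI-enabled h5py:", h5py.get_config().mpi)
```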
TODO:
- Profile memory usage with h5py and MPI, e.g. with memprof: https://jmdana.github.io/memprof/ (a sketch follows below).
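A rough sketch of how memprof could wrap the write routine (the function below is a placeholder, not fluidsim's actual output code). Note that memprof tracks the sizes of Python-level variables, so a leak inside the HDF5 or MPI-IO C layers would still need a process-level measure such as the RSS check above; with several ranks the per-function log files may also collide, so profiling a small run first is probably simplest:

```python
from memprof import memprof

import numpy as np
import h5py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
nh = 1920                       # placeholder resolution
nh_loc = nh // comm.Get_size()  # slab decomposition along the first axis


@memprof  # logs the sizes of the local variables while the function runs
def write_outputs(nb_files=20):
    """Placeholder for the periodic state_phys writes (not fluidsim's code)."""
    for it in range(nb_files):
        field = np.random.rand(nh_loc, nh)
        with h5py.File(f"state_phys_prof_{it:03d}.h5", "w",
                       driver="mpio", comm=comm) as f:
            dset = f.create_dataset("ux", (nh, nh), dtype="f8")
            dset[rank * nh_loc:(rank + 1) * nh_loc, :] = field


if __name__ == "__main__":
    write_outputs()
```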