Hi,
I have successfully installed deal.II v9.3.3 with candi, and have cloned the development version of ASPECT and compiled it without any problems.
My problem is that ASPECT runs very slowly on my Linux server (running Ubuntu 20.04.3). For example, when I run the cookbook example shell_simple_2.prm with mpirun -np 8 …
it takes 50 minutes to reach 200 timesteps. By contrast, it takes only 109 seconds on another server with the same -np 8.
At the beginning it runs very fast, but it suddenly slows down at timestep 121 (or 181 for -np 32). When I check with the top command, I see that system time accounts for 70% of the CPU. I think I have a problem with the parallelisation. A few weeks ago I managed to run 3D subduction models and they worked properly.
Any clue on how to solve this?
Thank you very much in advance
Ana
Ana,
when you look at the program running with top, you should see the memory consumption of the program. Does it exceed the memory your machine has? If so, in the top four or five lines of the output of top you can also see whether the machine is using swap space, and how much. This should be essentially zero; if you are using swap space, everything will slow to a crawl.
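For reference, two quick ways to check the same numbers directly from the shell (standard Linux commands, nothing ASPECT-specific):

  free -h                    # the "Swap:" row shows total, used and free swap
  top -b -n 1 | head -n 5    # prints just the summary lines, including memory and swap

If the "used" swap figure is essentially zero, memory pressure is probably not the problem.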
Best
W.
Hi Wolfgang,
Many thanks for your answer. I have checked the swap space with top (the fifth line of the top output, labelled "MiB intercambio", i.e. the Spanish locale's "MiB Swap") and I think this is OK: nearly 0 is used ("1,0 usado", i.e. 1.0 used). However, the %CPU used by each of the 8 cores is extremely high.
I am not sure whether I am using version 10 of the gcc, g++ and gfortran compilers, since the system administrator preferred not to change the compiler versions, to avoid problems compiling his own codes. I don't know if this can have an effect, but as I said, a few days ago I managed to run 3D subduction models.
Thanks again
Ana
Ah yes, this is awkward. Can you try the following? Assuming you are using bash as your shell, can you do

  export OMP_NUM_THREADS=1

before calling mpirun? Does that change the load (the %CPU) you have for each process?
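For completeness, a minimal sketch of the full sequence (the parameter file name is the one from your earlier message; the path to the aspect executable is only a placeholder):

  export OMP_NUM_THREADS=1
  mpirun -np 8 ./aspect shell_simple_2.prm    # ./aspect is a placeholder path

Setting OMP_NUM_THREADS=1 limits OpenMP-threaded libraries (for example, a threaded BLAS) to one thread per MPI process, so the 8 MPI ranks do not oversubscribe the cores.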
Best
W.
Hi,
Yes, I ran this command and now it works perfectly. The %CPU for each process is now around 100, while it used to be more than 1000% before.
I guess I needed to limit the number of threads that the libraries use (this is what I have understood from looking at the manuals).
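(In case it is useful to others: one way to avoid typing this before every run, assuming bash, would be to add the line to the shell start-up file, e.g.

  echo 'export OMP_NUM_THREADS=1' >> ~/.bashrc

so that it is set automatically in every new shell.)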
I am very grateful for your help; I was getting a bit desperate.
Thanks again
Ana
Great, glad to hear that that solved the problem!