Temperature overshooting in 3D data assimilation model

Hi all,

I’m using ASPECT to develop a data assimilation model in 3D spherical geometry. I use artificial viscosity as the stabilization method for the temperature field. However, the undershooting and overshooting become quite large after ~100 Myr of model time: the minimum and maximum temperatures are 273 K and 3073 K in the input file, but they reach -250 K and 3764 K in my model.

I modified the code so that the velocity used for temperature and particle advection cannot be larger than the maximum surface velocity.

The temperature-related settings in my model are as follows:


set CFL number = 0.98
set Nonlinear solver scheme = single Advection, iterated Stokes
set Max nonlinear iterations = 5

subsection Boundary temperature model
  set List of model names = spherical constant
  subsection Spherical constant
    set Inner temperature = 3073
    set Outer temperature = 273
  end
end

subsection Solver parameters
  set Temperature solver tolerance = 1e-11
  subsection Advection solver parameters
    set GMRES solver restart length = 500
  end
end

subsection Formulation
  set Formulation = Boussinesq approximation
end

subsection Discretization
  subsection Stabilization parameters
    set beta = 0.78
    set cR = 1.0
  end
end



(Screenshots of the model results are attached.) How can I eliminate the temperature undershooting and overshooting? Thanks!

@lixinyv There are probably several other threads on this forum about over- and undershoots of the temperature solution. In short, this is a consequence of solving an advection-dominated problem with a higher-order scheme. It is conceptually the same as the Gibbs phenomenon for Fourier series. There are two ways to address the issue:

  • You can play with the stabilization scheme and stabilization parameters (see the manual for more information about which parameters are involved); a rough sketch follows after this list.
  • You can refine the mesh. Of course, in practice this is often difficult because it requires a significant increase in compute time and memory.
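
As a rough sketch of both suggestions (the parameter values below are placeholders to experiment with, not recommendations, and the Mesh refinement section is an assumed addition that is not in your posted input file; check the manual of your ASPECT version for the meaning and default of each parameter):

  subsection Discretization
    subsection Stabilization parameters
      # Larger cR (and, to a lesser extent, beta) adds more artificial diffusion,
      # which damps over-/undershoots at the cost of a smoother temperature field.
      set beta = 0.052   # placeholder value
      set cR = 0.33      # placeholder value
    end
  end

  # More global (or adaptive) refinement also reduces the over-/undershoots,
  # at a significant cost in compute time and memory.
  subsection Mesh refinement
    set Initial global refinement = 4   # placeholder; increase relative to your current setting
  end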

Best
Wolfgang

Thanks, Wolfgang! It seems that the over- and undershoots don’t affect the results much.

Another question, about memory: the total number of degrees of freedom is ~1.25 billion and the number of particles in my model is ~1.5 billion. I currently use 120 nodes and 6000 CPU cores to run the model. Each node has 256 GB of memory and 56 CPU cores, and only 71–78 GB is used on every node. However, every time the number of particles increases to ~1.6 billion, ASPECT reports the error message "cannot allocate memory", even though more than 170 GB of memory is free at the same time. So my question is: under what circumstances can this error occur, and how can I use the remaining 120 × 170 GB of memory?

Thanks!
Xinyu Li

@lixinyv The issue is that, apparently, on (at least) one machine you run out of memory. You cannot allocate memory on another node (that’s just not how computers work), so it does not matter that there is memory available on other nodes.

In practice, memory usage in most programs goes up and down. I don’t know how you determined the 71-78 GB of memory usage, but it is probably a snapshot in time. There will be times when the program uses more memory, and apparently at one time you are exceeding the available memory on one machine.

In order to address the question, I see two options:

  • First, let’s look at the end of the output you get. Perhaps we can identify which phase of the program this happens in.
  • You just have to use more nodes/cores if you want to do such large simulations. A rule of thumb is that you want perhaps ~100,000 DoFs per core; in your simulation you have ~200,000 (see the back-of-the-envelope numbers below). If you have access to more cores, you can try that, and your simulations will probably also run faster.
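
As a back-of-the-envelope check against the numbers quoted above:

  1.25e9 DoFs / 6000 cores ≈ 208,000 DoFs per core (current setup)
  1.25e9 DoFs / 100,000 DoFs per core ≈ 12,500 cores ≈ 220–230 of the 56-core nodes

So roughly doubling the number of nodes would bring the run in line with that rule of thumb, and would also spread the particle memory over about twice as many machines.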

Best
W.