Traction boundary condition with particle compositional field in a subduction model

Hello,

I am trying to run a 3D self-consistent subduction model with a traction boundary condition and a particle-based compositional field. Before this, I tried free-slip conditions on all boundaries, and that model runs with no problem. However, when I changed the left boundary to an initial lithostatic traction boundary, I ran into a problem where there are cells with no particles:

The violated condition was:
non_empty_neighbors != 0
Additional information:
A cell and all of its neighbors do not contain any particles. The
`cell average’ interpolation scheme does not support this case unless
specified in Allow cells without particles.

I tried to add a minimum number of particles per cell, but that doesn't seem to solve the problem. I must be misunderstanding this parameter; I thought it meant that new particles are generated if the number of particles in a cell falls below the limit.
I also tried adding "set Allow cells without particles = true", and so far this model seems to run, but the composition near the left boundary does not keep its original value even though I set:

set Fixed composition boundary indicators = left, right


The figure shows the oceanic plate; the material flowing in at the top of the left boundary is no longer oceanic plate.

Here are the boundary condition and Particles sections of my input file:

subsection Boundary velocity model
  set Tangential velocity boundary indicators = top, bottom, front, back, right
end

subsection Boundary traction model
  set Prescribed traction boundary indicators = left:initial lithostatic pressure
  subsection Initial lithostatic pressure
    set Representative point = 0, 0, 660000
  end
end

subsection Particles
  set Number of particles           = 75000000
  set Minimum particles per cell    = 50
  set Allow cells without particles = true
  set Time between data output      = 2e6
  set Data output format            = vtu
  set List of particle properties   = velocity, initial composition
  set Interpolation scheme          = bilinear least squares
  set Update ghost particles        = true
  set Particle generator name       = random uniform
  subsection Interpolator
    subsection Bilinear least squares
      set Use linear least squares limiter = true
    end
  end
end

Is there a way to create new particles at the left boundary when material flows in while using a traction boundary condition? How can I keep the inflow at the same composition as the initial condition?

Thank you.

Hi Xiaowen,

Fortunately, you should just need to specify one additional parameter to solve the particle issue.

Under subsection Particles, you need to set the following:
set Load balancing strategy = add particles
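
For context, a minimal sketch of how that could slot into your existing Particles subsection (keeping the rest of your current settings as they are) would be:

  subsection Particles
    # Generate new particles in cells that drop below the minimum count
    set Load balancing strategy    = add particles
    set Minimum particles per cell = 50
    # ... your other existing particle settings ...
  end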

However, you may also want to remove particles based on a set maximum limit, as too many particles may lead to slower run times without gaining any additional accuracy. For reference, here are the particle settings I have been using for a forthcoming update to the continental extension cookbook:

  subsection Particles
    set Minimum particles per cell  = 40
    set Maximum particles per cell  = 60
    set Load balancing strategy     = remove and add particles
    set List of particle properties = initial composition, viscoplastic strain invariants, pT path, position
    set Interpolation scheme        = quadratic least squares
    set Update ghost particles      = true
    set Particle generator name     = reference cell
    subsection Generator
      subsection Reference cell
        set Number of particles per cell per direction = 7
      end
    end
    subsection Interpolator
      subsection Quadratic least squares
        set Use boundary extrapolation          = false
        set Use quadratic least squares limiter = true
      end
    end
    set Time between data output    = 10.e3
    set Data output format          = vtu
  end

I hope this helps!

- John

Hi John,

Thank you. Yes, I just tried your way and it works!

Thanks,
Xiaowen

Hi John,

This is a follow-up question related to the particle numbers and mesh refinement changing over the course of the model run.

Do you have any suggestions on choosing the number of cores needed to run the model? Since my number of DoFs changes quite a lot (from about 7 million at the beginning of the model run to 70 million at the end), what is the best way to choose the number of processors?
If I restart the model, can I change the number of processors?

Thank you.
Xiaowen

Xiaowen
I think a good rule is to have ~100,000 unknowns per process. So for your 70 million, that would mean ~700 cores.

I believe that restarting with a different number of processes works. At least ASPECT is designed to do that, but I don’t know how often that is tested. You should at least try!

Best
W.

Hi Dr. Bangerth,

Thank you for your reply. Yes, I will definitely try changing the number of processes for these models that have a large variation in the number of DoFs.

Best,
Xiaowen

Hi Xiaowen,

A quick follow-up on @bangerth's suggestions.

  • If the model is 3D, 100,000 unknowns per process (degrees of freedom, or DOF) is probably at the lower limit of what you should use (150,000-200,000 DOF per processor might give better parallel efficiency).

  • If the model is 2D, you should be able to go even lower than 100,000 DOF per processor and still achieve good scaling.

  • If you are still using particles, the only thing to watch out for is running out of memory if you use a very high number of DOF per core (say >300,000).

  • One approach for assessing the optimal number of processors for a given model size is to run the same model for a single time step on different numbers of processors. I frequently loosen the nonlinear solver tolerance (say 1e-5 → 1e-3) for these types of tests to speed things up (see the sketch after this list).

  • Are you currently using the GMG solver for the Stokes system? If not, I am happy to provide guidelines on the settings to use for it, as in some cases it can significantly speed up model run times.
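
As a concrete sketch of such a scaling test, these are the kinds of global settings I would change (assumed values only; adjust to your model, and keep the rest of the PRM identical across runs):

  # Run only the first time step (assuming the model starts at time 0)
  # with a loosened nonlinear tolerance
  set End time                   = 0
  set Nonlinear solver tolerance = 1e-3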

Cheers,
John

Hi John,

Thank you. I did a scaling test, and for the 3D model ~100,000 unknowns per process is a good number for my model.

I am not using the GMG solver yet. I have enabled vectorization optimizations, so I should be able to run with

set Stokes solver type = block GMG

However, I found that when I use the GMG solver, I always run into a convergence problem, while the same model runs fine without it. I am not sure why; do you know what could possibly be the reason? I have turned on material averaging.

Thanks,
Xiaowen

Hi Xiaowen,

Great to hear 100,000 DOF is working well!

However, I found that when I use the GMG solver, I always run into a convergence problem, while the same model runs fine without it. I am not sure why; do you know what could possibly be the reason? I have turned on material averaging.

It may be that the problem has viscosity contrasts that the GMG solver can't handle. What are your minimum and maximum viscosity values? Also, can you send over the PRM snippet where you specify the GMG solver? Based on recent testing, I may have some ideas to help improve the convergence behavior.

Cheers,
John

Hi John,

I am running a 3D subduction model. I use the "single Advection, iterated defect correction Stokes" solver scheme. The nonlinear solver tolerance is set quite loose, at 1e-2. The viscosity ranges from 1e19 to 1e23 Pa s.
Here is the part of the input file with the solver parameters:

subsection Solver parameters
  subsection Stokes solver parameters
    set Number of cheap Stokes solver steps = 0
    set Linear solver tolerance             = 1e-4
    set Use full A block as preconditioner  = true
    set GMRES solver restart length         = 100
    set Stokes solver type                  = block GMG
  end
  subsection Newton solver parameters
    set Maximum linear Stokes solver tolerance            = 1e-4
    set Use Eisenstat Walker method for Picard iterations = true
  end
  subsection Advection solver parameters
    set GMRES solver restart length = 100
  end
end

For material averaging:

subsection Material model
  set Model name         = visco plastic
  set Material averaging = harmonic average only viscosity

Below is part of the error message:

The violated condition was:
solver_state == SolverControl::success
Additional information:
Iterative method reported convergence failure in step 1000. The
residual in the last step was 1.

This error message can indicate that you have simply not allowed a
sufficiently large number of iterations for your iterative solver to
converge. This often happens when you increase the size of your
problem. In such cases, the last residual will likely still be very
small, and you can make the error go away by increasing the allowed
number of iterations when setting up the SolverControl object that
determines the maximal number of iterations you allow.

The other situation where this error may occur is when your matrix is
not invertible (e.g., your matrix has a null-space), or if you try to
apply the wrong solver to a matrix (e.g., using CG for a matrix that
is not symmetric or not positive definite). In these cases, the
residual in the last iteration is likely going to be large.

The same model runs well without the GMG solver.
Thank you for your help.

Best,
Xiaowen

Hi Xiaowen,

With a viscosity range of 1e19-1e23 Pa s, I think it should be possible to get reasonable convergence behavior with the GMG solver.

Can you try setting the nonlinear solver tolerance to 1e-5 and using the following Solver parameters configuration:

subsection Solver parameters
  subsection Stokes solver parameters
    set Number of cheap Stokes solver steps = 2000
    set Linear solver tolerance = 1e-4
    set Use full A block as preconditioner = true
    set GMRES solver restart length = 150
    set Stokes solver type = block GMG
  end
  subsection Newton solver parameters
    set Maximum linear Stokes solver tolerance = 1e-4
    set Use Eisenstat Walker method for Picard iterations = true
  end
  subsection Advection solver parameters
    set GMRES solver restart length = 100
  end
end

The key changes above are the number of cheap Stokes solver steps (0 → 2000) and the GMRES solver restart length (100 → 150).

If the linear solver still does not converge with these settings, try loosening the Linear solver tolerance from 1e-4 to 1e-3. The key point is that the model should always converge to the specified nonlinear solver tolerance.
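
For example, that fallback would be a one-line change in the Stokes solver parameters subsection above:

  set Linear solver tolerance = 1e-3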

Cheers,
John

Hi John,

I tried the parameters you provided. It works, and it does speed up my model run. Thank you.
I am wondering why increasing the number of cheap Stokes solver steps to a large value helps with the convergence problem. The manual mentions that for problems with strongly nonlinear viscosity we can skip this part, which is why I set it to zero before.

Thanks,
Xiaowen

Hi Xiaowen,

Great news!

Both the AMG and GMG solvers should be much faster with the cheap Stokes solver, but I'm not sure why the model is not converging with GMG + the expensive Stokes solver. It may very well be related to the following open issue: GMG with DC Picard diverges · Issue #4563 · geodynamics/aspect · GitHub.

For now, my recommendation would be to set a very large number of cheap Stokes solver steps (2000 or 5000) to make sure the solver never reaches the expensive Stokes phase (at least until the above issue is sorted out). If you do run into convergence issues with GMG + only cheap Stokes solver steps, feel free to report back here and we can try to sort them out.
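
For example, keeping the rest of the configuration above the same, that would just be (the exact value is a judgment call):

  subsection Solver parameters
    subsection Stokes solver parameters
      # Large enough that the expensive solver phase is effectively never reached
      set Number of cheap Stokes solver steps = 5000
    end
  end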

Cheers,
John