Single step elastic deformation with gravity

The PyLith version you are using doesn’t include the PETSc settings in the JSON file. Please post the PETSc settings that are in your .cfg files. I suspect there is a convergence issue and you don’t have PETSc triggering an error when it fails to converge.

Thanks for the swift response, Brad. Here are my PETSc settings.

# ----------------------------------------------------------------------
# PETSc
# ----------------------------------------------------------------------
[pylithapp.problem.interfaces.fault]
zero_tolerance = 1.0e-11
zero_tolerance_normal = 2.0e-11

[pylithapp.petsc]
# Linear solver tolerances
ksp_rtol = 1.0e-12
ksp_atol = 1.0e-12

# Nonlinear solver tolerances
snes_rtol = 1.0e-11
snes_atol = 1.0e-10

# Set preconditioner for friction sensitivity solve
friction_pc_type = asm
friction_sub_pc_factor_shift_type = nonzero
friction_ksp_max_it = 25
friction_ksp_gmres_restart = 30
friction_ksp_error_if_not_converged = true

# End of file

You should always use the following additional settings:

[pylithapp.petsc]
ksp_error_if_not_converged = true
snes_error_if_not_converged = true

These will trigger an error and abort the application when either the linear or nonlinear solver fails to converge.

We also recommend using

ksp_converged_reason = true
snes_converged_reason = true

These settings will report which convergence criterion was met when the solver converges.
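With those flags enabled, PETSc prints a one-line summary after each solve. The exact wording varies a bit between PETSc versions, but it looks something like this:

  Linear solve converged due to CONVERGED_RTOL iterations 27
  Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE iterations 2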

I suspect that either the linear or the nonlinear solver is failing to converge. Without the settings above to trigger an error, the solver simply runs through its maximum number of iterations and then continues to the next time step even though it has not converged. Running the maximum number of linear and nonlinear iterations takes a while, which would explain the long run times you are seeing.

Some suggestions:

  1. Make sure the same simulation works (solver converges and you get reasonable solutions) for prescribed slip.
  2. Start with static friction and again make sure the solver converges and you get reasonable solutions.

Here is what I got as error output.

  Linear solve did not converge due to DIVERGED_ITS iterations 10000
KSP Object: 1 MPI processes
  type: gmres
    restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
    happy breakdown tolerance 1e-30
  maximum iterations=10000, initial guess is zero
  tolerances:  relative=1e-10, absolute=1e-10, divergence=10000.
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 1 MPI processes
  type: ilu
    out-of-place factorization
    0 levels of fill
    tolerance for zero pivot 2.22045e-14
    matrix ordering: natural
    factor fill ratio given 1., needed 1.
      Factored matrix follows:
        Mat Object: 1 MPI processes
          type: seqaij
          rows=30478, cols=30478
          package used to perform factorization: petsc
          total: nonzeros=1136582, allocated nonzeros=1136582
          total number of mallocs used during MatSetValues calls =0
            using I-node routines: found 12066 nodes, limit used is 5
  linear system matrix = precond matrix:
  Mat Object: 1 MPI processes
    type: seqaij
    rows=30478, cols=30478
    total: nonzeros=1136582, allocated nonzeros=1136582
    total number of mallocs used during MatSetValues calls =0
      using I-node routines: found 12066 nodes, limit used is 5
[0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
[0]PETSC ERROR: KSPSolve has not converged
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.10.2, Jul, 01, 2019
[0]PETSC ERROR: /panfs/roc/groups/8/wadai0/sharm760/PylithResearch/pylith-2.2.2-linux-x86_64/bin/mpinemesis on a named native.msi.umn.edu by sharm760 Tue May 17 10:09:27 2022
[0]PETSC ERROR: Configure options --prefix=/home/brad/pylith-binary/dist --with-c2html=0 --with-x=0 --with-clanguage=C --with-mpicompilers=1 --with-shared-libraries=1 --with-64-bit-points=1 --with-large-file-io=1 --download-chaco=1 --download-ml=1 --download-f2cblaslapack=1 --with-hwloc=0 --with-ssl=0 --with-x=0 --with-c2html=0 --with-lgrind=0 --with-hdf5=1 --with-hdf5-dir=/home/brad/pylith-binary/dist --with-zlib=1 --LIBS=-lz --with-debugging=0 --with-fc=0 CPPFLAGS="-I/home/brad/pylith-binary/dist/include -I/home/brad/pylith-binary/dist/include " LDFLAGS="-L/home/brad/pylith-binary/dist/lib -L/home/brad/pylith-binary/dist/lib64 -L/home/brad/pylith-binary/dist/lib -L/home/brad/pylith-binary/dist/lib64 " CFLAGS="-g -O2" CXXFLAGS="-g -O2 -DMPICH_IGNORE_CXX_SEEK" FCFLAGS= PETSC_DIR=/home/brad/pylith-binary/build/petsc-pylith PETSC_ARCH=arch-pylith
[0]PETSC ERROR: #1 KSPSolve() line 851 in /home/brad/pylith-binary/build/petsc-pylith/src/ksp/ksp/interface/itfunc.c
[0]PETSC ERROR: #2 SNESSolve_NEWTONLS() line 224 in /home/brad/pylith-binary/build/petsc-pylith/src/snes/impls/ls/ls.c
[0]PETSC ERROR: #3 SNESSolve() line 4396 in /home/brad/pylith-binary/build/petsc-pylith/src/snes/interface/snes.c
[0]PETSC ERROR: #4 void pylith::problems::SolverNonlinear::solve(pylith::topology::Field*, pylith::topology::Jacobian*, const pylith::topology::Field&)() line 152 in ../../../pylith-2.2.2/libsrc/pylith/problems/SolverNonlinear.cc
Fatal error. Calling MPI_Abort() to abort PyLith application.
Traceback (most recent call last):
  File "/panfs/roc/groups/8/wadai0/sharm760/PylithResearch/pylith-2.2.2-linux-x86_64/lib/python2.7/site-packages/pylith/apps/PetscApplication.py", line 74, in onComputeNodes
    self.main(*args, **kwds)
  File "/panfs/roc/groups/8/wadai0/sharm760/PylithResearch/pylith-2.2.2-linux-x86_64/lib/python2.7/site-packages/pylith/apps/PyLithApp.py", line 138, in main
    self.problem.run(self)
  File "/panfs/roc/groups/8/wadai0/sharm760/PylithResearch/pylith-2.2.2-linux-x86_64/lib/python2.7/site-packages/pylith/problems/TimeDependent.py", line 155, in run
    self.formulation.step(t, dt)
  File "/panfs/roc/groups/8/wadai0/sharm760/PylithResearch/pylith-2.2.2-linux-x86_64/lib/python2.7/site-packages/pylith/problems/Implicit.py", line 212, in step
    self.solver.solve(dispIncr, self.jacobian, residual)
  File "/panfs/roc/groups/8/wadai0/sharm760/PylithResearch/pylith-2.2.2-linux-x86_64/lib/python2.7/site-packages/pylith/problems/problems.py", line 186, in solve
    def solve(self, *ar
/panfs/roc/groups/8/wadai0/sharm760/PylithResearch/pylith-2.2.2-linux-x86_64/bin/nemesis: mpirun: exit 255
/panfs/roc/groups/8/wadai0/sharm760/PylithResearch/pylith-2.2.2-linux-x86_64/bin/pylith: /panfs/roc/groups/8/wadai0/sharm760/PylithResearch/pylith-2.2.2-linux-x86_64/bin/nemesis: exit 1

I have no idea how to address this.
Any suggestions?

Also, I have not prescribed any slip; the fault interface is set up with FaultCohesiveDyn, zero friction, and gravity in order to analyze the effect of gravity on the domain.

When the linear solver fails to converge, it usually means something is wrong with how the simulation is set up. Fault friction introduces significant complexity into the solve (additional unknowns and nonlinear behavior). When you encounter an error like this, the best thing to do is to back up and replace the fault friction with prescribed slip (start with zero prescribed slip). If there is a problem unrelated to fault friction, it will be much easier to diagnose with prescribed slip; for example, this often makes the problem linear, so the nonlinear solver should converge in a single iteration.

Once the prescribed slip simulation runs, shows reasonable convergence, and yields the expected solution, switch back to fault friction. If zero friction is causing problems, use a very large static friction coefficient; that should produce zero slip.
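For reference, a minimal static friction setup in PyLith 2.x looks something like the sketch below. The coefficient of 100 is just an illustrative "very large" value, and the section path assumes your fault interface is named fault; see the friction examples in the manual for the exact parameters.

# Sketch: static friction with a very large coefficient (illustrative values).
# Requires the interface to be pylith.faults.FaultCohesiveDyn.
[pylithapp.problem.interfaces.fault]
friction = pylith.friction.StaticFriction
friction.label = Static friction

friction.db_properties = spatialdata.spatialdb.UniformDB
friction.db_properties.label = Static friction properties
friction.db_properties.values = [friction-coefficient, cohesion]
friction.db_properties.data = [100.0, 0.0*Pa]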

Hi Brad, I changed my fault interface to FaultCohesiveKin and prescribed zero slip using the syntax shown below.

# ----------------------------------------------------------------------
# faults
# ----------------------------------------------------------------------
[pylithapp.problem]
# We prescribe zero slip on the fault to test the setup before
# switching back to spontaneous rupture.
interfaces = [fault]

[pylithapp.problem.interfaces]
fault = pylith.faults.FaultCohesiveKin
#zn = pylith.faults.FaultCohesiveDyn

# fault interface ------------------------------------------------------
[pylithapp.problem.interfaces.fault]
id = 100
label = fault
edge = fault_edge

# We must define the quadrature information for fault cells.
# The fault cells are 2D (surface).
quadrature.cell = pylith.feassemble.FIATSimplex
quadrature.cell.dimension = 2

[pylithapp.problem.interfaces.fault.eq_srcs.rupture.slip_function]
slip = spatialdata.spatialdb.UniformDB
slip.label = Final slip
slip.values = [left-lateral-slip, reverse-slip, fault-opening]
slip.data = [0.0*m, 0.0*m, 0.0*m]

slip_time = spatialdata.spatialdb.UniformDB
slip_time.label = Slip initiation time
slip_time.values = [slip-time]
slip_time.data = [0.0*year]

But the nonlinear solver is still not converging.

 -- Computing prestep with elastic behavior.
  0 SNES Function norm 2.323668250346e-01 
[0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
[0]PETSC ERROR: Zero pivot in LU factorization: http://www.mcs.anl.gov/petsc/documentation/faq.html#zeropivot
[0]PETSC ERROR: Zero pivot row 0 value 0. tolerance 2.22045e-14
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.10.2, Jul, 01, 2019 
[0]PETSC ERROR: /panfs/roc/groups/8/wadai0/sharm760/PylithResearch/pylith-2.2.2-linux-x86_64/bin/mpinemesis on a  named native.msi.umn.edu by sharm760 Tue May 17 12:12:44 2022
[0]PETSC ERROR: Configure options --prefix=/home/brad/pylith-binary/dist --with-c2html=0 --with-x=0 --with-clanguage=C --with-mpicompilers=1 --with-shared-libraries=1 --with-64-bit-points=1 --with-large-file-io=1 --download-chaco=1 --download-ml=1 --download-f2cblaslapack=1 --with-hwloc=0 --with-ssl=0 --with-x=0 --with-c2html=0 --with-lgrind=0 --with-hdf5=1 --with-hdf5-dir=/home/brad/pylith-binary/dist --with-zlib=1 --LIBS=-lz --with-debugging=0 --with-fc=0 CPPFLAGS="-I/home/brad/pylith-binary/dist/include -I/home/brad/pylith-binary/dist/include " LDFLAGS="-L/home/brad/pylith-binary/dist/lib -L/home/brad/pylith-binary/dist/lib64 -L/home/brad/pylith-binary/dist/lib -L/home/brad/pylith-binary/dist/lib64 " CFLAGS="-g -O2" CXXFLAGS="-g -O2 -DMPICH_IGNORE_CXX_SEEK" FCFLAGS= PETSC_DIR=/home/brad/pylith-binary/build/petsc-pylith PETSC_ARCH=arch-pylith
[0]PETSC ERROR: #1 MatPivotCheck_none() line 718 in /home/brad/pylith-binary/build/petsc-pylith/include/petsc/private/matimpl.h
[0]PETSC ERROR: #2 MatPivotCheck() line 735 in /home/brad/pylith-binary/build/petsc-pylith/include/petsc/private/matimpl.h
[0]PETSC ERROR: #3 MatLUFactorNumeric_SeqAIJ_Inode() line 1519 in /home/brad/pylith-binary/build/petsc-pylith/src/mat/impls/aij/seq/inode.c
[0]PETSC ERROR: #4 MatLUFactorNumeric() line 3065 in /home/brad/pylith-binary/build/petsc-pylith/src/mat/interface/matrix.c
[0]PETSC ERROR: #5 PCSetUp_LU() line 131 in /home/brad/pylith-binary/build/petsc-pylith/src/ksp/pc/impls/factor/lu/lu.c
[0]PETSC ERROR: #6 PCSetUp() line 932 in /home/brad/pylith-binary/build/petsc-pylith/src/ksp/pc/interface/precon.c
[0]PETSC ERROR: #7 KSPSetUp() line 391 in /home/brad/pylith-binary/build/petsc-pylith/src/ksp/ksp/interface/itfunc.c
[0]PETSC ERROR: #8 KSPSolve() line 723 in /home/brad/pylith-binary/build/petsc-pylith/src/ksp/ksp/interface/itfunc.c
[0]PETSC ERROR: #9 SNESSolve_NEWTONLS() line 224 in /home/brad/pylith-binary/build/petsc-pylith/src/snes/impls/ls/ls.c
[0]PETSC ERROR: #10 SNESSolve() line 4396 in /home/brad/pylith-binary/build/petsc-pylith/src/snes/interface/snes.c
[0]PETSC ERROR: #11 void pylith::problems::SolverNonlinear::solve(pylith::topology::Field*, pylith::topology::Jacobian*, const pylith::topology::Field&)() line 152 in ../../../pylith-2.2.2/libsrc/pylith/problems/SolverNonlinear.cc

Am I making a mistake in the syntax for prescribing zero slip?

When you have a fault, you should use the solver settings in share/settings/solver_fault_fieldsplit.cfg. See Section 4.1.5 (PETSc Settings) of the PyLith manual for more information.
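The heart of that settings file is the field split configuration. Paraphrasing from memory (use the copy shipped with your PyLith distribution rather than retyping this), it looks roughly like:

[pylithapp.timedependent.formulation]
split_fields = True
use_custom_constraint_pc = True
matrix_type = aij

[pylithapp.petsc]
fs_pc_type = fieldsplit
fs_pc_use_amat = true
fs_pc_fieldsplit_type = multiplicative
fs_fieldsplit_displacement_ksp_type = preonly
fs_fieldsplit_displacement_pc_type = ml
fs_fieldsplit_lagrange_multiplier_ksp_type = preonly
fs_fieldsplit_lagrange_multiplier_pc_type = jacobi

This splits the displacement and Lagrange multiplier (fault) degrees of freedom so that each block gets a preconditioner suited to it.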

Thanks a lot for that; those were the settings I was missing. With them, my linear solver converges in ~800 iterations. Is this reasonable? Given that I am going to increase the geometric complexity in future simulation models, should I revisit my current model to make improvements?

Thanks again
Viven

800 iterations is a lot for a problem of this size. The number of iterations has little to do with the complexity of the geometry; it is largely determined by the number of unknowns and the conditioning of the system. If you have large contrasts in material properties or nearly incompressible materials, the conditioning is worse and the solver will converge more slowly.

You can try replacing fs_fieldsplit_displacement_pc_type = ml with fs_fieldsplit_displacement_pc_type = lu to see if it reduces the runtime. LU factorization may use more memory, but for a small problem it can be faster.
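In .cfg form, that is a one-line change in the PETSc section of your parameter file:

[pylithapp.petsc]
# Direct LU factorization for the displacement block instead of the
# ml algebraic multigrid preconditioner (faster for small problems,
# but uses more memory).
fs_fieldsplit_displacement_pc_type = lu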