I am getting different elastic prestep deformation when applying ElasticIsotropic3D versus MaxwellIsotropic3D (linear Maxwell viscoelastic) to my model (figure below).
In my understanding, the linear Maxwell model (a spring and a dashpot in series) should give the same elastic prestep response as an elastic model, given identical elastic properties in both runs.
I have checked the input scripts and they all seem fine. Is there an explanation for this, or am I doing something wrong?
PyLith v2.2.2 does not have a special formulation for incompressible elasticity like the one available in PyLith v3. You can try to set the elastic properties so that the material is nearly incompressible. If you use an iterative solver, the linear solver will likely converge extremely slowly due to the poor conditioning of the system of equations. You will likely need to use a direct solver (pc_type=lu). You should not expect the solver performance or accuracy to be as good as the incompressible elasticity formulation available in PyLith v3.
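For reference, here is a minimal sketch of the direct-solver setting mentioned above (the [pylithapp.petsc] section is the usual place for PETSc options; adapt it to your configuration):
[pylithapp.petsc]
# Use a sparse LU direct solver instead of an iterative solver to cope with
# the poor conditioning of a nearly incompressible material.
pc_type = lu
To make the material nearly incompressible, set the elastic properties in your spatial database so that vp/vs is large; Poisson's ratio nu = (vp^2 - 2 vs^2) / (2 (vp^2 - vs^2)) approaches 0.5 as vp/vs increases.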
I have also set up an elastic prestep model with the dynamic fault formulation, and I am getting the following error (with the same geometry shown in the first post of this thread: the dipping side as the fault interface, with a thin slab underneath).
-- Preparing for prestep with elastic behavior.
mpinemesis: ../../../pylith-2.2.2/libsrc/pylith/materials/ElasticMaterial.cc:406: virtual PylithScalar pylith::materials::ElasticMaterial::stableTimeStepImplicit(const pylith::topology::Mesh&, pylith::topology::Field*): Assertion `dtStable > 0.0' failed.
[4]PETSC ERROR: [5]PETSC ERROR: ------------------------------------------------------------------------
[5]PETSC ERROR: Caught signal number 3 Quit: Some other process (or the batch system) has told this process to end
[5]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[5]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[5]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
[5]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run
[5]PETSC ERROR: to get more information on the crash.
[5]PETSC ERROR: User provided function() line 0 in unknown file (null)
Also, before running the dynamic model, I tested the same setup with the kinematic fault formulation under both no-slip and prescribed-slip conditions, and it converged quickly without errors.
The error is being caught in ElasticMaterial, which means the stable time step is being computed from the material properties. The error message indicates that PyLith computed a stable time step that is zero or negative, so you should check your material property values to make sure they are reasonable.
I didn't find any issue with the material properties (both the units and the values look fine). To simplify the problem, I removed the viscoelastic material model from all components of my domain. Keeping everything else the same, I got the following error.
>> /home/PylithResearch/pylith-2.2.2-linux-x86_64/lib/python2.7/site-packages/pylith/problems/Implicit.py:207:step
-- implicit(info)
-- Solving equations.
[0]PETSC ERROR: MatSeqAIJSetPreallocation_SeqAIJ() line 3671 in /home/brad/pylith-binary/build/petsc-pylith/src/mat/impls/aij/seq/aij.c nnz cannot be greater than row length: local row 6 value 12 rowlength 9
[3]PETSC ERROR: [4]PETSC ERROR: ------------------------------------------------------------------------
[4]PETSC ERROR: Caught signal number 3 Quit: Some other process (or the batch system) has told this process to end
[4]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[4]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[4]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
[4]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run
[4]PETSC ERROR: to get more information on the crash.
[4]PETSC ERROR: User provided function() line 0 in unknown file (null)
[5]PETSC ERROR: ---------------------------------------------------------
I went back, rechecked and regenerated my mesh, and the simulation now runs without errors. However, the solver is taking a long time to converge, and the simulation does not finish even after ~12 hours.
Sorry to bug you with this again, but in my experience the slow-convergence issue in PyLith 2.2.2 is connected to the conditioning of the mesh (attached previously). In my case the condition number of the mesh is not very high (max ~2.33, figure below). Could you please point out some other way to address this issue?
Thank you
Viven
The issue appears to be your solver parameters. Your KSP (linear solver) residual is only decreasing to the order of 1.0e-4 or 1.0e-5. See the PyLith manual section 6.4.5.2. When you are using the fault friction formulation, you want the linear solver to converge to the absolute tolerance criterion, not the relative tolerance criterion.
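For example, one way to force the linear solver to converge on the absolute tolerance is to make the relative tolerance extremely small. The values below are only illustrative and will need tuning for your problem:
[pylithapp.petsc]
# Make the relative tolerance so small that the KSP solve always terminates
# on the absolute tolerance criterion.
ksp_rtol = 1.0e-20
ksp_atol = 1.0e-9
# Likewise make the nonlinear (SNES) solver converge on an absolute tolerance
# that is looser than the linear solver's.
snes_rtol = 1.0e-20
snes_atol = 1.0e-7
If you are using FaultCohesiveDyn, the KSP absolute tolerance should also be somewhat smaller than the fault's zero_tolerance so that the friction solve can distinguish real slip from solver noise.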
What do you mean by “not getting any result out”? Did the simulation run to completion? Did the linear (KSP) and nonlinear (SNES) solvers converge at every time step? Did the simulation run but there was zero fault slip?
First, I ran the model with a high friction coefficient to impose zero slip, and the results are reasonable (below). However, slurm-stderr.txt (55.0 KB) does not contain any information on the convergence of the linear or nonlinear solver.
Now, with a no-friction condition at the fault interface (sloped side), I am getting similar stderr output, but no results show up in the VTK output files. In this case I am expecting ~800 m of slip at the tapered side.
Should I increase the absolute tolerance values in the solver settings, or is there some other issue?
To monitor what the solver is doing in PyLith v2.2.2, use the following settings:
[pylithapp.petsc]
# These will report why the solver converged or did not converge and the number of iterations
ksp_converged_reason = true
snes_converged_reason = true
# These will make sure an error is triggered if the solver does not converge.
ksp_error_if_not_converged = true
snes_error_if_not_converged = true
# These will show the residual at each iteration. You may want to start with just snes_monitor.
ksp_monitor = true
snes_monitor = true
Linear solve converged due to CONVERGED_ATOL iterations 2231
10 SNES Function norm 9.080284768082e+05
Nonlinear solve did not converge due to DIVERGED_DTOL iterations 10
[0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
[0]PETSC ERROR:
[0]PETSC ERROR: SNESSolve has not converged
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.10.2, Jul, 01, 2019
[0]PETSC ERROR: /home/wadai0/sharm760/PylithResearch/pylith-2.2.2-linux-x86_64/bin/mpinemesis on a named cn1136 by sharm760 Tue Oct 18 10:05:09 2022
[0]PETSC ERROR: Configure options --prefix=/home/brad/pylith-binary/dist --with-c2html=0 --with-x=0 --with-clanguage=C --with-mpicompilers=1 --with-shared-libraries=1 --with-64-bit-points=1 --with-large-file-io=1 --download-chaco=1 --download-ml=1 --download-f2cblaslapack=1 --with-hwloc=0 --with-ssl=0 --with-x=0 --with-c2html=0 --with-lgrind=0 --with-hdf5=1 --with-hdf5-dir=/home/brad/pylith-binary/dist --with-zlib=1 --LIBS=-lz --with-debugging=0 --with-fc=0 CPPFLAGS="-I/home/brad/pylith-binary/dist/include -I/home/brad/pylith-binary/dist/include " LDFLAGS="-L/home/brad/pylith-binary/dist/lib -L/home/brad/pylith-binary/dist/lib64 -L/home/brad/pylith-binary/dist/lib -L/home/brad/pylith-binary/dist/lib64 " CFLAGS="-g -O2" CXXFLAGS="-g -O2 -DMPICH_IGNORE_CXX_SEEK" FCFLAGS= PETSC_DIR=/home/brad/pylith-binary/build/petsc-pylith PETSC_ARCH=arch-pylith
[0]PETSC ERROR: #1 SNESSolve() line 4408 in /home/brad/pylith-binary/build/petsc-pylith/src/snes/interface/snes.c
[0]PETSC ERROR: #2 void pylith::problems::SolverNonlinear::solve(pylith::topology::Field*, pylith::topology::Jacobian*, const pylith::topology::Field&)() line 152 in ../../../pylith-2.2.2/libsrc/pylith/problems/SolverNonlinear.cc
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Caught signal number 3 Quit: Some other process (or the batch system) has told this process to end
[0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
[0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run
[0]PETSC ERROR: to get more information on the crash.
[0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
[0]PETSC ERROR: Signal received
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.10.2, Jul, 01, 2019
[0]PETSC ERROR: /home/wadai0/sharm760/PylithResearch/pylith-2.2.2-linux-x86_64/bin/mpinemesis on a named cn1136 by s/home/wadai0/sharm760/PylithResearch/pylith-2.2.2-linux-x86_64/bin/nemesis: mpirun: exit 59
/home/wadai0/sharm760/PylithResearch/pylith-2.2.2-linux-x86_64/bin/pylith: /home/wadai0/sharm760/PylithResearch/pylith-2.2.2-linux-x86_64/bin/nemesis: exit 1
The non-linear solver didn’t converge in this case.
I am expecting significant slip on the fault interface; can these solver settings handle this? From the manual and examples, the 3D subduction spontaneous rupture example is not complete. Any suggestions on how to modify this?
The SNES residual after 10 iterations is 9.0e+5. This suggests something went wrong in one or more linear solves in previous iterations. Even if the linear solver hit the convergence tolerance, it may not have given a reasonable solution. This can happen when the solver and zero tolerances need additional tuning or the boundary value problem is ill-posed.
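One thing that may help diagnose this is monitoring the true (unpreconditioned) residual of the linear solves. This is a standard PETSc monitor and goes in the same place as the other solver options:
[pylithapp.petsc]
# Print the true residual norm at each KSP iteration; a linear solve that
# "converges" while the true residual stays large points to conditioning problems.
ksp_monitor_true_residual = true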
I am still confused by your problem setup. If you have a fault with zero friction that extends across the entire boundary, what keeps it from sliding with rigid body motion? I know you have put a lot of effort into trying to get this setup to work, but you might make better progress by taking a step back and considering a more conventional problem setup, like the one in the examples with a fault in the middle of the domain.