# Isostatic restoring boundary condition in V2.2.2?

How can we apply an isostatic restoring force boundary condition (figure below) in PyLith v2.2.2?

Thank you
Viven

We have not implemented a spring restoring force for boundary conditions in PyLith.

I believe the small strain formulation is a more appropriate approach for these types of situations. @willic3 Do you have specific recommendations for how to model this type of behavior with the small strain formulation?

I am not sure of the best approach here. Is this a viscoelastic problem? I suspect you will need to put your boundaries much further away from the region of interest, and use a combination of small strain + reference stresses to balance the initial deformation due to turning on gravity.
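As a sketch of that approach, turning on gravity and supplying reference (initial) stresses in PyLith 2.x cfg syntax looks roughly like the following; the material name `crust` and the spatialdb filename are placeholders for your own setup:

``````
# Sketch only: the material name (crust) and the spatialdb filename are
# placeholders for your own setup.
[pylithapp.timedependent]
gravity_field = spatialdata.spatialdb.GravityField

[pylithapp.timedependent.materials.crust]
db_initial_stress = spatialdata.spatialdb.SimpleDB
db_initial_stress.label = Initial stress
db_initial_stress.iohandler.filename = initial_stress.spatialdb
``````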

Thanks for the swift replies, Brad and Charles.

The problem is elastic and quasi-static.
I had thought of extending the bottom boundaries; the only issue is that this increases the computational cost, since I want to keep my mesh fine so that the stress distribution along the plate interface is computed properly.

Questions:

1. Do we also need to specify initial strains (total-strain-xx, ...) with the ImplicitLgDeform formulation, in addition to the initial stress? How will the result differ if we apply only the initial stress, with or without the initial strain?
2. Also, can you share an example file that uses initial strain (total-strain) as input to PyLith?
3. Will the absorbing dampers BC work in place of a spring restoring BC?

Hi Viven,

If the problem is purely elastic, I’m not sure that you need body forces. They don’t make much difference for purely elastic problems. That also means you probably do not need the small strain formulation (or initial stress/strain). If you can tell us a bit more about the problem you are trying to solve, we might be able to help you with your problem setup.

Cheers,
Charles

Hi Charles,

I am trying to investigate the effect of gravity on the upper plate; the figure below shows a simulation result with gravity. As you can see, the top surface is not flat, so the effect of gravity cannot be estimated just by applying initial stresses.

The issue I am facing is highlighted in the second figure below: at the far bottom end of the upper plate and the slab, gravity is causing XZ shear. This is probably because I have a Dirichlet BC at the bottom. A spring BC would probably have produced subsidence and a uniform increase in stress-ZZ along the bottom boundary.

Thank you
Viven

I am getting the following error when trying to run with ImplicitLgDeform.

``````
sharm760@ln0004 [~/PylithResearch/AGU22/B_11] % pylith B.cfg ImpDef.cfg
>> ImpDef.cfg:23:
-- pyre.inventory(error)
-- pylithapp.timedependent.implicitlgdeform.singleoutput.domain.writer.filename <- 'output/domain.h5'
-- unknown component 'pylithapp.timedependent.implicitlgdeform.singleoutput.domain.writer'
>> ImpDef.cfg:19:
-- pyre.inventory(error)
-- pylithapp.timedependent.implicitlgdeform.singleoutput.zp.writer.filename <- 'output/zp.h5'
-- unknown component 'pylithapp.timedependent.implicitlgdeform.singleoutput.zp.writer'
>> ImpDef.cfg:10:
-- pyre.inventory(error)
-- pylithapp.timedependent.ImplicitLgDeform.output <- '[domain,subdomain]'
-- unknown component 'pylithapp.timedependent.ImplicitLgDeform'
>> ImpDef.cfg:12:
-- pyre.inventory(error)
-- pylithapp.timedependent.ImplicitLgDeform.output.subdomain <- 'pylith.meshio.OutputSolnSubset'
-- unknown component 'pylithapp.timedependent.ImplicitLgDeform.output'
``````

Here is the ImpDef.cfg file.

``````
[pylithapp]

# ----------------------------------------------------------------------
# problem
# ----------------------------------------------------------------------
[pylithapp.timedependent]
formulation = pylith.problems.ImplicitLgDeform

[pylithapp.timedependent.ImplicitLgDeform]
output = [domain,subdomain]

output.subdomain = pylith.meshio.OutputSolnSubset

# ----------------------------------------------------------------------
# output
# ----------------------------------------------------------------------
# Ground surface
[pylithapp.problem.formulation.output.zp]
writer.filename = output/zp.h5

# Domain
[pylithapp.problem.formulation.output.domain]
writer.filename = output/domain.h5

# Crust
[pylithapp.problem.materials.crust.output]
writer.filename = output/crust.h5

# Mantle
[pylithapp.problem.materials.mantle.output]
writer.filename = output/mantle.h5

# Fault
[pylithapp.problem.interfaces.fault.output]
writer.filename = output/fault.h5

# End of file

``````

Could you please point out what I am doing wrong here?
Thank you
Viven

To diagnose this problem, look at the error message and the line numbers given in the cfg file. Start with the first line where an error occurs (`ImpDef.cfg:10` means the error message is associated with line 10 of `ImpDef.cfg`).

When assigning a component to a facility, the name of the component matches the filename of the code (for example, `pylith.problems.ImplicitLgDeform` for the file `pylith/problems/ImplicitLgDeform.py`), while the name of the facility is always lowercase (for example, `pylithapp.timedependent.formulation`).

I believe the following will resolve the problem:

``````
# ----------------------------------------------------------------------
# problem
# ----------------------------------------------------------------------
[pylithapp.timedependent]
formulation = pylith.problems.ImplicitLgDeform

[pylithapp.timedependent.implicitlgdeform]
output = [domain,subdomain]

output.subdomain = pylith.meshio.OutputSolnSubset

# ----------------------------------------------------------------------
# output
# ----------------------------------------------------------------------
# Ground surface
# The name must match the name you used above (output = [domain, subdomain]).
[pylithapp.problem.formulation.output.subdomain]
writer.filename = output/zp.h5

# Domain
[pylithapp.problem.formulation.output.domain]
writer.filename = output/domain.h5

# Crust
[pylithapp.problem.materials.crust.output]
writer.filename = output/crust.h5

# Mantle
[pylithapp.problem.materials.mantle.output]
writer.filename = output/mantle.h5

# Fault
[pylithapp.problem.interfaces.fault.output]
writer.filename = output/fault.h5
``````

However, I am still getting an error with my formulation output.

``````
sharm760@ln0004 [~/PylithResearch/AGU22/B_11] % pylith B.cfg ImpDef.cfg --nodes=4
>> {default}::
-- pyre.inventory(error)
-- timedependent.implicitlgdeform.output.outputsolnsubset.label <- ''
-- Label for group/nodeset/pset in mesh not specified.
pylithapp: configuration error(s)
``````

No idea how to address this.
Viven

The error message is quite clear. You did not specify the nodeset for the subdomain output.

``````
output.subdomain.label = NAME_OF_YOUR_NODESET
``````

I have modified the above problem (image below) to incorporate a linear Maxwell rheology in the mantle, using a composite spatial database. This results in an error.

``````
[2]PETSC ERROR: Caught signal number 3 Quit: Some other process (or the batch system) has told this process to end
[2]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[2]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[2]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
[2]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run
[2]PETSC ERROR: User provided function() line 0 in  unknown file (null)
[3]PETSC ERROR: ------------------------------------------------------------------------
[3]PETSC ERROR: Caught signal number 3 Quit: Some other process (or the batch system) has told this process to end
[3]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[3]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[3]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
[3]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run
[3]PETSC ERROR: User provided function() line 0 in  unknown file (null)
[4]PETSC ERROR: ------------------------------------------------------------------------
[4]PETSC ERROR: Caught signal number 3 Quit: Some other process (or the batch system) has told this process to end
[4]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[4]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[4]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
[4]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run
[4]PETSC ERROR: User provided function() line 0 in  unknown file (null)
[6]PETSC ERROR: ------------------------------------------------------------------------
[6]PETSC ERROR: Caught signal number 3 Quit: Some other process (or the batch system) has told this process to end
[6]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[6]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[6]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
[6]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run
[6]PETSC ERROR: User provided function() line 0 in  unknown file (null)
[0]0:Return code = 0, signaled with Aborted
[0]1:Return code = 0, signaled with Aborted
[0]2:Return code = 0, signaled with Aborted
[0]3:Return code = 0, signaled with Aborted
[0]4:Return code = 0, signaled with Aborted
[0]5:Return code = 0, signaled with Aborted
[0]6:Return code = 0, signaled with Aborted
[0]7:Return code = 0, signaled with Aborted

``````

I am not sure what is wrong here. Here is the entire output log and my parameters file.
parameters.txt (228.9 KB)
pylithlog.txt (43.0 KB)

Also, I am using the following PETSc settings:

``````
[pylithapp.timedependent.formulation]
split_fields = True
matrix_type = aij

# ----------------------------------------------------------------------
# PETSc
# ----------------------------------------------------------------------
[pylithapp.petsc]
malloc_dump =

# Use asm preconditioner
pc_type = asm

# Convergence parameters.
ksp_rtol = 1.0e-8
ksp_atol = 1.0e-10
ksp_max_it = 3000
ksp_gmres_restart = 50

# Linear solver monitoring options.
ksp_monitor = true
#ksp_view = true
ksp_converged_reason = true
ksp_error_if_not_converged = true

# Nonlinear solver monitoring options.
snes_rtol = 1.0e-8
snes_atol = 1.0e-12
snes_max_it = 100
snes_monitor = true
snes_linesearch_monitor = true
#snes_view = true
snes_converged_reason = true
snes_error_if_not_converged = true

# PETSc summary -- useful for performance information.
log_view = true

#snes_view = true
#ksp_monitor_true_residual = true
fs_pc_type = fieldsplit
fs_pc_use_amat = true
fs_pc_fieldsplit_type = schur
fs_pc_fieldsplit_schur_factorization_type = full
fs_fieldsplit_displacement_pc_type = ml
fs_fieldsplit_lagrange_multiplier_pc_type = jacobi
fs_fieldsplit_displacement_ksp_type = preonly
fs_fieldsplit_lagrange_multiplier_ksp_type = gmres

``````

I have tested all the input config files in separate simulations, and they all seem to be fine. The problem apparently lies in the application of viscoelasticity.
Please let me know if you need any other information.

Thank you
Viven

Hey there, it worked. There was an issue with the SpatialGridDB file that was causing the problem.

Best
Viven

To the same problem I added adaptive time stepping using TimeStepAdapt and a total time of 200 years to see the effect of viscoelastic relaxation, and I am getting the following error:

``````
  Linear solve converged due to CONVERGED_ATOL iterations 11
Line search: Using full step: fnorm 2.189275395478e-03 gnorm 2.445774867902e-08
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Caught signal number 15 Terminate: Some process (or the batch system) has told this process to end
[0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
[0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run
[0]PETSC ERROR: User provided function() line 0 in  unknown file (null)
[0]0:Return code = 0, signaled with Aborted
``````

Could you please give some direction as to where it is coming from?

Thank you
Viven

This error message indicates that process 0 received an abort message from MPI that originated on another process. This can happen if a process encounters an error and aborts. In this case, it looks like the error message generated on the process where the error occurred did not get written to the screen; this can happen when running in parallel. Usually the easiest way to debug this type of error is to use a coarse mesh with the same settings and run the simulation on a single process (`--nodes=1`). If the single-process simulation runs, then you can try capturing all output to a log file to see if that gives the error message.
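Concretely, that debugging sequence might look like the following; `coarse.cfg` is a hypothetical coarse-mesh configuration, and `B.cfg` is from your earlier command:

``````
# 1. Rerun with a coarse mesh on a single process (coarse.cfg is a
#    hypothetical coarse-mesh configuration file).
pylith B.cfg coarse.cfg --nodes=1

# 2. If that runs, capture all stdout/stderr to a log file and search it
#    for the underlying error message.
pylith B.cfg coarse.cfg --nodes=1 >& run.log
``````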

I tried reducing the resolution and running the simulation on a single process. It turns out that it starts fine, but at some time step it crashes with the same error mentioned above. I did use `--petsc.start_in_debugger` on the command line, but that didn't give me anything. Is there some other command that can capture the full output log?

Thank you
Viven

If you are running on a single process and not using a batch system, I don’t think you should get a SIGNAL 15 error. Are you using a batch system?

Getting additional output, if it exists, can depend on the MPI implementation and operating system. Usually redirecting all output to a log file with `pylith ARGS >& run.log` works. If you are using a batch system, then you need to consult the documentation for the batch system on how to redirect stdout to a file.

Yes, I am using a Slurm-based scheduler. I have been trying to capture the stdout and stderr output but couldn't.
Here is my command:

``````
pylith B.cfg mesh_4km.cfg viscoelastic.cfg \
  --job.stdout=example1.log \
  --job.stderr=example1.err \
  --nodes=1
``````

Thank you
Viven

If you are not getting stderr and stdout, then please consult with your system administrator on how to get these. See the PyLith manual on how to use `--scheduler.dry` to dump the bash script being run by the scheduler. You may need to make adjustments to redirect stderr and stdout on your system.
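For example, something along these lines (the cfg filenames match your command above; the exact script PyLith generates depends on your system):

``````
# Dump the batch script PyLith would submit, without actually running it:
pylith B.cfg mesh_4km.cfg viscoelastic.cfg --nodes=1 --scheduler.dry

# In the Slurm script you adapt from that dump, standard Slurm directives
# can capture the output (filenames are placeholders):
#   #SBATCH --output=example1.log
#   #SBATCH --error=example1.err
``````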