How to Obtain a Steady-State Thermal Conduction Solution in ASPECT?

Hi all,

I am currently working with ASPECT and trying to compute a steady-state thermal conduction solution as the initial temperature field. I found the continental geotherm option in the Initial Temperature Model section of the ASPECT manual, which solves the steady-state conductive equation as described in Turcotte & Schubert (Ch. 4.6) and Chapman (1986). However, this model assumes a layered lithosphere with uniform thermal conductivity and radiogenic heating, which is not applicable to my case where the crust is non-uniform in thickness and where thermal conductivity and radiogenic heating depend on compositional fields.

To address this, I have considered two possible approaches:

  1. Direct Approach: Writing a custom initial temperature plugin that solves the steady-state thermal conduction equation using compositional information. However, ASPECT does not currently provide a built-in solver for this equation alone; instead, it solves the full thermal advection-diffusion equation, which requires a velocity field. Since I am not very familiar with ASPECT and deal.II, implementing this might be quite challenging for me. If you have any suggestions, I'd be happy to try them.
  2. Two-Step Approach: Running an ASPECT model until it reaches steady-state and then extracting the temperature field to use as the initial condition for a second model. However, I encountered difficulties with the ASCII data input format. The ascii data plugin for initial temperature supports only a structured grid (r, phi, theta), which does not directly match the unstructured hexahedral mesh output from my first model. Even when using a uniform grid (without refinement) in Model 1, it does not directly map to the expected format in Model 2.

For small datasets, I could interpolate the temperature field into an appropriate format, but my current model contains ~100 million points and several hundred million degrees of freedom, making 3D interpolation extremely computationally expensive. I attempted an interpolation process, but even after several hours, there was no output.
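
For reference, the ascii data setup I was aiming for looks roughly like this (a sketch; the file name and path are placeholders, and the exact column ordering of the structured grid should be double-checked against the manual, but as far as I understand the data file needs a "# POINTS: n_r n_phi n_theta" header followed by one grid point per line, with the first coordinate varying fastest):

subsection Initial temperature model
  set Model name = ascii data
  subsection Ascii data model
    # placeholder path and file name; the file is a structured
    # (r, phi, theta) grid with a "# POINTS: n_r n_phi n_theta" header
    set Data directory = ./
    set Data file name = initial_temperature_3d.txt
  end
end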

Some additional observations from my attempts:

Test model: I read both the xyz coordinates and the corresponding (r, phi, theta) values from the output, but it is difficult to see a direct connection between the two point distributions.

The expected (r, phi, theta) input appears to form a cone-shaped block of points, whereas the cell layout of the shell model output looks quite different.
(Sorry, due to new-user restrictions I can only attach a single picture.)

Request for Guidance

I would greatly appreciate any suggestions or insights on the following:

  1. Is there an existing way in ASPECT to solve the steady-state thermal conduction equation separately within the initial temperature model?
  2. If not, what would be the best approach to implement such a feature?
  3. Are there efficient methods to transfer a temperature field from an unstructured ASPECT output mesh to a new model’s initial condition, given the limitations of ASCII data input?

Thank you in advance for your time and help!

Ninghui Tian

Hi Tian,

This might not be a complete solution to your problem, but I encountered a similar problem where I wanted my ASPECT initial condition to be the solution of a time-dependent 1D conduction problem with a T- and depth-dependent conductivity. Since I only wanted the solution to a 1D problem (albeit for lots of different plate ages), I decided to write a python script that computes the temperature and writes it out so it can be read in as an ascii data file. It's currently a pull request for the Geodynamic World Builder: "Add temperature models that reads from ascii data file" by jdannberg (Pull Request #830, GeodynamicWorldBuilder/WorldBuilder on GitHub).

However, since you are considering a two-step approach anyway: could you just write a checkpoint and then restart your model from that point with new parameters? Then there would be no need for interpolation.
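
Roughly, that could look like this (a sketch; both runs would need to point to the same output directory, and you can change the other parameters in the restarted run's .prm):

# Model 1: write checkpoints at regular intervals
subsection Checkpointing
  set Steps between checkpoint = 10
end

# Model 2: reuse Model 1's output directory, update the parameters
# you need to change, and resume from the last checkpoint
set Resume computation = true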

Best,
Juliane

Hi Juliane,

Thank you so much for taking the time to respond and for your helpful suggestions! I really appreciate it.

The idea of adding temperature models that read from an ASCII data file is indeed very interesting, and I've been following the development of the World Builder as well. However, I think it might not be directly applicable to my case.

Regarding your suggestion on checkpoint restarts, I had considered this approach before. However, my first model is purely viscous, and I use it only to compute the steady-state thermal conduction solution, which then serves as the initial temperature field for my second model, which is viscoelastic. I am a bit concerned that directly restarting from a checkpoint might introduce inconsistencies.

For example, in my second model, I need to track stress-related compositional fields:

subsection Compositional fields  
  set Number of fields = 9  
  set Names of fields  = ve_stress_xx, ve_stress_yy, ve_stress_zz, ve_stress_xy, ve_stress_xz, ve_stress_yz, ......
end  

Would restarting from a purely viscous model cause inconsistencies in these stress-related quantities, or are there other potential issues I should be aware of?

I would love to hear your thoughts on this! Thanks again for your time and help.

Best,
Tian

Hi Tian,

I agree with @jdannberg that restarting from a checkpoint file seems like a practical approach in this case.

Would restarting from a purely viscous model cause inconsistencies in these stress-related quantities, or are there other potential issues I should be aware of?

A strategy you could employ here is to set the Shear moduli to very high values (ex: 1e50) to approximate purely viscous behavior during the initial thermal equilibration phase. This should result in minimal (effectively 0) accumulation of viscoelastic stress.

During the second phase (model), you can then change the Shear moduli to more realistic values (ex: 1e10-1e11).

Another option that might be of use is the static Compositional field method (documentation), which would allow you to keep selected fields (ex: lithologies representing rock types) fixed in place during the initial equilibration phase.
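
A rough sketch of how both of these could look in the parameter file (the field names are placeholders, and the subsection holding the shear moduli depends on which material model you use):

subsection Compositional fields
  set Number of fields            = 2
  set Names of fields             = upper_crust, lower_crust
  # 'static' keeps these fields fixed in place during the equilibration phase
  set Compositional field methods = static, static
end

subsection Material model
  # assuming the visco plastic material model here; adjust for your setup
  subsection Visco Plastic
    # effectively no viscoelastic stress accumulation; switch to
    # realistic values (1e10-1e11) for the second phase
    set Elastic shear moduli = 1e50
  end
end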

A recent paper by @mibillen and @rfildes uses the latter approach in a novel manner for models with viscoelasticity. Note that they do something different from what you are looking for here, but I recommend reading the paper (it is excellent) nonetheless.

Cheers,
John

Hi John,

Thank you for your detailed response and for sharing these valuable suggestions!

The idea of setting shear moduli to very high values during the initial thermal equilibration phase to approximate purely viscous behavior sounds like a promising approach. I will give this a try and see how it works in my case.

The static compositional field method also seems like a useful way to keep selected fields fixed during the equilibration phase. I will look into this option as well.

I appreciate the recommendation of the paper by @mibillen and @rfildes. Even if their approach differs from mine, I'll certainly read it; it sounds like an excellent resource.

I’ll test these methods and provide feedback on how they work. Thanks again for your time and insights!

Best regards,
Tian

Hi all,

I would like to follow up on my previous discussion regarding computing a steady-state thermal conduction solution as the initial temperature field in ASPECT. I sincerely appreciate the insights and suggestions provided earlier.

As previously discussed, I extracted the steady-state temperature field from Model 1 and used it as the initial condition for Model 2. To avoid any temperature-driven buoyancy effects in the Stokes equation, I set:

set Thermal expansivities = 0.

Additionally, I used the default formulation (custom) without explicitly modifying it. Since Model 1 had no velocity field (zero velocity), there was no advection of compositional fields. Hence, I expected Model 2 to produce the same velocity solution before and after restart, as the only difference should be the temperature field initialization.

Unexpected Discrepancy in Velocity after Restart

However, I observed an unexpected difference in the velocity field when restarting Model 2. Specifically, the RMS velocity changed significantly:

Results Before Restart

Rebuilding Stokes preconditioner...
Solving Stokes system (AMG)... 1000+29 iterations.

Postprocessing:
  Writing graphical output: no-restart/solution/solution-00000
  Topography min/max:       -1.863e-09 m, 1.863e-09 m
  RMS, max velocity:        0.0194 m/year, 0.0603 m/year

Results After Restart

(solution-00002 corresponds to Model 1 running for two timesteps: 0,1)

Rebuilding Stokes preconditioner...
Solving Stokes system (AMG)... 1000+30 iterations.

Postprocessing:
  Writing graphical output: restart/solution/solution-00002
  Topography min/max:       -1.863e-09 m, 1.863e-09 m
  RMS, max velocity:        0.000127 m/year, 0.000477 m/year

This discrepancy is quite surprising to me, as I expected identical velocity solutions.

Potential Causes & Open Questions

I am unsure whether I have missed any necessary settings that could affect the restart behavior. Additionally, I have limited knowledge of what exactly gets stored and restored during a restart.

Some specific questions:

  1. What state variables are saved in the restart files? What parts of the solution are reconstructed upon restart, and what might cause deviations from the original run?
  2. Could this be related to differences in how the linearization point is handled across timesteps?
  3. Does this issue resemble the restart inconsistencies discussed in GitHub Issue #4088 (Different advection system after restart)?

Next Steps & Request for Guidance

I acknowledge that I may not have conducted a sufficiently thorough investigation, and I apologize for any oversight. However, before proceeding further, I would greatly appreciate any guidance on:

  • How to properly diagnose restart-related inconsistencies in ASPECT.
  • Best practices for ensuring consistency between restarted and non-restarted runs.

If there are additional tests or settings that I should check, please let me know. Thank you all for your time and assistance!

Best regards,
Ninghui Tian

Hi @tiannh7,

Can you summarize what changes in the parameter file after the restart (i.e., what changes between model 1 and 2)? After looking back through previous posts, I’m not exactly sure what is changing, which of course is key to determining why the solution is changing significantly after the restart.

Cheers,
John

Hi @John,

Thank you for your response! Below is a summary of the key differences between Model 1 (before restart) and Model 2 (after restart), based on the diff output:

1. Time Settings

  • Start time: Changed from 0 to 1e20.
  • End time: Changed from 1e20 to 1e21.

2. Time Stepping

  • Maximum time step: Previously commented out (# set Maximum time step = 1e3), now explicitly set to 1e3.

3. Boundary Conditions

  • Velocity boundary conditions:
    • Model 1:
      set Zero velocity boundary indicators = outer, inner
      # set Tangential velocity boundary indicators = outer, inner
      
    • Model 2:
      # set Zero velocity boundary indicators = outer, inner
      set Tangential velocity boundary indicators = outer, inner
      
    • This means the velocity boundary condition switched from zero velocity to tangential velocity.

4. Elasticity Parameters

  • Elastic shear moduli:
    • Model 1: 1e77 (single value).
    • Model 2: Different values assigned to multiple layers:
      1e10, 1e10, 1e10, 1e10, 1e10, 1e10, 1e10, 1e10, 1e11, 1e11
      

5. Thermal Expansion

  • Thermal expansivities: Set to 0. in Model 2, which was not explicitly set in Model 1.

6. Gravity Settings

  • Magnitude:
    • Model 1: 0.
    • Model 2: 3.71

7. Simulation Steps

  • End step:
    • Model 1: 1
    • Model 2: 11

8. Checkpoint Settings

  • Steps between checkpoint:
    • Model 1: 1
    • Model 2: 0

Possible Impact on Restart Behavior

Given these changes, the most significant factors that could affect the velocity solution after the restart include:

  1. Switching boundary conditions from zero velocity to tangential velocity.
  2. Changing the elastic shear moduli values.
  3. Setting thermal expansivity to zero, which affects buoyancy forces.
  4. Enabling gravity with a nonzero magnitude.

Please let me know if you need any additional details or if I should test specific scenarios to isolate the cause of the discrepancy. Thanks again for your help!

Best regards,
Ninghui Tian

Hi @tiannh7,

Thanks for the summary of the changes between the models. I think the changes in the velocity field after the restart make sense, given that the boundary conditions, gravity, and thermal expansivity are changing.

To take a step back, my understanding of what you want to achieve is as follows:

  1. Implement a laterally varying lithospheric structure and then solve for the steady-state conductive profile. Do not accumulate viscoelastic stress in this step.
  2. After step 1, change the boundary conditions and allow deformation to evolve.

As discussed earlier, not having viscoelastic stresses accumulate in step 1 can be achieved by setting the shear modulus to a very high value, and then switching these to realistic values during the restart.

The crux of the problem is how to just solve the conduction part of the energy equation (i.e., do not include advective terms).

A few additional ideas for how to address this in the initial model step (i.e., before restart):

  1. Use set Nonlinear solver scheme = single Advection, no Stokes, which requires inputting a prescribed Stokes solution for velocity and pressure. You could set the prescribed velocity to 0 everywhere and use a simple depth-dependent pressure profile (see the sketch after this list).
  2. Set all of the background (reference) densities to the same value everywhere and set the thermal expansivity to 0. If the density is constant throughout the model domain, there should be no flow field. Alternatively, set the viscosities to a very high constant value, such that the flow velocities approach zero. However, note that I recall there being a potential issue with the Stokes solver when the velocities approach 0 throughout the model domain.
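
For the first option, a minimal sketch of what that could look like (subsection and parameter names should be double-checked against the manual; the pressure function here is just a constant placeholder rather than a real depth-dependent profile):

set Nonlinear solver scheme = single Advection, no Stokes

subsection Prescribed Stokes solution
  set Model name = function
  subsection Velocity function
    # zero velocity everywhere (3D, components separated by semicolons)
    set Function expression = 0; 0; 0
  end
  subsection Pressure function
    # replace with a simple depth-dependent profile if needed
    set Function expression = 0
  end
end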

Does either of the above approaches seem tractable, or am I off in what you are hoping to achieve?

Cheers,
John

Hi @jbnaliboff

Thank you so much for your detailed and thoughtful guidance—it directly addresses the core of the problem I’m trying to solve.

Apologies for the confusion in my earlier message. To clarify, the inconsistency I mentioned refers to the difference between:

  1. Model 2 (run independently from scratch), and
  2. Model 2 (run after restarting from Model 1),

specifically at time step 1. In both cases, I have set the thermal expansivity to zero, so the buoyancy force on the right-hand side arises solely from compositional anomalies, not temperature. Under this configuration, I would expect identical results between the two approaches, since they share the same initial temperature field (loaded from Model 1).

Regarding your suggestion to use:

set Nonlinear solver scheme = single Advection, no Stokes

and prescribe velocity and pressure—this is a great idea. My assumption was that using:

set Zero velocity boundary indicators = outer, inner

along with:

set Gravity magnitude = 0.

should effectively ensure zero velocity throughout the domain. Indeed, the log outputs during Model 1 confirm this:

Solving Stokes system (AMG)... 0+0 iterations.
...
RMS, max velocity:        0 m/year, 0 m/year

Nevertheless, I’ll experiment with your approach to see whether it improves consistency or helps isolate the issue further.

Additionally, I have a follow-up question: Is it possible to specify Dirichlet boundary conditions for temperature not only on the outer and inner boundaries, but also at specific internal locations, such as at the lithosphere-asthenosphere boundary (LAB)? For example, could I impose a fixed temperature at a certain depth within the model domain, and have that condition contribute to the stiffness matrix and RHS vector?

Thank you again for your help and time!

Best,
Ninghui Tian

Hi @tiannh7,

Additionally, I have a follow-up question: Is it possible to specify Dirichlet boundary conditions for temperature not only on the outer and inner boundaries, but also at specific internal locations, such as at the lithosphere-asthenosphere boundary (LAB)? For example, could I impose a fixed temperature at a certain depth within the model domain, and have that condition contribute to the stiffness matrix and RHS vector?

Yes, it is possible to write a plugin that allows you to constrain internal degrees of freedom. An example of how to do this for velocity DOF in a subduction corner flow problem can be found here, within the prescribed_velocity cookbook folder

Cheers,
John

Hi @jbnaliboff

Yes, it is possible to write a plugin that allows you to constrain internal degrees of freedom. An example of how to do this for velocity DOF in a subduction corner flow problem can be found here, within the prescribed_velocity cookbook folder

Thank you very much for your helpful response and for pointing me to the example in the prescribed_velocity cookbook folder. I truly appreciate your time and support. :)

Best regards,

Ninghui

Hi @jbnaliboff,

Thank you again for your earlier guidance!

I have successfully tested the internal temperature constraint plugin by modifying the shell_simple_3d.prm file. Specifically, I increased the initial global refinement and added the new plugin using:

set Additional shared libraries = ./libprescribed_internal_temperature.release.so
set Prescribe internal temperature = true

subsection Prescribed internal temperature
  set Coordinate system = spherical
  subsection Indicator function
    set Variable names      = r, phi, theta, t
    set Function expression = if((r>5265e3)&(r<5266e3), 1, 0)
  end

  subsection Temperature function
    set Variable names      = r, phi, theta, t
    set Function expression = 1777
  end
end

However, I noticed a significant visual difference in the results depending on the value of the Interpolate output parameter (true vs. false); the latter seems closer to what I want to see. With interpolation enabled, the temperature field appears more “blurred” and intermittent in the visualization, whereas disabling it reveals sharper, cell-wise features, including the prescribed internal temperature layer.




I’m unsure whether this is the expected behavior or a visualization artifact caused by the postprocessing setup. I hesitate to start a new thread since it seems directly tied to my ongoing test of internal DOF constraints, but I would appreciate any clarification on whether this is just a visualization issue or something more fundamental.

Thanks again for all your help!

Best regards,
Ninghui

Hi @tiannh7,

Quoting from the manual parameter description, what Interpolate output = true does is increase “the spatial resolution in each dimension by a factor equal to the polynomial degree used for the velocity finite element (usually 2)”.

I think the issue you are seeing is that the imposed internal temperature boundary is not well resolved, so further interpolating the solution results in that feature being blurred.

I would increase the resolution of the mesh so that there are at least 2 cells within the imposed internal temperature boundary, and then see how the two output methods compare. Note that a related parameter to test is Write higher order output.
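
For reference, both parameters live under the visualization postprocessor, roughly:

subsection Postprocess
  subsection Visualization
    # compare the output with and without interpolation, and also try
    # higher-order output once the internal layer is better resolved
    set Interpolate output        = true
    set Write higher order output = true
  end
end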

Cheers,
John

Hi @jbnaliboff,

Thank you again for your helpful explanation!

I have a quick follow-up question to clarify some of the underlying concepts.

  1. Based on the temperature profiles from my visualization, I can clearly see that the internal temperature constraint is being applied, so I’m wondering: is the interpolation issue purely a visualization artifact, or does it reflect something more fundamental in the numerical solution?
  2. Conceptually, I initially designed the indicator function like this: if((r > 5265e3) & (r < 5266e3), 1, 0), because I intended to impose the internal temperature on a single layer of nodes, similar to how boundary conditions are applied. However, it seems that internal constraints must act over entire cells, and even require at least two cells to resolve the feature clearly. I'm not entirely sure why that is necessary, and I would really appreciate any clarification on why a full layer of cells (or more) is required, instead of just a thin node-defined layer.

Thanks again for your time and support!

Best,
Ninghui