I have a question about the implementation of the matrix-free GMG solver with the Newton method. In the present implementation, the Newton derivatives are only considered in StokesOperator::local_apply(), which means that only the outer loop of the linear solver uses the Jacobian matrix, while the inner loop (the Chebyshev iterations in the V-cycle multigrid solver/preconditioner) uses the original Stokes matrix. This looks weird. It seems that the original author of the GMG solver planned to add Newton derivatives to the multigrid hierarchy, but did not do it (see the TODO on line 1318 of source/simulator/stokes_matrix_free.cc).
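For clarity, here is a schematic sketch of the structure I mean. This is not ASPECT's actual code: the function name and the tables viscosity, deta_deps (derivative of the viscosity with respect to the strain rate) and old_strain_rate are made up for illustration.

```cpp
#include <deal.II/base/symmetric_tensor.h>
#include <deal.II/base/table.h>
#include <deal.II/lac/la_parallel_vector.h>
#include <deal.II/matrix_free/fe_evaluation.h>
#include <deal.II/matrix_free/matrix_free.h>

using namespace dealii;

// Schematic sketch only (hypothetical names, not ASPECT's actual code):
// where the Newton term enters a matrix-free cell operation.
template <int dim, int degree_v, typename Number>
void local_apply_viscous_block(
  const MatrixFree<dim, Number>                    &data,
  LinearAlgebra::distributed::Vector<Number>       &dst,
  const LinearAlgebra::distributed::Vector<Number> &src,
  const std::pair<unsigned int, unsigned int>      &cell_range,
  const Table<2, VectorizedArray<Number>>          &viscosity,
  const Table<2, SymmetricTensor<2, dim, VectorizedArray<Number>>> &deta_deps,
  const Table<2, SymmetricTensor<2, dim, VectorizedArray<Number>>> &old_strain_rate)
{
  FEEvaluation<dim, degree_v, degree_v + 1, dim, Number> u(data);

  for (unsigned int cell = cell_range.first; cell < cell_range.second; ++cell)
    {
      u.reinit(cell);
      u.gather_evaluate(src, EvaluationFlags::gradients);

      for (unsigned int q = 0; q < u.n_q_points; ++q)
        {
          const SymmetricTensor<2, dim, VectorizedArray<Number>> eps_du =
            u.get_symmetric_gradient(q);

          // Picard part: 2 eta eps(du). This is all the level operators
          // (and hence the Chebyshev smoother) currently see.
          SymmetricTensor<2, dim, VectorizedArray<Number>> stress =
            2. * viscosity(cell, q) * eps_du;

          // Newton part: 2 (d eta / d eps : eps(du)) eps(u_old). In the
          // current implementation this term appears only in
          // StokesOperator::local_apply() on the active level.
          stress +=
            2. * (deta_deps(cell, q) * eps_du) * old_strain_rate(cell, q);

          u.submit_symmetric_gradient(stress, q);
        }

      u.integrate_scatter(EvaluationFlags::gradients, dst);
    }
}
```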
I tried to add the Newton derivatives to the Chebyshev iterations. It is not hard: instead of a Q_0 or Q_1 projection, we can obtain the viscosity and the derivatives on the coarse levels directly with MaterialModel::evaluate(), and then add them to ABlockOperator::inner_cell_operation(). However, it turns out that the convergence rate becomes worse. I have tested the modified code with kaus_2010.prm, gerya_2019_vep.prm and continent_extension.prm (modified a little to use the iterated Newton Stokes solver). Only the first example shows better convergence behavior compared with the main branch, while the other two need more linear iterations to converge in each Newton iteration.
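Roughly, what I did looks like the following sketch (not my actual patch; it assumes we are inside a dim-templated member function of a SimulatorAccess-derived class, with dof_handler, fe_values, level and n_q_points set up elsewhere, and store_level_data() is a hypothetical helper that fills the tables consumed by ABlockOperator::inner_cell_operation()):

```cpp
// Rough sketch: call the material model directly on the cells of one
// multigrid level instead of projecting from the active mesh.
for (const auto &cell : dof_handler.cell_iterators_on_level(level))
  if (cell->is_locally_owned_on_level())
    {
      fe_values.reinit(cell);

      MaterialModel::MaterialModelInputs<dim> in(n_q_points,
                                                 this->n_compositional_fields());
      in.position = fe_values.get_quadrature_points();
      // Temperature, pressure, strain rate, and the compositional fields
      // also have to be evaluated on this (possibly inactive) cell; this
      // requires changing the type of MaterialModelInputs::current_cell
      // from active_cell_iterator to cell_iterator.

      MaterialModel::MaterialModelOutputs<dim> out(n_q_points,
                                                   this->n_compositional_fields());
      out.additional_outputs.push_back(
        std::make_unique<MaterialModel::MaterialModelDerivatives<dim>>(
          n_q_points));

      this->get_material_model().evaluate(in, out);

      const auto *derivatives =
        out.template get_additional_output<
          MaterialModel::MaterialModelDerivatives<dim>>();

      // Store viscosity and the Newton derivative per level, cell, and
      // quadrature point (hypothetical storage helper).
      for (unsigned int q = 0; q < n_q_points; ++q)
        store_level_data(level, cell, q,
                         out.viscosities[q],
                         derivatives->viscosity_derivative_wrt_strain_rate[q]);
    }
```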
This result confuses me. I wonder if there is something I have failed to notice. Could anyone explain why using the original Stokes matrix in the Chebyshev iterations is a better choice?
It has been a while, but I remember that we played with this as well and didn’t have much success.
> It is not hard: instead of a Q_0 or Q_1 projection, we can obtain the viscosity and the derivatives on the coarse levels directly with MaterialModel::evaluate()
This is not exactly easy: In general, evaluate() can use arbitrary solution fields, which are difficult to get on coarser levels. To implement this correctly, one would need to interpolate all fields that are inputs to evaluate(). It is also not clear to me that an evaluation on a coarse cell is a good approximation. An alternative would be to evaluate Newton terms on the active level and interpolate down the multigrid hierarchy.
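Schematically, such a transfer could reuse what stokes_matrix_free.cc already does for the viscosity, via MGTransferMatrixFree::interpolate_to_mg(). A sketch, assuming dof_handler_projection is the (e.g. DG) DoFHandler used for the projected coefficient with multigrid DoFs distributed, and noting that a tensor-valued derivative would have to be transferred one scalar component at a time:

```cpp
#include <deal.II/base/mg_level_object.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/lac/la_parallel_vector.h>
#include <deal.II/multigrid/mg_transfer_matrix_free.h>

using namespace dealii;

// Sketch: evaluate the Newton term on the active level (stored in
// active_derivative), then restrict it to all levels of the hierarchy.
template <int dim>
void transfer_derivative_to_levels(
  const DoFHandler<dim>                                     &dof_handler_projection,
  const parallel::distributed::Triangulation<dim>           &triangulation,
  const LinearAlgebra::distributed::Vector<double>          &active_derivative,
  MGLevelObject<LinearAlgebra::distributed::Vector<double>> &level_derivative)
{
  MGTransferMatrixFree<dim, double> transfer;
  transfer.build(dof_handler_projection);

  level_derivative.resize(0, triangulation.n_global_levels() - 1);

  // interpolate_to_mg() restricts the active-level field to all coarser
  // levels of the multigrid hierarchy.
  transfer.interpolate_to_mg(dof_handler_projection,
                             level_derivative,
                             active_derivative);
}
```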
Thank you for the reply. I agree that an evaluation on a coarse cell may not be a good approximation. I chose this method only because it looked more efficient than interpolating the derivatives down the multigrid hierarchy.
About the implementation: as far as I can tell, we can use the function DoFCellAccessor::get_interpolated_dof_values(), which uses the restriction matrices to interpolate the DoF values to the coarse levels. FEValues::get_function_values/gradients() internally calls this function to compute the values/gradients at the quadrature points of an inactive cell. Thus, all we need to do is change the current_cell in MaterialModelInputs from DoFHandler::active_cell_iterator to DoFHandler::cell_iterator, and then evaluate the material properties in the same way as for active cells. The interpolation of the viscosity down to the coarse levels in the present implementation also uses this function internally.
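As a minimal (serial) illustration of this mechanism, consider the following sketch; evaluate_on_level is a made-up function:

```cpp
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/fe/fe_values.h>
#include <deal.II/lac/vector.h>

using namespace dealii;

// Evaluate a scalar finite element field at the quadrature points of
// level cells, including inactive ones. On an inactive cell,
// get_function_values() obtains the local DoF values through
// DoFCellAccessor::get_interpolated_dof_values(), i.e. by restricting
// the children's values with the restriction matrices.
template <int dim>
void evaluate_on_level(const DoFHandler<dim> &dof_handler,
                       const Quadrature<dim> &quadrature,
                       const Vector<double>  &solution,
                       const unsigned int     level)
{
  FEValues<dim> fe_values(dof_handler.get_fe(), quadrature, update_values);
  std::vector<double> values(quadrature.size());

  for (const auto &cell : dof_handler.cell_iterators_on_level(level))
    {
      fe_values.reinit(cell);
      // Works whether or not the cell is active; note that in a parallel
      // computation this requires all children of the cell to be
      // available on the local processor.
      fe_values.get_function_values(solution, values);
    }
}
```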
Anyway, I think I can leave this idea behind now that I know it has already been tried without success. Thank you again!
I think we only call get_function_values on active cells, so get_interpolated_dof_values is not used (we use the multigrid transfer instead). I don't think get_interpolated_dof_values works in a parallel computation when the current processor does not see all the child cells.