Particle Interpolation and Limiting the Interpolated Values

Mack and I are designing a way to limit the values of the interpolation function derived from the particles for a given property in a given cell. The simplest example, which is our test case, is the ‘Beam’ property from the Bending Beam benchmark with particles: Beam = 1 if the particle lies within the beam at time t = 0 and Beam = 0 otherwise. The values of Beam on the particles remain constant for the entire computation. During the course of the Bending Beam computation the Bilinear Least Squares (BLS) interpolation algorithm will overshoot and undershoot, i.e., produce values > 1 and < 0 respectively, as will the Quadratic Least Squares (QLS) interpolation algorithm.
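To make the limiting idea concrete, here is a minimal sketch of one possible limiter; the names are illustrative, not ASPECT's actual plugin interfaces. It clamps the interpolated values in a cell to the range spanned by the particle values in that cell, which for Beam keeps the field within [0, 1] by construction:

```cpp
#include <algorithm>
#include <vector>

// Illustrative sketch of the limiting idea (hypothetical names, not
// ASPECT's actual interfaces): after the least-squares interpolation
// has produced values at a cell's support points, clamp them to the
// range spanned by the particle values in that cell. For 'Beam' this
// keeps the field within [0, 1], eliminating over- and undershoots.
void limit_to_particle_bounds(const std::vector<double> &particle_values,
                              std::vector<double> &interpolated_values)
{
  if (particle_values.empty())
    return;

  const auto [lo, hi] = std::minmax_element(particle_values.begin(),
                                            particle_values.end());
  for (double &v : interpolated_values)
    v = std::clamp(v, *lo, *hi);
}
```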

My question is a simple one. Prior to placing the new interpolated values for Beam on the support points of the compositional field associated with the property ‘Beam’ at the new time t^{n+1}, are the values on that compositional field still the values for Beam at time t^n, i.e., from the nth time step? Our limiting strategy needs the value of Beam at some point in the cell (e.g., the center), or perhaps the average value of Beam over the cell, at the previous time step.
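For illustration, if a vector holding the compositional field from time step n were available (called old_solution below; whether ASPECT provides such a vector for a particle-backed field is exactly the question), the cell average could be computed in the usual deal.II way. This sketch treats the field as a scalar, which is a simplification of ASPECT's combined system:

```cpp
#include <deal.II/fe/fe_values.h>
#include <deal.II/lac/vector.h>

#include <vector>

// Hypothetical sketch, assuming a vector holding the compositional
// field at time step n exists (old_solution): the cell average is the
// integral of the field over the cell divided by the cell volume,
// computed with an FEValues object already reinit()-ed on the cell.
template <int dim>
double cell_average(const dealii::FEValues<dim> &fe_values,
                    const dealii::Vector<double> &old_solution)
{
  std::vector<double> values(fe_values.n_quadrature_points);
  fe_values.get_function_values(old_solution, values);

  double integral = 0.0;
  double volume   = 0.0;
  for (unsigned int q = 0; q < fe_values.n_quadrature_points; ++q)
    {
      integral += values[q] * fe_values.JxW(q);
      volume   += fe_values.JxW(q);
    }
  return integral / volume;
}
```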

Another way to phrase my question is: "Does the compositional field associated with Beam retain the values of Beam from the previous time step, the same way that uold retains the values of the velocity at the previous time step?"

It seems appropriate to include in this topic the results of a computation that Rene asked Mack to make while reviewing Mack’s version of the BLS interpolation algorithm, which solves the least squares problem with a QR factorization rather than forming the pseudoinverse (i.e., inverting A^T A) as was coded in the original version of BLS. Rene wanted to know if QR was significantly more expensive than the previous approach. (The answer is "not really"; see the table referenced below.)
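For context, both methods solve the same least squares problem, minimizing ||A x - b|| over x; they differ only in the linear algebra. The pseudoinverse approach forms and inverts the normal-equations matrix,

x = (A^T A)^{-1} A^T b,

while the QR approach factors A = QR and solves

R x = Q^T b

by back substitution. Avoiding the explicit formation of A^T A also avoids squaring the condition number, which is the usual numerical argument for preferring QR.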

Mack computed the same problem three times on the same machine with all other settings identical. What I want to point out is that he found nearest neighbor interpolation to be between 3 and 6 times as expensive as cell averaging: as much as 7.2% of the total run time versus 1.2%. I personally do not think nearest neighbor interpolation is more accurate than cell averaging, and therefore it is not worth the additional cost of using it.

A table with his results can be found near the bottom of the conversation associated with the PR:

QR bilinear interpolation #3436

"Does the compositional field associated with Beam retain the values of Beam from the previous time step, the same way that uold retains the values of the velocity at the previous time step?

I’m not entirely sure in this case. The compositional field representing the Beam simply holds the values interpolated from the particles to a compositional field, so it is not equivalent to a model where composition is explicitly tracked with compositional fields rather than particles. If you are able to access the values of composition from the previous time step and they range between ~0 and 1 (i.e., the range for ‘Beam’), then my guess is that they would indeed be the values of the beam interpolated from the particles to the compositional field in the previous time step.

A related question: Can you instead do this with the particle values from the previous time step, assuming they are accessible?

I’m pretty sure the particle values from the previous time step are not available. I think we may have a chance of getting the value of a quantity that is carried on the particles from its compositional field, though.
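As a rough sketch of that last idea: deal.II's point evaluation could read a scalar field at a particle's location. This assumes the field lives in its own scalar DoFHandler (again a simplification of ASPECT's combined system) and that the point lies in a locally owned cell; ASPECT's own particle machinery may offer a better route.

```cpp
#include <deal.II/base/point.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/lac/vector.h>
#include <deal.II/numerics/vector_tools.h>

// Hypothetical sketch: evaluate a scalar field at a particle's
// location with deal.II's point evaluation. Assumes a scalar
// DoFHandler for the field and a point inside a locally owned cell;
// otherwise point_value() throws an exception.
template <int dim>
double field_value_at_particle(const dealii::DoFHandler<dim> &dof_handler,
                               const dealii::Vector<double>  &field,
                               const dealii::Point<dim>      &particle_location)
{
  return dealii::VectorTools::point_value(dof_handler, field,
                                          particle_location);
}
```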