Authors: Unknown user (ge32dur), Hritik Singh

Supervisor: Sebastian Schopper

Introduction

Motivation

Complex and large numerical models require substantial computational power and large amounts of memory. Model Order Reduction (MOR) techniques make it possible to reduce these requirements at the cost of approximating the full system. If this reduced model is good, that is, if the reduced subspace on which it is built is chosen well, it can effectively replace the original model. However, the reduced model becomes invalid when parameters of the original model, such as material properties, the geometry, or boundary conditions, are changed. Some applications, like optimization or navigation, require repeated model evaluations over a large range of such parameter values. A naïve approach would be to generate a reduced order model (ROM) at each queried point in the parameter space. This is a costly task requiring repeated computations with the full order system, and it is therefore generally infeasible.

Parametric Model Order Reduction (pMOR) helps to overcome these problems. In pMOR, instead of generating a ROM at each queried point, the parameter space is sampled at different parameter combinations to generate a so-called "database" of reduced quantities. From this database, an approximation of the result for a parameter not contained in the sample points can be obtained. The benefit of pMOR becomes apparent for sufficiently large systems when the offline and online costs are compared. Offline costs comprise the computational power and time needed to build the database of reduced quantities, while online costs comprise the computational power and time needed to obtain the final solution for a specific parameter. While offline costs may be considerable, they are a good investment if the online costs are small compared to the cost of solving the full order system individually for each required parameter. In general, pMOR methods can be split into two types: Global pMOR and Local pMOR [1].

Tasks and Goals

In this project, the overall aim was to compare, in terms of computational time and accuracy, these two methods with each other, with non-parametric MOR methods, and with a full-scale solution. To do so, the following tasks were necessary:

  1. Get familiar with projection-based MOR and pMOR.
  2. Implement the second-order Iterative Rational Krylov Algorithm (SO-IRKA).
  3. Implement global pMOR.
  4. Implement local pMOR.
  5. Perform parametric study to compare different methods.

This wiki page serves to present a quick overview of the theory behind the methods used and to discuss a series of results obtained with those methods. Fortunately, much of the theoretical and background content on which this project is based has already been explained in detail in the Modelling and Simulation in Structural Mechanics Wiki, of which this very page is a part. Therefore, to save time and make use of the linking capabilities of wikis, pages relevant to the content are linked, just like in the previous sentence.

All the code for this project can be found at the following repository: Global_Local_pMOR.git.

Theoretical Background

Linear Time Invariant Dynamic Systems

The system to be solved in this project (and most projects that are part of this course) is the case of a linear time-invariant dynamic system, described by the following differential equation of second order: 

(1) \mathbf{M} \ddot{\mathbf{x}}(t)+\mathbf{C} \dot{\mathbf{x}}(t)+\mathbf{K} \mathbf{x}(t)=\mathbf{f}(t)

This expression is also known as the equation of motion. The system matrices describe mass, damping, and stiffness and are denoted, respectively, by \mathbf{M}, \mathbf{C}, \mathbf{K} \in \mathbb{R}^{n \times n}, while the load on the system is described by \mathbf{f}(t) \in \mathbb{R}^n. Finally, the time-dependent degrees of freedom are displacement, velocity, and acceleration, denoted by \mathbf{x}, \dot{\mathbf{x}}, \ddot{\mathbf{x}} \in \mathbb{R}^n.

In order to work with this expression in the frequency domain, which facilitates finding the solution to this system, a Fourier transformation needs to be applied, making the degrees of freedom dependent on complex frequency s \in \mathbb{C}:

(2) s^2 \mathbf{M x}(s)+s \mathbf{C x}(s)+\mathbf{K x}(s)=\mathbf{f}(s)
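For illustration, the following is a minimal MATLAB sketch of how Eq. (2) can be evaluated directly over a sweep of frequencies (a full-order harmonic analysis, used later as the reference solution). The matrices, load vector, and frequency range below are placeholders chosen purely for illustration; in the project they come from the FEM beam model of [3].

% Minimal sketch: full-order harmonic analysis of Eq. (2).
% M, C, K and f are placeholders; in the project they come from the FEM beam model [3].
n = 300;                                  % illustrative system size
M = eye(n);  C = 0.01*eye(n);  K = 2*eye(n);
f = zeros(n,1);  f(end) = 1;              % unit load on the last degree of freedom
omega = linspace(1, 5000, 200);           % frequency sweep in rad/s (illustrative)
X = zeros(n, numel(omega));
for j = 1:numel(omega)
    s = 1i*omega(j);                      % complex frequency s = i*omega
    X(:,j) = (s^2*M + s*C + K) \ f;       % solve Eq. (2) at this frequency
end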

Second-Order Iterative Rational Krylov Algorithm (SO-IRKA)

MOR methods can generally be grouped into three categories: reduction using physical coordinates, reduction using generalized coordinates, and subspace methods. This project focuses on the third type, which is used both in MOR and pMOR. In subspace methods, the full system is projected onto a lower-dimensional subspace in order to obtain a solution. The idea is to find a certain number r of poles of the system's transfer function and interpolate between them. Krylov methods, which use the Galerkin projection, do so implicitly; that is to say, they do not explicitly construct an interpolation polynomial. The advantage of Krylov methods is that the only calculations needed are matrix-vector multiplications. For sparse systems, this means that the numerical complexity is of order \mathscr{O}\left(r^2 n\right). There are no singular value decompositions (SVD), eigenvalue decompositions, or inversions of the full system involved, which would bring the complexity up to \mathscr{O}\left( n^3\right).

Simple Krylov methods use a series of expansion points s_i \in \mathbb{C} to create a projection matrix using the following formula:

(3) \mathbf{V}=\left[\left(s_1^2 \mathbf{M}+s_1 \mathbf{C}+\mathbf{K}\right)^{-1} \mathbf{f},\left(s_2^2 \mathbf{M}+s_2 \mathbf{C}+\mathbf{K}\right)^{-1} \mathbf{f}, \ldots,\left(s_r^2 \mathbf{M}+s_r \mathbf{C}+\mathbf{K}\right)^{-1} \mathbf{f}\right]
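As a minimal MATLAB sketch (assuming the system matrices M, C, K, the load vector f, and a vector s0 of expansion points are already available), the basis of Eq. (3) and the corresponding Galerkin projection could be built as follows; the orthonormalization step is an implementation detail added here for numerical stability.

% Sketch: projection basis V from Eq. (3) and Galerkin-projected system.
r = numel(s0);                            % number of expansion points
V = zeros(size(M,1), r);
for j = 1:r
    V(:,j) = (s0(j)^2*M + s0(j)*C + K) \ f;
end
V  = orth(V);                             % orthonormal basis (drops dependent columns)
Mr = V'*M*V;  Cr = V'*C*V;  Kr = V'*K*V;  % reduced system matrices
fr = V'*f;                                % reduced load vector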

However, the choice of expansion points can have a great effect on the adequacy of the Krylov subspace, which is why an iterative method, namely the second-order iterative rational Krylov algorithm (SO-IRKA), is a more appropriate choice. In this project, the initial expansion points have been chosen by sampling equidistantly from the frequency range over which the harmonic analysis is performed and adding their complex conjugates, yielding 2r expansion points.

Using this algorithm, the largest r eigenvalues of the reduced system are then used as the new expansion points until convergence is reached.
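A minimal MATLAB sketch of this fixed-point loop is given below. It assumes the full matrices M, C, K and load vector f are available, uses a hypothetical helper buildBasis that wraps the basis construction shown above, and chooses the frequency range, order, tolerance, and iteration limit arbitrarily; the selection of the new expansion points follows the description in this section.

% Sketch of the SO-IRKA loop: update expansion points from the reduced eigenvalues.
wMin = 1;  wMax = 5000;  r = 5;           % illustrative frequency range and order
tol = 1e-4;  maxIter = 50;                % illustrative convergence settings
w0 = linspace(wMin, wMax, r);             % equidistant initial frequency samples
s0 = [1i*w0, conj(1i*w0)];                % expansion points plus conjugates (2r values)
for it = 1:maxIter
    V  = buildBasis(M, C, K, f, s0);      % hypothetical helper: Eq. (3) + orth()
    Mr = V'*M*V;  Cr = V'*C*V;  Kr = V'*K*V;
    lam = polyeig(Kr, Cr, Mr);            % eigenvalues of the reduced second-order pencil
    [~, idx] = sort(abs(lam), 'descend');
    sNew = lam(idx(1:numel(s0)));         % largest reduced eigenvalues as new points
    converged = norm(sort(sNew) - sort(s0(:))) / norm(s0) < tol;
    s0 = sNew;
    if converged, break; end
end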

Global Parametric Model Order Reduction

In the case of pMOR, the equation of motion, after the Fourier transform, depends not only on the frequency domain but also on the parameter domain:

(4) s^2 \mathbf{M}(\mathbf{p}) \mathbf{x}(s, \mathbf{p})+s \mathbf{C}(\mathbf{p}) \mathbf{x}(s, \mathbf{p})+\mathbf{K}(\mathbf{p}) \mathbf{x}(s, \mathbf{p})=\mathbf{f}(s, \mathbf{p})

In global pMOR, the reduced bases \mathbf{V}_k obtained at the sampling points \mathbf{p}_k are concatenated into one global basis matrix \mathbf{V}. Rank-deficient directions of this matrix are removed using an SVD to obtain the reduced basis \mathbf{V}_r. How to choose the number r is a non-trivial question; the singular values of the SVD are usually used as the cutoff criterion. They may be filtered using a simple threshold such as 10^{-3} (as done in this project), or by keeping a certain percentage (usually around 99%) of the energy of the system as represented by the sum of the singular values. Overall, global pMOR is easy to implement, but the parametric reduced model grows with the number of samples. Additionally, it requires the reduced bases at all sample points to be computed on the same mesh.
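A minimal MATLAB sketch of this step is shown below, assuming the local bases V1, V2, V3 from the sample points are already available. Whether the 10^{-3} threshold is applied to absolute or to normalized singular values is an implementation choice; relative values are used in this sketch.

% Sketch: assemble the global basis and truncate it via SVD.
Vall = [V1, V2, V3];                      % concatenated local bases (illustrative)
[U, S, ~] = svd(Vall, 'econ');
sv = diag(S);
r  = nnz(sv / sv(1) > 1e-3);              % keep directions above the singular-value threshold
Vr = U(:, 1:r);                           % global reduced basis
% For a queried parameter p, the full matrices are reassembled and projected once:
% Mr = Vr'*M(p)*Vr, Cr = Vr'*C(p)*Vr, Kr = Vr'*K(p)*Vr, fr = Vr'*f(p).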

Local Parametric Model Order Reduction

Local pMOR offers three different possibilities: the interpolation of the reduced basis, which is very inefficient; the interpolation of the transfer function, which requires the most work in the offline phase but is the fastest in the online phase; and, finally, the interpolation of the reduced matrices, which is the all-around best option and the one implemented in this project.

In local pMOR, both the reduced bases and the reduced matrices are obtained at the sample points \mathbf{p}_k. This means that each set of reduced matrices relates to a different reduced basis, making direct interpolation between them impossible. To solve this problem, the concatenated bases are used to transform all the reduced matrices to a common generalized coordinate system \mathbf{R}. It is important to note that, in contrast to global pMOR, the reduction size r is chosen beforehand rather than determined from the singular values of the SVD. Thus, given an SVD of the concatenated basis

(5) \left[\mathbf{V}_1, \mathbf{V}_2, \ldots, \mathbf{V}_K\right]=\mathbf{U} \mathbf{\Sigma} \mathbf{Y}^T,

the first r columns of the matrix U are chosen as the basis vectors of the generalized coordinate system R. The system matrices are then transformed using

(6) \mathbf{T}_k=\left(\mathbf{R}^T \mathbf{V}_k\right)^{-1}

into

(7) \tilde{\mathbf{K}}_r\left(\mathbf{p}_k\right)=\mathbf{T}_k^T \mathbf{K}_r\left(\mathbf{p}_k\right) \mathbf{T}_k, \quad \tilde{\mathbf{C}}_r\left(\mathbf{p}_k\right)=\mathbf{T}_k^T \mathbf{C}_r\left(\mathbf{p}_k\right) \mathbf{T}_k, \quad \tilde{\mathbf{M}}_r\left(\mathbf{p}_k\right)=\mathbf{T}_k^T \mathbf{M}_r\left(\mathbf{p}_k\right) \mathbf{T}_k, \\ \qquad \tilde{\mathbf{f}}_r\left(\mathbf{p}_k\right)=\mathbf{T}_k^T \mathbf{f}_r\left(\mathbf{p}_k\right), \quad \tilde{\mathbf{G}}_r\left(\mathbf{p}_k\right)=\mathbf{G}_r\left(\mathbf{p}_k\right) \mathbf{T}_k.
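In MATLAB terms, the transformation of the reduced quantities at one sample point might look like the following sketch, where Vk, Kk_r, Ck_r, Mk_r and fk_r are assumed to be available from the sampling step, V1, V2, V3 stand in for the concatenated local bases, and r is the reduction size chosen beforehand.

% Sketch of Eqs. (5)-(7): transformation to the common coordinate system R.
[U, ~, ~] = svd([V1, V2, V3], 'econ');    % Eq. (5), SVD of the concatenated bases
R  = U(:, 1:r);                           % first r left singular vectors span R
Tk = inv(R.' * Vk);                       % Eq. (6); small r x r matrix, so inv() is acceptable
Kk_t = Tk.' * Kk_r * Tk;                  % Eq. (7): transformed reduced stiffness
Ck_t = Tk.' * Ck_r * Tk;                  %          transformed reduced damping
Mk_t = Tk.' * Mk_r * Tk;                  %          transformed reduced mass
fk_t = Tk.' * fk_r;                       %          transformed reduced load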

The entries of these transformed matrices are then interpolated element-wise for a given parameter value, and the resulting reduced system can finally be solved [2]. In the implementation of this project, cubic splines were used for the interpolation. Other interpolation methods may also be used, as discussed in the Future Work section of this wiki page.
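As a sketch of the interpolation step, assume the transformed stiffness matrices have been stacked into an r x r x K array Kt, with pk holding the K parameter samples and pq the queried parameter value (these names are illustrative, not the project's variable names):

% Sketch: element-wise cubic-spline interpolation of the transformed reduced matrices.
Kq = zeros(r, r);                         % interpolated stiffness at the queried parameter
for i = 1:r
    for j = 1:r
        Kq(i,j) = spline(pk, squeeze(Kt(i,j,:)), pq);   % cubic spline through the K samples
    end
end
% Mq, Cq and fq are obtained in the same way; the small interpolated system
% (s^2*Mq + s*Cq + Kq) x = fq is then solved over the frequency sweep.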

Local pMOR via interpolation of the reduced matrices has a clear advantage over global pMOR in that the reduced model does not grow with the number of sample points, since the reduction size r remains fixed. Furthermore, even though its offline phase is more intensive, its online phase is highly efficient, as it only needs to interpolate the reduced matrices and solve a small system.

Methodology

The methodology used in this project can be summarized in the following steps (a condensed driver sketch is given after the list):

  1. Generate a FEM model of the Timoshenko cantilever beam, based on [3], at different parameter samples (e.g. different beam lengths and thicknesses) to obtain the system matrices.
  2. Perform MOR at each parameter sample using SO-IRKA.
  3. Collect the reduced model bases to obtain a parametric model.
  4. Reduce the parametric model using either the global or the local pMOR technique.
  5. Solve the reduced model at the required parameter value.
  6. Obtain the frequency response (harmonic analysis) for a sweep of frequencies.
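The following condensed MATLAB driver sketch ties these steps together. Every function name in it (assembleBeam, soIrka, buildGlobalRom, buildLocalRom, evaluateRom) is a hypothetical placeholder standing in for the corresponding routines in the project repository, and the queried parameter and frequency values are only examples.

% Condensed driver sketch of the methodology (all function names are hypothetical).
P = [0.214 0.3 0.4 0.5 0.6 0.75 0.9 1.25];        % parameter samples (beam length in m)
for k = 1:numel(P)
    [M, C, K, f] = assembleBeam(P(k));             % step 1: FEM beam model per [3]
    [Vk{k}, Mr{k}, Cr{k}, Kr{k}, fr{k}] = soIrka(M, C, K, f);   % step 2: SO-IRKA
end
romGlobal = buildGlobalRom(Vk, P);                 % steps 3-4, global variant
romLocal  = buildLocalRom(Vk, Mr, Cr, Kr, fr, P);  % steps 3-4, local variant
pq    = 1.0;                                       % step 5: queried beam length L = 1 m
omega = linspace(1, 5000, 400);                    % step 6: frequency sweep in rad/s
Hglobal = evaluateRom(romGlobal, pq, omega);
Hlocal  = evaluateRom(romLocal,  pq, omega);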

Results

To generate results, simulations have been carried out under the following conditions:

  • Parameter – Beam length (L)
  • Parameter sample points (in meters): P = [0.214 \space 0.3 \space 0.4 \space 0.5 \space 0.6 \space 0.75 \space 0.9 \space 1.25]
  • All models have been evaluated at a beam length of L = 1 m and a beam thickness of H = 0.1 m.
  • 100 elements have been used to discretize the beam, resulting in a full-order system of size 300 (3 DOFs each).

Observations:

  • All models give approximately the same results over a wide frequency range.
  • Accuracy of reduced-order models decreases at high frequencies.

Extrapolation 

A study has been done to compare how well each technique performs when the extrapolation of system matrices is required outside the parameter range. This has been done by evaluating all models at L = 1.5m and 100 elements. The following observations have been made:

  • The accuracy of all reduced-order models is worse than in the interpolation case before.
  • The accuracy of the reduced-order models decreases even further at high frequencies (>4000 rad/s).
  • Global pMOR performs better than local pMOR, since it is less sensitive to the choice of parameter samples.
  • The accuracy of local pMOR is the worst, since it is based on interpolation and is therefore unable to extrapolate the results.

Offline Computation Time

A performance study has been carried out to compare the offline computation time of the different MOR techniques. The offline computation time is defined as the time taken to assemble the parametric model and reduce it. It can be observed from the corresponding graph that the offline computation time is higher for local pMOR, since it requires additional steps, such as the transformation to generalized coordinates and the setup of the interpolation, compared to the other techniques.

The reduced model sizes obtained after pMOR were r_{Global} = 24 and r_{Local} = 10.

Online Computation Time

A similar performance study has been done for online computation time as well. The online computation time is defined as the time taken to solve and compute the frequency response of the reduced system. It can be observed that:

  • The online time is higher for global pMOR, since the reduced system is larger than for local pMOR.
  • The number of elements does not affect the size of the reduced system and therefore does not affect the online times either.
  • The variations in the global pMOR online time are probably due to processor usage by other processes.

Total Computation Time

Total computation time has been calculated by adding offline computation time to online computation time. The following observations can be made: 

  • For small-sized systems, solving the full order system takes less time than the pMOR techniques due to the overhead involved in setting up the MOR.
  • As the number of elements increases, both local and global pMOR take considerably less time than the full order system.
  • Local pMOR takes more time than global pMOR here because only one online computation has been performed; with repeated online evaluations, local pMOR would be faster.
  • SO-IRKA takes the least time because it performs MOR for just a single parameter, which does not scale easily to repeated parameter queries.

Reduced bases and beam stiffness

A parametric study has been performed by changing the beam thickness (H) instead of the beam length this time. The beam thickness has been reduced to H = 0.01m to decrease the beam stiffness, and the size of the reduced order model for local pMOR has been changed to r = 5.

  • The less stiff beam exhibits many modes (peaks) in the considered frequency range.
  • Global pMOR is more accurate because the size of the reduced order system is determined by the SVD truncation.
  • Local pMOR with a reduced system size of r = 5 can only represent 5 modes.

Conclusions

In this project, both pMOR techniques (global and local) have been successfully implemented for different choices of parameters. These pMOR techniques are quite effective and result in considerable resource savings, especially if the parameter range is chosen appropriately.

Furthermore, local pMOR outperforms the other MOR techniques for applications like optimization or navigation that require repeated model evaluations in the online phase. The size of the reduced-order model is the smallest for local pMOR, and the more computationally intensive tasks can be carried out beforehand in the offline phase. This is especially true if no extrapolation is involved.

Future Work

Sampling

During the presentation of the results, it was suggested that it would be worthwhile to study the effect of changing the range of the sample points on the results. In general, for both of the parameters studied (length and thickness), the range was chosen between two "realistic" values. For instance, for the length, the range was between 0.214 and 1.25 meters with roughly equidistant values chosen. It would be interesting to see how well MOR and pMOR methods would perform if the parameter range were orders of magnitude larger or smaller.

Furthermore, the effect of the sampling method was not explored, even though it has a clear impact from the very beginning of the process. In all cases, the sample points were chosen by hand with little variation in the distance between them. They were not, however, consistently equidistant, as a 1D full factorial design would require (see the length parameter), nor were they completely random. There is therefore probably some implicit bias in the chosen values. It would be worthwhile to study the accuracy of MOR and pMOR methods when using a variety of sampling methods.

Interpolation

Another aspect of these methods, particularly of pMOR, that has been somewhat glossed over but is key to the accuracy of the results, is the interpolation. For this project, cubic splines were used for several reasons, including the fact that they are relatively easy to understand and that MATLAB provides a fully featured built-in function for them. The field of optimization, however, offers a more statistically robust approach to interpolation in the form of Kriging/Gaussian process regression, which is most often used for surrogate modelling. With this method, the output is not given as a single deterministic value; instead, the outputs are modelled as realizations of a stochastic field, which also provides a measure of uncertainty. This and other approaches to interpolation could lead to improved, or at the very least more informed, results.

References

[1] P. Benner, S. Gugercin, and K. Willcox, A survey of projection-based model reduction methods for parametric dynamical systems, SIAM Review, 57 (2015)

[2] H. Panzer, J. Mohring, R. Eid, and B. Lohmann, Parametric model order reduction by matrix interpolation, at – Automatisierungstechnik, 58 (2010)

[3] H. Peuscher, J. Hubele, R. Eid, and B. Lohmann, Generating a parametric finite element model of a 3D cantilever Timoshenko beam using MATLAB (2009)





