Authors: Ho Kiu Holdee Pang, Matthew Chi Hung Lau, Feifan Li

Supervisor: Sebastian Schopper


1. Introduction

Computing large simulation models, or solving large models, requires considerable effort in practice, which is why Model Order Reduction (MOR) techniques have been studied heavily in recent years, not least because of their applicability in the machine learning domain. MOR is also highly beneficial for structural dynamics problems, where large models with a high number of DoFs are common. This project aims to implement three different MOR methods in the bm-fem Matlab code to enable future usage.

2. Model Order Reduction

Model Order Reduction techniques project the full system matrices, which are usually very large, onto a basis of reduced size. The algorithm aims to capture the dominant information of the full system, so that the reduced model retains this important information while discarding what is irrelevant or less important. For structural dynamics problems, the equation of motion is given by:

\mathbf{M} \ddot{\mathbf{w}}+\mathbf{C} \dot{\mathbf{w}}+\mathbf{K} \mathbf{w}=\mathbf{f}(t)

where \mathbf{w} is the solution vector, containing the degrees of freedom (DoFs) of the problem, \mathbf{M} is the mass matrix, \mathbf{C} the damping matrix, \mathbf{K} the stiffness matrix and \mathbf{f} the force vector. After transforming from the time into the frequency domain, the equation of motion reads:

s^2 \mathbf{M} \mathbf{x}(s)+s \mathbf{C} \mathbf{x}(s)+\mathbf{K} \mathbf{x}(s)=\mathbf{f}(s)

For problems with a large number of nodes, and therefore a large number of DoFs, solving this equation after discretization takes up a lot of computational time and resources. To obtain the reduced model, the system matrices are therefore projected onto a basis V of reduced size using the following equations:

\mathbf{M}_r=\mathbf{V}^H \mathbf{M V}, \mathbf{C}_r=\mathbf{V}^H \mathbf{C V}, \mathbf{K}_r=\mathbf{V}^H \mathbf{K V}, \mathbf{f}_r=\mathbf{V}^H \mathbf{f}

Different methods have been proposed to generate this basis V, and the algorithms used to construct it are the focus of this project, where three different families of methods will be implemented, namely modal truncation, POD and Krylov subspace methods. After the projection, the reduced matrices are obtained and the system can be solved with a significant speedup.
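To make the projection step concrete, the following is a minimal Python/NumPy sketch of the equations above (the project itself is implemented in MATLAB inside bm-fem; the function name and setup here are purely illustrative):

```python
import numpy as np

def project_system(M, C, K, f, V):
    """Galerkin projection of a second-order system onto a basis V (n x r).

    Returns M_r = V^H M V, C_r = V^H C V, K_r = V^H K V (each r x r)
    and the reduced load f_r = V^H f.
    """
    Vh = V.conj().T
    return Vh @ M @ V, Vh @ C @ V, Vh @ K @ V, Vh @ f
```

For an orthonormal basis V, the reduced matrices inherit symmetry from the full matrices, and the reduced system of size r can be solved in place of the full system of size n.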

3. Modal Truncation

3.1. Introduction and Methodology

3.1.1. Modal superposition 

Modal truncation is a Model Order Reduction (MOR) method rooted in mode superposition, a technique that uses the mode shapes obtained from modal analysis to describe the dynamic response of a structure.

To characterize the vibrational behaviour of a Multi-Degree of Freedom (MDOF) system, instead of tracking the displacements and rotations at each node over time (the degrees of freedom, DoFs), the response is typically represented more globally as a sum of vibrational patterns known as mode shapes, each scaled by its respective contribution. This is the transformation from DoF coordinates to modal coordinates. Mathematically, it can be expressed as:

\mathbf{w}(t)=\sum_i^n y_i(t) \boldsymbol{\phi}_i=\mathbf{\Phi} \mathbf{y}(t),

with the matrix of eigenvectors (mode shapes)

\Phi=\left[\phi_1, \phi_2, \ldots, \phi_n\right]

and the modal coordinates

\mathbf{y}(t)=\left[y_1(t), y_2(t), \ldots, y_n(t)\right]^T.

The equation of motion of the MDOF system then transforms from

\mathbf{M} \ddot{\mathbf{w}}+\mathbf{C} \dot{\mathbf{w}}+\mathbf{K} \mathbf{w}=\mathbf{f}(t)

into

\boldsymbol{\Phi}^T \mathbf{M} \boldsymbol{\Phi} \ddot{\mathbf{y}}+\boldsymbol{\Phi}^T \mathbf{C} \boldsymbol{\Phi} \dot{\mathbf{y}}+\boldsymbol{\Phi}^T \mathbf{K} \boldsymbol{\Phi} \mathbf{y}=\boldsymbol{\Phi}^T \mathbf{f}.

Nonetheless, in numerous instances, the majority of mode shapes exert negligible influence on the ultimate outcome, especially in large-scale systems subjected to dynamic loads such as seismic or wind loads.

Hence, the principle behind modal truncation method revolves around identifying the most influential modes to construct the basis V, which has been introduced in the previous section.

The approaches of how to find the significant modes are illustrated in the following sections.

3.1.2. Pick r eigenvectors with smallest eigenvalues

Selecting the modes with smallest eigenfrequencies to construct the basis is a common practice. This preference arises from the inherent difficulty in exciting higher frequency modes, particularly in systems with damping. However, while this approach has a wide range of applicability, it lacks adaptability to the characteristics of a particular problem and can lead to inefficiencies in some cases.
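A minimal sketch of this selection in Python/SciPy (illustrative only; the project code is MATLAB): the generalized symmetric eigensolver can be asked for just the r lowest eigenpairs of \mathbf{K}\boldsymbol{\phi}=\omega^2\mathbf{M}\boldsymbol{\phi}, which become the columns of the basis.

```python
import numpy as np
from scipy.linalg import eigh

def modal_basis(K, M, r):
    """Modes with the r smallest eigenfrequencies of K phi = omega^2 M phi.

    K, M must be symmetric (M positive definite). Returns the
    eigenfrequencies omega (ascending) and the basis V = [phi_1 ... phi_r].
    """
    # subset_by_index requests only the r lowest eigenpairs
    lam, V = eigh(K, M, subset_by_index=[0, r - 1])
    return np.sqrt(lam), V
```

The returned eigenvectors are M-orthonormal, so the projected mass matrix is the identity.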

3.1.3. Dominant pole algorithm

Contrary to the method described above, the dominant pole algorithm enables the selection of modes based on both the input and output vectors. This approach is grounded in modeling structural vibration as a second-order system, since it is governed by second-order differential equations. Such systems are typically represented as:

\left\{\begin{aligned} \left(s^2 \mathbf{M}+s \mathbf{C}+\mathbf{K}\right) \mathbf{x}(s) & =\mathbf{f} u(s) \\ y(s) & =\mathbf{g x}(s) \end{aligned}\right.,

where \mathbf{f} and \mathbf{g} are input and output vectors. In the context of structural dynamics, the input vector typically denotes the load type and position, while the output vector records the desired response type and position. 

The formula for determining the dominance of each mode shape is provided in [1]:

\frac{\left\|\mathbf{g v}_i \mathbf{v}_i^{\top} \mathbf{f}\right\|_2}{\operatorname{Re}\left(\omega_{\mathrm{d}+, i}\right) \operatorname{Re}\left(\omega_{\mathrm{d}-, i}\right)}>\frac{\left\|\mathbf{g v}_j \mathbf{v}_j^{\top} \mathbf{f}\right\|_2}{\operatorname{Re}\left(\omega_{\mathrm{d}+, j}\right) \operatorname{Re}\left(\omega_{\mathrm{d}-, j}\right)} \quad \forall j \neq i,

in which \mathbf{v}_i is the eigenvector, and \omega_{\mathrm{d} \pm, i} is the corresponding complex conjugate pair of damped eigenfrequencies:

\omega_{\mathrm{d} \pm, i}=-\omega_i \xi_i \pm \omega_i \sqrt{\xi_i^2-1},

where the damping ratio \xi_i for the case of Rayleigh damping reads 

\xi_i=\frac{\alpha}{2 \omega_i}+\frac{\beta \omega_i}{2},

in which \alpha and \beta are the Rayleigh damping mass parameter and stiffness parameter.

According to the formula provided, the dominant pole algorithm enables the significance of mode shapes to be characterized based on input and output vectors, while also considering the damping effect. Consequently, the algorithm generates a unique basis consisting of the most influential modes, thus effectively capturing the system characteristics across various problem scenarios. 

3.2. Experiments

To gain a deeper understanding of this method, an investigation is conducted into how the input and output vectors affect the dominance of the mode shapes.

The basic idea is to apply this method to a test case, followed by alterations in the load case and output vector to observe the dominant mode shapes. The test case model setup entails a Reissner-Mindlin plate clamped at both ends.

3.2.1. Dominant modes under different input vectors

For this investigation, three load cases (LCs) are applied:

  1. Vertical excitation at the middle point
  2. Opposite vertical excitations placed symmetrically w.r.t. the z-axis along the x-direction (M_{y})
  3. Opposite vertical excitations placed symmetrically w.r.t. the z-axis along the y-direction (M_{x})

Additionally, the output vector has been configured as the transpose of the input vector, indicating that the response of interest is the vertical displacement at the load position.

The first four mode shapes with the smallest eigenfrequencies are shown in Fig. 3.1, serving as a reference for comparison with the first four dominant modes under different load cases.

It can be observed that for the excitation at the middle position (LC1), all of the first four dominant modes are symmetric (see Fig. 3.2). If the load position is shifted to the left and right by the same distance, with opposite vertical forces applied at these two positions (LC2), all of the dominant modes are antisymmetric (see Fig. 3.3). For the torsional load case (LC3), all torsional modes are dominant.

This underscores the profound influence of the load case on the dominance of mode shapes.

3.2.2. Dominant modes under different output vectors

To explore the impact of the output vector on the dominance of the mode shapes, the same torsional load case discussed in the previous section is revisited, with the output vector changed to compute the displacement at the center point.

It can be observed that, unlike the modes depicted in Fig. 3.4, more bending modes emerge as dominant in this scenario (see Fig. 3.5).

3.3. Numerical results

For computing the numerical results, the load case applies a vertically downward excitation at two locations symmetric w.r.t. the z-axis along the x-axis. Opposite vertical loads are avoided to prevent zero displacement results at the center point from affecting the data comparison. The system is reduced from 585 DoFs to 10.

According to the results, the dominant pole algorithm demonstrates strong performance. As depicted in Fig. 3.6, whereas the general modal truncation approach only captures the first four peaks, the utilization of the dominant pole algorithm captures almost all peaks. However, a relatively large error is evident in the transition part. In contrast, the displacement at the center point displays a relatively smoother transition (refer to Figure 3.7), resulting in an improved overall capture.

3.4. Conclusions and Outlook

Based on the investigations and numerical results presented above, it becomes evident that this approach is commonly preferred in engineering settings due to its practicality and direct relevance to structural dynamics issues. It is intuitive for engineers as it aligns with their understanding of the physical behavior of structures and systems.

The methods illustrated in the following sections, Proper Orthogonal Decomposition (POD) and the Krylov subspace methods, are rooted more in linear algebra and statistical techniques. They aim to identify patterns in data, which can be applied to various fields beyond structural engineering, such as fluid dynamics or signal processing. These methods involve more mathematical sophistication and may not always correspond directly to physical modes of vibration in engineering applications.

However, there is a limitation in the current implementation. As the dominance formula in Section 3.1.3 shows, the computation requires both eigenvectors and eigenvalues. To accurately capture all dominant modes, it might be necessary to compute all eigenvectors and eigenvalues, whose number equals the number of Degrees of Freedom (DoFs). This goes against the original intent of Model Order Reduction (MOR), as it may be as time-consuming as solving the full system. Although users can optimize by setting a narrower search range, this may still be inefficient or imprecise for structures with complex deformation, especially without damping. For instance, as illustrated by the figures in Section 3.2.1, despite computing the same number of dominant modes for all three load cases, the torsional case requires a wider search range for the last dominant mode than the others.

Introducing the Subspace Accelerated Quadratic Dominant Pole (SAQDP) algorithm [5, 6, 7] might address this issue by combining the engineering and mathematical perspectives. It enhances efficiency by transitioning from a continuous search to a discontinuous one. This presents a valuable future improvement of the modal truncation method.

4. Proper Orthogonal Decomposition

4.1. Introduction

Proper Orthogonal Decomposition (POD) is a numerical technique developed to reduce the complexity of the data being computed. It was originally developed in the field of fluid dynamics, but its power to reduce data complexity effectively, and its applicability to most other fields, has led to widespread use in other engineering disciplines, including machine learning and structural dynamics. This report investigates the performance of various model order reduction techniques; the following briefly introduces the theoretical background of POD, its ease of use, and the final results.

4.2. Reduced basis retrieval

The full state solution is first computed by taking a set of snapshots of the system and collecting them as the columns of a matrix. Figure 4.1 shows the shape and dimension of this matrix, which is defined by the number of DoFs n and the number of snapshots n_s. Through the SVD, the snapshot matrix can be decomposed into three matrices corresponding to the eigenvectors and eigenvalues: \mathbf{U} \in \mathbb{R}^{n \times n_s} formed by the left singular vectors, \boldsymbol{\Sigma} \in \mathbb{R}^{n_s \times n_s} formed by the singular values and \mathbf{Y}^T \in \mathbb{R}^{n_s \times n_s} formed by the right singular vectors. The reduced basis is then retrieved by selecting r columns of the left singular vectors, forming \mathbf{V} \in \mathbb{R}^{n \times r}.

Fig. 4.1: Full state solution

Since only the left singular vectors are needed, the full SVD does not have to be computed. The left singular vectors can instead be obtained directly from the eigen-decomposition of the symmetric matrix

\mathbf{X}\mathbf{X}^T = \mathbf{U}\boldsymbol{\Sigma}^2\mathbf{U}^T, \quad \mathbf{X}\mathbf{X}^T \in \mathbb{R}^{n \times n}.
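A minimal Python/NumPy sketch of this shortcut, obtaining the left singular vectors from the eigen-decomposition of \mathbf{X}\mathbf{X}^T without forming the right singular vectors (illustrative only; the project code is MATLAB):

```python
import numpy as np

def pod_basis(X):
    """Left singular vectors of the snapshot matrix X (n x n_s).

    Solves the symmetric eigenproblem X X^T u = sigma^2 u, so the right
    singular vectors are never formed. Columns are ordered by decreasing
    singular value; the corresponding singular values are also returned.
    """
    lam, U = np.linalg.eigh(X @ X.T)       # ascending eigenvalues
    order = np.argsort(lam)[::-1]          # descending energy
    # clip tiny negative eigenvalues caused by round-off before sqrt
    return U[:, order], np.sqrt(np.clip(lam[order], 0.0, None))
```

For n much larger than n_s, the method of snapshots (eigen-decomposing \mathbf{X}^T\mathbf{X} instead) is cheaper; the sketch above follows the document's formulation.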

4.3. Choosing size of r 

The reduced basis is often referred to as 'optimal', as it minimizes the least-squares error of the snapshot reconstruction. This means that when the reduced basis is used to reconstruct the original data, the error compared to the original snapshots is minimized.

Figure 4.2 shows the typical snapshot reconstruction. The SVD represents the data through the three matrices, \mathbf{X}=\mathbf{U}\boldsymbol{\Sigma}\mathbf{Y}^T, and truncated versions of these matrices reconstruct the data with some loss of information. However, the literature proposes an alternative, shown in figure 4.3, of projecting the snapshots onto the reduced basis twice, which yields the error of the reduced basis directly. Through this, the standard SVD, which would compute redundant matrices, can be avoided completely.

Fig. 4.2: Snapshot reconstruction from SVD

Fig. 4.3: Alternative snapshot reconstruction

The norm of the reduced-basis reconstruction can be compared with that of the full data, and shrinking the error until it reaches a tolerance forms a typical minimization problem. This was originally described by comparing the singular values, but here the approach of comparing the bases is adopted: first the Frobenius norms of the full data and of its projection onto a basis of initial dimension r are computed, then r is increased until the tolerance is reached. The mathematical representation is given in the equation below.

\frac{\|\mathbf{V}\mathbf{V}^{T}\mathbf{X}\|_{F}^2}{\|\mathbf{X}\|_{F}^2} > \text{Tolerance}

The full algorithm is represented as a flow chart in figure 4.4. The number of snapshots used to build the reduced model is independent of the number of frequency points at which the solution is finally retrieved from the system. Therefore, the most important variables in the algorithm are the number of samples and the tolerance.

Fig 4.4: Algorithm flow chart
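The selection loop of the flow chart can be sketched in Python as follows (an illustration under the Frobenius-norm criterion above; the project code is MATLAB):

```python
import numpy as np

def choose_r(X, tol=0.999):
    """Smallest r such that the basis V of the first r left singular
    vectors satisfies ||V V^T X||_F^2 / ||X||_F^2 > tol (Sec. 4.3).
    Returns r and the basis V.
    """
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    denom = np.linalg.norm(X, 'fro')**2
    for r in range(1, U.shape[1] + 1):
        V = U[:, :r]
        # energy captured by projecting the snapshots onto V twice
        if np.linalg.norm(V @ (V.T @ X), 'fro')**2 / denom > tol:
            return r, V
    return U.shape[1], U
```

In practice the loop terminates after a handful of columns when the snapshot data has a fast-decaying singular value spectrum, which is the situation POD relies on.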

4.4. Result

Fig. 4.5: Computational time between the two models

Fig. 4.6: Using reduced basis to compute the full solution

The literature proposes a tolerance of 99.9% as commonly used, but does not specify how many sample points are usually required. Figure 4.5 therefore shows the result of a simple harmonic example with the load acting exactly at the mid-point, using 50 samples to compute a sample frequency range of 1000 points with a 99.9% tolerance, reaching less than 0.5% error. The total computation was sped up 4.57 times. Another case, in figure 4.6, shows the retrieval of the full solution. That figure shows the result for a symmetrically applied load; to investigate a more complicated situation, the remaining discussion uses a load applied asymmetrically, offset from the mid-point. The following parts investigate the optimal variable selection.


4.4.1. Reduced computational time

Fig. 4.7: The effect of tolerance and number of samples on solving time

The solving time is measured only for using the reduced basis to solve for the solution at the end; the offline 'training' time is not considered. This final step of the computation uses the reduced matrices to evaluate the transfer function, so the factor that actually affects the time is the dimension of the matrices. Throughout our parameter tests, the reduced basis is often around r = 10, which is much smaller than the full model of dimension 1000, and the differences between the reduced models are also small. As a result, the time difference is not apparent, and figure 4.7 shows fluctuations of the time measurement more than an actual time difference.


4.4.2. Accuracy of reduced basis

Fig. 4.8: The effect of tolerance and number of samples on accuracy

The number of samples and the tolerance have a much more meaningful relationship with the percentage error, whose computation is given in the equation below. The role of the tolerance is relatively straightforward, as it directly affects the algorithm: it is the only criterion in the minimization problem for the optimal size r, and a higher tolerance yields a higher r, which captures the whole system more accurately. In our experiments, the suggested 99.9% proved suitable as a general tolerance selection, producing 0.4% error.

Percentage \: error=\frac{\|Full-Reduced\|_{2}}{\|Full\|_{2}}

However, the effect of the number of samples is less direct. In our example, the system is solved over the frequency range [0, 3\cdot10^4] with 1000 sample points. The main goal of POD model order reduction is to represent the system using fewer sample points. On the right of figure 4.8, 20 samples already produce only 0.4% error, which is generally more than acceptable. Therefore, in our experiments, using 2% of the full model's sample points generated a good model for use in actual simulation.

4.4.3. Extrapolation of model

Fig. 4.9: Solution from symmetric load

Fig. 4.10: Basis from symmetric load applied to asymmetric load

The previous results showed the capability of representing the system with fewer sample points, which greatly helps computationally. However, it would also be valuable if the method could extrapolate to other situations with good accuracy. Figure 4.9 shows the result of a normal use case: the algorithm retrieves a reduced basis and represents the full solution with acceptable accuracy. The reduced model was then directly applied to another (asymmetric) load case.

From figure 4.10, it can be seen that the retrieved result is not ideal. Only 3 peaks are captured; the other frequencies and displacements are not reflected in the model. As these excited frequencies do not exist in the original snapshot data, it is difficult for the POD algorithm to extrapolate, since it is a very data-driven technique.

4.5. Conclusion

To conclude, the above is a brief introduction to POD and its application to a simple structural dynamics problem. It showed how the choice of the number of samples and of the tolerance affects performance, and that 2% of the full sample points and a tolerance of 99.9% are generally a suitable parameter selection. However, the algorithm does not extrapolate well to other load cases. This reflects that POD is a very general algorithm for treating matrices and data, not a specific method focused on the physical behavior of structural dynamics.

5. Krylov Subspace Method

For the Krylov subspace method, the underlying principle is similar to the methods mentioned before: the reduced model is obtained by projecting the full model of size n into a subspace of reduced dimension r. Our goal is to find the optimal projection basis V, onto which the system matrices are projected. This is achieved by constructing the subspace iteratively from initial sample points, and there are various submethods that use this principle to perform model order reduction.

5.1. Second Order Iterative Rational Krylov Algorithm (SO-IRKA)

Second-Order IRKA is a second-order variant of IRKA used on second-order systems, where the transfer function is of second order [4]. This causes the eigenvalue decomposition to yield twice as many eigenvalues, 2r. The first step of this algorithm is to choose a vector of initial expansion points s with r points equally spaced inside the problem's frequency range, and to calculate the initial reduced basis V using the formula

V = [(s_{0,1}^2 \mathbf{M} +s_{0,1} \mathbf{C} +\mathbf{K} )^{-1}\mathbf{f}, \ldots, (s_{0,r}^2 \mathbf{M} +s_{0,r} \mathbf{C} +\mathbf{K} )^{-1}\mathbf{f}]

The system matrices are then projected onto this basis to obtain an initial reduced model, and the quadratic eigenvalue problem is solved, which yields twice the number of eigenvalues, 2r. Only the r smallest eigenvalues are kept for the next iteration because, just like in modal truncation, we want to capture the most significant modes of the system.

(\lambda^2 \mathbf{M}_r +\lambda \mathbf{C}_r +\mathbf{K}_r )x= 0

The mirror image of these eigenvalues about the imaginary axis, i.e. the negative of their complex conjugate, is used as the updated set of expansion points for the next iteration. Convergence is reached when two consecutive sets of expansion points are approximately equal (within tolerance); in the code this is measured as the Euclidean distance between two consecutive sets of expansion points. The default tolerance is 10^{-6}.
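The steps above can be sketched in Python/SciPy (a didactic illustration, not the bm-fem MATLAB implementation; the quadratic eigenvalue problem is solved through a standard first-order linearization):

```python
import numpy as np
from scipy.linalg import eig, qr, solve

def so_irka(M, C, K, f, s0, tol=1e-6, maxit=50):
    """SO-IRKA sketch: iterate expansion points until convergence.

    s0: initial expansion points (length r). Returns the basis V (n x r)
    and the final expansion points.
    """
    s = np.asarray(s0, dtype=complex)
    r = len(s)
    for _ in range(maxit):
        # basis columns from shifted solves (s^2 M + s C + K)^{-1} f
        cols = [solve(si**2 * M + si * C + K, f) for si in s]
        V, _ = qr(np.column_stack(cols), mode='economic')
        Vh = V.conj().T
        Mr, Cr, Kr = Vh @ M @ V, Vh @ C @ V, Vh @ K @ V
        # quadratic eigenvalue problem via first-order linearization
        A = np.block([[-Cr, -Kr], [np.eye(r), np.zeros((r, r))]])
        B = np.block([[Mr, np.zeros((r, r))], [np.zeros((r, r)), np.eye(r)]])
        lam = eig(A, B, right=False)
        lam = lam[np.argsort(np.abs(lam))][:r]   # keep r smallest of the 2r
        s_new = -lam.conj()                      # mirror about imaginary axis
        if np.linalg.norm(np.sort_complex(s_new) - np.sort_complex(s)) < tol:
            return V, s_new
        s = s_new
    return V, s
```

The per-iteration cost is dominated by the r shifted solves of size n, which motivates the CIRKA variant discussed below.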

An example is shown here using the Harmonic Solver example in the bm-fem code, where a Reissner-Mindlin plate is clamped on one end and a point load is applied on the other. The full system has a size of n = 1080, and the input reduced size is r = 5. The results show a promising prediction from this method: it captures the peaks at low frequencies fairly accurately, with a relative approximate error down to 10^{-10}. The error grows as the frequency increases, because the update criterion of choosing the r smallest eigenvalues causes the method to mainly capture lower-frequency peaks while ignoring those at higher frequencies. The speedup is around 1.16 times for this simple problem, and only 28 iterations are needed to produce these results.

5.2. Frequency Limited IRKA

Frequency Limited IRKA is a method for focused reduction on a specific frequency range. It is especially useful when only the solution in a specific range of interest is needed; this can save a lot of computation time and resources, since areas outside the range of interest can be ignored.

The base algorithm is essentially the same as SO-IRKA, except for the update criterion. After the eigenvalue decomposition step, instead of choosing the r smallest eigenvalues, only the points whose imaginary part lies within the predefined frequency range [\omega_l, \omega_u] are chosen [4]. If there are not enough such points, those near the boundaries are also chosen.
It should be noted that the results start to deteriorate near the boundary; the literature [4] therefore proposes including at least one pair of expansion points to the left and right outside of the range, in order to improve the interpolation accuracy. This was implemented in the code as part of the update criterion, but it may cause the number of expansion points to change every iteration, so this part is currently commented out to avoid array size errors in the convergence check. Future work could improve the convergence criterion to accommodate this.
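The band-limited selection step can be sketched as follows (an illustrative Python helper; names and the fill-up rule for too few in-band points are assumptions based on the description above):

```python
import numpy as np

def select_in_band(lam, r, w_l, w_u):
    """Pick up to r eigenvalues whose imaginary part (in magnitude) lies
    in [w_l, w_u]; if fewer qualify, fill with the points closest to the
    band, i.e. those near its boundaries.
    """
    im = np.abs(lam.imag)
    inside = np.where((im >= w_l) & (im <= w_u))[0]
    if len(inside) >= r:
        return lam[inside[:r]]
    # distance of each point to the band, zero for points inside it
    dist = np.maximum(np.maximum(w_l - im, im - w_u), 0.0)
    return lam[np.argsort(dist)[:r]]
```

Replacing the "r smallest" rule of SO-IRKA with this selection is the only algorithmic change needed for the frequency-limited variant.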

The following shows the same example as for SO-IRKA using Frequency Limited IRKA, with the frequency range set to [20, 80]. The results are very accurate within the predefined domain, with an error down to 10^{-8}. It has a speedup of 1.69 times, and only 3 iterations were needed to produce these results.

5.3. Confined IRKA (CIRKA)

Confined IRKA (CIRKA) addresses the issue that in each SO-IRKA iteration, r linear systems of size n must be solved. The idea is therefore to perform the reduction on an intermediate model of size m, where m << n and m > r. By performing SO-IRKA on this surrogate model, computational time and resources can be saved. In the algorithm, the default intermediate size is set to m = 4r and can be changed if needed.

The first step is to calculate the intermediate basis from the initial sample points [4], of which there are as many as the intermediate size. The system matrices are projected onto this basis to give the intermediate model, and from then on the SO-IRKA operations are performed on this intermediate model, not the full model. The converged expansion points from SO-IRKA are collected and used to shift the updated expansion points for the next iteration; in the algorithm, the r smallest of the previous expansion points are replaced by the converged expansion points from SO-IRKA. This yields a very fast convergence rate, and the convergence criterion is defined as an estimate of the relative error between the transfer functions of the intermediate model H_m and the reduced model H_r:

\epsilon_{CIRKA}=\frac{|H_m(s) - H_r(s)|}{|H_m(s)|}

where the transfer function is H(s) = \mathbf{g} (s^2\mathbf{M} + s\mathbf{C}+\mathbf{K})^{-1}\mathbf{f}.
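This convergence estimate can be sketched in Python as follows (an illustration of the error formula above under the stated transfer-function definition; not the bm-fem code):

```python
import numpy as np

def transfer(M, C, K, f, g, s):
    """Evaluate H(s) = g (s^2 M + s C + K)^{-1} f at the sample points s."""
    return np.array([g @ np.linalg.solve(si**2 * M + si * C + K, f)
                     for si in s])

def cirka_error(Mm, Cm, Km, fm, gm, Mr, Cr, Kr, fr, gr, s):
    """Relative transfer-function error between the intermediate model
    (size m) and the reduced model (size r), used as the CIRKA
    convergence estimate."""
    Hm = transfer(Mm, Cm, Km, fm, gm, s)
    Hr = transfer(Mr, Cr, Kr, fr, gr, s)
    return np.linalg.norm(Hm - Hr) / np.linalg.norm(Hm)
```

Since only the intermediate and reduced models appear, evaluating this estimate never touches the full model, in keeping with CIRKA's goal.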

The same example is used for CIRKA, and it is extremely accurate, with a relative error consistently in the 10^{-10} range. Another thing to note is that it has a speedup of 0.669 times, meaning it is slower than a full evaluation; this is because, for such a simple problem, the time taken to construct the intermediate model is comparatively large, and the speedup improves for more complex problems. In addition, the convergence criterion requires evaluating transfer functions, which also takes time. The results also do not always converge under a tolerance of 10^{-6}; following the literature, a tolerance of 10^{-3} is used instead, with which CIRKA needs only 1 iteration and 5 SO-IRKA iterations to achieve these results.

5.4. Results Comparison

A comparison of the three methods is shown on a more complicated example, where a Reissner-Mindlin plate is clamped on both sides and the z displacement is prescribed at the middle node. The full system has a size of n = 4090 and the algorithms reduce it to r = 5. From the results, CIRKA is the most accurate, essentially overlapping with the full solution. It is also very fast, with a speedup of 4.3 times, using only one CIRKA iteration and 8 SO-IRKA iterations. The other methods behave as expected: Frequency Limited IRKA is the fastest because of the focused reduction on a specific range, with a speedup of 4.45 times and 4 iterations, while SO-IRKA captures the peaks at lower frequencies very well and has a speedup of 2 times, with 89 iterations.

6.  Conclusion

In conclusion, we have implemented the modal truncation, POD and IRKA methods in the bm-fem code. They are integrated in the utilities folder (under a separate folder 'MOR') and follow the structure and conventions of bm-fem, so they can be used right away by calling the MORStrategy. Several example cases are also included in the example folder. Further work can be done to improve each method and to implement new methods such as the SOAR and TOAR methods.

7. References

[1] Q. J. Aumann, Efficient and robust interpolation-based model order reduction of vibro-acoustic problems, Doctoral dissertation, Technische Universität München, 2022.

[2] P. Benner, S. Gugercin, and K. Willcox, A survey of projection-based model reduction methods for parametric dynamical systems, SIAM Review 57(4), 483–531 (2015).

[3] J. Weiss, A tutorial on the proper orthogonal decomposition, AIAA Aviation 2019 Forum, 2019. doi:10.2514/6.2019-3333.

[4] Q. Aumann and G. Müller, An adaptive method for reducing second-order dynamical systems, IFAC-PapersOnLine 55(20), 337–342 (2022). doi:10.1016/j.ifacol.2022.09.118.

[5] P. Benner et al., Semi-active damping optimization of vibrational systems using the parametric dominant pole algorithm, ZAMM – Journal of Applied Mathematics and Mechanics 96(5), 604–619 (2016).

[6] J. Rommes, Methods for eigenvalue problems with applications in model order reduction, PhD thesis, Universiteit Utrecht, 2007.

[7] J. Rommes and N. Martins, Computing transfer function dominant poles of large-scale second-order dynamical systems, SIAM J. Sci. Comput. 30(4), 2137–2157 (2008).






