Authors: Berkay Oralalp, Ahmed Nabil Hanafy Mahmoud Elsayed  

Supervisors: Sebastian Schopper, Rupert Ullman (BMW)



1. Introduction


1.1 Why MOR is important

Accurate modelling is a necessary part of every engineering field that deals with physical systems, and achieving an accurate model requires powerful computers and newly developed methods and algorithms.

Modelling is usually done by approximating the system with an ordinary differential equation. The more complex the system and the more accurate the model, the higher the order of the corresponding differential equation must be. So, to model a physical behaviour with good accuracy, a high-order differential equation is unavoidable.

However, given the restrictions of numerical algorithms and digital computers, using a high-order model to solve an engineering problem is time-consuming and may even lead to incorrect results.

This is why model order reduction (MOR) techniques are useful: they enable real-time simulation even in a computational environment with limited resources.

The following figure shows the general steps to produce a reduced system based on Krylov-subspace methods.


Figure 1: General Steps to produce a reduced system based on Krylov-subspace methods

1.2 The Error Estimator and Why We Use It

After evaluating the transfer function of the reduced system, we need to assess the error difference between the full system and the reduced system.


  • H(s) = C(sE - A)^{-1} B


  • \epsilon_{true} = H_{full} - H_{ROM}


Since the theoretical equation above is hard to evaluate, error estimation techniques are needed.

The error estimator works on the transfer function error as well as the output error.

The proposed error estimator avoids computing the singular values of any matrix and depends mainly on the ROM.

It is applicable to any system whose ROM is computed using a projection-based MOR method.

Furthermore, the authors show in their numerical results that the error estimator is usually much sharper than the error bound.

1.3 Study Case Information

The error estimator will be implemented based on the following study case:

  • “A 3D solid beam benchmark for parametric model order reduction", Mendeley Data, V1, doi: 10.17632/cprx2kx2ws.2
  • 2nd Order system
  • Arnoldi Algorithm MOR


2. Literature Review


In this section, we are going to give a quick introduction to each MOR approach we are using, based on article [1].

2.1. Block Arnoldi Algorithm

The first approach is the Arnoldi-based algorithm, a classical Krylov-subspace-based Model Order Reduction method that finds a set of normalized, mutually orthogonal vectors, as in the equation below.


  • V_Q^T V_Q = I


Our study case was originally implemented using "Block Arnoldi algorithm with deflation and modified Gram-Schmidt orthogonalization".

The system we have is of second order, but the Krylov-subspace method is formulated for first-order systems, so the transfer function becomes:


  • H(s) = C(s^2 M + sD + K)^{-1} B
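As a rough illustration, the moment-matching basis construction with modified Gram-Schmidt orthogonalization can be sketched as below. This is a minimal dense-matrix sketch, not the benchmark implementation: the function name, the crude deflation tolerance, and the simplified one-sided moment recurrence (which drops the damping term) are our own choices, and a real code would use sparse factorizations and the full second-order recurrence.

```python
import numpy as np

def second_order_krylov_basis(M, D, K, B, s0, n_moments):
    """Build an orthonormal basis V of Krylov vectors for the second-order
    system at expansion point s0, using modified Gram-Schmidt.
    Sketch only: dense algebra, simplified moment recurrence."""
    A = s0**2 * M + s0 * D + K          # shifted second-order system matrix
    A_inv = np.linalg.inv(A)            # in practice: a sparse LU factorization
    V = []
    W = A_inv @ B                       # zeroth moment block
    for _ in range(n_moments):
        for j in range(W.shape[1]):
            w = W[:, j].copy()
            for v in V:                 # modified Gram-Schmidt sweep
                w -= (v @ w) * v
            nrm = np.linalg.norm(w)
            if nrm > 1e-10:             # crude deflation of dependent vectors
                V.append(w / nrm)
        W = A_inv @ (M @ W)             # next moment block (damping neglected)
    return np.column_stack(V)
```

With an orthonormal V, the reduced matrices follow by projection, e.g. M_red = V^T M V.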


2.2. Error Estimator

The second approach is based on the error estimator method.

To compute the error estimator, one first needs to simultaneously construct the projection matrices V, V_{du}, and V_{rdu}.


  • V = orth\{V_{\hat{\mu}^1}, ..., V_{\hat{\mu}^l}\}
  • V is constructed via multi-moment matching based on the state vector of the primal system at the parameter samples \hat{\mu}^j.


  • V_{du} = orth\{V_{du}^{\hat{\mu}^1}, ..., V_{du}^{\hat{\mu}^l}\}
  • V_{du} is constructed using the Reduced Basis Method, or via multi-moment matching as well, in which case it is built analogously to V.


  • V_{rdu} = orth\{V_{rdu}^1, ..., V_{du}\}
  • V_{rdu} can be constructed from the state vector of the dual-residual system.

There are two choices for computing V_{rdu}^1:

  • range(V_{rdu}^1) = span\{Q^{-T}(\hat{\mu}^1) C^T(\mu^1), ..., Q^{-T}(\hat{\mu}^l) C^T(\mu^l)\}
  • range(V_{rdu}^{\hat{\mu}^j}) = span\{\hat{R}_0, \hat{R}_1, ..., \hat{R}_q\}_{\hat{\mu}^j},  j = 1, ..., l


2.3. Greedy Algorithm Approach


The greedy algorithm can be implemented along with the error estimator to construct the ROM of the original system.

According to [1], the greedy algorithm was previously driven by the error bound, but in this work the algorithm was extended and adapted to use the proposed error estimator instead.

Once the maximal error estimate over the whole sample set falls below the tolerance, the algorithm stops.


                                         Figure 2: The greedy process of ROM construction for parametric systems.



3. Error Estimator


3.1. Investigation of the 3D Beam Model

  • The Full Model Order: 3145


We used the Block Arnoldi algorithm with pre-selected expansion points over the frequency range of 50-800 Hz.

  • Expansion Points = [50, 300, 550, 800]
  • Reduced Order: 35

The transfer function and the absolute error are computed using the following equations:

  • H(s) = C(s^2 M + sD + K)^{-1} B


  • \widehat{H}(s) = C_{red}(s^2 M_{red} + sD_{red} + K_{red})^{-1} B_{red}


  • H(s) - \widehat{H}(s) = Absolute Error
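The computation above can be sketched in NumPy as follows. This is a dense sketch with generic matrix names; `reduce` performs a one-sided Galerkin projection, which is our own assumption rather than the exact benchmark code.

```python
import numpy as np

def transfer_function(M, D, K, B, C, freqs_hz):
    """Evaluate H(s) = C (s^2 M + s D + K)^{-1} B at s = 2*pi*f*1j."""
    H = []
    for f in freqs_hz:
        s = 2j * np.pi * f
        H.append(C @ np.linalg.solve(s**2 * M + s * D + K, B))
    return np.array(H)

def reduce(M, D, K, B, C, V):
    """One-sided Galerkin projection of the second-order matrices."""
    return (V.T @ M @ V, V.T @ D @ V, V.T @ K @ V, V.T @ B, C @ V)
```

The absolute error is then simply `np.abs(H_full - H_red)` over the frequency grid.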


                                  Figure 3: Transfer function plot over frequency

                                         Figure 4: Error plot of the first model

3.2. Error Estimator Results


We implemented the following error estimator to estimate the error over the whole frequency range.


  • |H(\nu) - \widehat{H}(\nu)| \lesssim |\widehat{x}_{rdu}(\widetilde{\nu})^T r_{pr}(\widetilde{\nu})| + |\widehat{x}_{du}(\widetilde{\nu})^T r_{pr}(\widetilde{\nu})| = \Delta(\widetilde{\nu})


                                 Figure 5: Error Estimation plot of the first model


The model order reduction process takes around 28 seconds and the error estimator code takes 12 seconds. As can be seen in Figure 5, the error estimator gives sufficient accuracy in predicting the error.
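A sketch of how such a residual-based estimator can be evaluated at a single frequency point is given below. This is our own dense reconstruction following the structure of Δ(ν) above; the function name, the dual-system signs, and the Galerkin projections are assumptions, not the authors' implementation.

```python
import numpy as np

def error_estimator(M, D, K, B, C, V, V_du, V_rdu, f):
    """Residual-based estimate |x_rdu^H r_pr| + |x_du^H r_pr| at one
    frequency f (Hz). Sketch only: dense algebra, single input/output."""
    s = 2j * np.pi * f
    A_s = s**2 * M + s * D + K
    # reduced primal solution and primal residual
    x_hat = np.linalg.solve(V.conj().T @ A_s @ V, V.conj().T @ B)
    r_pr = B - A_s @ (V @ x_hat)
    # reduced dual solution: A_s^H x_du = -C^H, projected onto V_du
    x_du = V_du @ np.linalg.solve(V_du.conj().T @ A_s.conj().T @ V_du,
                                  -V_du.conj().T @ C.conj().T)
    # dual residual, corrected via projection onto V_rdu
    r_du = -C.conj().T - A_s.conj().T @ x_du
    x_rdu = V_rdu @ np.linalg.solve(V_rdu.conj().T @ A_s.conj().T @ V_rdu,
                                    V_rdu.conj().T @ r_du)
    return abs(x_rdu.conj().T @ r_pr).item() + abs(x_du.conj().T @ r_pr).item()
```

Evaluating this over a frequency grid yields the estimated-error curve shown in the figures.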


3.3. Results for random input parameters


After the first approach, we set some lower and upper bounds to randomly generate input parameters using MATLAB's rand() function. The following results have been achieved for random input parameters.
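The bounded random sampling can be reproduced, for instance, with NumPy in place of MATLAB's rand() (the function name and seeding are our own choices):

```python
import numpy as np

def sample_parameters(lower, upper, n_samples, seed=0):
    """Draw uniform random parameter vectors within [lower, upper] bounds,
    mirroring the rand()-based sampling described above."""
    rng = np.random.default_rng(seed)
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    # scale unit-interval samples into the requested box
    return lower + (upper - lower) * rng.random((n_samples, lower.size))
```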


                               Figure 6: Error Estimation plot for a set of random input parameters


                          Figure 7: Error Estimation plot for another set of random input parameters



4. Greedy Approach


4.1. What is the next step after achieving the Error Estimator?


The main focus of the paper [1] was to implement a greedy algorithm with the error estimator. As we saw in our first approach, we had pre-selected expansion points and the algorithm ran based on them. The main logic in selecting the expansion points is to spread the expansion points over the frequency range. However, how do we know if those are the best expansion points to be chosen? In the greedy approach, the error estimator can be used in a way that expansion points are placed in the position where we have the maximum errors so that we can minimize the absolute error over the frequency range.


4.2. Pseudo-Code

Initial expansion point: s_1 is the first sample in a set of samples of s covering the frequency range of interest; s_1^\alpha is the last sample in that range; i = 1, q > 1.
\epsilon = \epsilon_{tol} + 1

while \epsilon > \epsilon_{tol} do
    range(V_{s_i}) = span\{\widetilde{B}(s_i), ..., (\widetilde{A}(s_i))^{q-1} \widetilde{B}(s_i)\},  where \widetilde{A}(s) = (sE - A)^{-1} E
    range(V_{du}^{s_i}) = span\{\widetilde{C}(s_i), ..., (\widetilde{A}_c(s_i))^{q-1} \widetilde{C}(s_i)\},  where \widetilde{A}_c(s) = (sE - A)^{-T} E^T
    range(V_{rdu}^{s_i^\alpha}) = span\{\widetilde{C}(s_i^\alpha), ..., (\widetilde{A}_c(s_i^\alpha))^{q-1} \widetilde{C}(s_i^\alpha)\}
    V = orth\{V, V_{s_i}\}
    V_{du} = orth\{V_{du}, V_{du}^{s_i}\}
    V_{rdu} = orth\{V_{rdu}, V_{rdu}^{s_i^\alpha}\}
    i = i + 1
    s_i = arg max_s \Delta(s)
    s_i^\alpha = arg max_s |\widehat{x}_{rdu}(s)^T r_{pr}(s)|
    \epsilon = \Delta(s_i)
end while
Compute ROM


Check paper [1] for a more detailed explanation of the pseudo-code.
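The overall greedy loop can also be sketched independently of the MOR details. Here the callback `estimate_error` is a hypothetical stand-in for building the ROM from the current expansion points and evaluating Δ over the sample grid; the function name and the iteration cap are our own choices.

```python
import numpy as np

def greedy_expansion_points(estimate_error, freq_grid, tol, max_iter=20):
    """Skeleton of the greedy selection: repeatedly place the next expansion
    point where the estimated error Delta is largest, until the maximal
    estimate over the grid drops below tol."""
    points = [freq_grid[0]]                 # start from the first sample
    for _ in range(max_iter):
        delta = estimate_error(points)      # Delta over the whole grid
        worst = int(np.argmax(delta))
        if delta[worst] <= tol:
            break                           # converged: stop the greedy loop
        points.append(freq_grid[worst])     # refine where the error peaks
    return points
```

With a real `estimate_error` this reproduces the stopping behaviour described above: the loop terminates once the maximal estimate over the sample set is below the tolerance.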

4.3. The Greedy Algorithm Results


In this approach, we only choose a frequency range of interest; in our example, it is the 50-800 Hz range. The algorithm starts and continues iterating until the error falls below a pre-selected limit. In the starting model [4], we had 4 pre-selected expansion points and achieved a reduced order of 35. With the greedy algorithm, the error limit is set to a lower value to achieve a lower absolute error. The algorithm ran for 5 iterations (5 expansion points) and reached a reduced order of 37.


                                    Figure 8: Error plot for the greedy algorithm


5. Conclusion


In conclusion, the implemented error estimator is accurate and efficient in estimating the absolute error. The greedy algorithm uses the error estimator to choose the expansion points, and good results have been achieved with relatively low error tolerances for the iterations. Further investigations could consider cases where the initial expansion points are chosen somewhere in the middle of the frequency range of interest. Also, for parametric systems, cases where the algorithm underestimates the error should be identified.


References


  1. L. Feng and P. Benner, "A New Error Estimator for Reduced-Order Modeling of Linear Parametric Systems," in IEEE Transactions on Microwave Theory and Techniques, vol. 67, no. 12, pp. 4848-4859, Dec. 2019, doi: 10.1109/TMTT.2019.2948858.
  2. Salimbahrami, B. and Lohmann, B. (2004). Structure Preserving Order Reduction of Large Scale Second Order Systems. IFAC Proceedings Volumes, 37(11), pp.233–238. doi:10.1016/s1474-6670(17)31618-x.