Authors: Unknown user (ge32red) 

Supervisor: Rupert Ullmann

Abstract

Solutions of multidimensional systems are relevant for many engineering applications. In particular, for dynamic systems, frequency-domain analysis demands solutions that scale with the complexity of the system and the frequencies inspected. As a result, a class of methods designed to ease the solution of these systems has been developed, called Model Order Reduction. This report focuses on one aspect of Model Order Reduction for solving dynamic systems: Deflation. Expanding on the existing concepts, a more refined variation of Deflation is introduced - Adaptive Deflation - implemented on a reference problem and several modified systems derived from it. Additionally, progress on a further variant, Fine-Search Adaptive Deflation, is presented, which aims to address some problems identified with Adaptive Deflation. Overall, the new Deflation strategies show potential to effectively reduce Reduced Order Model size while maintaining high output accuracy. 

Background

Model Order Reduction

As Model Order Reduction (MOR) forms the backbone of the Deflation implementations, its background is relevant to the discussion of the later methodologies. However, as no modifications are made to the core of the existing MOR implementation (specifically Arnoldi Moment Matching), this section mostly refers to the appropriate literature.

Consider a second-order dynamic system:

 \mathbf{M}\ddot{\mathbf{x}}\left(t\right) + \mathbf{C}\dot{\mathbf{x}}\left(t\right)+\mathbf{K}\mathbf{x}\left(t\right) = \mathbf{f}\left(t\right)

where the component matrices \mathbf{M},\mathbf{C},\mathbf{K} \in \Bbb{R}^{n \times n} and \mathbf{f} \in \Bbb{R}^{n}. Thus, if we can define a "rougher" (reduced) space spanned by V \in \Bbb{R}^{n \times r}, where r denotes the reduced dimension with r<n, we can express the reduced kinematics as:

\begin{equation} \mathbf{x}\left(t\right) = \mathbf{Vx}_r\left(t\right), \mathbf{\dot{x}}\left(t\right) = \mathbf{V\dot{x}}_r\left(t\right), \mathbf{\ddot{x}}\left(t\right) = \mathbf{V\ddot{x}}_r\left(t\right) \end{equation}

This then allows the dynamic system, transformed by the Laplace Transform, to be represented as: 

\begin{align} s^2\mathbf{M}_r\mathbf{x}_r(s) + s\mathbf{C}_r\mathbf{x}_r(s) + \mathbf{K}_r\mathbf{x}_r(s) &= \mathbf{f}_r(s) \end{align}

Some more details on the general form representation of MOR can be found here: Model Order Reduction
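
As a brief illustration, the following is a minimal sketch (not the implementation used in this work) of how a given projection basis V reduces the system components, assuming the standard Galerkin projection M_r = V^T M V, C_r = V^T C V, K_r = V^T K V, f_r = V^T f; all matrices below are placeholders.

% Minimal Galerkin projection sketch. Assumes M, C, K (n x n), f (n x 1)
% and an orthonormal basis V (n x r) are already available (placeholders here).
n = 1000; r = 20;
M = sprandsym(n, 0.01) + n*speye(n);        % placeholder mass matrix
K = sprandsym(n, 0.01) + n*speye(n);        % placeholder stiffness matrix
C = 1e-3*K;                                 % placeholder damping matrix
f = rand(n, 1);                             % placeholder force vector
[V, ~] = qr(rand(n, r), 0);                 % placeholder orthonormal basis

M_r = V' * M * V;                           % reduced mass matrix (r x r)
C_r = V' * C * V;                           % reduced damping matrix
K_r = V' * K * V;                           % reduced stiffness matrix
f_r = V' * f;                               % reduced force vector

% A reduced solution x_r can then be lifted back to full space: x = V * x_r.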

Arnoldi Moment Matching

Mathematical Expressions

For consistency, note the difference in notation in the following section compared to the previous description of the dynamic problem. This section aims to explain how the reduced-space projection of the system components, along with the target transfer function, can be achieved.
Consider a second order system

M\ddot{q}\left(t\right)+D\dot{q}\left(t\right)+Kq\left(t\right)=Bw\left(t\right)

with output y\left(t\right)=C^{T}q\left(t\right), where M,D,K\in {\mathbb{R}}^{N\times N}, B\in {\mathbb{R}}^{N\times N_i}, C \in {\mathbb{R}}^{N\times N_o}, and N_i, N_o are the numbers of inputs and outputs respectively.

Rewritten in a compact matrix form:

\begin{bmatrix} M & 0 \\ 0 & I \end{bmatrix} \begin{bmatrix} \ddot{q}\left(t\right) \\ \dot{q}\left(t\right) \end{bmatrix} - \begin{bmatrix} -D & -K \\ I & 0 \end{bmatrix} \begin{bmatrix} \dot{q}\left(t\right) \\ q\left(t\right) \end{bmatrix} = \begin{bmatrix} B \\ 0 \end{bmatrix} w\left(t\right), \qquad y\left(t\right) = \begin{bmatrix} 0 & C^{T} \end{bmatrix} \begin{bmatrix} \dot{q}\left(t\right) \\ q\left(t\right) \end{bmatrix}


The analytical solution of the transfer function takes the form H\left(s\right)=C^{T}\left(Ms^{2}+Ds+K\right)^{-1}B, which can be expressed as a power series expansion H\left(s\right)=M_{0}+M_{1}s+\cdots=\sum_{l=0}^{\infty}M_{l}s^{l}. These M_{l} entries are the moments at the corresponding order l that we try to match in order to approximate the transfer function.

Applying an n-th order Second Order Krylov Subspace to the approximation of the above power series, the associated problem becomes finding \mathrm{span}\{q_{1},q_{2},\dots,q_{j}\}=\mathcal{G}_{j}(A,Z,w) for j\ge 1 (which is the generalization of \mathcal{K}_{n}(H,v)=\mathrm{span}\{v,Hv,H^{2}v,\dots,H^{n-1}v\}, where v is the corresponding starting vector). \mathcal{G}_{n}(A,Z,w)=\mathrm{span}\{r_{0},r_{1},\dots,r_{n-1}\} is then expressed through the second-order Krylov sequence r_{0}=w, r_{1}=Ar_{0}, r_{j}=Ar_{j-1}+Zr_{j-2} for 2\le j\le n-1, built by an iterative process from the starting vector w.
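
To make the recursion concrete, a minimal sketch of generating the second-order Krylov sequence and orthonormalizing it is given below; A, Z and the starting vector w are placeholders, and the sketch omits the deflation and block (MIMO) handling of the actual algorithm.

% Minimal second-order Krylov sequence sketch (single input, no deflation).
% Assumes A, Z (N x N) and a starting vector w (N x 1) are given (placeholders).
N = 200; n = 8;
A = rand(N); Z = rand(N); w = rand(N, 1);

R = zeros(N, n);
R(:,1) = w;                                   % r_0 = w
R(:,2) = A * R(:,1);                          % r_1 = A r_0
for j = 3:n
    R(:,j) = A * R(:,j-1) + Z * R(:,j-2);     % r_j = A r_{j-1} + Z r_{j-2}
end

[Q, ~] = qr(R, 0);                            % orthonormal basis for G_n(A, Z, w)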

In the context of multiple input multiple output (MIMO) systems, r,w,q become R,W,Q. Let us define a transformation matrix to the reduced space, q\left(t\right)=Q_{r}\hat{q}\left(t\right), where Q_{r}\in\mathbb{R}^{N\times r} corresponds to the reduced dimension r. If R_{j}\in\mathrm{colspan}\left(Q_{r}\right) for j=0,\dots,r, then the reduced second-order Krylov sequence is R_{j}=Q_{r}\hat{R}_{j}.

The state vector can be expressed as X_{i}=[R_{i}^{T},R_{i-1}^{T}]^{T}=\left(F^{-1}G\right)^{i}\left(F^{-1}B\right) for i\ge 0, where F and G denote the block matrices of the compact form above, which gives M_{i}=C^{T}X_{i}=C^{T}[R_{i}^{T},R_{i-1}^{T}]^{T} for i\ge 0. Computing M_{i} with the reduced second-order Krylov sequence above, one obtains the transfer function directly from the power series expression without solving for the inverse analytical form.

This description of Arnoldi Moment Matching is a vast simplification of a very complex system of iterative algorithms. For more details, readers can refer to the literature covering the derivation and details of this algorithm (for example, Salimbahrami et al. and Chu et al.).

Functional Expression

Instead, a functional description of Arnoldi Moment Matching that covers a high-level perspective as a bases-finding function is illustrated in Figure B.1. This description is also the one used to discuss the further implementations of the Deflation concept.


Figure B.1 Arnoldi Moment Matching function I/O

Note that when applying the Arnoldi Moment Matching function to the dynamic MOR problem, the E,A,B,C inputs correspond to \tilde{K},M,B,V, where B is the input matrix as defined by the reference problem, and V contains the bases found so far, to which further output solutions have to be orthonormal. The separation between Order 1 and Orders >1 (which correspond to the order in the power expansion at which moments are matched) will become relevant for the deflation processes.

Deflation concepts

At the core of the bases-finding iterative process of Arnoldi Moment Matching is a Gram-Schmidt QR decomposition, whose factors together represent the bases discovered. The deflation concept then aims to detect nearly linearly dependent vectors in the bases collection. This detection is based on the criterion on the R matrix's diagonal entries, R_{ii} <\epsilon, where \epsilon is called the deflation tolerance. Once entries in R that satisfy this condition are detected, the corresponding column is deleted in both R and Q, effectively removing the basis vector from the V matrix.

Figure B.2 Deflation concept: Criteria and Action

Interpreting the bases in the geometrical sense, the \hat{v}_3 vector is reduced to its residual b_2 after projection onto the v_1 , v_2 space. If the magnitude of b_2 is small, the new basis vector carries little information beyond this reduced space. In the context of large multidimensional systems, storing and then computing with such small bases is computationally disadvantageous while retrieving little information.

Figure B.3 Krylov space vector projection: spatial representation

The reference problem

Figure B.4 The reference problem

To discuss results of the relevant Deflation implementations on the Arnoldi-driven MOR, we use a reference problem of a dynamic 3D beam composed of 2 sections, taken from Ullmann. The parameters of this dataset, which in turn fully describe the beam dynamically, are manipulable and are later used to generate modified systems for benchmarking. The reference dataset contains definitions of the stiffness, mass, viscous damping and structural damping matrices, which allow the solution of an analytical transfer function as

H(\omega,p) = C\left(K(p) + i\,\mathrm{sgn}(\omega)\,S(p)+i \omega D(p)-\omega^2 M(p)\right)^{-1}B

where p is the parameter array containing the information needed to construct K, M, D, S.
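
For illustration, a minimal sketch of evaluating this transfer function at a single frequency is given below; the matrices, their sizes and the frequency are placeholders and do not reflect the actual data layout of the benchmark dataset.

% Minimal sketch of evaluating H(omega, p) at one circular frequency.
% K, M, D, S, B, C are assumed to be already assembled for a given parameter
% array p; placeholders are used here instead of the benchmark data.
n = 500;
K = sprandsym(n, 0.02) + n*speye(n);          % stiffness
M = speye(n);                                 % mass
D = 1e-3 * K;                                 % viscous damping
S = 1e-2 * K;                                 % structural damping
B = speye(n, 6);                              % input matrix  (n x 6)
C = speye(6, n);                              % output matrix (6 x n)

omega = 2*pi*100;                             % frequency of interest [rad/s]
Z = K + 1i*sign(omega)*S + 1i*omega*D - omega^2*M;   % dynamic stiffness
H = C * (Z \ B);                              % 6 x 6 transfer function at omega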


Relative Error Measure

To discuss the quality of the MOR, the following error measure is used throughout this report: 

Err(f,x_j)=\frac{1}{12} \sum_{i=1}^{n} \left. \frac{|H(f,x_j) - H_{MOR}(f,x_j)|}{|H(f,x_j)|} \right|_{ii}

This definition is motivated by the large size of the transfer function (the system being MIMO), the assumed significance of the diagonal terms of the transfer function, and the fact that this error measure outputs an array that matches each frequency of interest. We report this error measure only for a fixed degree of freedom x_j, to focus on the error values across frequencies. 
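
A minimal sketch of this error measure is given below, assuming the full and reduced transfer functions are stored as (output x input x frequency) arrays with 12 diagonal entries (matching the 1/12 factor above); array names and sizes are illustrative placeholders.

% Minimal sketch of the relative error measure over the diagonal of H.
% H_full, H_mor: (n_io x n_io x n_f) transfer functions at the frequencies
% of interest for a fixed degree of freedom (placeholders below).
n_io = 12; n_f = 200;
H_full = rand(n_io, n_io, n_f) + 1i*rand(n_io, n_io, n_f);
H_mor  = H_full + 1e-6*(rand(n_io, n_io, n_f) + 1i*rand(n_io, n_io, n_f));

err = zeros(n_f, 1);
for kf = 1:n_f
    Hd  = diag(H_full(:,:,kf));                % diagonal entries of the full TF
    Hdm = diag(H_mor(:,:,kf));                 % diagonal entries of the MOR TF
    err(kf) = mean(abs(Hd - Hdm) ./ abs(Hd));  % 1/12 * sum of relative errors at this frequency
end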


Methods

Static Deflation

Following the functional description of the Arnoldi Moment Matching process (Figure B.1), its functionality is extended to include Deflation as follows:

In correspondence with the functional description of the Arnoldi Moment Matching process, deflation occurs at each order of moment matching (i.e. on the bases found at these matched moments). There, deflation filters out bases that satisfy the Linear Dependence Detection condition:

Figure M.1 Static Deflation

We see that deflation filters out bases that fall below the linear dependence threshold, \bar{R}_{ii}<\epsilon_{def}; this is done for each (moment) order specified. As the deflation tolerance \epsilon_{def} is a necessary parameter, user input is required. This method is therefore referred to as Static Deflation (SD), as the deflation tolerance \epsilon_{def} is specified at the beginning of each run and does not change throughout the solution procedure. 
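
A minimal sketch of this static deflation filter, applied after the Gram-Schmidt QR step of one moment order, is shown below; the candidate bases and the tolerance are placeholders.

% Minimal sketch of static deflation on one block of candidate bases.
% Q, R come from a QR decomposition of the candidate bases; eps_def is the
% user-specified deflation tolerance (placeholder values below).
W = rand(1000, 15);                            % candidate bases of this moment order
[Q, R] = qr(W, 0);
eps_def = 1e-8;

ind = find(abs(diag(R)) > eps_def);            % linear dependence detection: R_ii < eps_def
Q = Q(:, ind);                                 % drop near-linearly-dependent basis vectors
R = R(:, ind);                                 % delete the corresponding columns of R
V = Q;                                         % surviving bases enter the projection basis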

Adaptive Deflation

There are 2 shortcomings of the SD method that need to be addressed:

  • User input: In terms of the magnitude that constitutes a linearly dependent basis, only machine precision (of the current floating-point format) is an obvious choice. At any deflation threshold higher than that, the choice of deflation tolerance becomes arbitrary, as we do not have access to information regarding the exact (full-order) transfer function magnitudes (which we try to avoid solving). 
  • Constant tolerance: Moment matching is performed for a number of expansion points, which opens the possibility of changing the deflation tolerance for each point.

As a result, we arrive at a formulation that adaptively decides the tolerance for each expansion point, called Adaptive Deflation (AD). However, as discussed, the choice of deflation tolerance depends on knowledge of a reference - whether that is information about the current transfer function, the bases in comparison to each other, or otherwise. The reference must exploit the iterative nature of the solution process, and in this work we let the bases recovered from previous expansion points, accumulated, be the reference for the moment matching of the next expansion point. Within each expansion point, the bases recovered from Order 1 become the reference for higher orders.

Figure M.2a Construction of Reference for Adaptive Deflation

Applying the idea of references to deflation, we come to the Adaptive Deflation strategy for a given deflation point as follows:

Figure M.2b Testing of deflation tolerance candidates for Adaptive Deflation

Two important processes are:

  • Partial Reconstruction:
    As each basis, or collection thereof, can be used to reconstruct a transfer function, we do this every time we want to assess the quality of a deflation tolerance. In other words, for each deflation tolerance, we reconstruct the transfer function with the bases in storage (the accumulated reference from earlier expansion points) and the bases filtered by the current deflation tolerance at the current expansion point. The quality of the reconstruction is assessed with the error measure described in the Background section.
    This method then yields 3 entities for each expansion point (beyond the first expansion point):
    reference transfer function (H reference): transfer function constructed from the accumulated bases of previous expansion points. "Reference" refers to the fact that we do not re-filter the bases discovered at previous expansion points, meaning that they are permanently part of the reduced space. As a result, we can consider these bases a "ground truth" reference.
    no-deflate transfer function (H no-deflate): transfer function constructed from the accumulated bases of previous expansion points plus the bases recovered from the current expansion point at first-order moment matching. "No-deflate" refers to the fact that the first-order bases are deflated only down to machine precision, i.e. effectively not deflated.
    test transfer function (H test): transfer function constructed from the accumulated bases of previous expansion points plus the bases recovered from the current expansion point at order 2 and higher moment matching. "Test" refers to the fact that we scan through deflation tolerance candidates and check whether they trigger the stopping criterion.
     
     
  • Compare:
    We use a similar metric to evaluate the discrepancy between 2 partially reconstructed transfer functions (a minimal sketch of this comparison is given after this list): 

    \Delta = \sum_{1}^{n_f}{\left.\frac{|target-reference|}{|reference|}\right|_{ii}}
    From this definition, we can define 2 entities that the relative deviation \Delta is assigned to:
    reference deviation: relative deviation of the no-deflate transfer function (H no-deflate) from the reference transfer function (H reference). In this context, target is H no-deflate and reference is H reference.
    test deviation: relative deviation of the no-deflate transfer function (H no-deflate) from the test transfer function (H test). In this context, target is H no-deflate and reference is H test.


As soon as we have computed the aforementioned deviations, we start to check the fidelity of the test bases (corresponding to the currently tested deflation tolerance). The fidelity check takes the form of another relative difference measure, this time between the test deviation and the reference deviation. We define a stopping criterion, the Stopping Criteria (SC), as SC = \sum_{1}^{n_f}{\left.\frac{|target-reference|}{|reference|}\right|_{ii}}, where target is the test deviation and reference is the reference deviation. As soon as SC exceeds a defined value, the testing of deflation tolerances stops, and we return to the last deflation tolerance at which SC did not violate the search termination condition. One can see that the defined value used as the upper bound for SC is a critical parameter. For this work we set this value to 5, based on results covered in the Results section. However, the choice of a strict static/constant upper bound can be subject to future improvement, a short discussion of which can be found in the future-work remarks at the end of the Results section.


Pseudocode of the modified Arnoldi moment matching algorithm including adaptive deflation is as follows:

change_threshold = 5;                          % SC upper bound (testing termination condition)

H_ref = obj.calcHpartial(V_start, frqs);       % reference TF from bases accepted at previous expansion points
V_start = V;                                   % update reference, i.e. ground truth, to the currently accumulated bases

if k > 1                                       % no deflation testing at expansion point 1

    % construct no-deflate transfer function (reference bases + current first-order bases)
    H_no_deflate = obj.calcHpartial(V_start, frqs);
    reference_deviation = Compare(H_no_deflate, H_ref);

    % test each deflation tolerance candidate
    for i = logspace(-17, -1, 17)
        ind = find(diag(abs(R)) > i);          % re-filter higher-order bases Q w.r.t. candidate threshold i
        V_test = [V_start Q(:, ind)];

        % construct test transfer function
        H_test = obj.calcHpartial(V_test, frqs);
        test_deviation = Compare(H_no_deflate, H_test);

        % SC evaluation: compare test deviation against reference deviation
        if any(abs(test_deviation - reference_deviation)./abs(test_deviation) > change_threshold)
            sigTol = i/10;                     % rejection: fall back to the previous tolerance that worked
            ind = find(diag(abs(R)) > sigTol); % re-filter Q w.r.t. the accepted threshold
            V = [V_start Q(:, ind)];
            break
        end
    end
end


Fine-Search Adaptive Deflation

One important design choice of AD is the set of deflation tolerance candidates used to test the fidelity of deflation. Concerns can be raised if the candidate search array is too dense or too sparse. A variant of AD is therefore introduced that adds an inner search loop over candidates, called Fine-Search Adaptive Deflation (FSAD). Refer to Figure M.3b for a visualization of the choice of additional candidates for FSAD. 

Figure M.3a Deflation Tolerance candidates of Adaptive Deflation

Figure M.3b Deflation Tolerance candidates of Fine-Search Adaptive Deflation

FSAD also defines a different loop termination condition compared to AD, as we are seeking to fine-tune the assessment. The same SC formulation is used, but with a different upper bound, 10^{k-0.7}, where k is the expansion point index. This reflects the assumption that the (dynamic) information added at later expansion points warrants decreased confidence in the bases discovered at previous expansion points. It is based on the a posteriori scoping result that the deviation increases for some later expansion points, as seen in the Results section.

The control loop of FSAD is as follows: 

gridtree_range = [];                           % finer-search candidates, filled once the coarse search triggers
for i = logspace(-17, -1, 17)
    [defTol, break_signal, gridtree_signal] = deflate_candidate(V, Q, R, i, gridtree_range); % gridtree_signal == 0: AD is performed
    if gridtree_signal == 1
        gridtree_range = logspace(log10(lastTol), log10(i/(k-1)), 10^(k-2)); % initialize finer-search candidates
        for j = gridtree_range
            [defTol, break_signal, gridtree_signal] = deflate_candidate(V, Q, R, j, gridtree_range); % gridtree_signal == 1: FSAD is performed
            if break_signal == 1
                break
            end
        end
    end
    if break_signal == 1
        break
    end
    lastTol = defTol;
end

In this design, the deflate_candidate function carries the functionality shared by both AD and FSAD. The AD part is responsible for triggering gridtree_signal, which initiates the finer search specific to FSAD, after which AD is no longer active. The FSAD-specific part is then responsible for triggering break_signal, which terminates all deflation testing. 

Results

Static Deflation

Without repeating results previously covered in the literature, Figure R.1 covers the results of Static Deflation as described by Ullmann, performed on the reference problem. Deviation from the analytical solution is hard to see visually at the scale relevant to deflation. As a result, the plots of relative errors are the main subject of discussion. 

Figure R.0 Transfer function results of the reference example, compared with static deflation of tolerance 1e-8

Figure R.1: Error of final deflated Reduced Order Model vs full model, different deflation tolerances (static deflation scheme) 

One can see a rapid decay of the deflated Reduced Order Model (ROM) quality compared to the analytical solution as the deflation tolerance crosses a certain magnitude. The goal of the subsequent deflation methods is to avoid this quality decay, i.e. to reduce the error of the ROM.

Relative-Change Adaptive Deflation

The reference problem

Figure R.2 shows the SC value at the various tested deflation tolerances for expansion point 2, the first expansion point at which SC becomes valid to evaluate.

Figure R.2: Relative Difference Stopping Criteria at each tested Deflation tolerance, at expansion point 2

It is observable that the peak magnitude of SC stays within the 10^{-1} \rightarrow 10^0 range until the last tested deflation tolerance. We also know a posteriori for the reference problem that 10^{-7} \rightarrow 10^{-6} is the problematic range of deflation tolerance magnitudes at which SD starts seeing quality decay. As a result, we define a threshold that stops the deflation testing so that it catches this magnitude change in SC (at deflation tolerance 10^{-7}). Again empirically, we see that there are 2 possible value peaks that could be caught by our to-be-defined threshold. Thus, 5 seems to be a reasonable magnitude for this threshold, as it catches both SC peaks at 10^{-7} and does not trigger testing termination for earlier SC evaluations. Effectively, this enforces the relative deviation of higher-order Partially Reconstructed transfer functions to be within 500% of the reference deviation. No reason was found to adjust this threshold in this revision of AD, or to change it adaptively.


Now that the operating parameters for AD are defined, the final termination behaviour at each expansion point is presented. Figure R.3a shows the SC as it triggers the testing termination (after the threshold is exceeded), and the SC evaluated at the selected deflation tolerance (before the threshold is exceeded). 

Figure R.3a Stopping Criteria before (above) and at (below) testing termination, for each expansion point

From the accepted deflation tolerance as visualized in Figure R.3a, we Partially Reconstruct the transfer function accumulatively with all bases discovered when Arnoldi Moment Matching is exited for the current expansion point, and compare it to the corresponding accumulative Partial Reconstructions at previous expansion points. The analytical solution (Full Order Model) is also visualized; however, AD does not have access to this information for its computations. 

Figure R.3b Relative Error from Partial Reconstruction at the current expansion point vs. at each previous expansion point, and vs. the analytical solution, i.e. full order model, for each expansion point 

There is no need to modify this constant scalar threshold for testing termination, as AD manages to terminate and select appropriate deflation tolerances at each expansion point, both in terms of a posteriori knowledge of an appropriate static deflation tolerance and the change in magnitude of SC before and after termination.
From the selected deflation tolerance, the relative error evaluated with the newly deflated bases of the current expansion point plus the bases (accumulated from previous expansion points) in storage, versus the bases accepted at each previous expansion point, is presented in Figure R.3b. Note that, for example, at expansion point 3 we compare all accepted bases (discovered at expansion points 1, 2 and 3) against the bases as accepted at expansion point 1 and against the bases as accepted at expansion point 2 (which include the bases discovered at expansion points 1 and 2); however, in the SC testing loop only the bases accepted at expansion point 2 are involved in the testing loop termination. Additionally, the relative error compared to the analytical solution is shown; however, this data is hidden from AD (because the purpose of MOR is to avoid evaluating the full analytical solution).

Qualitatively speaking, the new bases that exit the deflation selection process are chosen such that they do not deviate excessively from the locally perceived difference to the existing bases, which balances an enforcement of low deviation globally (observed in the bases in storage) and locally (observed in the local first-order bases). 

Figure R.4 shows the final ROM output with AD:

Figure R.4 Error of the final AD-ROM (basis size = 51) vs. analytical solution, i.e. full order model

Compared to SD, AD manages to keep the error measure low without requiring user input and risking deterioration of the ROM quality. Moreover, the error does not increase past that at and around the first expansion point. This is significant because only the first expansion point is deflated to machine precision, and thus the error around it represents the most accurate information of the ROM that AD later observes as ground truth. However, there is an observable increase in error magnitude towards the final expansion point (at the end frequency) that suggests possible performance decay under different conditions. This observation motivated the benchmark on modified systems.

Modified systems

To showcase the robustness (or lack thereof) of AD, it is tested on modified variations of the default system. The modification itself is a uniform scaling of both sections of the beam in terms of mass (M), stiffness (K) and damping (S). In detail, the scales are 5, 1, 0.02 for mass; 25, 1, 0.05 for stiffness; and 1000, 1, 0.0005 for damping. The values are chosen so that the impact of each parameter scaling can be easily observed. Each modified system is then a combination of the above parameter scalings. For most of the modified systems, AD manages to keep the relative error around the same order of magnitude achieved at the first expansion point. This is again indicative of well-performing deflation, because without further knowledge of the analytical transfer function, the first expansion point (whose ROM bases are deflated to machine precision) represents the first ground truth that the ROM system has access to. Some well-performing cases of the system are illustrated in Figure R.5.

Figure R.5 Transfer Function and Relative error of the AD - ROM output vs. analytical solution - non-problematic modified system test cases
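
As an aside, a minimal sketch of how such scaled benchmark systems could be assembled is given below; the base matrices are placeholders, and the loop over all scaling combinations is an assumption about how the benchmark set was generated.

% Minimal sketch of generating modified benchmark systems by uniform scaling
% of mass, stiffness and (structural) damping. M0, K0, S0 stand in for the
% reference-problem matrices (placeholders here).
M0 = speye(100); K0 = 2*speye(100); S0 = 0.01*speye(100);
mass_scales      = [5, 1, 0.02];
stiffness_scales = [25, 1, 0.05];
damping_scales   = [1000, 1, 0.0005];

systems = {};
for a = mass_scales
    for b = stiffness_scales
        for c = damping_scales
            sys.M = a * M0;                    % scaled mass matrix
            sys.K = b * K0;                    % scaled stiffness matrix
            sys.S = c * S0;                    % scaled structural damping matrix
            systems{end+1} = sys;              % collect this combination
        end
    end
end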

However, higher mass scaling leads to significantly worse deflation quality, as illustrated in Figure R.6. For these cases, the error increases consistently (almost monotonically) towards later frequencies. As opposed to being centered around expansion points, this suggests that deflation is including bases whose information misrepresents the higher dynamics. Moreover, the error around the first expansion point is low and the error of the undeflated ROM is low, further corroborating the deficiency of AD in these cases. This serves as motivation for FSAD, where a finer spacing of deflation tolerance candidates was attempted to overcome these problematic cases of AD. The assumption that unnecessary bases are included, however, turns out to be incorrect, as shown by the FSAD results.

Figure R.6 Transfer Function and Relative error of the AD - ROM output vs. analytical solution - problematic modified system test  cases

Fine-Search Adaptive Deflation

Without re-illustrating SC at each expansion point at deflation testing termination, the final results of FSAD quality are presented. Figure R.7 shows FSAD in comparison with AD and SD (at machine precision) for the reference problem, and Figures R.8 and R.9 show the corresponding comparisons on the modified systems previously illustrated. In all of these comparisons, FSAD performs worse than AD, suggesting that restricting the bound on SC (even with decaying strictness for later expansion points) forces the removal of relevant bases that contain important dynamics at higher frequencies. This agrees with the fact that SD at machine precision represents the most accurate ROM possible, albeit with a larger size.


Figure R.7 Transfer Function and Relative error of the AD - ROM output vs. analytical solution - reference problem 



Figure R.8 Transfer Function and Relative error of the AD - ROM output vs. analytical solution - non-problematic modified systems test cases

Note that for Figure R.8, all Relative Error plots overlap, albeit with different ROM sizes.

Figure R.9 Transfer Function and Relative error of the AD - ROM output vs. analytical solution - problematic modified systems test cases

FSAD is observably worse than AD, specifically for the problematic test cases. This in turn serves as a sensitivity analysis on the removal of bases around the bases that AD accepts, in the direction of smaller ROM size. Although it demonstrates the value of the SC termination threshold for AD, FSAD also shows AD's shortcomings in 2 ways:

  • Problematic cases can be encountered in which AD is not able to appropriately relax its SC termination criterion. No detection is built in to determine whether AD will worsen the ROM output. 
  • Unproblematic cases can be encountered in which FSAD is able to further decrease the ROM size without suffering in quality. No detection measure allows for this desired further ROM size reduction.

Although an unsuccessful strategy, FSAD offers some insights for further improvement of AD: 

  • Relaxing the SC testing termination threshold for FSAD:
    Although, as seen in Figure R.3a, an abnormally rapid increase in SC magnitude can be observed, it does not indicate the inclusion of superfluous dynamic information, but rather the discovery of significantly unrepresented dynamics. As a result, the current implementation of AD necessitates the removal of dynamically significant bases in favor of a smaller ROM size. Consequently, future variants of AD should modify the termination threshold condition on SC to both (1) keep the size of the system conservative and (2) not remove important bases. In other words, to more meaningfully improve AD, ROM size reduction needs to become an optimization objective alongside relative error reduction. 
  • Storage of machine-precision bases:
    Currently, solving for AD makes all bases available before removal. The reason for this is twofold:
    • Practical: Industrial usage of MOR is concerned more with the size of the resulting model than with the solution speed. 
    • Implementation: A QR decomposition is at the core of Arnoldi Moment Matching and is the most expensive computation of MOR. For the currently implemented AD, QR is always allowed to converge unrestricted. This means that all of the bases, including the ones that SD would eliminate at machine precision, need to be found before removal from storage. But as FSAD has shown, the entirety of the discovered bases (including those nearly linearly dependent) is valuable for assessing the quality of deflation.

As a result, the ground truth that is observable by deflation can be expanded to all discovered bases, undeflated (as opposed to AD, where only the first machine-precision-deflated bases and the other deflated bases are accepted as ground truth). This will influence what is considered a reference in SC calculations. 

  • Relaxing SC testing termination for AD:
    Figure R.8 shows how a yet smaller size for the final ROM can be achieved without decay in output quality. More investigation like that done in Figure R.3a can be performed on benchmark cases to achieve a more effective SC threshold.



Conclusion

In this work, 2 variations of Deflation were developed, implemented and benchmarked against standard Deflation for Krylov-based Model Order Reduction using Arnoldi Moment Matching, for the Multiple-Input-Multiple-Output dynamic system of a 2-section 3D beam example. The Adaptive Deflation variant is reasonably successful in constraining the Reduced Order Model size while keeping the relative difference to the analytical solution low across various systems with modified dynamic properties, without access to the analytical solution itself or requiring user input for the Deflation Tolerance operating parameter. The Fine-Search Adaptive Deflation variant, although unsuccessful, served as a sensitivity analysis for Adaptive Deflation and provided insight into its future improvement.


Reference

“Model Order Reduction - Modelling and Simulation in Structural Mechanics - TUM Wiki.” https://wiki.tum.de/display/modsim/Model+Order+Reduction (accessed Feb. 19, 2023).

B. Salimbahrami, B. Lohmann, T. Bechtold, and J. Korvink, “A two-sided Arnoldi algorithm with stopping criterion and MIMO selection procedure,” Mathematical and Computer Modelling of Dynamical Systems, vol. 11, no. 1, pp. 79–93, Mar. 2005, doi: 10.1080/13873950500052595.

C.-C. Chu, H.-C. Tsai, and M.-H. Lai, “Structure preserving model-order reductions of MIMO second-order systems using Arnoldi methods,” Mathematical and Computer Modelling, vol. 51, no. 7–8, pp. 956–973, Apr. 2010, doi: 10.1016/j.mcm.2009.08.028.

Rupert Ullmann, “A 3D solid beam benchmark for parametric model order reduction.” Mendeley, Oct. 10, 2022. doi: 10.17632/CPRX2KX2WS.2.

R. Ullmann, S. Sicklinger, and G. Müller, “Optimization-Based Parametric Model Order Reduction for the Application to the Frequency-Domain Analysis of Complex Systems,” in Model Reduction of Complex Dynamical Systems, P. Benner, T. Breiten, H. Faßbender, M. Hinze, T. Stykel, and R. Zimmermann, Eds. Cham: Springer International Publishing, 2021, pp. 165–189. doi: 10.1007/978-3-030-72983-7_8.


