Subspace methods are used for Model Order Reduction (MOR). They target complex and large numerical models, which require substantial computational effort and large amounts of memory. Model order reduction methods reduce the size of such numerical models; the reduced models approximate the full system's response characteristics at a fraction of the computational effort and memory requirements. A good reduced model can replace the original model.

Subspace methods aim at finding a low-dimensional subspace that captures the desired behavior of the full system. Moment matching methods match the moments of a system's transfer function (its Taylor coefficients around an expansion point) and interpolate between the expansion points. This can be done explicitly by constructing a polynomial (e.g. Padé approximation methods) or implicitly by Galerkin projection (e.g. Krylov methods). Methods based on the singular value decomposition (SVD), such as the proper orthogonal decomposition (POD), use evaluations of the full system's response to compute the reduction basis; the singular values act as a measure of the truncation error. Balanced truncation (BT) is a more system-theoretic method and can be seen as a POD applied to the impulse response of the system.

In the following, a Krylov method is presented. Information about other methods, such as the Padé methods, POD, and BT, can be found in the literature listed in the references.

Krylov methods

Overview

Krylov methods produce a projection matrix \mathbf{V} with orthogonal columns, i.e. \mathbf{V}^H\mathbf{V}=\mathbf{I} (compare: (Model Order Reduction:1)). \mathbf{V} spans the Krylov subspace onto which the system is projected.

At least one expansion point s_0 needs to be specified, at which the reduced model will match the full model's response. After applying this projection to the given dynamic system, the reduced system of dimension r matches r moments of the full system's transfer function at the expansion point (and, depending on the chosen algorithm, some of their derivatives).
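Written out for a second-order system \mathbf{M}\ddot{\mathbf{x}} + \mathbf{C}\dot{\mathbf{x}} + \mathbf{K}\mathbf{x} = \mathbf{f}\,u(t) (the form used below), the Galerkin projection \mathbf{x} \approx \mathbf{V}\mathbf{x}_r yields the reduced system

\[ \underbrace{\mathbf{V}^H\mathbf{M}\mathbf{V}}_{\mathbf{M}_r}\,\ddot{\mathbf{x}}_r + \underbrace{\mathbf{V}^H\mathbf{C}\mathbf{V}}_{\mathbf{C}_r}\,\dot{\mathbf{x}}_r + \underbrace{\mathbf{V}^H\mathbf{K}\mathbf{V}}_{\mathbf{K}_r}\,\mathbf{x}_r = \mathbf{V}^H\mathbf{f}\,u(t) \]

of dimension r \ll n.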

Some advantages of Krylov methods:

  • No dense matrix factorizations (e.g. singular value or eigenvalue decompositions) of the full system are required; only sparse linear solves (e.g. via a sparse LU factorization) and matrix-vector multiplications.
  • The numerical complexity of Krylov methods to reduce a model from dimension n to r is \mathcal{O}\left(r^2n\right) for sparse systems, while the SVD or eigendecomposition has a numerical complexity \mathcal{O}\left(n^3\right).

A simple Krylov-based MOR method

For a system with symmetric matrices \mathbf{K},\mathbf{C},\mathbf{M}, the Krylov subspace around an expansion point s_0\in\mathbb{C} is spanned by

\[ \mathbf{V} = \left[ \left(s_0^2\mathbf{M} + s_0\mathbf{C} + \mathbf{K}\right)^{-1}\mathbf{f}, \left(s_0^2\mathbf{M} + s_0\mathbf{C} + \mathbf{K}\right)^{-2}\mathbf{f},\dots, \left(s_0^2\mathbf{M} + s_0\mathbf{C} + \mathbf{K}\right)^{-r}\mathbf{f} \right]. \]

After the columns of \mathbf{V} have been orthonormalized, it can be used directly as the projection matrix.
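For sparse system matrices, this construction can be sketched in Python; the function name and the use of SciPy's sparse LU factorization are illustrative choices, not part of the method itself:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def krylov_basis(M, C, K, f, s0, r):
    """Orthonormal basis of span{A^{-1} f, ..., A^{-r} f}
    with A = s0^2 M + s0 C + K (single expansion point s0)."""
    A = (s0**2 * M + s0 * C + K).tocsc()
    lu = spla.splu(A)            # factorize A once, reuse for all r solves
    v = f.astype(complex)
    cols = []
    for _ in range(r):
        v = lu.solve(v)          # next Krylov vector A^{-k} f
        cols.append(v)
    V, _ = np.linalg.qr(np.column_stack(cols))   # orthonormalize: V^H V = I
    return V
```

In practice the Krylov vectors are orthogonalized on the fly (Arnoldi-style) instead of by a final QR factorization, because the vectors \mathbf{A}^{-k}\mathbf{f} quickly become numerically linearly dependent.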

Instead of using more derivatives around s_0, multiple expansion points s_i,\ i=1,\dots,r can be used. The corresponding matrix spanning the desired Krylov subspace is then given by

\mathbf{V} = \left[ \left(s_1^2\mathbf{M} + s_1\mathbf{C} + \mathbf{K}\right)^{-1}\mathbf{f}, \left(s_2^2\mathbf{M} + s_2\mathbf{C} + \mathbf{K}\right)^{-1}\mathbf{f},\dots, \left(s_r^2\mathbf{M} + s_r\mathbf{C} + \mathbf{K}\right)^{-1}\mathbf{f} \right].
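A sketch of the multi-point variant in the same sparse setting (one linear solve per expansion point; names are again illustrative):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def krylov_basis_multipoint(M, C, K, f, points):
    """One solve of (s_i^2 M + s_i C + K) x = f per expansion point s_i;
    the resulting columns are orthonormalized afterwards."""
    cols = [spla.spsolve((s**2 * M + s * C + K).tocsc(), f.astype(complex))
            for s in points]
    V, _ = np.linalg.qr(np.column_stack(cols))
    return V
```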

Choice of expansion points

The choice of the expansion points s_i has a major influence on the quality of the reduced model. A rule of thumb for choosing them, according to Antoulas: interpolate at the mirror images -\lambda_i^* of the poles \lambda_i of the full system. Since these poles are usually too expensive to compute, iterative methods (see below) apply this rule to the poles of the reduced system instead.

Example

An example for the Krylov method can be found here:

Iterative Krylov methods

The quality of a reduced model depends strongly on the distribution of the expansion points. However, an optimal distribution is typically not known a priori. Iterative methods, such as the Iterative Rational Krylov Algorithm (IRKA), can find a locally optimal reduced model by repeatedly updating the expansion points.

IRKA works as follows:

  1. Compute \mathbf{V} using an initial set of expansion points s_i,\ i=1,\dots,r:

    \mathbf{V}=\left[ \left(s_1^2\mathbf{M} + s_1 \mathbf{C} + \mathbf{K}\right)^{-1}\mathbf{f}, \dots, \left(s_r^2\mathbf{M} + s_r \mathbf{C} + \mathbf{K}\right)^{-1}\mathbf{f} \right].
  2. Reduce the system: \mathbf{M}_r=\mathbf{V}^T\mathbf{MV},\ \mathbf{C}_r=\mathbf{V}^T\mathbf{CV},\ \mathbf{K}_r=\mathbf{V}^T\mathbf{KV}.
  3. Solve the quadratic eigenvalue problem \left(\mathbf{M}_r\lambda^2 + \mathbf{C}_r\lambda + \mathbf{K}_r\right)\mathbf{x}=0 in the reduced space. This yields 2r eigenvalues.
  4. Choose r eigenvalues \lambda_i,\ i=1,\dots,r from the 2r computed eigenvalues (e.g. one from each complex-conjugate pair).
  5. Update the expansion points: s_i = -\lambda_i.
  6. Check convergence (e.g. the change of the expansion points between two iterations).
  7. If convergence has not been reached, continue with step 1.
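The steps above can be sketched as follows; the first-order linearization of the quadratic eigenvalue problem and the heuristic in step 4 (keeping the r eigenvalues with the largest imaginary part) are illustrative choices, not prescribed by IRKA:

```python
import numpy as np
import scipy.linalg as la
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def irka_second_order(M, C, K, f, s_init, tol=1e-6, maxit=50):
    """IRKA sketch for a second-order system M x'' + C x' + K x = f u(t).
    The eigenvalue selection in step 4 (keep the r eigenvalues with the
    largest imaginary part) is one common heuristic, not the only choice."""
    s = np.asarray(s_init, dtype=complex)
    r = len(s)
    for _ in range(maxit):
        # Steps 1-2: build V from the current expansion points, project system
        cols = [spla.spsolve((si**2 * M + si * C + K).tocsc(),
                             f.astype(complex)) for si in s]
        V, _ = np.linalg.qr(np.column_stack(cols))
        Mr = V.conj().T @ (M @ V)
        Cr = V.conj().T @ (C @ V)
        Kr = V.conj().T @ (K @ V)
        # Step 3: quadratic eigenvalue problem via first-order linearization
        Z, I = np.zeros((r, r), dtype=complex), np.eye(r, dtype=complex)
        lam = la.eigvals(np.block([[Z, I], [-Kr, -Cr]]),
                         np.block([[I, Z], [Z, Mr]]))   # 2r eigenvalues
        # Step 4: choose r of the 2r eigenvalues (heuristic)
        lam = lam[np.argsort(-lam.imag)][:r]
        # Step 5: mirror images become the new expansion points
        s_new = -lam
        # Steps 6-7: stop once the expansion points no longer change
        if np.max(np.abs(np.sort_complex(s_new) - np.sort_complex(s))) < tol:
            s = s_new
            break
        s = s_new
    return V, s
```

At convergence the expansion points satisfy s_i = -\lambda_i for the poles \lambda_i of the reduced model, i.e. the interpolation rule from the previous section applied to the reduced system.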

An example for the IRKA can be found here:

References

Avery, Philip, Charbel Farhat, and Garth Reese. "Fast frequency sweep computations using a multi‐point Padé‐based reconstruction method and an efficient iterative solver." International Journal for Numerical Methods in Engineering 69.13 (2007): 2848-2875.

Antoulas, Athanasios C. Approximation of large-scale dynamical systems. Society for Industrial and Applied Mathematics, 2005.