Koopman analysis studies the simulation of quasi-chaotic, non-linear systems and their propagation through time.
In other words, it lets us forecast non-linear dynamical systems and describe them efficiently through their frequency behaviour, up to a degree of accuracy that classical methods could not reach!
When looking into standard Fourier analysis, we describe a linear system between the actual oscillators and the desired output space, minimizing the squared error:
With
The oscillator
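The Fourier fit described above can be sketched as an ordinary least-squares problem: stack the oscillators (sines and cosines at chosen frequencies) as columns of a design matrix and minimize the squared error against the signal. The signal, frequencies, and grid here are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Sample an illustrative signal on a uniform grid.
t = np.linspace(0.0, 2.0 * np.pi, 200)
y = 1.5 * np.sin(2.0 * t) + 0.5 * np.cos(5.0 * t)

# Design matrix of oscillators: one column per sine/cosine frequency.
freqs = [1, 2, 3, 4, 5]
Phi = np.column_stack(
    [np.sin(k * t) for k in freqs] + [np.cos(k * t) for k in freqs]
)

# Linear least squares: minimize ||Phi @ c - y||^2 over coefficients c.
c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
y_hat = Phi @ c
print("max reconstruction error:", np.max(np.abs(y_hat - y)))
```

Because the signal lies exactly in the span of the chosen oscillators, the least-squares fit recovers the coefficients (1.5 on $\sin 2t$, 0.5 on $\cos 5t$) and the reconstruction error is at machine precision.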
In Koopman Analysis, however, we look into any type of function
Furthermore, from this error function, we can derive some type of pseudo-likelihood.
In this case, we optimize our non-linearity
Here, for such a pseudo-likelihood, we would use a softmax function over all samples.
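A minimal sketch of such a softmax pseudo-likelihood: turn per-sample fit scores into a probability distribution and sum the log-probabilities. The scores here are hypothetical placeholders; the original text does not specify the scoring function.

```python
import numpy as np

def softmax(scores):
    # Subtract the max for numerical stability before exponentiating.
    z = scores - np.max(scores)
    e = np.exp(z)
    return e / e.sum()

# Hypothetical per-sample fit scores (higher = better fit).
scores = np.array([2.0, 1.0, 0.1])
p = softmax(scores)

# Pseudo-(log-)likelihood: sum of log softmax probabilities over all samples.
log_pseudo_likelihood = np.log(p).sum()
print(p, log_pseudo_likelihood)
```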
HAVOK aims to provide an expressively non-linear view of the recombination function within the Koopman framework.
Thus, similar to Deep Koopman, where we approximate our function with a deep neural network, here we adapt our function according to the Hankel criterion.
So, this whole Koopman process can be expressed through a single Koopman operator, which describes the finite, linear transition $\mathcal{K}\, g(x_{k}) = g(x_{k+1})$ of our non-linear dynamical function, as described below:
$\rightarrow \mathbf{F}_t(x(t_0)) = x(t_0+t) = x(t_0) + \int_{t_0}^{t_0+t} f(x(\tau))\,d\tau$
$\rightarrow \mathcal{K}_t g(x_k) = g(\mathbf{F}_t(x_k)) = g(x_{k+1})$
$\rightarrow g(x_{k+1}) = \mathcal{K}_t g(x_k)$, the discrete-time update
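The discrete-time update can be made concrete with the standard polynomial example: for a map with a quadratic non-linearity, the observables $(x_1, x_2, x_1^2)$ span a Koopman-invariant subspace, so the lifted dynamics are exactly linear. The map and its coefficients below are an illustrative choice, not from the text.

```python
import numpy as np

# Non-linear discrete map: x1' = a*x1, x2' = b*x2 + c*x1^2
a, b, c = 0.9, 0.5, 1.0

def F(x):
    x1, x2 = x
    return np.array([a * x1, b * x2 + c * x1 ** 2])

# Observables g(x) = (x1, x2, x1^2) span a Koopman-invariant subspace.
def g(x):
    x1, x2 = x
    return np.array([x1, x2, x1 ** 2])

# Exact finite-dimensional Koopman operator on that subspace:
# g1' = a*g1,  g2' = b*g2 + c*g3,  g3' = (a*x1)^2 = a^2 * g3
K = np.array([[a, 0.0, 0.0],
              [0.0, b, c],
              [0.0, 0.0, a ** 2]])

x = np.array([1.2, -0.7])
lhs = g(F(x))   # g(x_{k+1})
rhs = K @ g(x)  # K g(x_k)
print(lhs, rhs)  # the two agree: the lifted dynamics are linear
```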
Thus, if we could find such a Koopman-invariant subspace of measurements $y = g(x)$, the two maps would advance states and measurements in parallel:
$\mathbf{F}_t: x_k \rightarrow x_{k+1}$
$\mathcal{K}_t: y_k \rightarrow y_{k+1}$
Now, to avoid sitting in a cave for an eternity trying to find such a subspace analytically, we build it from data. Thus, we use the Hankel matrix of time-delayed measurements:
$ \begin{bmatrix} x(t_1) & x(t_2) & x(t_3) & \cdots & x(t_p) \\ x(t_2) & x(t_3) & x(t_4) & \cdots & x(t_{p+1}) \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ x(t_q) & x(t_{q+1}) & x(t_{q+2}) & \cdots & x(t_m) \end{bmatrix} $
Thus, we have a DMD-like setting: the SVD of this matrix directly yields a Koopman-invariant measurement system on the attractor (Giannakis 2015).
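Building the Hankel matrix and taking its SVD can be sketched in a few lines. The two-tone test signal and the delay depth `q` are illustrative assumptions; the rows of `Vt` are the delay-coordinate time series used below.

```python
import numpy as np

# A scalar time series (illustrative: two incommensurate tones).
m = 400
t = np.linspace(0.0, 40.0, m)
x = np.sin(t) + 0.5 * np.sin(np.sqrt(2.0) * t)

# Stack delayed copies of the series into a Hankel matrix:
# entry (i, j) holds x at time index i + j, so anti-diagonals are constant.
q = 20
p = m - q + 1
H = np.column_stack([x[j:j + q] for j in range(p)])  # shape (q, p)

# SVD of H: columns of U are the eigen-time-delay modes, rows of Vt are
# the corresponding time series v_1(t), ..., v_q(t) on the attractor.
U, S, Vt = np.linalg.svd(H, full_matrices=False)
print("dominant singular values:", S[:4])
```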
This diagram illustrates how the regression model connects to methods like Dynamic Mode Decomposition (DMD) (Rowley 2009, Schmid 2010, Kutz 2016) and Sparse Identification of Nonlinear Dynamics (SINDy) (Brunton 2016). The linear part is captured in the matrix $A$, while $B$ captures the forcing by the last delay coordinate $v_r$; the final row of the regression remains a bad fit:
$$ \begin{bmatrix} A & B \\ \text{- Bad -} & \text{- Fit -} \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \\ v_3 \\ \vdots \\ v_r \end{bmatrix} $$
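A minimal sketch of this HAVOK-style regression, under assumptions: an illustrative two-tone signal, delay depth 40, and rank $r = 6$. We delay-embed, take the SVD, numerically differentiate the delay coordinates, and fit $\dot{v} = A v + B v_r$ by least squares, treating the last coordinate $v_r$ as the forcing term.

```python
import numpy as np

# Delay-embed a scalar series and take its SVD (as for the Hankel matrix above).
dt = 0.05
t = np.arange(0.0, 40.0, dt)
x = np.sin(t) + 0.5 * np.sin(np.sqrt(2.0) * t)
q = 40
H = np.column_stack([x[j:j + q] for j in range(len(x) - q + 1)])
U, S, Vt = np.linalg.svd(H, full_matrices=False)

# Keep r delay coordinates; the last one, v_r, acts as the forcing term.
r = 6
V = Vt[:r].T                     # columns v_1 .. v_r, one row per time step
dV = np.gradient(V, dt, axis=0)  # numerical time derivative of each v_i

# Regression dv/dt = [A B] [v_1..v_{r-1}; v_r], solved by least squares.
M, *_ = np.linalg.lstsq(V, dV[:, :r - 1], rcond=None)
A = M[:r - 1].T                  # (r-1) x (r-1) linear dynamics
B = M[r - 1:].T                  # (r-1) x 1 forcing from v_r
print("A shape:", A.shape, "B shape:", B.shape)
```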
So, when looking into the aforementioned methodology, we can easily see that there is a specific range that HAVOK can handle when finding the optimal operator.
When we want a better fit on the bad-fit part, machine learning can fill this computational gap.
Figure: Machine learning filling the computational gap (bad fit) in HAVOK analysis (Yang et al. 2022).
Here, we will try to generalize this finite evolution using many small time windows of a given observation, and even of future predicted observations, given the HAVOK singular value decomposition.
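One way to sketch this windowed evolution, under the same illustrative-signal assumptions as above: fit a one-step linear propagator $v_{k+1} \approx M v_k$ in the delay coordinates (DMD-style), then roll it forward over a short future window. The rank and window length are arbitrary choices for illustration.

```python
import numpy as np

# Delay coordinates from a Hankel SVD, as in the HAVOK setup above.
dt = 0.05
t = np.arange(0.0, 40.0, dt)
x = np.sin(t) + 0.5 * np.sin(np.sqrt(2.0) * t)
q = 40
H = np.column_stack([x[j:j + q] for j in range(len(x) - q + 1)])
_, _, Vt = np.linalg.svd(H, full_matrices=False)
V = Vt[:4].T  # first r = 4 delay coordinates, one row per time step

# Fit a one-step propagator M with v_{k+1} ~= v_k @ M (least squares).
M, *_ = np.linalg.lstsq(V[:-1], V[1:], rcond=None)

# Roll the propagator forward over a short future window from the last state.
steps = 50
v = V[-1]
preds = []
for _ in range(steps):
    v = v @ M
    preds.append(v)
preds = np.array(preds)
print("predicted window shape:", preds.shape)
```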
