In this article we consider the problem of linear extrapolation of a zero-mean wide-sense-stationary random process, in both the discrete-time and continuous-time cases, under the absence of a priori information about the statistical characteristics of the disturbance and in the absence of measurement errors under scalar observation; only a restriction on the disturbance dispersion is assumed. We investigate a minimax approach, which guarantees high-quality prediction under the least favorable disturbance spectrum. A simple implementation of an optimal adaptive minimax predictor, a prediction based on the Kalman-Bucy filter, and their comparative characteristics have been obtained. Examples are given.
## I. INTRODUCTION
The solution to a similar problem for continuous-time processes was obtained for the first time by U. Grenander [1], who solved the problem of predicting a stationary continuous-time random process observed without noise, with only its dispersion known. That paper should be marked as the first in which the minimax approach to the extrapolation problem for stationary processes was proposed. In the papers by M. Moklyachuk [2] the minimax approach is applied to extrapolation, interpolation and filtering problems for functionals which depend on the unknown values of stationary processes and sequences. Many investigators have been interested in minimax extrapolation problems for stationary stochastic sequences. J. Franke [4] and J. Franke and H. V. Poor [5] investigated the minimax (robust) extrapolation and filtering problems for stationary sequences with the help of convex optimization methods. This approach makes it possible to find equations that determine the least favourable spectral densities for various classes of densities.
Unlike the Kalman method [11,13], or the more general case of Bayesian estimation, the essence of the minimax method is that the disturbances are not assumed to be fully described probabilistically; in the Kalman model, for instance, the disturbances are defined by a process uncorrelated in time. Such a problem, as well as other more general ones, may be solved using the minimax approach, which guarantees high-quality prediction under the least favorable spectrum.
As to engineering motivation, the article differs from previous publications in that it contains an analytical study of the problem of predicting a scalar wide-sense-stationary process under the worst disturbance spectrum, about which only the dispersion is known a priori, in both continuous and discrete time and over both medium and long horizons.
Unlike previous works [3,4] devoted to forecasting methods based on parametric time series models, this work uses a minimax approach to obtain guaranteed results. Here the prediction algorithm, in contrast to the Kalman method or the more general case of Bayesian estimation, does not assume the disturbances to be fully described probabilistically (in the Kalman model, for example, the disturbances are set by a process uncorrelated in time); it represents a correction of the model forecast and therefore does not coincide with simple forecasting. At the same time, the prediction efficiency increases with the depth of the filter memory of $N$ steps.
To compare the results, and to explain specifically how adaptation is carried out when an approximating model is found, the most general method, based on the Kalman filter, is given in the paper. At the same time, an algorithm is synthesized and the worst disturbance spectrum is found.
The aim of this work is to present new methods of solution of the linear extrapolation problem for a zero-mean wide-sense-stationary random process, for both the discrete-time and continuous-time cases, in the absence of a priori information about the statistical characteristics of the disturbance and in the absence of measurement errors under scalar observation. The existence of the saddle point of the extrapolation game, in terms of the extreme properties of the permissible spectral densities of the linear filter and of nature, is also discussed.
The above setting of the problem is characteristic, in particular, of the tasks of determining the motion of a space object or an aircraft from radar or optical measurements. Here the fluctuational (noise) measurement errors are by nature "white noise." The remaining components of the errors and disturbances are indefinite in nature: their dispersion, or in a more general case their dispersion matrix, is known, i.e. an ellipsoid containing these errors at a fixed confidence level $\alpha \approx 0.1$.
## II. STATEMENT OF A PROBLEM OF MINIMAX EXTRAPOLATION IN CASE OF THE ABSENCE OF MEASUREMENT ERRORS IN CONTINUOUS TIME
Let us assume that the real component of the measured signal $y(t)$ was formed from a certain disturbance $u(t)$ by means of the dynamic system:
$$
\dot{\vec{x}}(t) = A\vec{x}(t) + \vec{b}u(t) \tag{1}
$$
Here $A$ is the constant matrix of $n \times n$ dimension; $t \in (-\infty, +\infty)$; $\vec{x}(t) \in R^n$ is the state vector of the system; $\vec{b} \in R^n$ is the constant vector; $u(t)$ is the unknown disturbance.
The disturbance is a scalar stationary random process with zero mean value; the only information concerning its correlation function is the constraint on its dispersion, which satisfies the inequality
$$
Mu^{2}(t)\leq a,
$$
where $M$ denotes mathematical expectation; $a < \infty$ is a fixed disturbance power; and, possibly, there is a constraint on the concentration region of its spectral density $h_u(\lambda)$: $\lambda \in \Lambda$, where $\Lambda$ is a given subset of the frequency axis. The measured signal, upon the results of the observations on the time interval $\tau \in (-\infty, t)$, is given by
$$
y (\tau) = \vec {C} ^ {T} \vec {x} (\tau), \tag {2}
$$
where $\vec{C} \in R^{n}$ is a constant column vector.
The linear input-state-output equations (1) and (2) will be called in what follows the meter-object system (1), (2).
Let us make the following assumptions concerning the matrices $A$, $\vec{b}$ and $\vec{C}$:
1. The meter-object system (1), (2), in the absence of measurement errors and disturbances, is an observable system, i.e.:
$$
\operatorname{rank}(\vec{C}, A^{T}\vec{C}, \dots, (A^{n-1})^{T}\vec{C}) = n
$$
Throughout this paper, "rank" denotes the rank of the corresponding compound matrix, here
$$
(\vec{C}, A^{T}\vec{C}, \dots, (A^{n-1})^{T}\vec{C}).
$$
The condition means that at least one of its minors of order $n$ is different from zero.
2. System (1) is "masked" by the disturbance, i.e.:
$$
\vec{x}(t) = \int_{-\infty}^{t} e^{A(t-\tau)} \vec{b} u(\tau) \, d\tau
$$
The concept of "maskability" is similar to the concept of controllability in systems with control signals; therefore, the necessary and sufficient condition for "maskability" is expressed mathematically in our case as [6]
$$
\operatorname{rank}(\vec{b}, A\vec{b}, \dots, A^{n-1}\vec{b}) = n
$$
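Both rank conditions are easy to check numerically. The sketch below is an illustration, not part of the paper: it instantiates system (1), (2) with the double integrator $\ddot{s} = u$ of Example 3.1 below, i.e. hypothetical matrices $A$, $\vec{b}$, $\vec{C}$ chosen for this demonstration.

```python
import numpy as np

# Hypothetical instance of system (1), (2): the double integrator
# x1' = x2, x2' = u, with the scalar observation y = x1.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
b = np.array([[0.0], [1.0]])
C = np.array([[1.0], [0.0]])
n = A.shape[0]

# Observability matrix (C, A^T C, ..., (A^{n-1})^T C).
obs = np.hstack([np.linalg.matrix_power(A.T, k) @ C for k in range(n)])
# "Maskability" (controllability) matrix (b, A b, ..., A^{n-1} b).
ctrl = np.hstack([np.linalg.matrix_power(A, k) @ b for k in range(n)])

print(np.linalg.matrix_rank(obs), np.linalg.matrix_rank(ctrl))  # 2 2
```

Both ranks equal $n = 2$, so this instance is observable and "masked" by the disturbance.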
Due to the observability of the useful signal, the set of functions satisfying the unbiased condition [10] is not empty.
From the vector form of the meter-object system (1), (2) let us move on to its equivalent scalar representation.
It is required to find the estimate:
$$
\hat {s} (t + T) = \int_ {- \infty} ^ {t} g (t + T - s) y (s) d s
$$
i.e., to find the transition function $g(t)$ of a physically realizable filter which evaluates, at the moment of time $t + T$, the linear functional that can be calculated by the formula
$$
s(t + T) = \int_{-\infty}^{t} \vec{q}^T(t + T - s) \vec{x}(s) ds
$$
where $T$ is the extrapolation time of the useful vector signal $\vec{x}(s)$ ($s \leq t$), and $\vec{q}^T(t)$ is a $1 \times n$ real-valued, mean-square-integrable row vector, the transpose of the $n \times 1$ column vector $\vec{q}(t)$, with the row vector of frequency characteristics $Q(\lambda)$ associated with it by the Fourier transformation.
The quality criterion is the dispersion of the extrapolation error for the extrapolation period $T$:
$$
\min_{G^{\text{ext}}} \max_{u} M[\hat{s}(t+T)-s(t+T)]^{2} = \min_{G^{\text{ext}}} \max_{h} D(G^{\text{ext}},h) \tag{4}
$$
where $G^{ext}$ and $Q$ are the transfer functions associated with transformations $g$ and $\vec{q}$ respectively, $h$ is the spectrum of the unknown disturbance $u(t)$.
Then for fixed spectral density $h(\lambda) \in K$, the optimum linear extrapolator $G^{ext}(\lambda)$ is found by solving the problem
$$
\min _ {G ^ {\text{ext}} \in K ^ {\text{ext}}} D \left(G ^ {\text{ext}}, h\right) \tag{5}
$$
Let $K^{ext}$ denote the class of complex-valued linear extrapolators, and let $K \subseteq L_{1}(R)$ denote a class of disturbance spectra, where $L_{1}(R)$ is the class of absolutely integrable real-valued functions on $R$.
Thus, the problem comes to finding the frequency characteristics $G^{ext}(\lambda)$ according to minimax criterion (4). It is of interest to consider situations in which the exact form of the power spectral density of the signal is known, but the form of the power spectral density of the disturbance is unknown.
In particular, we consider disturbance spectral class $K$ defined by
$$
K = \left\{ h(\lambda) \in L_{1}(R):\ \frac{1}{2\pi} \int_{-\infty}^{\infty} h(\lambda)\, \Psi(\lambda)\, d\lambda \leq D_{u};\ h(\lambda) = 0 \ \text{if} \ \lambda \notin \Lambda \right\},
$$
where $D_{u} < \infty$ is a fixed perturbation power and $\Psi(\lambda)$ is a non-negative, even in $\lambda$, preassigned function satisfying the Paley-Wiener condition
$$
\int_{-\infty}^{\infty} \frac{|\ln \Psi(\omega)|}{1+\omega^2} d\omega < \infty , \tag{6}
$$
and $\Lambda$ is a given subset of the frequency axis of positive (possibly infinite) measure. In what follows we consider $K^{ext}$ as a subspace of $L_{2}(h)$, the Hilbert space of complex-valued functions on $(-\infty, \infty)$ that are square integrable with respect to the Lebesgue measure with density $h(\lambda)$ not equal to zero.
The paper by Ulf Grenander [1] was the first in which this approach to the extrapolation problem for stationary processes was proposed. The analogous causal case has been considered, for interval fuzziness of a linear dynamic system with parametric uncertainty only in the state matrix, in the framework of H-stability with restricted variance of the random disturbance in the useful component of the signal model, in [11]. Some aspects of a related causal case have been considered in [2,3] using convex optimization and subdifferential computation methods. For this problem we may now state the following theorem, which gives a technique for searching for least favorable spectra in certain cases.
Theorem 2.1: The given problem has a saddle point, due to the fact that $D(G^{ext}, h)$ is linear in $h$ and the set of all $h(\lambda)$ with restricted integral dispersion
$$
\frac{1}{2\pi} \int_{-\infty}^{\infty} h(\lambda)\, \Psi(\lambda)\, d\lambda \leq D_{u}
$$
is convex and weakly compact, while $D(G^{ext}, h)$ is quadratic in $G^{ext}$; i.e., the convexity-concavity conditions required by the well-known theorems of game theory [14] are met, and the classical condition for the existence of the saddle point is satisfied, so we have the corresponding relation determining the saddle point:
$$
\min _ {G ^ {\text{ext}} \in K ^ {\text{ext}}} \max _ {h \in K} D \left(G ^ {\text{ext}}, h\right) = \max _ {h \in K} \min _ {G ^ {\text{ext}} \in K ^ {\text{ext}}} D \left(G ^ {\text{ext}}, h\right) \tag{7}
$$
Thus, the problem should be interpreted as an antagonistic game $\Gamma(D, K, K^{ext})$, where the payoff functional is $D(G^{ext}, h)$ and the strategy spaces of the two players are: the space $K$ of the first player, called nature, striving to maximize $D(G^{ext}, h)$, and the space $K^{ext}$ of the second player, called the investigator, striving to minimize $D(G^{ext}, h)$.
# III. THE SYSTEM OF THE RELATIONS, DETERMINING THE SADDLE POINT IN THE PROBLEM
Theorem 2.1 can be applied for finding solutions to extrapolation problems for a stationary random process in the case of spectral uncertainty, when the spectral density of the disturbance is not known exactly. In this section, without loss of generality, we assume that the useful vector signal $\vec{x}(s)$ may be presented in the form of a scalar signal $x(s)$ whose spectral density may be represented as:
$$
X _ {u} (\lambda) = T (\lambda) + h (\lambda),
$$
where $T(\lambda)$ is a known non-negative component which satisfies the Paley-Wiener condition (6) and $h(\lambda) \in K$ is an unknown component of the signal spectrum. The solution to problem (5) can be found in this case by applying the results of [7] concerning the relationship between the minimax filtering problem and the solution to the Markov moment problem. Let us confine ourselves to the case $T(\lambda) = 0$; the more general case is treated similarly.
The system of the relations, determining the saddle point in this case is defined by the formulas (8) - (10)
$$
\Psi (\lambda) = \left| \varphi (\lambda) \right| ^ {2}; \tag {8}
$$
$$
X_{u}^{+}(\lambda) = \frac{\left| \left[ Q^{*}(\lambda) X_{u}^{-}(\lambda) \right]_{+} \right|^{2}}{\sqrt{\alpha}\, \varphi(\lambda)}; \tag{9}
$$
$$
\int_ {- \infty} ^ {+ \infty} \frac {\left| [ Q (\lambda) X _ {u} ^ {+} (\lambda) ] _ {-} \right| ^ {2}}{\alpha \Psi (\lambda)} d \lambda = a; \tag {9.1}
$$
$$
G^{\text{ext}}(\lambda) = \frac{\left[ Q(\lambda) X_{u}^{+}(\lambda) \right]_{+}}{X_{u}^{+}(\lambda)} = Q(\lambda) - \mu \frac{X_{u}^{-}(\lambda)}{X_{u}^{+}(\lambda)} \varphi^{*}(\lambda) \tag{10}
$$
where $\alpha$ is the Lagrange factor satisfying the system of relations determining the saddle point of the minimax extrapolator [8, paragraph 3.6.1]; $\varphi(\lambda)$ is the result of the factorization $\Psi(\lambda) = \varphi^+(\lambda) \varphi^-(\lambda)$, with $\varphi^*(\lambda) = \varphi^-(\lambda)$; and $\mu$ is the maximum positive eigenvalue corresponding to the eigenfunction $X_u(\lambda)$ in equation (9.1). The dispersion of the extrapolation error for the extrapolation period $T$ can be represented in the form
$$
D^{ext} = \frac{1}{2\pi} \int_{-\infty}^{+\infty} |G^{ext}(\lambda) - Q(\lambda)|^{2} h(\lambda)\, d\lambda = a \mu^{2}
$$
In equations (9)-(10), $[A(\lambda)]_{+}$ and $[A(\lambda)]_{-}$ denote the results of separating the function $A(\lambda)$ into the parts analytic in the upper and lower half-planes respectively, while $A^{+}(\lambda)$ and $A^{-}(\lambda)$ are the causal and noncausal factors of $A(\lambda)$ in the factorization $A(\lambda) = A^{+}(\lambda)A^{-}(\lambda)$; $h(\lambda)$ is the solution to the Markov moment problem, for the case when the spectral density of the perturbation is not known exactly, related to the minimax mean-square error criterion (4). In order to demonstrate the technique we propose the following example.
Example 3.1
Let us consider the problem of optimal linear extrapolation, when the spectral density of the perturbation is not known exactly, for the second-order object
$$
\ddot {s} = u (t)
$$
It is required, using the values of the measured signal $s(\tau)$ upon the results of the observations at $\tau \leq t$, to evaluate the value of the signal at the moment of time $t_{ext} = t + T$. In this case $\varphi(\lambda) = (i\lambda)^2$; $\Psi(\lambda) = \lambda^4$; $Q(\lambda) = e^{i\lambda T}$; $a(t) = \delta(t - \tau)$.
Differentiating twice reduces the integral equation of Grenander [1, p. 156], determining the unknown eigenfunction $X_{u}(\lambda)$, to the differential equation with boundary conditions
$$
\mu \ddot {s} (t) = s (T - t); s (0) = \dot {s} (0) = 0; 0 \leq t \leq T.
$$
From this relation we have the differential equation
$$
\mu^ {2} s ^ {(4)} (t) = s (t),
$$
where $\mu$ is the maximum positive eigen value corresponding to the eigen function $X_{u}(\lambda)$ in the equation (9.1). The common solution of the last equation can be represented in the form
$$
s(t) = A \cos{\frac{t}{\sqrt{\mu}}} + B \sin{\frac{t}{\sqrt{\mu}}} + C \sinh{\frac{t}{\sqrt{\mu}}} + D \cosh{\frac{t}{\sqrt{\mu}}}
$$
From the initial and boundary conditions $s(0) = \dot{s}(0) = 0$, $\ddot{s}(T) = \dddot{s}(T) = 0$ we have $B + C = 0$, $A + D = 0$, and
$$
\left\{
\begin{array}{l}
- A \cos \frac{T}{\sqrt{\mu}} - B \sin \frac{T}{\sqrt{\mu}} + C \sinh \frac{T}{\sqrt{\mu}} + D \cosh \frac{T}{\sqrt{\mu}} = 0; \\
A \sin \frac{T}{\sqrt{\mu}} - B \cos \frac{T}{\sqrt{\mu}} + C \cosh \frac{T}{\sqrt{\mu}} + D \sinh \frac{T}{\sqrt{\mu}} = 0.
\end{array}
\right.
$$
A necessary and sufficient condition for this system to have a nontrivial solution is
$$
\cos{\frac{T}{\sqrt{\mu}}} \cosh{\frac{T}{\sqrt{\mu}}} + 1 = 0
$$
The maximum positive eigenvalue $\mu$ corresponding to this equation is
$$
\mu = \frac{T^{2}}{x^{2}} \approx 0.284\, T^{2}; \qquad \cos x \cdot \cosh x = -1.
$$
Carrying out simple but rather cumbersome calculations, we obtain the expression for the extrapolator
$$
G^{ext}(\lambda) = \frac{(1 + \alpha \tilde{p})(1 - \beta^{2} \tilde{p}^{4})}{(1 - \alpha \tilde{p})\, \tilde{p}^{2} \beta + (1 + \alpha \tilde{p})\, e^{-\tilde{p}}},
$$
where
$$
\tilde{p} = i \lambda T; \qquad \gamma = \frac{\cos x + \cosh x}{\sin x - \sinh x} \approx -1.362;
$$
$$
\alpha = -\gamma / x \approx 0.726; \qquad \beta = \frac{1}{x^{2}} \approx 0.284.
$$
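The constants above can be reproduced numerically; the following sketch (an illustration, not part of the original derivation) finds the smallest positive root of $\cos x \cdot \cosh x = -1$ by bisection and recomputes $\mu/T^2$, $\gamma$, $\alpha$ and $\beta$.

```python
import math

# Smallest positive root of cos(x)*cosh(x) = -1, located in (pi/2, pi).
def f(x):
    return math.cos(x) * math.cosh(x) + 1.0

lo, hi = math.pi / 2, math.pi            # f(lo) > 0, f(hi) < 0
for _ in range(80):                       # plain bisection
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
x = 0.5 * (lo + hi)                       # x ~ 1.8751

mu_over_T2 = 1.0 / x**2                   # mu/T^2 ~ 0.284
gamma = (math.cos(x) + math.cosh(x)) / (math.sin(x) - math.sinh(x))  # ~ -1.362
alpha = -gamma / x                        # ~ 0.726
beta = 1.0 / x**2                         # ~ 0.284
print(round(x, 4), round(mu_over_T2, 3), round(gamma, 3), round(alpha, 3))
```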
Guaranteed accuracy is determined by the standard extrapolation error
$$
\sigma_{ext} = \mu \sqrt{a} \approx 0.284\, \sqrt{a}\, T^{2}.
$$
Comparison with the simplest extrapolators, chosen on reasonable grounds, shows that the obtained filter has a marked advantage. Thus the linear prediction
$$
s (t + T) = s (t) + \dot {s} (t) T
$$
gives the mean square extrapolation error at the worst perturbation
$$
h(\lambda) = a\, \delta(\lambda), \quad \text{or} \quad u(t) = A, \ MA = 0, \ MA^{2} = a,
$$
$$
\sigma_{ext} = 0.5\, \sqrt{a}\, T^{2},
$$
i.e., it loses by a factor of 1.75. The quadratic prediction
$$
s (t + T) = s (t) + \dot {s} (t) T + \frac {\ddot {s} (t)}{2} T ^ {2}
$$
gives the mean square extrapolation error at the worst disturbance
$$
h(\lambda) = \frac{a}{2}\left[ \delta(\lambda - \lambda_{r}) + \delta(\lambda + \lambda_{r}) \right], \quad \lambda_{r} \approx 5.5/T, \quad \text{or} \quad u(t) = A \cos(\lambda_{r} t),
$$
$$
MA = 0, \ MA^2 = a; \qquad \sigma_{ext} \approx 0.53\, \sqrt{a}\, T^2,
$$
where $\lambda_r$ is the resonance frequency of the minimax extrapolation filter, at which the least favorable spectral measure of the unknown scalar disturbance $h(\lambda)$ is mostly concentrated; this prediction loses by a factor of 1.87. The best forecast is the average of the linear and quadratic forecasts. At the frequency $\lambda_r \approx 4.25/T$ its mean-square extrapolation error does not exceed
$$
\sigma_{ext} \approx 0.33\, \sqrt{a}\, T^{2},
$$
i.e., its loss relative to the optimal extrapolator does not exceed $17\%$.
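The figure $0.5\sqrt{a}\,T^2$ for the linear predictor under the constant worst-case disturbance $u(t) = A$ can be checked by direct integration of $\ddot{s} = A$; the sketch below is illustrative, and the numeric values of the initial state, $A$ and $T$ are arbitrary choices for the demo.

```python
import math

# Under u(t) = A the double integrator gives
# s(t+T) - [s(t) + s'(t)*T] = A*T^2/2, i.e. an rms error of
# 0.5*sqrt(a)*T^2 when M A^2 = a. Verified by explicit Euler integration.
def propagate(s, ds, A, T, steps=200000):
    # Euler steps for s'' = A over [0, T].
    dt = T / steps
    for _ in range(steps):
        s += ds * dt
        ds += A * dt
    return s

A, T, s0, ds0 = 0.7, 2.0, 1.3, -0.4
true_val = propagate(s0, ds0, A, T)
linear_pred = s0 + ds0 * T
err = true_val - linear_pred
print(err, A * T**2 / 2)  # the two agree up to the discretization error
```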
## IV. CASE OF EXTRAPOLATION FOR DISCRETE TIME
Let us consider the dynamic model of a step-by-step vector process:
$$
\vec{\lambda}_n = A \vec{\lambda}_{n-1} + \vec{\xi}_n \tag{12}
$$
$$
y_n = \vec{C}^T \vec{\lambda}_n
$$
where $\vec{\lambda}_n$ is the state column vector of the model, $y_n$ is the measured quantity, $\vec{\xi}_n$ is the excitation column vector in the model, $A$ is the transition matrix of order $r$, $\vec{C}$ is the coupling column vector, and $n$ is the time moment.
Let us pass from the vector form (12) to the scalar one by choosing the corresponding coefficients of $A$, $\vec{C}$ and $p_{i}$, in the case when (12) is an observable system [6]. Let us confine ourselves to the case when
$$
\sum_ {i = 0} ^ {n} p _ {i} x _ {n - i} = u _ {n}, \tag {13}
$$
where $u_{n}$ is the scalar disturbance with restricted dispersion. In $z$-representation the latter reads $P(z)\, x(z) = u(z)$, where
$$
P (z) = \sum_ {i = 0} ^ {\infty} p _ {i} z ^ {i}.
$$
Let us denote the desired estimate of $y_{n + N}$ by $l_{n}$. Then
$$
l _ {n} = \sum_ {i = 0} ^ {\infty} g _ {i} y _ {n - i} \tag {14}
$$
In $z$-representation, expression (14) reads:
$$
l (z) = G ^ {e x t} (z) y (z)
$$
If the spectrum of the disturbance $u_n$ is represented as
$$
h _ {u} (\omega) = \sum_ {n = - \infty} ^ {+ \infty} e ^ {- j \omega n} K _ {u _ {n}}
$$
where $K_{u_n} = M(u_i u_{i + n})$ are the correlation moments, then the prediction error dispersion is given by:
$$
D_{N}\left(G^{\text{ext}},h_{u}(\omega)\right) = \frac{1}{2\pi} \int_{-\pi}^{\pi} \frac{h_{u}(\omega)}{\left|P\left(e^{-j\omega}\right)\right|^{2}} \left|G^{\text{ext}}\left(e^{-j\omega}\right) - e^{j\omega N}\right|^{2} d\omega \tag{15}
$$
The given problem has a saddle point due to the fact that $D_{N}(G^{ext},h_{u})$ is linear in $h_u$ and the set of all $h_u(\omega)$ with restricted integral dispersion $D = \frac{1}{2\pi}\int_{-\pi}^{\pi}h_u(\omega)d\omega$ is convex and weakly compact, while $D_{N}(G^{ext},h_{u})$ is quadratic in $G^{ext}(e^{-j\omega})$; i.e., the convexity-concavity conditions required by the well-known theorems of game theory [14,15] are met and
$$
\min _ {G ^ {e x t}} \max _ {h _ {u}} D _ {N} \left(G ^ {e x t}, h _ {u}\right) = \max _ {h _ {u}} \min _ {G ^ {e x t}} D _ {N} \left(G ^ {e x t}, h _ {u}\right) \tag {16}
$$
The minimum of (15) is attained by the Wiener-Kolmogorov filter [9]:
$$
G^{ext}(z) = \frac{\left[ \frac{1}{z^{N}} x \right]_{+}}{x}
$$
where $x = x(z)$ is a function analytic (together with its inverse $\frac{1}{x(z)}$) inside the unit circle, obtained by factorizing the function $X(\omega)$:
$$
X (\omega) = \frac {h _ {u} (\omega)}{| P (e ^ {- j \omega}) | ^ {2}} = x (e ^ {- j \omega}) x (e ^ {j \omega})
$$
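As an illustration of this factorization and of the predictor formula, consider a hypothetical first-order model $P(z) = 1 - a z$ (i.e. $y_n = a\,y_{n-1} + u_n$, not an example from the paper) with a flat disturbance spectrum $h_u \equiv 1$; then $x(z) = 1/(1 - a z)$, and the causal-part bracket can be computed on truncated power series.

```python
import numpy as np

# Hypothetical model: P(z) = 1 - a*z, h_u = 1, so
# X(omega) = 1/|P(e^{-j omega})|^2 factorizes with x(z) = 1/(1 - a*z).
a, N, M = 0.8, 3, 60

xc = a ** np.arange(M)      # series coefficients of x(z) = sum_k a^k z^k

# z^{-N} x(z) shifts the series N places toward negative powers of z;
# keeping only the nonnegative powers gives the causal part [z^{-N} x]_+.
causal = xc[N:]

# For this model [z^{-N} x]_+ = a^N * x(z), hence G^{ext}(z) = a^N:
# the optimal N-step forecast is simply a^N * y_n.
print(np.allclose(causal, a**N * xc[:M - N]))  # True
```

The design choice here is the classical one: the shift by $z^{-N}$ commutes with the stable factor only after the anticausal tail is discarded, which is exactly what the bracket $[\cdot]_+$ does.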
The maximum of expression (16) provides for an arbitrary spectral density $h_{u}(\omega)$ satisfying the constraint
$$
\frac {1}{2 \pi} \int_ {- \pi} ^ {\pi} h _ {u} (\omega) d \omega = D _ {u} < \infty
$$
only in the case [9]
$$
\gamma^ {2} X (\omega) = \left| e ^ {j \omega N} x ^ {+} - \left[ e ^ {j \omega N} x ^ {+} \right] _ {+} \right| ^ {2} \cdot \frac {1}{\left| P \left(e ^ {- j \omega}\right) \right| ^ {2}}. \tag {18}
$$
The Lagrange factor $\gamma$ determines the mean-square extrapolation error:
$$
\sigma_{ext} = \gamma \sigma_{u}, \qquad \sigma_{u} = \sqrt{D_{u}}.
$$
Equation for $\gamma$ and nonzero $x^{+}(z)$ follows from expression (18):
$$
\gamma \sum_{i = 0}^{n} p_{i} x_{n - i} = x_{N - n - 1}\, 1(N - n - 1), \tag{19}
$$
where $1(i)$ is a step function:
$$
1(i) = \left\{ \begin{array}{ll} 0, & i < 0, \\ 1, & i \geq 0. \end{array} \right.
$$
Thus, the minimax prediction algorithm does not depend on the level $D_{u}$, which makes its applicability domain wider.
Now let us consider a more compact representation of the minimax prediction process. Let us study the prediction in accordance with (13) in the absence of perturbations.
Equations (19) have a solution determined up to an arbitrary multiplier. Therefore the normalization $x_{N - 1} = 1$ may be added to (19). Then at $N = 1$
$$
l _ {n} = Y _ {n} (N),
$$
i.e., there is no difference between the minimax one-step prediction and the prediction without taking the perturbation into consideration, while at $N > 1$:
$$
l_{n} = Y_{n}(N) - \sum_{i=1}^{N-1} x_{N-i-1} \left(l_{n-i} - Y_{n}(N-i)\right) , \tag{20}
$$
The minimax prediction is calculated according to recurrence relations with a filter memory depth of $N-1$ steps: in the evaluation process only the $N-1$ last values of the previous prediction estimates are memorized, together with the same number of model-based prediction estimates computed with the perturbations neglected. Thus, minimax predictions require corrections which should be more profound for long-term prediction.
However, it should be noted that, unlike the Kalman filter, a separate run of the recurrence (20) over the time set is needed for each $N$.
The initialization of algorithm (20) is evident: the residuals $l_{n - i} - Y_n(N - i)$ for the first $N-1$ steps are set equal to zero. Formula (20) is the main one in the suggested algorithm of minimax prediction. As will be seen from the analytical examples, minimax predictions may not prove effective at low $N$; as $N$ increases, the efficiency of the prediction grows.
Let us consider the examples.
1. $y_{n} = y_{n - 1} + u_{n}$. In this case equation (19) can be expressed as:
$$
\gamma (x _ {n} - x _ {n - 1}) = x _ {N - n - 1}.
$$
Its solution at $N > 1$ is as follows:
$$
x_{n} = \sin{\frac{\pi (n + 1)}{2N + 1}} \cdot \frac{1}{\sin\left(\frac{\pi N}{2N + 1}\right)}
$$
$$
\gamma = \frac{1}{2 \sin\left(\frac{\pi}{2(2N + 1)}\right)}
$$
The prediction equation can be written as
$$
l _ {n} = y _ {n} - \sum_ {i = 0} ^ {N - 1} x _ {N - i - 1} (l _ {n - i} - y _ {n - i})
$$
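The closed-form pair $(x_n, \gamma)$ quoted above can be verified against equation (19) directly; the following is a numerical spot check (not from the paper), using the conventions $x_{-1} = 0$ and $x_{N-1} = 1$.

```python
import math

# Check gamma*(x_n - x_{n-1}) = x_{N-n-1} for 0 <= n <= N-1.
N = 7
th = math.pi / (2 * N + 1)

def x(n):
    # sin(pi*(n+1)/(2N+1)) / sin(pi*N/(2N+1)); vanishes at n = -1 automatically
    return math.sin(th * (n + 1)) / math.sin(th * N)

gamma = 1.0 / (2.0 * math.sin(th / 2.0))
for n in range(N):
    assert abs(gamma * (x(n) - x(n - 1)) - x(N - n - 1)) < 1e-12
print(x(N - 1))  # 1.0: the normalization x_{N-1} = 1 holds
```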
At high $N$ the maximal gain obtained by the minimax filter in comparison with the model prediction totals $\pi / 2$. The criterion for comparing the two filters is the mean-square prediction error under the least favorable perturbation spectrum.
2. $y_{n} = 2y_{n - 1} - y_{n - 2} + u_{n}$. Equation (19) can be expressed as
$$
\gamma (x _ {n} - 2 x _ {n - 1} + x _ {n - 2}) = x _ {N - n - 1}.
$$
Its solution at $N > 2$ may be presented as
$$
x_{n} = \frac{1}{2} \bigg[ \frac{\sin \alpha (n + 1 - N/2)}{\sin (\alpha N / 2)} + \frac{\cosh \beta (n + 1 - N/2)}{\cosh (\beta N / 2)} \bigg]
$$
where $\cos \alpha = 1 - (1 / 2\gamma)$, $\cosh \beta = 1 + (1 / 2\gamma)$, and $\gamma$ may be calculated using the equation
$$
\sqrt{\gamma - \frac{1}{4}}\, \cot(\alpha N / 2) - \sqrt{\gamma + \frac{1}{4}}\, \tanh(\beta N / 2) = 1
$$
At $N = 2$: $\gamma = \sqrt{2} + 1$, $x_0 = \sqrt{2} - 1$, $x_1 = 1$. The maximum gain for our case is given by
$$
k = 3 / (\sqrt{2} + 1) \approx 1.24.
$$
The prediction equations are given by
$$
l _ {n} = (N + 1) y _ {n} - N y _ {n - 1} - \sum_ {i = 0} ^ {N - 1} x _ {N - i - 1} (l _ {n - i} - (N + 1 - i) y _ {n} + (N - i) y _ {n - 1})
$$
At $N \to \infty$, asymptotically $\gamma \to \infty$, $\alpha \to 0$, $\beta \to 0$, $\cot(\alpha N / 2) = \tanh(\beta N / 2)$. Therefore from the equation $\tan(\alpha N / 2)\,\tanh(\alpha N / 2) = 1$ we obtain $(\alpha N / 2) \to \xi = 0.937552$, $\gamma \approx 1 / \alpha^2$, and the maximal gain obtained by the minimax filter is $k = 1.87$.
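The $N = 2$ values quoted in this example can be spot-checked numerically (an illustration, not from the paper), using the conventions $x_{-1} = x_{-2} = 0$.

```python
import math

# gamma = sqrt(2)+1, x0 = sqrt(2)-1, x1 = 1 must satisfy
# gamma*(x_n - 2*x_{n-1} + x_{n-2}) = x_{N-n-1} for n = 0, 1.
N = 2
gamma = math.sqrt(2) + 1
x = {-2: 0.0, -1: 0.0, 0: math.sqrt(2) - 1, 1: 1.0}
for n in range(N):
    lhs = gamma * (x[n] - 2 * x[n - 1] + x[n - 2])
    assert abs(lhs - x[N - n - 1]) < 1e-12
print("equation (19) holds for the N = 2 solution")
```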
## V. ADAPTIVE AND MINIMAX PREDICTION
### a) Adaptation based on the Kalman filter
Under the assumptions that the constant transition matrix $A$ in (12) is Hurwitz and known and the measurements are exact, the Kalman filter [12,14], giving the estimates of the state vector in (12), has the recurrent form:
$$
\vec {\lambda} _ {n} = \vec {\lambda} _ {n} ^ {e} + \frac {1}{\vec {c} ^ {T} K _ {n} ^ {e} \vec {c}} K _ {n} ^ {e} \vec {c} (y _ {n} - \vec {c} ^ {T} \vec {\lambda} _ {n} ^ {e})
$$
$$
K_{n} = K_{n}^{e} - K_{n}^{e} \vec{c}\, \vec{c}^{T} K_{n}^{e} \frac{1}{\vec{c}^{T} K_{n}^{e} \vec{c}} \tag{21}
$$
$$
\vec {\lambda} _ {n} ^ {e} = A \vec {\lambda} _ {n - 1}, K _ {n} ^ {e} = A K _ {n - 1} A ^ {T} + K _ {\xi_ {n}}
$$
Here the extrapolation of $y$ by $N$ steps is denoted by $y_{n}^{e}(N) = \vec{c}^{T}A^{N}\vec{\lambda}_{n}$. The adaptation is based on the selection of the elements of the matrix $K_{\xi_n}$ at every moment $n$, for example by minimizing the sum of squared residuals
$$
\varphi\left(K_{\xi_{n}}\right) = \sum_{k = 0}^{n - 1} \left[ y_{n - k} - y_{n - r}^{e}(r - k) \right]^{2} \rightarrow \min_{K_{\xi_{n}}}, \tag{22}
$$
where $r = \dim \vec{\lambda}_n$.
The condition of nonnegative definiteness should be imposed on $K_{\xi_n}$. Thus the time segment, measured back from the present moment, during which the measurements are used effectively, is regulated.
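One cycle of the exact-measurement recursion (21) can be sketched as follows; the matrices $A$, $\vec{c}$, $K_{\xi}$ below are illustrative placeholders, not values from the paper.

```python
import numpy as np

def kalman_step(lam, K, y, A, c, K_xi):
    lam_e = A @ lam                          # state extrapolation
    K_e = A @ K @ A.T + K_xi                 # covariance extrapolation
    denom = (c.T @ K_e @ c).item()           # scalar c^T K^e c
    gain = (K_e @ c) / denom
    # Measurement update with the exact scalar observation y = c^T lam.
    lam_new = lam_e + gain * (y - (c.T @ lam_e).item())
    K_new = K_e - (K_e @ c) @ (c.T @ K_e) / denom
    return lam_new, K_new

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])                   # discrete double integrator (demo)
c = np.array([[1.0], [0.0]])
K_xi = 0.1 * np.eye(2)
lam, K = np.zeros((2, 1)), np.eye(2)
lam, K = kalman_step(lam, K, 2.5, A, c, K_xi)
print((c.T @ lam).item())  # 2.5: with exact measurements the update reproduces y_n
```

Note that after the update the filtered state reproduces the measurement exactly and $\vec{c}^T K_n \vec{c} = 0$, as expected when the observation carries no noise.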
### b) Adaptation based on the approximating model coefficients
Let us present formula (13) in the following way
$$
\vec {a} ^ {T} \vec {y} ^ {n} = u _ {n}, \tag {23}
$$
where $u_{n}$ are perturbations with zero mean and restricted dispersion, and the following columns are introduced: $\vec{a} = [a_0, \dots, a_N, \dots]^T$, $\vec{y}^n = [y_n, \dots, y_{n-N_y}, \dots]^T$.
Here it is assumed that the $y_{i}$ are precisely measured and the coefficients are unknown.
In a number of cases the following representation may be known a priori
$$
\vec {a} = A _ {a \alpha} \vec {\alpha} \tag {24}
$$
where $\vec{\alpha}$ is a column vector of smaller dimension than $\vec{a}$, and $A_{a\alpha}$ is a known matrix.
The adaptation of the filter of ref. [12], which provides the estimates of $\vec{\alpha}$, is given by
$$
\hat{\vec{\alpha}}_{n} = \hat{\vec{\alpha}}_{n-1} + K_{n} \vec{y}^{\,n} \left(0 - \hat{\vec{\alpha}}_{n-1}^{T} A_{a\alpha}^{T} \vec{y}^{\,n}\right) = \left(E - K_{n} \vec{y}^{\,n} \vec{y}^{\,nT} A_{a\alpha}\right) \hat{\vec{\alpha}}_{n-1},
$$
$$
K _ {n} = K _ {n - 1} - K _ {n - 1} y ^ {n} y ^ {n T} K _ {n - 1} \frac {1}{1 + y ^ {n T} K _ {n - 1} y ^ {n}} \tag {25}
$$
where $\hat{\vec{\alpha}}_n$ is the estimate of $\vec{\alpha}$, $E$ is the unit matrix, and $K_n$ is the error covariance matrix of the estimates, of order $r_\alpha$. The initialization of the algorithm is conducted using the first $r_\alpha$ measurements, where $r_\alpha = \dim \vec{\alpha}$, under the assumption of uncorrelated $u_n$. The minimax approach is used after the adaptation of the coefficients of the model under discussion. In this case we are dealing with a combined adaptive-minimax model.
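A recursive least-squares adaptation in the spirit of (25) can be sketched as follows; the model $y_n = 2y_{n-1} - y_{n-2}$ and all numeric values are illustrative choices, not data from the paper.

```python
import numpy as np

# RLS estimation of the two coefficients of the noise-free model
# y_n = 2*y_{n-1} - y_{n-2}; the ramp y_n = n satisfies it exactly.
y = np.arange(30, dtype=float)
theta = np.zeros(2)                    # estimates of (2, -1)
P = 1e6 * np.eye(2)                    # large initial covariance

for n in range(2, len(y)):
    phi = np.array([y[n - 1], y[n - 2]])            # regressor
    denom = 1.0 + phi @ P @ phi
    K = P @ phi / denom                              # gain, cf. (25)
    theta = theta + K * (y[n] - phi @ theta)         # residual correction
    P = P - np.outer(P @ phi, phi @ P) / denom       # covariance update

print(np.round(theta, 3))  # close to [ 2. -1.]
```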
## VI. CONCLUSION
In this paper, various new techniques related to the adaptive and minimax methods of extrapolation of a stationary random sequence in the presence of a disturbance with restricted dispersion have been presented, along with some practical application examples. The results presented here are intended mainly to reduce the complexity involved in the game-theoretic prediction problem in the case of the absence of a priori information about the statistical characteristics of the disturbance, in the absence of measurement errors under scalar observation. Another important problem, involved with the realization of robust, stable minimax predictors, becomes quite demanding under interval fuzziness in the model parameters, with or without measurement errors under scalar observation. The problem of the existence of the interval saddle point of the extrapolation game, in terms of the extreme properties of the permissible interval spectral densities of the robust linear filters and of nature, is actualized in the context of prediction estimation of discrete- and/or continuous-time economic processes in the presence of uncertainty. We also expect that the application results discussed in this paper will motivate further potential applications of Kalman and minimax filtering techniques in various fields of economic, engineering and econometric forecasting.
## References
U. Grenander (1963). A prediction problem in game theory.
M. Moklyachuk, O. Masyutka (2012). Estimation of multidimensional stationary stochastic sequences from observations in special sets of points.
S. Makridakis, S. Wheelwright (1978). Forecasting: Methods and Applications.
J. Franke (1984). On the robust prediction and interpolation of time series in the presence of correlated noise.
M. Krein, A. Nudelman (1973). The Čebyšev-Markov problem with moments in a parallelepiped.
O. Kurkin, Yu. Korobochkin, S. Shatalov (1990). Minimaksnaja obrabotka informacii [Minimax Treatment of Information].
O. Kurkin (1981). Minimax linear filtration of a stationary random process with restricted disturbance variance // Radiotechnique and Electronics.
O. Kurkin (2001). Guaranteed Estimation Algorithms for Prediction and Interpolation of Random Processes.
Yu. Korobochkin (1983). Minimax linear estimation of a stationary random sequence in the presence of a disturbance with restricted dispersion // Radiotechnique and Electronics.
I. Sidorov (2018). Linear minimax filtering of a stationary random process under the condition of interval fuzziness in the state matrix of the system with a restricted variance.
M. Athans, P. Falb (1968). Optimal Control.
A. Fedotov (1990). Ill-posed problems with random errors in the data.
A. Sage, J. Melsa (1976). Estimation Theory with Applications to Communications and Control.
A. Albert (1977). Regression, Pseudoinversion and Recurrent Estimation.
No ethics committee approval was required for this article type.
Data Availability
Not applicable for this article.
How to Cite This Article
Sidorov I.G. 2026. "Adaptive and Minimax Methods of Prediction Dynamic Systems using the Kalman Algorithm". Global Journal of Science Frontier Research - F: Mathematics & Decision Sciences, GJSFR-F, Volume 23, Issue F1.