Modeling and estimation of multivariate discrete and continuous time stationary processes
1. Introduction

Stationary stochastic processes provide a significant instrument for modeling numerous temporal phenomena in different fields of science. In particular, due to evidence of long-range dependence structures in real financial data, stationary processes possessing long memory have been widely applied in mathematical finance.

When discrete time is considered, stationary data is typically modeled by applying autoregressive–moving-average (ARMA) processes or their extensions. One focal reason for the popularity of ARMA processes is that for every stationary process with a vanishing autocovariance γ(·) and for every n ∈ ℕ there exists an ARMA process X such that γ_X(k) = γ(k) for |k| ≤ n. For a comprehensive overview of ARMA processes we mention [1–3]. The immense ARMA family includes, for example, SARIMA (seasonal autoregressive integrated moving-average) models, where a seasonal ARMA process is obtained by differencing the original data. We mention also different GARCH (generalized autoregressive conditional heteroskedasticity) models, originating from [4] and [5], that are commonly used in financial modeling to take account of time-dependent volatility. ARMA processes, their extensions, and estimation in these models have been considered, e.g., in [6–16], to name but a few. Moreover, in [17] we showed that all univariate strictly stationary processes indexed by the integers are characterized by the AR(1) (autoregressive model of order 1) equation

X_t − ϕ X_{t−1} = Z_t,  t ∈ ℤ.

However, contrary to the classical AR(1), the noise Z, belonging to a certain class of stationary processes, is not necessarily white. Building on the characterization, we proposed an estimation method for ϕ in the case of a square integrable stationary process. This method has several advantages over conventional ones, such as maximum likelihood and least squares fitting of ARMA models. Furthermore, in [18] we applied our method to the estimation of a generalization of the ARCH model involving a covariate process that can be interpreted as the liquidity of an asset. We would like to emphasize that the proposed method is applicable in estimation of essentially any square integrable one-dimensional stationary process. Hence, it also covers stationary solutions, which are often of central interest, of models of the vast ARMA family.

In the case of continuous time, the Ornstein-Uhlenbeck process X given by the Langevin equation

dX_t = −θ X_t dt + dB_t,  t ∈ ℝ,   (1)

where θ > 0 and B is a two-sided Brownian motion, can be seen as the analog of the discrete time AR(1) process. By posing a suitable initial condition, (1) yields a stationary solution. The foregoing can be generalized, for example, by replacing Brownian motion with other stationary increment processes satisfying certain integrability conditions. One popular option as the replacement is fractional Brownian motion, recovering the fractional Ornstein-Uhlenbeck process introduced in [19]. Generalized Ornstein-Uhlenbeck processes of this kind are applied, e.g., in mathematical finance to describe mean-reverting systems under the influence of shocks, and they are a highly active topic of research. Equations of type (1) with varying driving forces, and estimation in such models, have been considered, e.g., in [20–32], to mention but a few. Furthermore, in [33] we showed that a generalized version of (1) characterizes all multivariate strictly stationary processes with continuous paths. Consequently, we proposed an estimation method for the parameter matrix of (1). The method is based on continuous time algebraic matrix Riccati equations (CAREs), and it is applicable in estimation of essentially any square integrable multivariate stationary process.

Algebraic Riccati equations, occurring naturally, e.g., in optimal control and filtering theory, are an intensively studied topic in the literature in their own right. In many applications, real-valued CAREs often take the symmetric form

B^⊤A + AB − ACA + D = 0,   (2)

where C and D are symmetric, and symmetric solutions A are to be found. For a general approach to algebraic Riccati equations, the reader may consult, for example, [34]. The existence and uniqueness of a solution to (2) are well-studied topics, especially when C and D are positive semidefinite [see, e.g., [35–37]].

The main contribution of this paper is to extend the characterization and the consequent estimation method of discrete stationary processes of [17] to multivariate settings. Serving as an instrument for the characterization, we also define a multivariate discrete Lamperti transform giving a one-to-one correspondence between stationary and H-self-similar processes with a positive definite matrix H. The characterization leads to covariance based symmetric CAREs for the related parameter matrix, providing us with a novel estimation method for multivariate discrete stationary processes. In outline, this results in the following correspondence with the continuous time case. When the concepts of noise are extended beyond the conventional ones, the analogous AR(1) and Langevin equations characterize discrete and continuous time stationary processes, respectively. The characterizations provide us with models of stationary processes yielding symmetric CAREs for the corresponding parameter matrices. Furthermore, these equations can be applied in a similar manner in estimation in both cases. For the reader's interest, and to highlight the obtained connection between discrete and continuous time, we also give the key results of [33].

The rest of the paper is organized as follows. In section 2.1, we first give an AR(1) type of characterization covering all multivariate strictly stationary processes indexed by the integers. Consequently, under the assumption of square integrability, we obtain a set of symmetric CAREs for the model parameter matrix, serving as a basis for estimation. Finally, we state theorems for consistency and the asymptotic distribution of the parameter (matrix) estimator. In section 2.2, we present the main results of [33], while at the same time comparing them to the results obtained in discrete time. For the reader's convenience, all technical proofs are postponed to section 3.

2. Main Results

We begin with some preliminaries and a short notational introduction. The processes we consider in this paper are n-dimensional, real-valued and indexed by I ∈ {ℤ, ℝ}. For such a process Y we write Y = (Y_t)_{t∈I}, where the i-th component of the random vector Y_t is denoted by Y_t(i). Equality of the distributions of two random vectors Y_t and Z_t is denoted by Y_t =_law Z_t. Similarly, equality of two processes Y and Z in the sense of finite dimensional distributions is denoted by Y = (Y_t)_{t∈I} =_law (Z_t)_{t∈I} = Z. Throughout the paper, we investigate strictly stationary processes, meaning that (X_{t+s})_{t∈I} =_law (X_t)_{t∈I} for every s ∈ I. Consequently, we omit the word "strictly" and simply say that X is stationary. By writing A ≥ 0 or A > 0 we mean that the matrix A is positive semidefinite or positive definite, respectively. We denote an eigendecomposition of a symmetric matrix by A = QΛQ^⊤, where Λ = diag(λ_i). Furthermore, the L² vector norm and the corresponding induced matrix norm are denoted by ‖·‖.

In sections 2.1 and 2.2, we introduce models for discrete and continuous time stationary processes, respectively. Consequently, this leads to symmetric CAREs for the model parameter matrix of the form

B^⊤A + AB − ACA + D = 0,   (3)

where C, D ≥ 0, and we solve the equation for a positive definite A. There is a vast literature on the existence and uniqueness of a solution [see, e.g., [35] or [36]] in the described setting. In particular, if C, D > 0, then there exists a unique positive semidefinite solution to (3). Furthermore, there exist several numerical methods for finding the positive semidefinite solution of (3) [see, e.g., [38, 39] or the monograph [40]]. Hence, provided that the solution to (3) is unique, a prospective estimation method for A can be based on the equation.
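To illustrate the last point, a CARE of the form (3) can be handed to an off-the-shelf Riccati solver. The sketch below is a minimal example under the assumption that NumPy and SciPy are available and that the toy matrices satisfy C, D > 0; `scipy.linalg.solve_continuous_are(a, b, q, r)` solves a^⊤X + Xa − Xbr^{−1}b^⊤X + q = 0, so choosing a = B, b = I, r = C^{−1}, and q = D recovers (3).

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def solve_symmetric_care(B, C, D):
    """Solve B^T A + A B - A C A + D = 0 for the stabilizing positive
    semidefinite A, assuming C, D > 0 (so the psd solution is unique).

    SciPy's solve_continuous_are(a, b, q, r) solves
    a^T X + X a - X b r^{-1} b^T X + q = 0; with b = I and r = C^{-1}
    the quadratic term becomes X C X, matching (3).
    """
    n = B.shape[0]
    return solve_continuous_are(B, np.eye(n), D, np.linalg.inv(C))

# Toy coefficient matrices with C, D > 0.
B = np.array([[0.3, 0.1],
              [0.0, 0.2]])
C = np.eye(2)
D = np.eye(2)

A = solve_symmetric_care(B, C, D)
residual = B.T @ A + A @ B - A @ C @ A + D
print(np.linalg.norm(residual))  # ~0
```

In the scalar case the equation reduces to 2ba − ca² + d = 0, whose positive root (b + √(b² + cd))/c can serve as a hand-check of the solver.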

2.1. Discrete Time

In this subsection, we extend the characterization of discrete (I = ℤ) stationary processes of [17] to multivariate settings. Consequently, we derive quadratic equations for the corresponding model parameter matrix, providing us with a natural way to define an estimator for the parameter. Finally, we state theorems for consistency and the asymptotic distribution. A strong analog with the continuous time case I = ℝ covered in [33] is obtained. We start by providing some definitions.

DEFINITION 2.1. Let G = (G_t)_{t∈ℤ} be an n-dimensional stationary increment process. We define a stationary process ΔG = (Δ_tG)_{t∈ℤ} by

Δ_tG = G_t − G_{t−1}.

Next, we define a class of stationary increment processes having sub-exponentially deviating sample paths. These processes serve as the noise in the subsequent AR(1) type of characterization of stationary processes.

DEFINITION 2.2. Let H > 0 be a positive definite n × n matrix, and let G = (G_t)_{t∈ℤ} be an n-dimensional stochastic process with stationary increments and G_0 = 0. If

lim_{l→∞} Σ_{k=−l}^{0} e^{kH} Δ_kG   (4)

exists in probability and defines an almost surely finite random variable, we denote G ∈ 𝒢_H.

REMARK 2.3. Lemma 3.1 shows that the existence of a logarithmic moment is sufficient for G ∈ 𝒢_H for all H > 0. In particular, this is the case if G is square integrable. On the other hand, an example of a one-dimensional stationary increment process G with G_0 = 0, but G ∉ 𝒢_H for any H > 0, was provided in [41]. Moreover, Definition 2.2 could also be stated without the assumption of positive definiteness. However, in the one-dimensional case with H ≤ 0, the convergence of (4) would imply G ≡ 0. As expected, we encounter a similar kind of dimensional degeneracy when considering, e.g., symmetric matrices with non-positive eigenvalues. Hence, the assumption H > 0 may be regarded as natural. See also Remark 3.3.

The next theorem characterizes all multivariate stationary processes, including processes possessing long-memory.

THEOREM 2.4. Let H > 0 be a positive definite n × n matrix, and let X = (X_t)_{t∈ℤ} be an n-dimensional stochastic process. Then X is stationary if and only if

lim_{t→−∞} e^{tH} X_t = 0

and

Δ_tX = (e^{−H} − I) X_{t−1} + Δ_tG   (5)

for G ∈ 𝒢_H and t ∈ ℤ. Moreover, the process G ∈ 𝒢_H is unique.

COROLLARY 2.5. Let H > 0 be a positive definite n × n matrix, and let X be stationary. Then X admits an AR(1) type of representation

X_t − Φ X_{t−1} = Δ_tG,   (6)

where Φ = e^{−H} and G ∈ 𝒢_H.

By using (6) and the expression (17) from the proof of Theorem 2.4, it is straightforward to show that ΔG is centered and square integrable if and only if X is centered and square integrable, respectively. In what follows, we assume these two attributes and write γ(t) = 𝔼(X_t X_0^⊤) and r(t) = 𝔼[(Δ_tG)(Δ_0G)^⊤]. Furthermore, since

G_t = Σ_{k=1}^{t} Δ_kG,  t ≥ 1,

in this case also G is centered and square integrable, and we denote v(t) = cov(G_t) = 𝔼(G_t G_t^⊤). We would like to point out that centeredness can be assumed without loss of generality (see Remark 2.11).

Under the discussed assumptions, we obtain an expression for γ( t ) in terms of the noise process.

REMARK 2.6. The autocovariance function γ(t) is given by

γ(t) = e^{−tH} Σ_{k=−∞}^{t} Σ_{j=−∞}^{0} e^{kH} r(k−j) e^{jH}.

Furthermore, if G has independent components, we obtain

γ(t) = (e^{−tH}/2) Σ_{k=−∞}^{t} Σ_{j=−∞}^{0} e^{kH} (v(k−j+1) + v(k−j−1) − 2v(k−j)) e^{jH}.
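As a sanity check of Remark 2.6, consider the scalar AR(1) case, where the noise increments are iid so that r(m) = σ²𝟙{m = 0} and the autocovariance is known in closed form, γ(t) = ϕ^{|t|}σ²/(1 − ϕ²). The sketch below (assuming NumPy; the parameter values are illustrative) compares a truncated version of the double sum with the closed form:

```python
import numpy as np

phi, sigma2 = 0.6, 1.0          # scalar AR(1): X_t = phi X_{t-1} + eps_t
H = -np.log(phi)                # so that e^{-H} = phi
K = 400                         # truncation level for the infinite sums

def gamma_exact(t):
    return phi ** abs(t) * sigma2 / (1.0 - phi ** 2)

def gamma_from_remark(t):
    # gamma(t) = e^{-tH} sum_{k<=t} sum_{j<=0} e^{kH} r(k-j) e^{jH},
    # with r(m) = sigma2 * 1{m = 0} for iid noise increments.
    total = 0.0
    for k in range(-K, t + 1):
        for j in range(-K, 1):
            if k == j:                      # r(k - j) vanishes otherwise
                total += np.exp(k * H) * sigma2 * np.exp(j * H)
    return np.exp(-t * H) * total

for t in range(4):
    print(t, gamma_exact(t), gamma_from_remark(t))
```

With K = 400 the truncation error is far below machine precision for this ϕ, so the two columns agree.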

The following lemma restates, in our multivariate setting, the quadratic equations for the model parameter Φ = e^{−H} presented in [17].

LEMMA 2.7. Let H > 0 be a positive definite n × n matrix, and let X be stationary of the form (6). Then

r(t) = Φγ(t)Φ^⊤ − γ(t+1)Φ^⊤ − Φγ(t−1) + γ(t)   (7)

for every t ∈ ℤ.

REMARK 2.8. In the proof of the lemma, we utilize the increment process ΔG similarly as in [17], yielding quadratic equations for Φ in terms of r and γ. One could consider an estimation method for Φ based on the above equations. However, for a general stationary X, (7) is a symmetric CARE only if t = 0, and even in this case, the existence of a unique positive semidefinite solution is not guaranteed.

By adapting the approach of [33], we obtain a set of symmetric CAREs on which we construct an estimator for the model parameter Φ = e^{−H}. For this, we define the following matrix coefficients.

DEFINITION 2.9. We set

B_t = Σ_{k=1}^{t} (γ(k−1) − γ(k)^⊤),

C_t = Σ_{k=1}^{t} Σ_{j=1}^{t} γ(k−j),

D_t = v(t) − 2γ(0) + γ(t) + γ(t)^⊤

for every t ∈ ℕ.

THEOREM 2.10. Let H > 0 be a positive definite n × n matrix, and set Θ = I − e^{−H}. Let X = (X_t)_{t∈ℤ} be stationary of the form (5). Then the CARE

B_t^⊤Θ + ΘB_t − ΘC_tΘ + D_t = 0   (8)

is satisfied for every t ∈ ℕ.
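Theorem 2.10 is easy to verify numerically in the univariate case, where (8) reduces to 2B_tΘ − C_tΘ² + D_t = 0 and the transposes in Definition 2.9 are trivial. A small sketch for a scalar AR(1) process, where γ(t) = ϕ^{|t|}σ²/(1 − ϕ²), v(t) = tσ² and Θ = 1 − ϕ (the parameter values are illustrative):

```python
phi, sigma2 = 0.5, 2.0
theta = 1.0 - phi                       # Theta = 1 - e^{-H} = 1 - phi

def gamma(t):
    # Autocovariance of a stationary scalar AR(1) process.
    return phi ** abs(t) * sigma2 / (1.0 - phi ** 2)

def care_coefficients(t):
    # Definition 2.9 in the scalar case.
    B = sum(gamma(k - 1) - gamma(k) for k in range(1, t + 1))
    C = sum(gamma(k - j) for k in range(1, t + 1) for j in range(1, t + 1))
    D = t * sigma2 - 2.0 * gamma(0) + 2.0 * gamma(t)   # v(t) = t * sigma2
    return B, C, D

for t in range(1, 6):
    B, C, D = care_coefficients(t)
    print(t, 2.0 * B * theta - C * theta ** 2 + D)     # residual of (8), ~0
```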

REMARK 2.11. Equations (7) and (8) are covariance based. Consequently, they hold also when X and G in Theorem 2.4 are not centered.

REMARK 2.12. It is worth emphasizing the interplay between the parameter matrix H and the noise G in the context of a fixed stationary X. The noise in Theorem 2.4 and Corollary 2.5 is unique only after H > 0 is fixed. On the other hand, if we fix some covariances of the noise process such that the equations of Theorem 2.10 (or Lemma 2.7) yield a positive definite solution, then we may set H according to this solution. Furthermore, the corresponding noise is now given by Corollary 2.5, and its covariance satisfies the set prerequisites. In practice, the parameter matrix H is estimated by assuming some information on the noise, and by estimating the autocovariance of the observed stationary process X. The estimation will be discussed in more detail at the end of this subsection. See also the following examples illustrating how to obtain familiar noise terms in the case of two stationary ARMA type processes.

We give a couple of examples of how some basic multivariate ARMA type processes can be represented in the form (6), and how to derive the corresponding noise G together with its covariance function v.

EXAMPLE 2.13. Let X be an n-dimensional stationary AR(1) type process given by

X_t − ϕ X_{t−1} = ϵ_t,

with 0 < ϕ = QΛQ^⊤, ‖ϕ‖ < 1 and ϵ ~ iid(0, Σ). Then, we may set

H = Q diag(−log λ_i) Q^⊤

giving Φ = ϕ. Now Δ_tG = ϵ_t and G_t = Σ_{k=1}^{t} ϵ_k. Furthermore,

v(t) = Σ_{k=1}^{t} cov(ϵ_k) = tΣ

for t ≥ 1.
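The eigendecomposition step of Example 2.13 can be checked numerically. A small sketch, assuming NumPy and SciPy are available; the matrix ϕ below is an arbitrary illustrative choice that is symmetric with eigenvalues in (0, 1):

```python
import numpy as np
from scipy.linalg import expm

# Symmetric phi with eigenvalues in (0, 1), as in Example 2.13.
phi = np.array([[0.50, 0.10],
                [0.10, 0.40]])

lam, Q = np.linalg.eigh(phi)           # phi = Q diag(lam) Q^T
assert np.all((lam > 0) & (lam < 1))   # needed for H > 0

H = Q @ np.diag(-np.log(lam)) @ Q.T    # H = Q diag(-log lam_i) Q^T
print(np.linalg.norm(expm(-H) - phi))  # e^{-H} recovers phi, ~0
```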

EXAMPLE 2.14. Let X be an n-dimensional stationary ARMA(1, q) type process given by

X_t − ϕ X_{t−1} = ϵ_t + θ_1 ϵ_{t−1} + … + θ_q ϵ_{t−q},

with 0 < ϕ = QΛQ^⊤, ‖ϕ‖ < 1 and ϵ ~ iid(0, Σ). Similarly as above, we may set Φ = ϕ, and now ΔG equals the MA(q) process on the right. Consequently, for t ≥ 1,

G_t = Σ_{k=1}^{t} (ϵ_k + θ_1 ϵ_{k−1} + … + θ_q ϵ_{k−q})

and

v(t) = Σ_{i,j=0}^{q} max(0, t − |i−j|) θ_i Σ θ_j^⊤,

where θ_0 = I.
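The covariance formula of Example 2.14 can be checked against a direct computation that expands G_t in the iid innovations and uses their independence. A scalar sketch (pure Python; the MA coefficients and variance are illustrative):

```python
q = 2
theta = [1.0, 0.4, -0.3]     # theta_0 = 1, then theta_1, theta_2 (scalar case)
sigma2 = 1.5                 # var(eps)

def v_formula(t):
    # v(t) = sum_{i,j=0}^q max(0, t - |i-j|) theta_i sigma2 theta_j
    return sum(max(0, t - abs(i - j)) * theta[i] * sigma2 * theta[j]
               for i in range(q + 1) for j in range(q + 1))

def v_direct(t):
    # G_t = sum_{k=1}^t sum_{i=0}^q theta_i eps_{k-i}; collect the
    # coefficient of each eps_m and use independence of the eps's.
    coef = {}
    for k in range(1, t + 1):
        for i in range(q + 1):
            coef[k - i] = coef.get(k - i, 0.0) + theta[i]
    return sigma2 * sum(c ** 2 for c in coef.values())

for t in range(1, 6):
    print(t, v_formula(t), v_direct(t))
```

Both routes count, for each pair (i, j), the number of index pairs (k, l) with k − l = i − j in {1, …, t}², which is exactly max(0, t − |i − j|).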

In [17], we proposed an estimation method for one-dimensional stationary processes based on Equations (7). In particular, we showed that the method is applicable except in a certain special class of stationary processes. In [42], we provided a comprehensive analysis of this class, and proved that it consists of highly degenerate processes. On the other hand, due to the strong dependence structure, the failure of different estimation methods is expected. Fundamentally, a stationary process X belongs to the class if there exist two values H and H̃ such that the corresponding processes ΔG and ΔG̃ in (6) have identical autocovariance functions. Next, we state a lemma showing that these degenerate processes have a special characteristic also under the new set of Equations (8).

LEMMA 2.15. Let X be a one-dimensional stationary process and let H > 0 be fixed. Set Φ = e^{−H} and

X_t − Φ X_{t−1} = Δ_tG,  t ∈ ℤ.

If the equation

γ(t)Φ² − (γ(t+1) + γ(t−1))Φ + γ(t) − r(t) = 0   (9)

yields the same two solutions Φ, Φ̃ > 0 for every t ∈ ℤ, then also the equation

C_tΘ² − 2B_tΘ − D_t = 0   (10)

yields the same two solutions Θ = 1 − Φ and Θ̃ = 1 − Φ̃ for every t ∈ ℕ.

The following remark illuminates the connection between the coefficient matrices of Definition 2.9 and the uniqueness of the solution to (8), which we discussed at the beginning of section 2.

REMARK 2.16. By Lemma 3.8, the matrix Θ is positive definite. Since

C_t = 𝔼[(Σ_{k=1}^{t} X_{k−1})(Σ_{k=1}^{t} X_{k−1})^⊤] = cov(Σ_{k=1}^{t} X_{k−1}),

the matrix C_t is positive semidefinite. Furthermore, if the smallest eigenvalue of v(t) grows enough in time, D_t becomes positive definite [see [33]]. This is the case, e.g., when the noise G has independent components with growing variances.

For estimation, it is desirable that (8) admits a unique positive semidefinite solution. In this case, the solution is the correct parameter matrix Θ by construction. Moreover, it guarantees that a convergent numerical scheme has the desired limit Θ. In the sequel, we simply assume that t is chosen in such a way that C_t, D_t > 0, ensuring the uniqueness of the solution, and we omit the subindex t from Equation (8). We have justified the assumption of positive definiteness in detail in continuous time [see section 2.1 and Remark 2.11 in [33]]. Furthermore, we assume that v(t) is known, that the stationary process X is observed up to time T > t, and that the coefficient matrices B, C, and D are estimated from these observations by replacing the autocovariances γ(·) with some estimators γ̂_T(·). The coefficient estimators are denoted by B̂_T, Ĉ_T, and D̂_T, and we set

Δ_TB = B̂_T − B,  Δ_TC = Ĉ_T − C,  Δ_TD = D̂_T − D.

Next, we define an estimator Θ̂_T for the matrix Θ = I − Φ = I − e^{−H}. The proofs of the related asymptotic results allow a certain amount of flexibility in the definition. Thus, we give a definition that is probably the most convenient from the practical point of view. Consistency and the rate of convergence of Θ̂_T are inherited from the autocovariance estimators γ̂_T(·) of the observed stationary process. In addition, the limiting distribution is obtained as a linear function of the limiting distribution of the autocovariance estimators.

DEFINITION 2.17. The estimator Θ̂_T is defined as the unique positive semidefinite solution to the perturbed CARE

B̂_T^⊤ Θ̂_T + Θ̂_T B̂_T − Θ̂_T Ĉ_T Θ̂_T + D̂_T = 0

whenever Ĉ_T, D̂_T > 0. Otherwise, we set Θ̂_T = 0.
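To illustrate the estimator in the simplest setting: for n = 1 the perturbed CARE is a scalar quadratic whose unique positive root is Θ̂_T = (B̂_T + (B̂_T² + Ĉ_T D̂_T)^{1/2})/Ĉ_T whenever Ĉ_T, D̂_T > 0. The following simulation sketch (assuming NumPy; ϕ, t and the sample size are illustrative choices, and v(t) = tσ² is treated as known) recovers Θ = 1 − ϕ for a scalar AR(1) process:

```python
import numpy as np

rng = np.random.default_rng(0)
phi, sigma2, T, t = 0.5, 1.0, 100_000, 3

# Simulate a (centered) stationary scalar AR(1) path.
x = np.empty(T)
x[0] = rng.normal(scale=np.sqrt(sigma2 / (1.0 - phi ** 2)))
eps = rng.normal(scale=np.sqrt(sigma2), size=T)
for k in range(1, T):
    x[k] = phi * x[k - 1] + eps[k]

def gamma_hat(s):
    # Sample autocovariance of the observed path.
    return np.mean(x[s:] * x[:T - s])

# Coefficient estimates of Definition 2.9 with gamma replaced by gamma_hat.
B_hat = sum(gamma_hat(k - 1) - gamma_hat(k) for k in range(1, t + 1))
C_hat = sum(gamma_hat(abs(k - j))
            for k in range(1, t + 1) for j in range(1, t + 1))
D_hat = t * sigma2 - 2.0 * gamma_hat(0) + 2.0 * gamma_hat(t)  # v(t) known

theta_hat = (B_hat + np.sqrt(B_hat ** 2 + C_hat * D_hat)) / C_hat
print(theta_hat)   # should be close to Theta = 1 - phi = 0.5
```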

THEOREM 2.18. Let C, D > 0. Assume that

max_{s∈{0,1,…,t}} ‖γ̂_T(s) − γ(s)‖ → 0.

Then

‖Θ̂_T − Θ‖ → 0,

where Θ̂_T is given by Definition 2.17.

THEOREM 2.19. Let l(T) be a rate function. If

l(T) [vec(γ̂_T(0) − γ(0))^⊤, vec(γ̂_T(1) − γ(1))^⊤, …, vec(γ̂_T(t) − γ(t))^⊤]^⊤ →_law Z,

where Z is a (t+1)n²-dimensional random vector, then:

(1) Let Z̃ be the permutation of the elements of Z corresponding to the order of the elements of

[vec((γ̂_T(0) − γ(0))^⊤)^⊤, vec((γ̂_T(1) − γ(1))^⊤)^⊤, …, vec((γ̂_T(t) − γ(t))^⊤)^⊤]^⊤.

For k = 0, 1, …, t, write Z_{(k)} = [Z(kn²+1), …, Z((k+1)n²)]^⊤ for the k-th n²-dimensional block of Z, and similarly for Z̃_{(k)}. Define a linear mapping L_1 : ℝ^{(t+1)n²} → ℝ^{3n²} by stacking

L_1(Z) = [ Σ_{k=0}^{t−1} (t−k) Z_{(k)} + Σ_{k=1}^{t−1} (t−k) Z̃_{(k)} ;  Σ_{k=1}^{t} (Z_{(k−1)} − Z̃_{(k)}) ;  Z_{(t)} + Z̃_{(t)} − 2Z_{(0)} ],

where Σ_{k=1}^{0} is an empty sum. Then

l(T) vec(Δ_TC, Δ_TB, Δ_TD) →_law L_1(Z).

(2) If C, D > 0 and Θ̂_T is given by Definition 2.17, then

l(T) vec(Θ̂_T − Θ) →_law L_2(L_1(Z)),

where L_2 : ℝ^{3n²} → ℝ^{n²} is a linear mapping expressible in terms of Θ, t and r.

2.2. Continuous Time

We have collected into this subsection the main results (Theorems 2.21, 2.28, 2.32 and 2.33) of [33] concerning continuous time multivariate stationary processes. In addition, in order to complete the analog between discrete and continuous time, we derive quadratic equations for the model parameter (Proposition 2.24) that correspond to the equations of Lemma 2.7. Throughout the subsection, we assume that the considered processes have continuous paths almost surely, and hence the related stochastic integrals can be interpreted as pathwise Riemann-Stieltjes integrals. Again, we start by defining the class 𝒢_H of stationary increment processes for H > 0.

DEFINITION 2.20. Let H > 0 be a positive definite n × n matrix, and let G = (G_t)_{t∈ℝ} be an n-dimensional stochastic process with stationary increments and G_0 = 0. If

lim_{s→−∞} ∫_s^0 e^{Hu} dG_u

exists in probability and defines an almost surely finite random variable, we denote G ∈ 𝒢_H.

As in discrete time, it can be shown that the existence of some logarithmic moments ensures that G ∈ 𝒢_H for all H > 0. In particular, square integrability of G suffices, which is the case in our second moment based estimation method.

The next theorem is the continuous time counterpart of Theorem 2.4, showing that all stationary processes are characterized by the Langevin equation, whereas in discrete time, the characterization was given by an AR(1) type of equation.

THEOREM 2.21. Let H > 0 be a positive definite n × n matrix, and let X = (X_t)_{t∈ℝ} be an n-dimensional stochastic process. Then X is stationary if and only if

X_0 = ∫_{−∞}^0 e^{Hu} dG_u

and

dX_t = −H X_t dt + dG_t,   (11)

for G ∈ 𝒢_H and t ∈ ℝ. Moreover, the process G ∈ 𝒢_H is unique.

COROLLARY 2.22. From Theorem 2.21 it follows that X is the unique stationary solution

X_t = e^{−Ht} ∫_{−∞}^t e^{Hu} dG_u   (12)

to (11).

In order to apply Theorem 2.21 in estimation, we pose the assumption

sup_{s∈[0,1]} 𝔼(‖G_s‖²) < ∞.

This guarantees that G ∈ 𝒢_H for all H > 0, as well as square integrability of X and G. On the other hand, if X is square integrable, then so is G. In addition, and without loss of generality, we assume that the processes are centered. Again, we write γ(t) = 𝔼(X_t X_0^⊤) and v(t) = 𝔼(G_t G_t^⊤). Now, the autocovariance function of the following stationary process is well-defined.

DEFINITION 2.23. Let G = (G_t)_{t∈ℝ} be a centered square integrable stationary increment process and let δ > 0. We define a stationary process Δ^δG = (Δ_t^δ G)_{t∈ℝ} by

Δ_t^δ G = G_t − G_{t−δ}

and the corresponding autocovariance function r^δ by

r^δ(t) = 𝔼[(Δ_t^δ G)(Δ_0^δ G)^⊤].

As in discrete time (Lemma 2.7), we obtain quadratic equations for the model parameter H in terms of r^δ and γ. The equations could potentially be used to construct an estimator for H, and they might also be of independent interest.

PROPOSITION 2.24. Let H > 0 be a positive definite n × n matrix, and let X be of the form (12). Then

r^δ(t) = 2γ(t) − γ(t+δ) − γ(t−δ) + (∫_t^{t+δ} γ(s) ds − ∫_{t−δ}^t γ(s) ds) H + H (∫_{t−δ}^t γ(s) ds − ∫_t^{t+δ} γ(s) ds) + H (∫_{t−δ}^t (s−t+δ) γ(s) ds + ∫_t^{t+δ} (t−s+δ) γ(s) ds) H

for every t ∈ ℝ.

REMARK 2.25. The advantage of the equations above is that we have to consider γ(s) only for s ∈ [t−δ, t+δ], but as in the discrete case, for a general stationary X we obtain a symmetric CARE only when t = 0. In addition, similarly as above, we could set in discrete time

Δ_t^k G := G_t − G_{t−k},  k ∈ ℕ.

However, this would lead to more complicated equations in Lemma 2.7.

A significant difference compared to the discrete time Equations (7) occurs in the univariate case. Namely, the first order term with respect to H vanishes.

COROLLARY 2.26. The univariate case yields

r^δ(t) = 2γ(t) − γ(t+δ) − γ(t−δ) + H² (∫_{t−δ}^t (s−t+δ) γ(s) ds + ∫_t^{t+δ} (t−s+δ) γ(s) ds)

for every t ∈ ℝ.

One could base a univariate estimation method on the above equations without concern about the existence of a unique positive solution. However, since we wish to treat multivariate settings as well, we present the most central results of [33], which are obtained from Theorem 2.21 by considering the noise G directly. First, we define matrix coefficients corresponding to Definition 2.9. Consequently, we obtain symmetric CAREs for the parameter H that are similar to the CAREs (8) for the discrete time parameter Θ.

DEFINITION 2.27. We set

B_t = ∫_0^t (γ(s) − γ(s)^⊤) ds,

C_t = ∫_0^t ∫_0^t γ(s−u) du ds,

D_t = v(t) − 2γ(0) + γ(t) + γ(t)^⊤

for every t ≥ 0.

THEOREM 2.28. Let H > 0 be a positive definite n × n matrix, and let X = (X_t)_{t∈ℝ} be stationary of the form (12). Then the CARE

B_t^⊤H + HB_t − HC_tH + D_t = 0   (13)

is satisfied for every t ≥ 0.
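In the univariate case B_t = 0, so (13) reduces to H²C_t = D_t. This can be checked numerically for the standard Ornstein-Uhlenbeck process driven by Brownian motion, for which γ(s) = σ²e^{−H|s|}/(2H) and v(t) = σ²t. A sketch assuming NumPy/SciPy; the parameter values are illustrative:

```python
import numpy as np
from scipy.integrate import dblquad

H_true, sigma2, t = 0.7, 1.3, 2.0

def gamma(s):
    # Autocovariance of the stationary OU process driven by Brownian motion.
    return sigma2 / (2.0 * H_true) * np.exp(-H_true * abs(s))

# Definition 2.27 in the scalar case: B_t = 0, and
C_t, _ = dblquad(lambda u, s: gamma(s - u), 0.0, t, 0.0, t)
D_t = sigma2 * t - 2.0 * gamma(0) + 2.0 * gamma(t)

H_recovered = np.sqrt(D_t / C_t)   # scalar CARE (13): H^2 C_t = D_t
print(H_recovered, H_true)
```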

REMARK 2.29. As in discrete time, the Equations (13) and (14) are covariance based, and hence they hold also for a non-centered stationary X.

REMARK 2.30. Contrary to the discrete time Equations (8), the first order term of (13) vanishes in the univariate setting.

Again, we assume that t is chosen in such a way that C_t, D_t > 0, ensuring the existence of a unique positive semidefinite solution. We have discussed this assumption in detail in [33]. We define an estimator Ĥ_T for the model parameter matrix H identically to the discrete time case by replacing the autocovariances γ(·) in the matrix coefficients with their estimators γ̂_T(·). The definition given below differs slightly from the definition in [33], but the same asymptotic results still apply.

DEFINITION 2.31. The estimator Ĥ_T is defined as the unique positive semidefinite solution to the perturbed CARE

B̂_T^⊤ Ĥ_T + Ĥ_T B̂_T − Ĥ_T Ĉ_T Ĥ_T + D̂_T = 0

whenever Ĉ_T, D̂_T > 0. Otherwise, we set Ĥ_T = 0.

As in discrete time, the asymptotic properties of Ĥ_T are inherited from the autocovariance estimators. However, due to the continuous time setting, instead of pointwise convergence we have to consider a functional form of convergence of γ̂_T(·). In [33], we provided sufficient conditions in the case of Gaussian noise G with independent components under which the assumptions of the following theorems are satisfied. In particular, the results are valid for fractional Brownian motion, which is widely applied in the field of mathematical finance.

THEOREM 2.32. Let C, D > 0. Assume that

sup_{s∈[0,t]} ‖γ̂_T(s) − γ(s)‖ → 0.

Then

‖Ĥ_T − H‖ → 0,

where Ĥ_T is given by Definition 2.31.

THEOREM 2.33. Let Y = (Y_s)_{s∈[0,t]} be an n²-dimensional stochastic process with continuous paths almost surely, and let l(T) be a rate function. If

l(T) vec(γ̂_T(s) − γ(s)) →_law Y_s

in the uniform topology of continuous functions, then:

(1) Let Ỹ_s be the permutation of the elements of Y_s that corresponds to the order of the elements of vec((γ̂_T(s) − γ(s))^⊤). Then

l(T) vec(Δ_TC, Δ_TB, Δ_TD) →_law [ ∫_0^t (t−s)(Y_s + Ỹ_s) ds ;  ∫_0^t (Y_s − Ỹ_s) ds ;  Y_t + Ỹ_t − 2Y_0 ] =: L_1(Y).

(2) If C, D > 0 and Ĥ_T is given by Definition 2.31, then

l(T) vec(Ĥ_T − H) →_law L_2(L_1(Y)),

where L_2 : ℝ^{3n²} → ℝ^{n²} is a linear mapping expressible in terms of H, t and the covariance function of G.

3. Proofs

In the following, we denote the smallest eigenvalue of H > 0 by λ_min. Consequently,

‖e^{kH}‖ = ‖Q diag(e^{λ_i k}) Q^⊤‖ = e^{λ_min k}

for a negative k.
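This identity can be checked numerically for a small symmetric H. A sketch assuming NumPy and SciPy; the rotation Q and eigenvalues are arbitrary illustrative choices:

```python
import numpy as np
from scipy.linalg import expm

# H = Q diag(0.5, 1.2) Q^T with an arbitrary rotation Q.
a = 0.3
Q = np.array([[np.cos(a), -np.sin(a)],
              [np.sin(a),  np.cos(a)]])
H = Q @ np.diag([0.5, 1.2]) @ Q.T      # lambda_min = 0.5

for k in [-1, -3]:
    lhs = np.linalg.norm(expm(k * H), 2)   # spectral norm of e^{kH}
    print(k, lhs, np.exp(0.5 * k))         # equals e^{lambda_min k}
```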

3.1. Discrete Time

The proof of the next lemma follows the lines of the proof of Theorem 2.2 in [41], which concerns the one-dimensional continuous time case. However, in our setting, we obtain a weaker sufficient condition for G ∈ 𝒢_H for all H > 0.

LEMMA 3.1. Let G = (G_t)_{t∈ℤ} be an n-dimensional stationary increment process with G_0 = 0. Assume that

𝔼(|log ‖G_1‖ 𝟙{‖G_1‖ > 1}|^{1+δ}) < ∞

for some δ > 0. Then G ∈ 𝒢_H for all positive definite n × n matrices H.

PROOF. Let H > 0. We apply the Borel-Cantelli lemma together with Markov's inequality to show that

‖e^{kH} Δ_kG‖ → 0

almost surely as k → −∞. Let ϵ > 0 be fixed. Then

ℙ(‖e^{kH} Δ_kG‖ > ϵ) ≤ ℙ(e^{λ_min k} ‖Δ_kG‖ > ϵ) = ℙ(e^{λ_min k} ‖G_1‖ > ϵ) = ℙ(‖G_1‖ > ϵ e^{−λ_min k}) = ℙ(log ‖G_1‖ > log ϵ − λ_min k),

since ΔG is stationary and G_0 = 0. Furthermore,

log ϵ − λ_min k ≥ −Ck

for some C > 0 and k ≤ k_ϵ. Thus, for k ≤ k_ϵ,

ℙ(‖e^{kH} Δ_kG‖ > ϵ) ≤ ℙ(log ‖G_1‖ ≥ −Ck) = ℙ(log ‖G_1‖ 𝟙{‖G_1‖ > 1} ≥ −Ck) ≤ 𝔼|log ‖G_1‖ 𝟙{‖G_1‖ > 1}|^{1+δ} / (−Ck)^{1+δ} ≤ c (−k)^{−(1+δ)},

giving the wanted result. We conclude the proof by noting that

‖Σ_{k=−∞}^{0} e^{kH} Δ_kG‖ ≤ Σ_{k=−∞}^{0} ‖e^{(1/2)kH}‖ ‖e^{(1/2)kH} Δ_kG‖ ≤ sup_{k≤0} ‖e^{(1/2)kH} Δ_kG‖ Σ_{k=−∞}^{0} e^{(1/2)λ_min k} < ∞

almost surely.

Next, we extend the concept of self-similarity to discrete time multivariate processes. While the following two definitions are natural, to the best of our knowledge they are not widely acknowledged in the literature.

DEFINITION 3.2. Let H > 0 be a positive definite n × n matrix, and let Y = (Y_{e^t})_{t∈ℤ} be an n-dimensional stochastic process. Then Y is H-self-similar if

(Y_{e^{t+s}})_{t∈ℤ} =_law (e^{sH} Y_{e^t})_{t∈ℤ}

for every s ∈ ℤ.

REMARK 3.3. Similarly as in Definition 2.2, positive definite matrices serve as natural counterparts of the conventional positive scalar valued exponents of self-similarity. In addition, lim_{l→−∞} Y_{e^l} does not converge (see the proof of the auxiliary Lemma 3.7) in the case of a general H-self-similar Y with non-positive definite H. It seems intuitive to expect the term to be convergent. However, posing such a requirement leads to degeneracy of Y, e.g., when H is symmetric with negative eigenvalues [cf. Remark 3.2 in [33]]. Furthermore, this indicates a possibility of dimensional reduction. For details on continuous time self-similar processes, we refer to [43]. See also the discussion on non-positive exponents of self-similarity in [44].

The following transform and the corresponding theorem giving one-to-one correspondence between self-similar and stationary processes were originally introduced by Lamperti in the univariate continuous time setting [ 45 ].

DEFINITION 3.4. Let H > 0 be a positive definite n × n matrix, and let X = (X_t)_{t∈ℤ} and Y = (Y_{e^t})_{t∈ℤ} be n-dimensional stochastic processes. We define

(L_H X)_{e^t} = e^{tH} X_t

and

(L_H^{−1} Y)_t = e^{−tH} Y_{e^t}.

THEOREM 3.5. The operator L_H together with its inverse L_H^{−1} defines a bijection between n-dimensional stationary processes and n-dimensional H-self-similar processes.

PROOF. First, let X be stationary and set Z_{e^t} = (L_H X)_{e^t}. Then

[Z_{e^{t_1+s}}, Z_{e^{t_2+s}}, …, Z_{e^{t_m+s}}] = [e^{(t_1+s)H} X_{t_1+s}, e^{(t_2+s)H} X_{t_2+s}, …, e^{(t_m+s)H} X_{t_m+s}] =_law [e^{(t_1+s)H} X_{t_1}, e^{(t_2+s)H} X_{t_2}, …, e^{(t_m+s)H} X_{t_m}] = [e^{sH} Z_{e^{t_1}}, e^{sH} Z_{e^{t_2}}, …, e^{sH} Z_{e^{t_m}}]

for every m ∈ ℕ, t ∈ ℤ^m and s ∈ ℤ. Hence, Z is H-self-similar.

Now, let Y be H-self-similar and set Z_t = (L_H^{−1} Y)_t. Then

[Z_{t_1+s}, Z_{t_2+s}, …, Z_{t_m+s}] = [e^{−(t_1+s)H} Y_{e^{t_1+s}}, e^{−(t_2+s)H} Y_{e^{t_2+s}}, …, e^{−(t_m+s)H} Y_{e^{t_m+s}}] =_law [e^{−t_1H} Y_{e^{t_1}}, e^{−t_2H} Y_{e^{t_2}}, …, e^{−t_mH} Y_{e^{t_m}}] = [Z_{t_1}, Z_{t_2}, …, Z_{t_m}]

for every m ∈ ℕ, t ∈ ℤ^m and s ∈ ℤ. Hence, Z is stationary, completing the proof.

REMARK 3.6. The form of the conventional continuous time Lamperti transform is not directly applicable in the discrete time setting due to the scaling of time. Hence, in order to stay within the given discrete parameter sets, we use exponential clocks with self-similar processes in Definitions 3.2 and 3.4.

Before the proof of Theorem 2.4 we state an auxiliary lemma.

LEMMA 3.7. Let H be a positive definite n × n matrix, and let $(Y_{e^t})_{t\in\mathbb{Z}}$ be an n-dimensional H-self-similar process. We define a process G = (G_t)_{t∈ℤ} by

$$
G_t = \begin{cases} \sum_{k=1}^{t} e^{-kH} \Delta_k Y_{e^k}, & t \geq 1, \\ 0, & t = 0, \\ -\sum_{k=t+1}^{0} e^{-kH} \Delta_k Y_{e^k}, & t \leq -1. \end{cases}
$$

Then $G \in \mathcal{G}_H$.

PROOF. It is straightforward to verify that

$$\Delta_t G = e^{-tH} \Delta_t Y_{e^t}$$

for every t ∈ ℤ. In addition,

$$\lim_{l\to\infty} \sum_{k=-l}^{0} e^{kH} \Delta_k G = \lim_{l\to\infty} \sum_{k=-l}^{0} \Delta_k Y_{e^k} = Y_1 - \lim_{l\to\infty} Y_{e^{-l-1}},$$

where, by self-similarity of Y,

$$\mathbb{P}\left(\left\lVert Y_{e^{-l-1}} \right\rVert \geq \epsilon\right) = \mathbb{P}\left(\left\lVert e^{-(l+1)H} Y_1 \right\rVert \geq \epsilon\right) \leq \mathbb{P}\left(e^{-\lambda_{\min}(l+1)} \lVert Y_1 \rVert \geq \epsilon\right) \to 0.$$

Hence, we obtain, in probability,

$$\lim_{l\to\infty} \sum_{k=-l}^{0} e^{kH} \Delta_k G = Y_1.$$

PROOF OF THEOREM 2.4. Assume that

$$\lim_{t\to-\infty} e^{tH} X_t = 0$$

and (5) holds for some $G \in \mathcal{G}_H$. Then, by using (5) repeatedly,

$$
X_t = e^{-H} X_{t-1} + \Delta_t G = e^{-(n+1)H} X_{t-n-1} + \sum_{j=0}^{n} e^{-jH} \Delta_{t-j} G = e^{-(n+1)H} X_{t-n-1} + e^{-tH} \sum_{k=t-n}^{t} e^{kH} \Delta_k G = e^{-tH}\left( e^{(t-n-1)H} X_{t-n-1} + \sum_{k=t-n}^{t} e^{kH} \Delta_k G \right)
$$

for every n ∈ ℕ. Since, as n → ∞, the limit of the sum above is well-defined, and

$$\lim_{n\to\infty} e^{(t-n-1)H} X_{t-n-1} = 0,$$

we obtain that

$$X_t = e^{-tH} \sum_{k=-\infty}^{t} e^{kH} \Delta_k G. \qquad (14)$$

Let m ∈ ℕ, t ∈ ℤ^m, and s ∈ ℤ. Then, by the stationary increments of G, we have

$$
\begin{bmatrix} e^{-(t_1+s)H} \sum_{j=-M}^{t_1} e^{(j+s)H} \Delta_{j+s} G \\ \vdots \\ e^{-(t_m+s)H} \sum_{j=-M}^{t_m} e^{(j+s)H} \Delta_{j+s} G \end{bmatrix}
\overset{\mathrm{law}}{=}
\begin{bmatrix} e^{-t_1 H} \sum_{j=-M}^{t_1} e^{jH} \Delta_j G \\ \vdots \\ e^{-t_m H} \sum_{j=-M}^{t_m} e^{jH} \Delta_j G \end{bmatrix}
$$

for every $-M \leq \min_i\{t_i\}$. Since the random vectors above converge in probability as M → ∞, we obtain that

$$
\begin{bmatrix} X_{t_1+s} \\ \vdots \\ X_{t_m+s} \end{bmatrix}
= \begin{bmatrix} e^{-(t_1+s)H} \sum_{j=-\infty}^{t_1} e^{(j+s)H} \Delta_{j+s} G \\ \vdots \\ e^{-(t_m+s)H} \sum_{j=-\infty}^{t_m} e^{(j+s)H} \Delta_{j+s} G \end{bmatrix}
\overset{\mathrm{law}}{=}
\begin{bmatrix} e^{-t_1 H} \sum_{j=-\infty}^{t_1} e^{jH} \Delta_j G \\ \vdots \\ e^{-t_m H} \sum_{j=-\infty}^{t_m} e^{jH} \Delta_j G \end{bmatrix}
= \begin{bmatrix} X_{t_1} \\ \vdots \\ X_{t_m} \end{bmatrix}
$$

and hence, X is stationary.

Next, assume that X is stationary. Then, by Theorem 3.5 there exists an H-self-similar Y such that $X_t = e^{-tH} Y_{e^t}$ and

$$\Delta_t X = e^{-tH} Y_{e^t} - e^{-(t-1)H} Y_{e^{t-1}} = \left(e^{-H} - I\right) X_{t-1} + e^{-tH} \Delta_t Y_{e^t}.$$

Defining G as in Lemma 3.7 completes the proof of the other direction.

To prove uniqueness, we use (15). Assume that, for $G, \tilde{G} \in \mathcal{G}_H$,

$$e^{tH} X_t = \sum_{k=-\infty}^{t} e^{kH} \Delta_k G = \sum_{k=-\infty}^{t} e^{kH} \Delta_k \tilde{G}$$

for every t ∈ ℤ. Then

$$e^{tH} X_t - e^{(t-1)H} X_{t-1} = e^{tH} \Delta_t G = e^{tH} \Delta_t \tilde{G}.$$

Since $e^{tH}$ is invertible and both processes start from zero, we conclude that $G = \tilde{G}$.

PROOF OF LEMMA 2.7. We have that

$$\Delta_t G \left(\Delta_0 G\right)^\top = \left(X_t - \Phi X_{t-1}\right)\left(X_0^\top - X_{-1}^\top \Phi^\top\right).$$

Taking expectations yields

$$r(t) = \Phi\gamma(t)\Phi^\top - \gamma(t+1)\Phi^\top - \Phi\gamma(t-1) + \gamma(t).$$
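The identity of Lemma 2.7 can be verified exactly on a concrete example. If X is a stationary VAR(1), $X_t = \Phi X_{t-1} + \varepsilon_t$ with iid noise of covariance Σ, then $\Delta_t G = \varepsilon_t$, so the formula must return r(0) = Σ and r(t) = 0 for t ≥ 1. A sketch (the matrices Φ and Σ are illustrative):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Check of r(t) = Phi g(t) Phi^T - g(t+1) Phi^T - Phi g(t-1) + g(t) on a
# stationary VAR(1), where g(0) solves g(0) = Phi g(0) Phi^T + Sigma and
# g(t) = Phi^t g(0), g(-t) = g(t)^T. Here dG_t = eps_t, so r(0) = Sigma
# and r(t) = 0 for t >= 1.

Phi = np.array([[0.5, 0.1], [0.0, 0.3]])
Sigma = np.array([[1.0, 0.2], [0.2, 0.8]])

g0 = solve_discrete_lyapunov(Phi, Sigma)   # g(0)

def g(t):                                   # g(t) = E[X_t X_0^T]
    return np.linalg.matrix_power(Phi, t) @ g0 if t >= 0 else g(-t).T

def r(t):
    return Phi @ g(t) @ Phi.T - g(t + 1) @ Phi.T - Phi @ g(t - 1) + g(t)

print(np.allclose(r(0), Sigma), np.allclose(r(1), 0), np.allclose(r(3), 0))
```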

PROOF OF THEOREM 2.10. Let

$$\Delta_t X = -\Theta X_{t-1} + \Delta_t G.$$

Then for t ∈ ℕ we have

$$G_t = \sum_{k=1}^{t} \Delta_k G = \sum_{k=1}^{t} \Delta_k X + \Theta \sum_{k=1}^{t} X_{k-1} = X_t - X_0 + \Theta \sum_{k=1}^{t} X_{k-1}.$$

Hence

$$\operatorname{cov}(G_t) = \operatorname{cov}(X_t - X_0) + \mathbb{E}\left[(X_t - X_0)\left(\sum_{k=1}^{t} X_{k-1}\right)^{\!\top}\right]\Theta^\top + \Theta\,\mathbb{E}\left[\sum_{k=1}^{t} X_{k-1}(X_t - X_0)^\top\right] + \Theta\,\mathbb{E}\left[\sum_{k=1}^{t} X_{k-1}\left(\sum_{k=1}^{t} X_{k-1}\right)^{\!\top}\right]\Theta^\top,$$

giving (8), since

$$\mathbb{E}\left[(X_t - X_0)\left(\sum_{k=1}^{t} X_{k-1}\right)^{\!\top}\right] = \sum_{k=1}^{t} \gamma(t-k+1) - \gamma(-k+1) = \sum_{k=1}^{t} \gamma(k) - \gamma(-k+1)$$

and

$$\operatorname{cov}(X_t - X_0) = \mathbb{E}\left[(X_t - X_0)(X_t - X_0)^\top\right] = 2\gamma(0) - \gamma(t) - \gamma(-t) = 2\gamma(0) - \gamma(t) - \gamma(t)^\top.$$

PROOF OF LEMMA 2.15. Assume that $\tilde{\Phi} = e^{-\tilde{H}}$ satisfies (9) for every t ∈ ℤ and set

$$X_t - \tilde{\Phi} X_{t-1} = \Delta_t \tilde{G}, \quad t \in \mathbb{Z}.$$

Consequently,

$$\tilde{r}(t) = \tilde{\Phi}\gamma(t)\tilde{\Phi}^\top - \gamma(t+1)\tilde{\Phi}^\top - \tilde{\Phi}\gamma(t-1) + \gamma(t) = r(t), \quad t \in \mathbb{Z},$$

where $\tilde{r}(t)$ is the autocovariance function of $(\Delta_t \tilde{G})_{t\in\mathbb{Z}}$. Now, since $G_0 = \tilde{G}_0 = 0$, we obtain that

$$\operatorname{var}(G_t) = \operatorname{var}\left(\sum_{k=1}^{t} \Delta_k G\right) = \sum_{k,j=1}^{t} \operatorname{cov}\left(\Delta_k G, \Delta_j G\right) = \sum_{k,j=1}^{t} r(k-j) = \operatorname{var}\left(\sum_{k=1}^{t} \Delta_k \tilde{G}\right) = \operatorname{var}(\tilde{G}_t)$$

for all t ∈ ℕ. Hence, both Θ and $\tilde{\Theta}$ are solutions to (10).

LEMMA 3.8. The matrix $\Theta = I - e^{-H}$ is positive definite.

PROOF. Let a be a real vector of length n, and let $H = Q\Lambda Q^\top$ be an eigendecomposition of H. Then

$$a^\top\left(I - e^{-H}\right)a = \lVert a \rVert^2 - a^\top e^{-H} a,$$

where

$$\left\lvert a^\top e^{-H} a \right\rvert \leq \lVert a \rVert^2 e^{-\lambda_{\min}} < \lVert a \rVert^2,$$

completing the proof.
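Lemma 3.8 is easy to illustrate numerically: for a symmetric positive definite H, the eigenvalues of $\Theta = I - e^{-H}$ are exactly $1 - e^{-\lambda_i(H)}$, so the smallest one is $1 - e^{-\lambda_{\min}} > 0$. A sketch (the random H is illustrative):

```python
import numpy as np
from scipy.linalg import expm

# Illustration of Lemma 3.8: for a symmetric positive definite H,
# Theta = I - e^{-H} is positive definite, and its smallest eigenvalue
# equals 1 - exp(-lambda_min(H)).

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
H = A @ A.T + 4 * np.eye(4)           # symmetric positive definite
Theta = np.eye(4) - expm(-H)          # symmetric, since H is

lam_min_H = np.linalg.eigvalsh(H).min()
lam_min_Theta = np.linalg.eigvalsh(Theta).min()
print(lam_min_Theta > 0, np.isclose(lam_min_Theta, 1 - np.exp(-lam_min_H)))
```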

In order to show that $\hat{\Theta}_T$ is consistent, we simply need to find suitable bounds for $\lVert\Delta_T B\rVert$, $\lVert\Delta_T C\rVert$ and $\lVert\Delta_T D\rVert$ in terms of the autocovariance estimators. After that, the same strategy as in [33] can be applied.
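The estimator is tied to a continuous-time algebraic Riccati equation (CARE); while the coefficients B, C, D of the paper's equation (10) are defined earlier in the text, the general CARE form can be solved with standard numerical tools. A generic sketch with SciPy (the coefficient matrices below are illustrative placeholders, not those of (10)):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Generic continuous-time algebraic Riccati equation
#   A^T X + X A - X B R^{-1} B^T X + Q = 0,
# solved via SciPy. The matrices are illustrative placeholders.

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

X = solve_continuous_are(A, B, Q, R)
residual = A.T @ X + X @ A - X @ B @ np.linalg.inv(R) @ B.T @ X + Q
print(np.allclose(residual, 0))   # the solver returns the stabilizing solution
```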

LEMMA 3.9. Set

$$M_{t,T} = \max_{s\in\{0,1,\ldots,t\}} \left\lVert \hat{\gamma}_T(s) - \gamma(s) \right\rVert.$$

Then the coefficients of the perturbed CARE satisfy

$$\lVert\Delta_T D\rVert \leq 4 M_{t,T}, \qquad \lVert\Delta_T C\rVert \leq t^2 M_{t,T}, \qquad \lVert\Delta_T B\rVert \leq 2t M_{t,T}.$$

PROOF. First, we recall that

$$\left\lVert \hat{\gamma}_T(-s) - \gamma(-s) \right\rVert = \left\lVert \left(\hat{\gamma}_T(s) - \gamma(s)\right)^\top \right\rVert = \left\lVert \hat{\gamma}_T(s) - \gamma(s) \right\rVert.$$

Now, since v(t) is known,

$$\lVert\Delta_T D\rVert \leq 2\left\lVert\hat{\gamma}_T(0) - \gamma(0)\right\rVert + \left\lVert\hat{\gamma}_T(t) - \gamma(t)\right\rVert + \left\lVert\hat{\gamma}_T(-t) - \gamma(-t)\right\rVert \leq 4 M_{t,T}.$$

Moreover,

$$\lVert\Delta_T C\rVert \leq \sum_{k=1}^{t}\sum_{j=1}^{t} \left\lVert\hat{\gamma}_T(k-j) - \gamma(k-j)\right\rVert \leq t^2 M_{t,T}.$$

Finally,

$$\lVert\Delta_T B\rVert \leq \sum_{k=1}^{t} \left\lVert\hat{\gamma}_T(k-1) - \gamma(k-1)\right\rVert + \left\lVert\gamma(-k) - \hat{\gamma}_T(-k)\right\rVert \leq 2t M_{t,T}.$$

PROOF OF THEOREM 2.18. The result follows by replacing $\sup_{s\in[0,t]}\lVert\hat{\gamma}_T(s) - \gamma(s)\rVert$ with $M_{t,T}$ in Corollary 3.14 and in the proof of Theorem 2.9 of [33]. The details are left to the reader.

PROOF OF THEOREM 2.19. For the first part of the theorem, we notice that

$$
C_t = \sum_{k=1}^{t}\sum_{j=1}^{t}\gamma(k-j) = \sum_{k=1}^{t}\sum_{l=k-t}^{k-1}\gamma(l) = \sum_{l=-(t-1)}^{-1}\sum_{k=1}^{t+l}\gamma(l) + \sum_{l=0}^{t-1}\sum_{k=l+1}^{t}\gamma(l) = \sum_{l=-(t-1)}^{-1}(t+l)\gamma(l) + \sum_{l=0}^{t-1}(t-l)\gamma(l) = \sum_{l=0}^{t-1}(t-l)\gamma(l) + \sum_{l=1}^{t-1}(t-l)\gamma(-l),
$$

where $\sum_{l=0}^{-1}$ and $\sum_{l=1}^{0}$ are interpreted as empty sums. Now we have that

$$
\begin{aligned}
\Delta_T C &= \sum_{k=0}^{t-1}(t-k)\left(\hat{\gamma}_T(k) - \gamma(k)\right) + \sum_{k=1}^{t-1}(t-k)\left(\hat{\gamma}_T(-k) - \gamma(-k)\right), \\
\Delta_T B &= \sum_{k=1}^{t} \hat{\gamma}_T(k-1) - \gamma(k-1) - \hat{\gamma}_T(-k) + \gamma(-k), \\
\Delta_T D &= 2\left(\gamma(0) - \hat{\gamma}_T(0)\right) + \hat{\gamma}_T(t) - \gamma(t) + \hat{\gamma}_T(-t) - \gamma(-t),
\end{aligned}
$$

and furthermore

$$
l(T)\operatorname{vec}\left(\Delta_T C, \Delta_T B, \Delta_T D\right) = l(T)
\begin{bmatrix}
\sum_{k=0}^{t-1}(t-k)\operatorname{vec}\left(\hat{\gamma}_T(k) - \gamma(k)\right) + \sum_{k=1}^{t-1}(t-k)\operatorname{vec}\left(\left(\hat{\gamma}_T(k) - \gamma(k)\right)^\top\right) \\
\sum_{k=1}^{t}\operatorname{vec}\left(\hat{\gamma}_T(k-1) - \gamma(k-1)\right) - \operatorname{vec}\left(\left(\hat{\gamma}_T(k) - \gamma(k)\right)^\top\right) \\
-2\operatorname{vec}\left(\hat{\gamma}_T(0) - \gamma(0)\right) + \operatorname{vec}\left(\hat{\gamma}_T(t) - \gamma(t)\right) + \operatorname{vec}\left(\left(\hat{\gamma}_T(t) - \gamma(t)\right)^\top\right)
\end{bmatrix}
= L_1\left[ l(T)\begin{bmatrix} \operatorname{vec}\left(\hat{\gamma}_T(0) - \gamma(0)\right) \\ \operatorname{vec}\left(\hat{\gamma}_T(1) - \gamma(1)\right) \\ \vdots \\ \operatorname{vec}\left(\hat{\gamma}_T(t) - \gamma(t)\right) \end{bmatrix}\right] \overset{\mathrm{law}}{\longrightarrow} L_1(\mathcal{Z})
$$

by the continuous mapping theorem. For the second part of the theorem, the proof of the continuous time case of [33] can be applied by replacing $\sup_{s\in[0,t]}\lVert\hat{\gamma}_T(s) - \gamma(s)\rVert$ with $M_{t,T}$ in the definition of the set $A_T$.

3.2. Continuous Time

We only provide the proof of Proposition 2.24; the other proofs can be found in [33].

PROOF OF PROPOSITION 2.24. Integrating (11) from 0 to t gives

$$G_t = X_t - X_0 + H\int_0^t X_s\,ds.$$

Hence

$$\Delta_t^\delta G\left(\Delta_0^\delta G\right)^\top = \left(X_t - X_{t-\delta} + H\int_{t-\delta}^{t} X_s\,ds\right)\left(X_0^\top - X_{-\delta}^\top + \int_{-\delta}^{0} X_s^\top\,ds\,H^\top\right).$$

Taking expectations yields

$$r_\delta(t) = 2\gamma(t) - \gamma(t+\delta) - \gamma(t-\delta) + \int_{-\delta}^{0} \gamma(t-s) - \gamma(t-\delta-s)\,ds\,H^\top + H\int_{t-\delta}^{t} \gamma(s) - \gamma(s+\delta)\,ds + H\int_{t-\delta}^{t}\int_{-\delta}^{0} \gamma(s-u)\,du\,ds\,H^\top,$$

where the first order terms can be treated with a simple change of variables. For the second order term we obtain that

$$\int_{t-\delta}^{t}\int_{-\delta}^{0} \gamma(s-u)\,du\,ds = \int_{t-\delta}^{t}\int_{s}^{s+\delta} \gamma(x)\,dx\,ds = \int_{t-\delta}^{t}\int_{t-\delta}^{x} \gamma(x)\,ds\,dx + \int_{t}^{t+\delta}\int_{x-\delta}^{t} \gamma(x)\,ds\,dx = \int_{t-\delta}^{t}(x-t+\delta)\gamma(x)\,dx + \int_{t}^{t+\delta}(t-x+\delta)\gamma(x)\,dx.$$
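The Fubini step for the second order term can be checked by quadrature. In this sketch the scalar autocovariance $\gamma(x) = e^{-\lvert x\rvert}$ and the values of t and δ are illustrative:

```python
import numpy as np
from scipy.integrate import quad, dblquad

# Numerical check of the change of variables in the second order term of
# Proposition 2.24, with an illustrative scalar autocovariance.

gamma = lambda x: np.exp(-abs(x))
t, delta = 1.5, 0.4

# Left-hand side: gamma(s - u) over s in [t - delta, t], u in [-delta, 0].
lhs, _ = dblquad(lambda u, s: gamma(s - u), t - delta, t, -delta, 0)

# Right-hand side after Fubini and the substitution x = s - u.
rhs = quad(lambda x: (x - t + delta) * gamma(x), t - delta, t)[0] \
    + quad(lambda x: (t - x + delta) * gamma(x), t, t + delta)[0]

print(np.isclose(lhs, rhs))
```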

Author Contributions

The author confirms being the sole contributor of this work and has approved it for publication.

Conflict of Interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

I would like to thank Lauri Viitasaari for his comments and suggestions.

References

1. Brockwell PJ, Davis RA. Time Series: Theory and Methods. New York, NY: Springer Science & Business Media (1991). doi: 10.1007/978-1-4419-0320-4

2. Hamilton JD. Time Series Analysis. Vol. 2. Princeton: Princeton University Press (1994).

3. Neusser K. Time Series Econometrics. Springer (2016). doi: 10.1007/978-3-319-32862-1

4. Engle RF. Autoregressive conditional heteroskedasticity with estimates of the variance of United Kingdom inflation. Econometr J Econometr Soc. (1982) 50:987–1007. doi: 10.2307/1912773

5. Bollerslev T. Generalized autoregressive conditional heteroskedasticity. J Econometr. (1986) 31:307–27. doi: 10.1016/0304-4076(86)90063-1

6. Hannan EJ. The asymptotic theory of linear time-series models. J Appl Probabil. (1973) 10:130–45. doi: 10.1017/S0021900200042145

7. Tiao GC, Tsay RS. Consistency properties of least squares estimates of autoregressive parameters in ARMA models. Ann Stat. (1983) 11:856–71. doi: 10.1214/aos/1176346252

8. Mikosch T, Gadrich T, Kluppelberg C, Adler RJ. Parameter estimation for ARMA models with infinite variance innovations. Ann Stat. (1995) 23:305–26. doi: 10.1214/aos/1176324469

9. Mauricio JA. Exact maximum likelihood estimation of stationary vector ARMA models. J Am Stat Assoc. (1995) 90:282–91. doi: 10.1080/01621459.1995.10476511

10. Ling S, McAleer M. Asymptotic theory for a vector ARMA-GARCH model. Econometr Theory. (2003) 19:280–310. doi: 10.1017/S0266466603192092

11. Francq C, Zakoian JM. Maximum likelihood estimation of pure GARCH and ARMA-GARCH processes. Bernoulli. (2004) 10:605–37. doi: 10.3150/bj/1093265632

12. Bollerslev T. Glossary to ARCH (GARCH). CREATES Res Pap. (2008) 49:1–44. doi: 10.2139/ssrn.1263250

13. Han H, Kristensen D. Asymptotic theory for the QMLE in GARCH-X models with stationary and nonstationary covariates. J Bus Econ Stat. (2014) 32:416–29. doi: 10.1080/07350015.2014.897954

14. Baillie RT, Bollerslev T, Mikkelsen HO. Fractionally integrated generalized autoregressive conditional heteroskedasticity. J Econometr. (1996) 74:3–30. doi: 10.1016/S0304-4076(95)01749-6

15. Ling S, Li WK. On fractionally integrated autoregressive moving-average time series models with conditional heteroscedasticity. J Am Stat Assoc. (1997) 92:1184–94. doi: 10.1080/01621459.1997.10474076

16. Zhang MY, Russell JR, Tsay RS. A nonlinear autoregressive conditional duration model with applications to financial transaction data. J Econometr. (2001) 104:179–207. doi: 10.1016/S0304-4076(01)00063-X

17. Voutilainen M, Viitasaari L, Ilmonen P. On model fitting and estimation of strictly stationary processes. Modern Stochast Theory Appl. (2017) 4:381–406. doi: 10.15559/17-VMSTA91

18. Voutilainen M, Viitasaari L, Ilmonen P, Torres S, Tudor C. On the ARCH model with stationary liquidity. (2020). Available online at: https://link.springer.com/content/pdf/10.1007/s00184-020-00779-x.pdf

19. Cheridito P, Kawaguchi H, Maejima M. Fractional Ornstein-Uhlenbeck processes. Electron J Probabil. (2003) 8:1–44. doi: 10.1214/EJP.v8-125

20. Hu Y, Nualart D. Parameter estimation for fractional Ornstein-Uhlenbeck processes. Stat Probabil Lett. (2010) 80:1030–8. doi: 10.1016/j.spl.2010.02.018

21. Kleptsyna ML, Le Breton A. Statistical analysis of the fractional Ornstein-Uhlenbeck type process. Stat Inference Stochast Process. (2002) 5:229–48. doi: 10.1023/A:1021220818545

22. Azmoodeh E, Viitasaari L. Parameter estimation based on discrete observations of fractional Ornstein-Uhlenbeck process of the second kind. Stat Inference Stochast Process. (2015) 18:205–27. doi: 10.1007/s11203-014-9111-8

23. Bajja S, Es-Sebaiy K, Viitasaari L. Least squares estimator of fractional Ornstein-Uhlenbeck processes with periodic mean. J Korean Stat Soc. (2017) 46:608–22. doi: 10.1016/j.jkss.2017.06.002

24. Balde MF, Es-Sebaiy K, Tudor CA. Ergodicity and drift parameter estimation for infinite-dimensional fractional Ornstein-Uhlenbeck process of the second kind. Appl Math Opt. (2020) 81:785–814. doi: 10.1007/s00245-018-9519-4

25. Brouste A, Iacus SM. Parameter estimation for the discretely observed fractional Ornstein-Uhlenbeck process and the Yuima R package. Comput Stat. (2013) 28:1529–47. doi: 10.1007/s00180-012-0365-6

26. Douissi S, Es-Sebaiy K, Tudor CA. Hermite Ornstein-Uhlenbeck processes mixed with a Gamma distribution. Publicationes Mathematicae Debrecen. (2020) 96:1–22. doi: 10.5486/PMD.2020.8443

27. Hu Y, Nualart D, Zhou H. Parameter estimation for fractional Ornstein-Uhlenbeck processes of general Hurst parameter. Stat Inference Stochast Process. (2019) 22:111–42. doi: 10.1007/s11203-017-9168-2

28. Nourdin I, Tran TD. Statistical inference for Vasicek-type model driven by Hermite processes. Stochast Process Appl. (2019) 129:3774–91. doi: 10.1016/j.spa.2018.10.005

29. Sottinen T, Viitasaari L. Parameter estimation for the Langevin equation with stationary-increment Gaussian noise. Stat Inference Stochast Process. (2018) 21:569–601. doi: 10.1007/s11203-017-9156-6

30. Tanaka K. Maximum likelihood estimation for the non-ergodic fractional Ornstein-Uhlenbeck process. Stat Inference Stochast Process. (2015) 18:315–32. doi: 10.1007/s11203-014-9110-9

31. Applebaum D, et al. Infinite dimensional Ornstein-Uhlenbeck processes driven by Lévy processes. Probabil Surv. (2015) 12:33–54. doi: 10.1214/14-PS249

32. Magdziarz M. Fractional Ornstein-Uhlenbeck processes. Joseph effect in models with infinite variance. Phys A Stat Mech Appl. (2008) 387:123–33. doi: 10.1016/j.physa.2007.08.016

33. Voutilainen M, Viitasaari L, Ilmonen P, Torres S, Tudor C. Vector-valued generalised Ornstein-Uhlenbeck processes. arXiv [Preprint]. arXiv:1909.02376 (2019).

34. Lancaster P, Rodman L. Algebraic Riccati Equations. New York, NY: Oxford University Press (1995).

35. Kucera V. A contribution to matrix quadratic equations. IEEE Trans Automat Control. (1972) 17:344–7. doi: 10.1109/TAC.1972.1099983

36. Wonham WM. On a matrix Riccati equation of stochastic control. SIAM J Control. (1968) 6:681–97. doi: 10.1137/0306044

37. Sun JG. Perturbation theory for algebraic Riccati equations. SIAM J Matrix Anal Appl. (1998) 19:39–65. doi: 10.1137/S0895479895291303

38. Byers R. Solving the algebraic Riccati equation with the matrix sign function. Linear Algebra Appl. (1987) 85:267–79. doi: 10.1016/0024-3795(87)90222-9

39. Laub A. A Schur method for solving algebraic Riccati equations. IEEE Trans Automat Control. (1979) 24:913–21. doi: 10.1109/TAC.1979.1102178

40. Bini DA, Iannazzo B, Meini B. Numerical Solution of Algebraic Riccati Equations. Vol. 9. Philadelphia: SIAM (2012). doi: 10.1137/1.9781611972092

41. Viitasaari L. Representation of stationary and stationary increment processes via Langevin equation and self-similar processes. Stat Probabil Lett. (2016) 115:45–53. doi: 10.1016/j.spl.2016.03.020

42. Voutilainen M, Viitasaari L, Ilmonen P. Note on AR(1)-characterisation of stationary processes and model fitting. Modern Stochast Theory Appl. (2019) 6:195–207. doi: 10.15559/19-VMSTA132

43. Embrechts P, Maejima M. Selfsimilar Processes. Princeton: Princeton University Press (2002). doi: 10.1515/9781400825103

44. Pipiras V, Taqqu MS. Long-Range Dependence and Self-Similarity. Vol. 45. Cambridge: Cambridge University Press (2017). doi: 10.1017/CBO9781139600347

45. Lamperti J. Semi-stable stochastic processes. Trans Am Math Soc. (1962) 104:62–78. doi: 10.1090/S0002-9947-1962-0138128-7
