Linear recurrence with constant coefficients


In mathematics (including combinatorics, linear algebra, and dynamical systems), a linear recurrence with constant coefficients[1]: ch. 17 [2]: ch. 10  (also known as a linear recurrence relation or linear difference equation) sets equal to 0 a polynomial that is linear in the various iterates of a variable—that is, in the values of the elements of a sequence. The polynomial's linearity means that each of its terms has degree 0 or 1. A linear recurrence denotes the evolution of some variable over time, with the current time period or discrete moment in time denoted as t, one period earlier denoted as t − 1, one period later as t + 1, etc.

The solution of such an equation is a function of t, and not of any iterate values, giving the value of the iterate at any time. To find the solution it is necessary to know the specific values (known as initial conditions) of n of the iterates, and normally these are the n iterates that are oldest. The equation or its variable is said to be stable if from any set of initial conditions the variable's limit as time goes to infinity exists; this limit is called the steady state.

Difference equations are used in a variety of contexts, such as in economics to model the evolution through time of variables such as gross domestic product, the inflation rate, the exchange rate, etc. They are used in modeling such time series because values of these variables are only measured at discrete intervals. In econometric applications, linear difference equations are modeled with stochastic terms in the form of autoregressive (AR) models and in models such as vector autoregression (VAR) and autoregressive moving average (ARMA) models that combine AR with other features.

Definitions

A linear recurrence with constant coefficients is an equation of the following form, written in terms of parameters $a_1, \dots, a_n$ and $b$:

$$y_t = a_1 y_{t-1} + a_2 y_{t-2} + \cdots + a_n y_{t-n} + b,$$

or equivalently as

$$y_{t+n} = a_1 y_{t+n-1} + a_2 y_{t+n-2} + \cdots + a_n y_t + b.$$

The positive integer $n$ is called the order of the recurrence and denotes the longest time lag between iterates. The equation is called homogeneous if $b = 0$ and nonhomogeneous if $b \neq 0$.

If the equation is homogeneous, the coefficients determine the characteristic polynomial (also "auxiliary polynomial" or "companion polynomial")

$$p(\lambda) = \lambda^n - a_1 \lambda^{n-1} - a_2 \lambda^{n-2} - \cdots - a_n,$$

whose roots play a crucial role in finding and understanding the sequences satisfying the recurrence.

Conversion to homogeneous form

If $b \neq 0$, the equation

$$y_t = a_1 y_{t-1} + a_2 y_{t-2} + \cdots + a_n y_{t-n} + b$$

is said to be nonhomogeneous. To solve this equation it is convenient to convert it to homogeneous form, with no constant term. This is done by first finding the equation's steady state value—a value y* such that, if n successive iterates all had this value, so would all future values. This value is found by setting all values of y equal to y* in the difference equation, and solving, thus obtaining

$$y^* = \frac{b}{1 - a_1 - a_2 - \cdots - a_n},$$

assuming the denominator is not 0. If it is zero, the steady state does not exist.
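The steady-state formula $y^* = b / (1 - a_1 - \cdots - a_n)$ is easy to sanity-check numerically. A minimal sketch in Python; the coefficients $a_1 = 0.5$, $a_2 = 0.3$ and $b = 4$ are illustrative choices, not values from the text, and `steady_state` is a hypothetical helper name:

```python
def steady_state(a, b):
    """Steady state y* = b / (1 - a1 - ... - an) of the recurrence
    y_t = a1*y_{t-1} + ... + an*y_{t-n} + b, or None if the
    denominator 1 - sum(a) is zero (no steady state exists)."""
    denom = 1 - sum(a)
    if denom == 0:
        return None
    return b / denom

a, b = [0.5, 0.3], 4.0
y_star = steady_state(a, b)      # b / (1 - 0.8), i.e. approximately 20

# If n successive iterates all equal y*, the next iterate does too.
next_val = a[0] * y_star + a[1] * y_star + b
assert abs(next_val - y_star) < 1e-9

# A unit-root example such as y_t = y_{t-1} + b has no steady state.
assert steady_state([1.0], 3.0) is None
```

The check simply confirms that a constant sequence held at y* reproduces itself under the recurrence, which is exactly the defining property of the steady state.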

Given the steady state, the difference equation can be rewritten in terms of deviations of the iterates from the steady state, as

$$y_t - y^* = a_1 (y_{t-1} - y^*) + a_2 (y_{t-2} - y^*) + \cdots + a_n (y_{t-n} - y^*),$$

which has no constant term, and which can be written more succinctly as

$$x_t = a_1 x_{t-1} + a_2 x_{t-2} + \cdots + a_n x_{t-n},$$

where $x_t$ equals $y_t - y^*$. This is the homogeneous form.

If there is no steady state, the difference equation

$$y_t = a_1 y_{t-1} + a_2 y_{t-2} + \cdots + a_n y_{t-n} + b$$

can be combined with its equivalent form

$$y_{t-1} = a_1 y_{t-2} + a_2 y_{t-3} + \cdots + a_n y_{t-n-1} + b$$

to obtain (by solving both for $b$)

$$y_t - a_1 y_{t-1} - a_2 y_{t-2} - \cdots - a_n y_{t-n} = y_{t-1} - a_1 y_{t-2} - a_2 y_{t-3} - \cdots - a_n y_{t-n-1},$$

in which like terms can be combined to give a homogeneous equation of one order higher than the original.
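This order-raising trick can be illustrated concretely. Taking the illustrative recurrence $y_t = y_{t-1} + b$ (coefficient sum 1, so no steady state exists), eliminating $b$ between the equation and its lagged copy gives the homogeneous order-2 form $y_t = 2 y_{t-1} - y_{t-2}$, which a short sketch can verify:

```python
b = 3.0
y = [5.0]                      # arbitrary illustrative initial condition
for _ in range(10):
    y.append(y[-1] + b)        # original nonhomogeneous recurrence y_t = y_{t-1} + b

# Every iterate also satisfies the derived homogeneous order-2 recurrence.
for t in range(2, len(y)):
    assert abs(y[t] - (2 * y[t-1] - y[t-2])) < 1e-9
```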

Solution example for small orders

The roots of the characteristic polynomial play a crucial role in finding and understanding the sequences satisfying the recurrence. If there are $n$ distinct roots $\lambda_1, \lambda_2, \dots, \lambda_n,$ then each solution to the recurrence takes the form

$$y_t = c_1 \lambda_1^t + c_2 \lambda_2^t + \cdots + c_n \lambda_n^t,$$

where the coefficients $c_i$ are determined in order to fit the initial conditions of the recurrence. When the same roots occur multiple times, the terms in this formula corresponding to the second and later occurrences of the same root are multiplied by increasing powers of $t$. For instance, if the characteristic polynomial can be factored as $(\lambda - r)^3$, with the same root $r$ occurring three times, then the solution would take the form

$$y_t = (c_1 + c_2 t + c_3 t^2) r^t.$$[3]
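The repeated-root form can be checked concretely. Take the characteristic polynomial $(\lambda - 2)^3 = \lambda^3 - 6\lambda^2 + 12\lambda - 8$, i.e. the recurrence $y_t = 6 y_{t-1} - 12 y_{t-2} + 8 y_{t-3}$; for any constants $c_1, c_2, c_3$ (the values below are arbitrary illustrative picks) the sequence $(c_1 + c_2 t + c_3 t^2) 2^t$ should satisfy it:

```python
c1, c2, c3 = 1, 2, 3                      # arbitrary illustrative constants

def closed(t):
    return (c1 + c2 * t + c3 * t * t) * 2 ** t

# Seed with the closed form's first three values, then iterate the recurrence.
y = [closed(0), closed(1), closed(2)]
for t in range(3, 12):
    y.append(6 * y[t-1] - 12 * y[t-2] + 8 * y[t-3])
    assert y[t] == closed(t)              # exact integer agreement

print(y[:5])  # [1, 12, 68, 272, 912]
```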

Order 1

For order 1, the recurrence $y_t = a_1 y_{t-1}$ has the solution $y_t = a_1^t$ with $y_0 = 1$, and the most general solution is $y_t = k a_1^t$ with $y_0 = k$. The characteristic polynomial equated to zero (the characteristic equation) is simply $\lambda - a_1 = 0$.

Order 2

Solutions to such recurrence relations of higher order are found by systematic means, often using the fact that $y_t = \lambda^t$ is a solution for the recurrence exactly when $\lambda$ is a root of the characteristic polynomial. This can be approached directly or using generating functions (formal power series) or matrices.

Consider, for example, a recurrence relation of the form

$$y_t = a_1 y_{t-1} + a_2 y_{t-2}.$$

When does it have a solution of the same general form as $y_t = \lambda^t$? Substituting this guess (ansatz) in the recurrence relation, we find that

$$\lambda^t = a_1 \lambda^{t-1} + a_2 \lambda^{t-2}$$

must be true for all $t > 1$.

Dividing through by $\lambda^{t-2}$, we get that all these equations reduce to the same thing:

$$\lambda^2 = a_1 \lambda + a_2,$$

which is the characteristic equation of the recurrence relation. Solve for $\lambda$ to obtain the two roots $\lambda_1$, $\lambda_2$: these roots are known as the characteristic roots or eigenvalues of the characteristic equation. Different solutions are obtained depending on the nature of the roots: If these roots are distinct, we have the general solution

$$y_t = c_1 \lambda_1^t + c_2 \lambda_2^t,$$

while if they are identical (when $a_1^2 + 4 a_2 = 0$), we have

$$y_t = c_1 \lambda^t + c_2 t \lambda^t.$$

This is the most general solution; the two constants $c_1$ and $c_2$ can be chosen based on two given initial conditions $y_0$ and $y_1$ to produce a specific solution.
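A worked instance of the distinct-root case: for $y_t = 5 y_{t-1} - 6 y_{t-2}$ (an illustrative choice), the characteristic equation $\lambda^2 = 5\lambda - 6$ has roots $3$ and $2$, and fitting $c_1 + c_2 = y_0$, $3 c_1 + 2 c_2 = y_1$ determines the two constants:

```python
y0, y1 = 0, 1                    # illustrative initial conditions

# Solve c1 + c2 = y0 and 3*c1 + 2*c2 = y1 by hand.
c1 = y1 - 2 * y0
c2 = 3 * y0 - y1

def closed(t):
    return c1 * 3 ** t + c2 * 2 ** t

# The closed form tracks the iterated recurrence exactly.
y = [y0, y1]
for t in range(2, 10):
    y.append(5 * y[t-1] - 6 * y[t-2])
    assert y[t] == closed(t)

print(y[:6])  # [0, 1, 5, 19, 65, 211]
```

With these initial conditions the solution is $y_t = 3^t - 2^t$.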

In the case of complex eigenvalues (which also gives rise to complex values for the solution parameters $c_1$ and $c_2$), the use of complex numbers can be eliminated by rewriting the solution in trigonometric form. In this case we can write the eigenvalues as $\lambda_1, \lambda_2 = \alpha \pm \beta i.$ Then it can be shown that

$$y_t = c_1 \lambda_1^t + c_2 \lambda_2^t$$

can be rewritten as[4]: 576–585 

$$y_t = 2 M^t \left[ E \cos(\theta t) + F \sin(\theta t) \right],$$

where

$$M = \sqrt{\alpha^2 + \beta^2}, \qquad \cos\theta = \frac{\alpha}{M}, \qquad \sin\theta = \frac{\beta}{M}.$$

Here $E$ and $F$ (or equivalently, $G$ and $\delta$) are real constants which depend on the initial conditions. Using

$$E = G \cos\delta, \qquad F = G \sin\delta,$$

one may simplify the solution given above as

$$y_t = 2 G M^t \cos(\theta t - \delta),$$

where $y_0$ and $y_1$ are the initial conditions and

$$G = \frac{1}{2} \sqrt{y_0^2 + \frac{(y_1 - \alpha y_0)^2}{\beta^2}}, \qquad \tan\delta = \frac{y_1 - \alpha y_0}{\beta y_0}.$$

In this way there is no need to solve for $c_1$ and $c_2$.

In all cases—real distinct eigenvalues, real duplicated eigenvalues, and complex conjugate eigenvalues—the equation is stable (that is, the variable $y$ converges to a fixed value [specifically, zero]) if and only if both eigenvalues are smaller than one in absolute value. In this second-order case, this condition on the eigenvalues can be shown[5] to be equivalent to $|a_1| < 1 - a_2 < 2$, which is equivalent to $|a_2| < 1$ and $|a_1| < 1 - a_2$.
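The equivalence between the root condition (both roots of $\lambda^2 - a_1 \lambda - a_2 = 0$ inside the unit circle) and the coefficient condition $|a_1| < 1 - a_2 < 2$ can be spot-checked numerically over a grid of $(a_1, a_2)$ values; the grid spacing below is an arbitrary choice made to avoid landing exactly on the boundary cases:

```python
import cmath

def roots_inside_unit_circle(a1, a2):
    """Are both roots of λ^2 - a1*λ - a2 = 0 strictly inside the unit circle?"""
    disc = cmath.sqrt(a1 * a1 + 4 * a2)
    r1 = (a1 + disc) / 2
    r2 = (a1 - disc) / 2
    return abs(r1) < 1 and abs(r2) < 1

def coefficient_criterion(a1, a2):
    return abs(a1) < 1 - a2 < 2

# Grid step 0.13 avoids hitting |a1| = 1 - a2 or a2 = -1 exactly.
for i in range(-15, 16):
    for j in range(-15, 16):
        a1, a2 = 0.13 * i, 0.13 * j
        assert roots_inside_unit_circle(a1, a2) == coefficient_criterion(a1, a2)
```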

General solution

Characteristic polynomial and roots

Solving the homogeneous equation

$$y_t = a_1 y_{t-1} + a_2 y_{t-2} + \cdots + a_n y_{t-n}$$

involves first solving its characteristic polynomial

$$\lambda^n - a_1 \lambda^{n-1} - a_2 \lambda^{n-2} - \cdots - a_n$$

for its characteristic roots $\lambda_1, \dots, \lambda_n$. These roots can be solved for algebraically if $n \leq 4$, but not necessarily otherwise. If the solution is to be used numerically, all the roots of this characteristic equation can be found by numerical methods. However, for use in a theoretical context it may be that the only information required about the roots is whether any of them are greater than or equal to 1 in absolute value.

It may be that all the roots are real or instead there may be some that are complex numbers. In the latter case, all the complex roots come in complex conjugate pairs.

Solution with distinct characteristic roots

If all the characteristic roots are distinct, the solution of the homogeneous linear recurrence

$$y_t = a_1 y_{t-1} + a_2 y_{t-2} + \cdots + a_n y_{t-n}$$

can be written in terms of the characteristic roots as

$$y_t = c_1 \lambda_1^t + c_2 \lambda_2^t + \cdots + c_n \lambda_n^t,$$

where the coefficients $c_i$ can be found by invoking the initial conditions. Specifically, for each time period for which an iterate value is known, this value and its corresponding value of $t$ can be substituted into the solution equation to obtain a linear equation in the $n$ as-yet-unknown parameters; $n$ such equations, one for each initial condition, can be solved simultaneously for the $n$ parameter values. If all characteristic roots are real, then all the coefficient values $c_i$ will also be real; but with non-real complex roots, in general some of these coefficients will also be non-real.
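The system of equations described above is a Vandermonde system in the unknowns $c_i$. A sketch with the illustrative roots $1, 2, 3$ (so the recurrence is $y_t = 6 y_{t-1} - 11 y_{t-2} + 6 y_{t-3}$) and arbitrary initial conditions, solved with exact rational arithmetic:

```python
from fractions import Fraction

roots = [1, 2, 3]                # distinct roots of (λ-1)(λ-2)(λ-3)
y_init = [2, 0, -5]              # illustrative initial conditions y_0, y_1, y_2

# One linear equation per initial condition: sum_i c_i * λ_i^t = y_t.
M = [[Fraction(r) ** t for r in roots] + [Fraction(y_init[t])] for t in range(3)]

# Gauss-Jordan elimination on the augmented (Vandermonde) system.
n = len(roots)
for col in range(n):
    piv = next(r for r in range(col, n) if M[r][col] != 0)
    M[col], M[piv] = M[piv], M[col]
    M[col] = [x / M[col][col] for x in M[col]]
    for r in range(n):
        if r != col and M[r][col] != 0:
            M[r] = [u - M[r][col] * v for u, v in zip(M[r], M[col])]
c = [row[n] for row in M]        # here c = [7/2, -1, -1/2]

def closed(t):
    return sum(ci * r ** t for ci, r in zip(c, roots))

# The fitted closed form matches the iterated recurrence exactly.
y = list(y_init)
for t in range(3, 10):
    y.append(6 * y[t-1] - 11 * y[t-2] + 6 * y[t-3])
    assert closed(t) == y[t]
```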

Converting complex solution to trigonometric form

If there are complex roots, they come in conjugate pairs and so do the complex terms in the solution equation. If two of these complex terms are $c_j \lambda_j^t$ and $c_{j+1} \lambda_{j+1}^t$, the roots can be written as

$$\lambda_j, \lambda_{j+1} = \alpha \pm \beta i,$$

where $i$ is the imaginary unit and $M$ is the modulus of the roots:

$$M = \sqrt{\alpha^2 + \beta^2}.$$

Then the two complex terms in the solution equation can be written as

$$c_j \lambda_j^t + c_{j+1} \lambda_{j+1}^t = M^t \left[ c_j (\cos\theta + i \sin\theta)^t + c_{j+1} (\cos\theta - i \sin\theta)^t \right] = M^t \left[ c_j (\cos\theta t + i \sin\theta t) + c_{j+1} (\cos\theta t - i \sin\theta t) \right],$$

where $\theta$ is the angle whose cosine is $\alpha / M$ and whose sine is $\beta / M$; the last equality here made use of de Moivre's formula.

Now the process of finding the coefficients $c_j$ and $c_{j+1}$ guarantees that they are also complex conjugates, which can be written as $\gamma \pm \delta i$. Using this in the last equation gives this expression for the two complex terms in the solution equation:

$$2 M^t \left( \gamma \cos\theta t - \delta \sin\theta t \right),$$

which can also be written as

$$2 \sqrt{\gamma^2 + \delta^2} \, M^t \cos(\theta t + \psi),$$

where $\psi$ is the angle whose cosine is $\gamma / \sqrt{\gamma^2 + \delta^2}$ and whose sine is $\delta / \sqrt{\gamma^2 + \delta^2}$.
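This conversion is easy to verify with Python's complex arithmetic; the values of $\alpha, \beta, \gamma, \delta$ below are arbitrary illustrations:

```python
import math

alpha, beta = 0.6, 0.8           # conjugate roots 0.6 ± 0.8i (modulus M = 1)
gamma, delta = 1.5, -0.25        # conjugate coefficients γ ± δi

lam = complex(alpha, beta)
c = complex(gamma, delta)
M = abs(lam)
theta = math.atan2(beta, alpha)  # cos θ = α/M, sin θ = β/M
psi = math.atan2(delta, gamma)   # cos ψ = γ/√(γ²+δ²), sin ψ = δ/√(γ²+δ²)
amplitude = 2 * math.hypot(gamma, delta)

for t in range(12):
    pair = c * lam ** t + c.conjugate() * lam.conjugate() ** t
    trig = amplitude * M ** t * math.cos(theta * t + psi)
    assert abs(pair.imag) < 1e-9         # the conjugate pair sums to a real number
    assert abs(pair.real - trig) < 1e-9  # ... equal to the trigonometric form
```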

Cyclicity

Depending on the initial conditions, even with all roots real the iterates can experience a transitory tendency to go above and below the steady state value. But true cyclicity involves a permanent tendency to fluctuate, and this occurs if there is at least one pair of complex conjugate characteristic roots. This can be seen in the trigonometric form of their contribution to the solution equation, involving cos θt and sin θt.

Solution with duplicate characteristic roots

In the second-order case, if the two roots are identical ($\lambda_1 = \lambda_2$), they can both be denoted as $\lambda$ and a solution may be of the form

$$y_t = c_1 \lambda^t + c_2 t \lambda^t.$$
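A quick check of the duplicate-root form with the illustrative recurrence $y_t = 4 y_{t-1} - 4 y_{t-2}$, whose characteristic polynomial $(\lambda - 2)^2$ has the double root $\lambda = 2$:

```python
from fractions import Fraction

y0, y1 = 3, 10                       # illustrative initial conditions
c1 = Fraction(y0)                    # from t = 0: c1 = y0
c2 = Fraction(y1, 2) - y0            # from t = 1: 2*(c1 + c2) = y1

def closed(t):
    return (c1 + c2 * t) * Fraction(2) ** t

# The closed form (c1 + c2*t)*2^t matches the iterated recurrence exactly.
y = [y0, y1]
for t in range(2, 10):
    y.append(4 * y[t-1] - 4 * y[t-2])
    assert closed(t) == y[t]

print(y[:5])  # [3, 10, 28, 72, 176]
```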

Solution by conversion to matrix form

An alternative solution method involves converting the $n$th order difference equation to a first-order matrix difference equation. This is accomplished by writing $w_{1,t} = y_t$, $w_{2,t} = y_{t-1} = w_{1,t-1}$, $w_{3,t} = y_{t-2} = w_{2,t-1}$, and so on. Then the original single $n$th-order equation

$$y_t = a_1 y_{t-1} + a_2 y_{t-2} + \cdots + a_n y_{t-n} + b$$

can be replaced by the following $n$ first-order equations:

$$\begin{aligned}
w_{1,t} &= a_1 w_{1,t-1} + a_2 w_{2,t-1} + \cdots + a_n w_{n,t-1} + b \\
w_{2,t} &= w_{1,t-1} \\
&\;\,\vdots \\
w_{n,t} &= w_{n-1,t-1}.
\end{aligned}$$

Defining the vector $\mathbf{w}_t$ as

$$\mathbf{w}_t = \begin{bmatrix} w_{1,t} \\ w_{2,t} \\ \vdots \\ w_{n,t} \end{bmatrix},$$

this can be put in matrix form as

$$\mathbf{w}_t = A \mathbf{w}_{t-1} + \mathbf{b}.$$

Here A is an n × n matrix in which the first row contains a1, ..., an and all other rows have a single 1 with all other elements being 0, and b is a column vector with first element b and with the rest of its elements being 0.

This matrix equation can be solved using the methods in the article Matrix difference equation. In the homogeneous case $y_t$ is a parapermanent of a lower triangular matrix.[6]
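The matrix described above is a companion matrix, and iterating it reproduces the scalar recurrence. A sketch for the homogeneous case ($b = 0$) with an illustrative third-order recurrence:

```python
def companion(a):
    """Companion matrix: first row a1..an, shifted identity below."""
    n = len(a)
    return [list(a)] + [[1 if c == r - 1 else 0 for c in range(n)]
                        for r in range(1, n)]

def mat_vec(A, v):
    return [sum(x * y for x, y in zip(row, v)) for row in A]

a = [6, -11, 6]                  # y_t = 6y_{t-1} - 11y_{t-2} + 6y_{t-3}, b = 0
y = [0, 1, 4]                    # illustrative initial conditions
w = [y[2], y[1], y[0]]           # w_t stacks (y_t, y_{t-1}, y_{t-2})

A = companion(a)
for t in range(3, 10):
    w = mat_vec(A, w)            # matrix step w_t = A w_{t-1}
    y.append(6 * y[t-1] - 11 * y[t-2] + 6 * y[t-3])
    assert w[0] == y[t]          # the first component reproduces y_t
```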

Solution using generating functions

The recurrence

$$y_t = a_1 y_{t-1} + a_2 y_{t-2} + \cdots + a_n y_{t-n}$$

can be solved using the theory of generating functions. First, we write $Y(x) = \sum_{t \geq 0} y_t x^t$. The recurrence is then equivalent to the following generating function equation:

$$Y(x) = a_1 x Y(x) + a_2 x^2 Y(x) + \cdots + a_n x^n Y(x) + p(x),$$

where $p(x)$ is a polynomial of degree at most $n - 1$ correcting the initial terms. From this equation we can solve to get

$$Y(x) = \frac{p(x)}{1 - a_1 x - a_2 x^2 - \cdots - a_n x^n}.$$

In other words, not worrying about the exact coefficients, $Y(x)$ can be expressed as a rational function

$$Y(x) = \frac{A(x)}{B(x)}.$$

The closed form can then be derived via partial fraction decomposition. Specifically, if the generating function is written as

$$Y(x) = \frac{A(x)}{B(x)} = \sum_i \frac{b_i(x)}{(1 - \lambda_i x)^{d_i}},$$

then the polynomial $p(x)$ determines the initial set of corrections, each denominator factor $(1 - \lambda_i x)^{d_i}$ determines the exponential term $\lambda_i^t$, and the degree $d_i$ together with the numerator $b_i(x)$ determine the polynomial coefficient multiplying $\lambda_i^t$.
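The expansion of $Y(x) = p(x) / (1 - a_1 x - \cdots - a_n x^n)$ back into series coefficients can be sketched by long division of power series. Fibonacci ($a_1 = a_2 = 1$, $y_0 = 0$, $y_1 = 1$, so $p(x) = x$) makes a convenient illustrative test case:

```python
from fractions import Fraction

p = [0, 1]                        # numerator p(x) = x
denom = [1, -1, -1]               # denominator 1 - a1*x - a2*x^2 with a1 = a2 = 1

# Long division of p(x) by denom(x) yields the series coefficients y_t.
N = 12
rem = p + [0] * (N - len(p))
y = []
for t in range(N):
    coeff = Fraction(rem[t], denom[0])
    y.append(coeff)
    for k in range(len(denom)):
        if t + k < N:
            rem[t + k] -= coeff * denom[k]

print([int(v) for v in y[:8]])  # [0, 1, 1, 2, 3, 5, 8, 13], the Fibonacci numbers
```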

Relation to solutions of differential equations

The method for solving linear differential equations is similar to the method above—the "intelligent guess" (ansatz) for linear differential equations with constant coefficients is $y = e^{\lambda x}$, where $\lambda$ is a complex number that is determined by substituting the guess into the differential equation.

This is not a coincidence. Considering the Taylor series of the solution to a linear differential equation:

$$f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!} (x - a)^n,$$

it can be seen that the coefficients of the series are given by the $n$-th derivative of $f(x)$ evaluated at the point $a$. The differential equation provides a linear difference equation relating these coefficients.

This equivalence can be used to quickly solve for the recurrence relationship for the coefficients in the power series solution of a linear differential equation.

The rule of thumb (for equations in which the polynomial multiplying the first term is non-zero at zero) is that:

$$y^{[k]} \to f[n + k]$$

and more generally

$$x^m \cdot y^{[k]} \to n (n-1) \cdots (n - m + 1) \, f[n + k - m],$$

where $y^{[k]}$ denotes the $k$-th derivative of $y$ and $f[n]$ denotes the $n$-th derivative of $f$ evaluated at $0$.

Example: The recurrence relationship for the Taylor series coefficients of the equation:

$$(x^2 + 3x - 4) y^{[3]} - (3x + 1) y^{[2]} + 2y = 0$$

is given by

$$n(n-1) f[n+1] + 3n f[n+2] - 4 f[n+3] - 3n f[n+1] - f[n+2] + 2 f[n] = 0$$

or

$$-4 f[n+3] + (3n - 1) f[n+2] + n(n - 4) f[n+1] + 2 f[n] = 0.$$

This example shows how problems generally solved using the power series solution method taught in normal differential equation classes can be solved in a much easier way.

Example: The differential equation

$$\frac{dy}{dx} = a y$$

has solution

$$y = e^{a x}.$$

The conversion of the differential equation to a difference equation of the Taylor coefficients is

$$a f[n] = f[n + 1].$$

It is easy to see that the $n$-th derivative of $e^{a x}$ evaluated at $0$ is $a^n$.
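The coefficient recurrence $f[n+1] = a f[n]$ can be checked directly: iterating it gives $f[n] = a^n$, and summing the resulting Taylor series recovers $e^{a x}$. The values of $a$ and $x$ below are illustrative:

```python
import math

a, x = 0.5, 1.3                  # illustrative values

f = [1.0]                        # f[0] = y(0) = 1
for n in range(30):
    f.append(a * f[n])           # the difference equation f[n+1] = a*f[n]

# f[n] equals a^n, the n-th derivative of e^{a x} at 0.
assert all(abs(f[n] - a ** n) < 1e-12 for n in range(31))

# Summing the Taylor series recovers e^{a x}.
series = sum(f[n] * x ** n / math.factorial(n) for n in range(31))
assert abs(series - math.exp(a * x)) < 1e-9
```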

Solving with z-transforms

Certain difference equations, in particular linear constant-coefficient difference equations, can be solved using z-transforms. The z-transforms are a class of integral transforms that lead to more convenient algebraic manipulations and more straightforward solutions. There are cases in which obtaining a direct solution would be all but impossible, yet solving the problem via a thoughtfully chosen integral transform is straightforward.

Stability

In the solution equation

$$x_t = c_1 \lambda_1^t + c_2 \lambda_2^t + \cdots + c_n \lambda_n^t,$$

a term with a real characteristic root converges to 0 as $t$ grows indefinitely large if the absolute value of the characteristic root is less than 1. If the absolute value equals 1, the term will stay constant as $t$ grows if the root is +1 but will fluctuate between two values if the root is −1. If the absolute value of the root is greater than 1 the term will become larger and larger over time. A pair of terms with complex conjugate characteristic roots will converge to 0 with dampening fluctuations if the absolute value of the modulus $M$ of the roots is less than 1; if the modulus equals 1 then constant amplitude fluctuations in the combined terms will persist; and if the modulus is greater than 1, the combined terms will show fluctuations of ever-increasing magnitude.

Thus the evolving variable $x$ will converge to 0 if all of the characteristic roots have magnitude less than 1.

If the largest root has absolute value 1, neither convergence to 0 nor divergence to infinity will occur. If all roots with magnitude 1 are real and positive, $x$ will converge to the sum of their constant terms $c_i$; unlike in the stable case, this converged value depends on the initial conditions; different starting points lead to different points in the long run. If any root is −1, its term will contribute permanent fluctuations between two values. If any of the unit-magnitude roots are complex then constant-amplitude fluctuations of $x$ will persist.

Finally, if any characteristic root has magnitude greater than 1, then x will diverge to infinity as time goes to infinity, or will fluctuate between increasingly large positive and negative values.
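The regimes for a single real root can be seen directly by iterating terms $c \lambda^t$; the values of $c$ and the roots $\lambda$ below are illustrative:

```python
c = 2.0
terms = {lam: [c * lam ** t for t in range(50)]
         for lam in (0.9, 1.0, -1.0, 1.1)}

assert abs(terms[0.9][-1]) < 0.05        # |λ| < 1: decays toward 0
assert all(v == c for v in terms[1.0])   # λ = +1: stays constant
assert set(terms[-1.0]) == {c, -c}       # λ = -1: flips between two values
assert abs(terms[1.1][-1]) > 100         # |λ| > 1: grows without bound
```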

A theorem of Issai Schur states that all roots have magnitude less than 1 (the stable case) if and only if a particular string of determinants are all positive.[2]: 247 

If a non-homogeneous linear difference equation has been converted to homogeneous form which has been analyzed as above, then the stability and cyclicality properties of the original non-homogeneous equation will be the same as those of the derived homogeneous form, with convergence in the stable case being to the steady-state value y* instead of to 0.


References
  1. ^ Chiang, Alpha (1984). Fundamental Methods of Mathematical Economics (Third ed.). New York: McGraw-Hill. ISBN 0-07-010813-7.
  2. ^ a b Baumol, William (1970). Economic Dynamics (Third ed.). New York: Macmillan. ISBN 0-02-306660-1.
  3. ^ Greene, Daniel H.; Knuth, Donald E. (1982), "2.1.1 Constant coefficients – A) Homogeneous equations", Mathematics for the Analysis of Algorithms (2nd ed.), Birkhäuser, p. 17.
  4. ^ Chiang, Alpha C. (1984). Fundamental Methods of Mathematical Economics (Third ed.). New York: McGraw-Hill. ISBN 0-07-010813-7.
  5. ^ Papanicolaou, Vassilis, "On the asymptotic stability of a class of linear difference equations," Mathematics Magazine 69(1), February 1996, 34–43.
  6. ^ Zatorsky, Roman; Goy, Taras (2016). "Parapermanent of triangular matrices and some general theorems on number sequences". Journal of Integer Sequences. 19: 16.2.2.