Constant Coefficient

where d is a constant coefficient, p≫1 is a positive integer (generally p≠n), and a dot denotes differentiation with respect to time, t.

From: Encyclopedia of Vibration , 2001

Ordinary Differential Equations

Ilpo Laine , in Handbook of Differential Equations: Ordinary Differential Equations, 2008

1.4.2 A non-linear case: a glimpse at the Briot–Bouquet theory

As an example of a non-linear case, we first consider

(1.4.29) $zw' = \lambda w + p_{10}z + \sum_{j,k=1}^{\infty} p_{jk}z^{j}w^{k}, \quad \lambda \in \mathbb{C},$

with constant coefficients. This is the simplest case of what are usually called Briot–Bouquet equations.

THEOREM 1.4.1

If $\lambda \in \mathbb{C} \setminus \mathbb{N}$, then (1.4.29) admits a local analytic solution in a neighborhood of $z = 0$. The solution satisfying the initial condition $w(0) = 0$ is unique.

PROOF

For a proof based on the Cauchy majorant method, see [74], pp. 55–56.
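The role of the condition $\lambda \notin \mathbb{N}$ can be seen already in a purely linear special case of (1.4.29), in which all higher-order coefficients vanish so that the equation reduces to $zw' = \lambda w + p_{10}z$ (this reduction is ours, for illustration). Writing $w = \sum_{j \geq 1} c_j z^j$ and matching powers of $z$ gives $c_1(1 - \lambda) = p_{10}$ and $c_j(j - \lambda) = 0$ for $j \geq 2$, so the unique solution with $w(0) = 0$ is $w(z) = p_{10}z/(1 - \lambda)$; it exists precisely because $\lambda$ is not the natural number 1. A quick SymPy check:

```python
import sympy as sp

z = sp.symbols('z')
lam, p10 = sp.Rational(1, 2), 3   # sample values with lam not a natural number

# In the reduced equation z*w' = lam*w + p10*z, comparing coefficients of z^j
# gives c_1*(1 - lam) = p10 and c_j*(j - lam) = 0 for j >= 2, hence:
w = p10 * z / (1 - lam)

residual = sp.simplify(z * sp.diff(w, z) - (lam * w + p10 * z))
print(residual)   # 0
```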

This theorem is a special case of more general systems of differential equations having a singular point of regular type at z  =   0. We may write such a system in the form

(1.4.30) $zy_j' = f_j(z, y_1, \ldots, y_n), \quad j = 1, \ldots, n,$

where the right-hand sides $f_j(z, y_1, \ldots, y_n) = f_j(z, y)$ are analytic around $(z, y) = (0, 0)$, and assume that the initial conditions $f_j(0, 0) = 0$ are satisfied for $j = 1, \ldots, n$. Systems of this type are called Briot–Bouquet systems. Of course, we may write (1.4.30) in vectorial form as

(1.4.31) $zY' = F(z, Y),$

where $Y = {}^t(y_1, \ldots, y_n) \in \mathbb{C}^n$ and $F(z, Y) = {}^t(f_1(z, Y), \ldots, f_n(z, Y))$. Denoting $K := (k_1, \ldots, k_n) \in (\mathbb{N} \cup \{0\})^n$, $|K| := k_1 + \cdots + k_n$, $A_{jK} := {}^t(a_{jK}^{(1)}, \ldots, a_{jK}^{(n)}) \in \mathbb{C}^n$, $\|Y\| := \max_j |y_j|$ and $Y^K := y_1^{k_1} \cdots y_n^{k_n}$, the power series expansion

(1.4.32) $F(z, Y) = \sum_{j + |K| \geq 1} A_{jK} z^j Y^K$

converges in some domain around the origin, say $|z| < r_0$, $\|Y\| < R_0$. Then we obtain the following

THEOREM 1.4.2

If none of the eigenvalues of the matrix

$A = \begin{pmatrix} \frac{\partial f_1}{\partial y_1}(0,0) & \cdots & \frac{\partial f_1}{\partial y_n}(0,0) \\ \vdots & & \vdots \\ \frac{\partial f_n}{\partial y_1}(0,0) & \cdots & \frac{\partial f_n}{\partial y_n}(0,0) \end{pmatrix}$

is a natural number, then Eq. (1.4.31) admits a unique solution of the form

(1.4.33) $Y(z) = \sum_{j=1}^{\infty} C_j z^j, \quad C_j \in \mathbb{C}^n,$

analytic in a neighborhood of the origin.

PROOF

See [97], pp. 261–263.

URL: https://www.sciencedirect.com/science/article/pii/S1874572508800089

Calculus

Seifedine Kadry , in Mathematical Formulas for Industrial and Mechanical Engineering, 2014

5.37 Second-Order Differential Equations

Homogeneous linear equation with constant coefficients: $y'' + by' + cy = 0$. The characteristic equation is $\lambda^2 + b\lambda + c = 0$.

If $\lambda_1 \neq \lambda_2$ (distinct real roots), then $y = c_1e^{\lambda_1 x} + c_2e^{\lambda_2 x}$.

If $\lambda_1 = \lambda_2$ (a repeated root), then $y = c_1e^{\lambda_1 x} + c_2xe^{\lambda_1 x}$.

If $\lambda_1 = \alpha + \beta i$ and $\lambda_2 = \alpha - \beta i$ (complex conjugate roots), then $y = e^{\alpha x}(c_1\cos\beta x + c_2\sin\beta x)$.
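The three cases above can be bundled into a small root-classification helper; the sketch below (the function name and output format are our own) branches on the discriminant of the characteristic equation using Python's cmath module:

```python
import cmath

def general_solution(b, c):
    """Describe the general solution of y'' + b*y' + c*y = 0
    from the roots of lambda^2 + b*lambda + c = 0."""
    disc = b * b - 4 * c
    l1 = (-b + cmath.sqrt(disc)) / 2
    l2 = (-b - cmath.sqrt(disc)) / 2
    if disc > 0:
        # distinct real roots
        return f"c1*exp({l1.real}*x) + c2*exp({l2.real}*x)"
    if disc == 0:
        # repeated root
        return f"(c1 + c2*x)*exp({l1.real}*x)"
    # complex conjugate roots alpha +/- beta*i
    alpha, beta = l1.real, abs(l1.imag)
    return f"exp({alpha}*x)*(c1*cos({beta}*x) + c2*sin({beta}*x))"

print(general_solution(-3, 2))   # distinct real roots 2 and 1
print(general_solution(2, 1))    # repeated root -1
print(general_solution(0, 4))    # complex conjugate roots +/- 2i
```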

URL: https://www.sciencedirect.com/science/article/pii/B9780124201316000051

Second- and higher-order equations

Henry J. Ricardo , in A Modern Introduction to Differential Equations (Third Edition), 2021

4.2.1 The structure of solutions

If we take the same RLC circuit that we considered at the beginning of the preceding section and hook up a generator supplying alternating current to it, Kirchhoff's Voltage Law will now take the form $L\frac{d^2I}{dt^2} + R\frac{dI}{dt} + \frac{1}{C}I = \frac{dE}{dt}$, where E is the applied nonconstant voltage. An equation of this kind is called a nonhomogeneous second-order linear equation with constant coefficients. (The nonzero right-hand side of such an equation is often called the forcing function or the input. The solution of the equation is the output. See Section 2.2.)

To get a handle on solving a nonhomogeneous linear equation with constant coefficients, let's think a bit about the difference between a nonhomogeneous equation

(4.2.1) $ay'' + by' + cy = f(t)$

and its associated homogeneous equation $ay'' + by' + cy = 0$. If y is the general solution of the homogeneous equation, then y doesn't quite "reach" all the way to $f(t)$ under the transformation $L(y) = ay'' + by' + cy$. It stops short at 0. Perhaps we could enhance the solution y in some way so that operating on this new function does give us all of f. We have to be able to capture the "leftover" term $f(t)$.

For nonhomogeneous second-order equations with constant coefficients, the proper form of the Superposition Principle is the following:

If $y_1$ is a solution of $ay'' + by' + cy = f_1(t)$ and $y_2$ is a solution of $ay'' + by' + cy = f_2(t)$, then $y = c_1y_1 + c_2y_2$ is a solution of $ay'' + by' + cy = c_1f_1(t) + c_2f_2(t)$ for any constants $c_1$ and $c_2$.

From the Superposition Principle follows a fundamental truth about the solutions of linear equations with constant coefficients:

The general solution, $y_{GNH}$, of a linear nonhomogeneous equation $ay'' + by' + cy = f(t)$ is obtained by finding a particular solution, $y_{PNH}$, of the nonhomogeneous equation and adding it to the general solution, $y_{GH}$, of the associated homogeneous equation: $y_{GNH} = y_{GH} + y_{PNH}$.

We can prove this easily using operator notation, in which $L(y) = ay'' + by' + cy$:

1.

First note that L ( y GH ) = 0 and L ( y PNH ) = f ( t ) by definition.

2.

Then if y = y GH + y PNH , we have L ( y ) = L ( y GH + y PNH ) = L ( y GH ) + L ( y PNH ) = 0 + f ( t ) = f ( t ) , so y is a solution of Eq. (4.2.1).

3.

Now we must show that every solution of Eq. (4.2.1) has the form $y = y_{GH} + y_{PNH}$. To do this, we assume that $y^*$ is an arbitrary solution of $L(y) = f(t)$ and $y_{PNH}$ is a particular solution of $L(y) = f(t)$. If we let $z = y^* - y_{PNH}$, then

$L(z) = L(y^* - y_{PNH}) = L(y^*) - L(y_{PNH}) = f(t) - f(t) = 0,$

which shows that z is a solution of the homogeneous equation $L(y) = 0$. Because $z = y^* - y_{PNH}$, it follows that $y^* = z + y_{PNH}$. Since $y^*$ is an arbitrary solution of the nonhomogeneous equation, the expression $z + y_{PNH}$ includes all solutions. (See Problem 23 of Exercises 1.3 and Problem 35 of Exercises 2.2 for related results.)

Let's go through a few simple examples to develop some intuition for the solutions of nonhomogeneous equations.

Example 4.2.1 Solving a Nonhomogeneous Equation

If we are given the nonhomogeneous equation $y'' + 4y' + 5y = 10e^{-2x}\cos x$, the general solution will be made up of the general solution of the associated homogeneous equation and a particular solution of the nonhomogeneous equation: $y_{GNH} = y_{GH} + y_{PNH}$. The characteristic equation $\lambda^2 + 4\lambda + 5 = 0$ has roots $-2 \pm i$, so we know that $y_{GH} = e^{-2x}(c_1\cos x + c_2\sin x)$. We can verify that a particular solution of the nonhomogeneous equation is $5xe^{-2x}\sin x$. Therefore, the general solution of the nonhomogeneous equation is $y = e^{-2x}(c_1\cos x + c_2\sin x) + 5xe^{-2x}\sin x$.

In the preceding example, a particular solution appeared magically. The next example hints at how we may find y PNH by examining the forcing function on the right-hand side of the equation. Sections 4.3 and 4.4 will provide systematic procedures for determining a particular solution of a nonhomogeneous equation.

Example 4.2.2 Solving a Nonhomogeneous Equation

Suppose we want to find the general solution of $y'' + 3y' + 2y = 12e^t$. Because the characteristic equation of the associated homogeneous equation is $\lambda^2 + 3\lambda + 2 = 0$, with roots $-1$ and $-2$, we know that the general solution of the homogeneous equation is $y_{GH} = c_1e^{-t} + c_2e^{-2t}$.

Now we look carefully at the form of the nonhomogeneous equation. In looking for a particular solution $y_{PNH}$, we can ignore any terms of the form $e^{-t}$ or $e^{-2t}$ because they are part of the homogeneous solution and won't contribute anything new. But somehow, after differentiations and additions, we have to wind up with the term $12e^t$. We guess that $y = ce^t$ for some undetermined constant c. Substituting this expression into the left-hand side of the nonhomogeneous equation, we get $(ce^t)'' + 3(ce^t)' + 2(ce^t) = 6ce^t$. If we choose $c = 2$, then $y_{PNH} = 2e^t$ is a particular solution of the nonhomogeneous equation.

Putting these two components together, we can write the general solution of the nonhomogeneous equation as $y_{GNH} = y_{GH} + y_{PNH} = c_1e^{-t} + c_2e^{-2t} + 2e^t$.

The intelligent guessing used in the preceding example can be formalized into the method of undetermined coefficients, which will be discussed in the next section. But, as we'll see, this method is effective only when the forcing function $f(t)$ in the equation $ay'' + by' + cy = f(t)$ is of a special type.
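The guess-and-match step of Example 4.2.2 can be reproduced symbolically; the sketch below assumes the equation reads $y'' + 3y' + 2y = 12e^t$ and uses SymPy to solve for the undetermined constant c:

```python
import sympy as sp

t, c = sp.symbols('t c')

# The guess from Example 4.2.2: y = c*exp(t).
y = c * sp.exp(t)
lhs = sp.expand(y.diff(t, 2) + 3 * y.diff(t) + 2 * y)   # left-hand side of the ODE
sol = sp.solve(sp.Eq(lhs, 12 * sp.exp(t)), c)           # match against the forcing term
print(lhs, sol)   # 6*c*exp(t) [2]
```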

Exercises 4.2

A

For each of the nonhomogeneous differential equations in Problems 1–5, verify that the given function y p is a particular solution.

1.

$y'' + 3y' + 4y = 3x + 2$; $y_p = \frac{3}{4}x - \frac{1}{16}$

2.

$y'' - 4y = 2e^{3x}$; $y_p = \frac{2}{5}e^{3x}$

3.

$3y'' + y' - 2y = 2\cos x$; $y_p = -\frac{5}{13}\cos x + \frac{1}{13}\sin x$

4.

$y'' + 5y' + 6y = x^2 + 2x$; $y_p = \frac{1}{6}x^2 + \frac{1}{18}x - \frac{11}{108}$

5.

$y'' + y = \sin x$; $y_p = -\frac{1}{2}x\cos x$

6.

If $x_1(t) = \frac{1}{2}e^t$ is a solution of $\ddot{x} + \dot{x} = e^t$ and $x_2(t) = -te^{-t}$ is a solution of $\ddot{x} + \dot{x} = e^{-t}$, find a particular solution of $\ddot{x} + \dot{x} = e^t + e^{-t}$ and verify that your solution is correct.

7.

Given that $y_p = x^2$ is a solution of $y'' + y' - 2y = 2(1 + x - x^2)$, use the Superposition Principle to find a particular solution of $y'' + y' - 2y = 6(1 + x - x^2)$ and verify that your solution is correct.

8.

If $y_1 = 1 + x$ is a solution of $y'' - y' + y = x$ and $y_2 = e^{2x}$ is a solution of $y'' - y' + y = 3e^{2x}$, find a particular solution of $y'' - y' + y = 2x + 4e^{2x}$. Verify that your solution is correct.

B

9.

Find the general solution of the equation given in Problem 1.

10.

Find the general solution of the equation given in Problem 2.

11.

Find the general solution of the equation given in Problem 3.

12.

Find the general solution of the equation given in Problem 4.

13.

Find the general solution of the equation given in Problem 5.

14.

Find the general solution of the equation $y'' + y' - 2y = 6(1 + x - x^2)$ given in Problem 7.

15.

Find the general solution of the equation $\ddot{x} + \dot{x} = e^t + e^{-t}$ given in Problem 6.

16.

Find the form of a particular solution of $y'' - y = x$ by intelligent guessing and use this information to solve the IVP $y'' - y = x$, $y(0) = y'(0) = 0$.

C

17.

Suppose x ( t ) satisfies the IVP

$\ddot{x} + \pi^2 x = f(t) = \begin{cases} \pi^2, & 0 \leq t \leq 1, \\ 0, & t > 1, \end{cases}$

with x ( 0 ) = 1 and x ˙ ( 0 ) = 0 . Determine the continuously differentiable solution for t 0 . (This means that the solution has a continuous derivative function. Note that it will have a discontinuous second derivative at t = 1 .)

URL: https://www.sciencedirect.com/science/article/pii/B9780128182178000117

Differential Equations

Robert G. Mortimer , in Mathematics for Physical Chemistry (Fourth Edition), 2013

Solution of the Equation of Motion

A homogeneous ordinary linear differential equation with constant coefficients can be solved as follows:

1.

Assume the trial solution

(12.15) z ( t ) = e λ t ,

where λ is a constant.
2.

Substitute the trial solution into the equation to produce an algebraic equation in $\lambda$ called the characteristic equation.

3.

Find the values of $\lambda$ that satisfy the characteristic equation. For an equation of order n, there will be n values of $\lambda$. Call these values $\lambda_1, \lambda_2, \ldots, \lambda_n$. These values produce n versions of the trial solution that satisfy the equation.

4.

Write the solution as a linear combination

(12.16) z ( t ) = c 1 e λ 1 t + c 2 e λ 2 t + + c n e λ n t .

Example 12.2

Find a solution to the differential equation

d 2 y d x 2 + d y d x - 2 y = 0 .

Substitution of the trial solution y = e λ x gives the equation

λ 2 e λ x + λ e λ x - 2 e λ x = 0 .

Division by e λ x gives the characteristic equation.

λ 2 + λ - 2 = 0 .

The solutions to this equation are

λ = 1 , λ = - 2 .

The solution to the differential equation is

y ( x ) = c 1 e x + c 2 e - 2 x ,

where $c_1$ and $c_2$ are constants.

The solution in the previous example is a family of solutions, one solution for each set of values for $c_1$ and $c_2$. A solution to a linear differential equation of order n that contains n arbitrary constants is known as a general solution, which is a family of functions that includes almost every solution to the differential equation. Our solution is a general solution, since it contains two arbitrary constants. A solution to a differential equation that contains no arbitrary constants is called a particular solution.
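The four-step recipe can be checked numerically on the equation of Example 12.2, $y'' + y' - 2y = 0$; the sketch below (our illustration, with arbitrarily chosen constants) computes the characteristic roots with NumPy and verifies that the resulting linear combination satisfies the equation:

```python
import numpy as np

# Step 3: roots of the characteristic equation lambda^2 + lambda - 2 = 0.
lambdas = np.roots([1, 1, -2])          # expect 1 and -2

# Step 4: y(x) = c1*exp(l1*x) + c2*exp(l2*x); check the residual on a grid.
c1, c2 = 0.7, -1.3                      # arbitrary constants of the family
x = np.linspace(0.0, 1.0, 50)
y = c1 * np.exp(lambdas[0] * x) + c2 * np.exp(lambdas[1] * x)
dy = c1 * lambdas[0] * np.exp(lambdas[0] * x) + c2 * lambdas[1] * np.exp(lambdas[1] * x)
d2y = c1 * lambdas[0] ** 2 * np.exp(lambdas[0] * x) + c2 * lambdas[1] ** 2 * np.exp(lambdas[1] * x)
print(np.max(np.abs(d2y + dy - 2 * y)))   # ~0: the combination satisfies the ODE
```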

Exercise 12.2

Find the general solution to the differential equation

d 2 y d x 2 - 3 d y d x + 2 y = 0 .

We frequently have additional information that will enable us to pick a particular solution out of a family of solutions. Such information consists of knowledge of boundary conditions and initial conditions. Boundary conditions arise from physical requirements on the solution, such as conditions that apply to the boundaries of the region in space where the solution applies. Initial conditions arise from knowledge of the state of the system at some initial time.

We now apply the method to the solution of the equation of motion of the harmonic oscillator. We substitute the trial solution into Eq. (12.12):

(12.17) d 2 e λ t d t 2 + k m e λ t = 0 ,

(12.18) λ 2 e λ t + k m e λ t = 0 .

Division by e λ t gives the characteristic equation

(12.19) λ 2 + k m = 0 .

The solution of the characteristic equation is

(12.20) $\lambda = \pm i\left(\frac{k}{m}\right)^{1/2},$

where i = - 1 , the imaginary unit.

The general solution is

(12.21) $z = z(t) = c_1\exp\!\left[+i\left(\tfrac{k}{m}\right)^{1/2}t\right] + c_2\exp\!\left[-i\left(\tfrac{k}{m}\right)^{1/2}t\right],$

where c 1 and c 2 are arbitrary constants. The solution must be real, because imaginary and complex numbers cannot represent physically measurable quantities. From a trigonometric identity

(12.22) e i ω t = cos ( ω t ) + i sin ( ω t ) ,

we can write

(12.23) $z = c_1[\cos(\omega t) + i\sin(\omega t)] + c_2[\cos(\omega t) - i\sin(\omega t)],$

where

(12.24) $\omega = \left(\frac{k}{m}\right)^{1/2}.$

We let c 1 + c 2 = b 1 and i ( c 1 - c 2 ) = b 2 .

(12.25) z = b 1 cos ( ω t ) + b 2 sin ( ω t ) .

Exercise 12.3

Show that the function of Eq. (12.25) satisfies Eq. (12.12).

To obtain a particular solution, we require some initial conditions. We require one initial condition to evaluate each arbitrary constant. Assume that we have the following conditions at $t = 0$:

(12.26a) z ( 0 ) = 0 ,

(12.27) v z ( 0 ) = v 0 ,

where v 0 is a constant. For our first initial conditions,

(12.28) $z(0) = 0 = b_1\cos(0) + b_2\sin(0) = b_1$, so $b_1 = 0$,

(12.29) z ( t ) = b 2 sin ( ω t ) .

The velocity is given by

(12.30) v z ( t ) = d z d t = b 2 ω cos ( ω t ) .

From our second initial condition

$v_z(0) = v_0 = b_2\omega\cos(0) = b_2\omega.$

This gives

(12.31) b 2 = v 0 ω .

Our particular solution is

(12.32) z ( t ) = v 0 ω sin ( ω t )

as depicted in Figure 12.1.

Figure 12.1. The position and velocity of a harmonic oscillator as functions of time.

The motion given by this solution is called uniform harmonic motion. It is periodic, repeating itself over and over. During one period, the argument of the sine changes by 2 π . We denote the period (the length of time required for one cycle of the motion) by τ :

(12.33) $2\pi = \omega\tau = \left(\frac{k}{m}\right)^{1/2}\tau,$

(12.34) $\tau = 2\pi\left(\frac{m}{k}\right)^{1/2}.$

The reciprocal of the period is called the frequency, denoted by $\nu$:

(12.35) $\nu = \frac{1}{2\pi}\sqrt{\frac{k}{m}} = \frac{\omega}{2\pi}.$

The frequency gives the number of oscillations per second. The circular frequency ω gives the rate of change of the argument of the sine or cosine function in radians per second.

Example 12.3

A mass of 0.100   kg is suspended from a spring with a spring constant k  =   1.50   N   m−1. Find the frequency and period of oscillation.

$\nu = \frac{1}{2\pi}\sqrt{\frac{k}{m}} = \frac{1}{2\pi}\sqrt{\frac{1.50\ \mathrm{N\,m^{-1}}}{0.100\ \mathrm{kg}}} = 0.616\ \mathrm{s}^{-1}, \qquad \tau = \frac{1}{\nu} = 1.62\ \mathrm{s}.$
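A quick numerical check of Example 12.3 (our illustration, using Python's math module):

```python
import math

# Numbers from Example 12.3: k = 1.50 N/m, m = 0.100 kg.
k, m = 1.50, 0.100
omega = math.sqrt(k / m)       # circular frequency (rad/s), Eq. (12.24)
nu = omega / (2 * math.pi)     # frequency, Eq. (12.35)
tau = 1 / nu                   # period, Eq. (12.34)
print(round(nu, 3), round(tau, 2))   # 0.616 1.62
```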

The solution of the equation of motion of the harmonic oscillator illustrates a general property of classical equations of motion. If the equation of motion and the initial conditions are known, the motion of the system is determined as a function of time. We say that classical equations of motion are deterministic.

URL: https://www.sciencedirect.com/science/article/pii/B9780124158092000124

Ordinary differential equations

Brent J. Lewis , ... Andrew A. Prudil , in Advanced Mathematics for Engineering Students, 2022

Method of undetermined coefficients

This method applies to the equation with constant coefficients

(2.62) $y'' + ay' + by = r(x),$

where $r(x)$ is of a special form as detailed in the following rules:

(a)

Basic rule. If r ( x ) in Eq. (2.62) is one of the functions in the first column of Table 2.4, choose the corresponding function for y p and determine its undetermined coefficients by substitution into Eq. (2.62).

Table 2.4. Choice for the y_p function.

Term in r(x)                               Choice for y_p
k e^{γx}                                   C e^{γx}
k x^n (n = 0, 1, ...)                      K_n x^n + K_{n-1} x^{n-1} + ... + K_1 x + K_0
k cos ωx or k sin ωx                       K cos ωx + M sin ωx
k e^{αx} cos ωx or k e^{αx} sin ωx         e^{αx}(K cos ωx + M sin ωx)
(b)

Modification rule. If the choice for y p is also a solution of the homogeneous equation corresponding to Eq. (2.62), then multiply the choice for y p by x (or x 2 if this solution corresponds to a double root of the characteristic equation of the homogeneous equation).

(c)

Sum rule. If r ( x ) is a sum of functions in several lines of Table 2.4, then choose for y p the corresponding sum of functions.

Example 2.2.12

(rule a) Solve $y'' + 5y = 25x^2$.

Solution. Using Table 2.4, $y_p = K_2x^2 + K_1x + K_0$, yielding $y_p'' = 2K_2$. Therefore, $2K_2 + 5(K_2x^2 + K_1x + K_0) = 25x^2$. Equating coefficients on both sides gives $5K_2 = 25$, $5K_1 = 0$, $2K_2 + 5K_0 = 0$, so that $K_2 = 5$, $K_1 = 0$, and $K_0 = -2$. Therefore, $y_p = 5x^2 - 2$. Thus, the general solution is given by $y = y_h + y_p = A\cos\sqrt{5}x + B\sin\sqrt{5}x + 5x^2 - 2$. [answer]

Example 2.2.13

(rule b) Solve $y'' - y = e^x$.

Solution. The characteristic equation $\lambda^2 - 1 = (\lambda - 1)(\lambda + 1) = 0$ has the roots 1 and $-1$, so that $y_h = c_1e^x + c_2e^{-x}$. Normally Table 2.4 suggests $y_p = Ce^x$ (which is already a solution of the homogeneous equation). Thus, rule (b) applies, where $y_p = Cxe^x$. As such, $y_p' = C(e^x + xe^x)$ and $y_p'' = C(2e^x + xe^x)$. Substituting these expressions into the differential equation gives $C(2e^x + xe^x) - Cxe^x = 2Ce^x = e^x$. Simplifying, one obtains $C = 1/2$. Thus, the general solution is given by $y = c_1e^x + c_2e^{-x} + \frac{1}{2}xe^x$. [answer]

Example 2.2.14

(rules b and c) Solve $y'' + 2y' + y = e^{-x} + x$, $y(0) = 0$, $y'(0) = 0$.

Solution. The characteristic equation $\lambda^2 + 2\lambda + 1 = (\lambda + 1)^2 = 0$ has a double root $\lambda = -1$. Hence, $y_h = (c_1 + c_2x)e^{-x}$ (see Section 2.2.2). For the x-term on the right-hand side, Table 2.4 suggests the choice $K_1x + K_0$. However, since $\lambda = -1$ is a double root, rule (b) suggests the choice $Cx^2e^{-x}$ (instead of $Ce^{-x}$), so that $y_p = K_1x + K_0 + Cx^2e^{-x}$. As such, $y_p'' + 2y_p' + y_p = 2Ce^{-x} + K_1x + (2K_1 + K_0) = e^{-x} + x$, which implies $C = \frac{1}{2}$, $K_1 = 1$, and $K_0 = -2$. The general solution is $y = y_h + y_p = (c_1 + c_2x)e^{-x} + \frac{1}{2}x^2e^{-x} + x - 2$.

For the initial conditions, $y' = (-c_1 + c_2 - c_2x)e^{-x} + (x - \frac{1}{2}x^2)e^{-x} + 1$.

Therefore, $y(0) = c_1 - 2 = 0$ (so that $c_1 = 2$) and $y'(0) = -c_1 + c_2 + 1 = 0$ (so that $c_2 = 1$). Thus, $y = (2 + x)e^{-x} + \frac{1}{2}x^2e^{-x} + x - 2$. [answer]
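The final solution of Example 2.2.14 can be verified symbolically with SymPy; the sketch below assumes the equation reads $y'' + 2y' + y = e^{-x} + x$ with $y(0) = y'(0) = 0$ and the solution carries the signs as reconstructed here:

```python
import sympy as sp

x = sp.symbols('x')

# Candidate solution (signs as reconstructed): y = (2 + x)e^{-x} + (1/2)x^2 e^{-x} + x - 2.
y = (2 + x) * sp.exp(-x) + sp.Rational(1, 2) * x ** 2 * sp.exp(-x) + x - 2

# Check the ODE residual and both initial conditions.
residual = sp.simplify(y.diff(x, 2) + 2 * y.diff(x) + y - (sp.exp(-x) + x))
print(residual, y.subs(x, 0), y.diff(x).subs(x, 0))   # 0 0 0
```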

URL: https://www.sciencedirect.com/science/article/pii/B9780128236819000101

Differential — Difference Equations

KENNETH L. COOKE , in International Symposium on Nonlinear Differential Equations and Nonlinear Mechanics, 1963

4 Exponential Solutions

One method of obtaining solutions of equations with constant coefficients is the method of exponential solutions. For example, the function u(t) = exp (st) is a solution of the equation

(4.1) $u'(t) = cu(t - 1),$

provided s is a root of the characteristic equation

(4.2) $s = ce^{-s}.$

By combining such solutions, one can obtain more general solutions in series form.

In general the transcendental characteristic equations which are encountered have infinitely many roots, and there are questions of convergence of series of solutions. Also, it becomes somewhat difficult to determine the coefficients entering into the series from the initial function g(t). One of the best ways of handling this problem is by means of the Laplace transform, as we shall explain below.
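The real root and one of the complex roots of a characteristic equation of the form (4.2) can be located numerically; the sketch below (an illustration, not from the text) applies Newton's method to $f(s) = s - ce^{-s}$ for $c = 1$:

```python
import cmath

# Newton's method for roots of f(s) = s - c*exp(-s), i.e. the characteristic
# equation (4.2) rewritten as f(s) = 0. (Illustrative helper, not from the text.)
def char_root(c, s0, steps=60):
    s = s0
    for _ in range(steps):
        f = s - c * cmath.exp(-s)
        fp = 1 + c * cmath.exp(-s)   # f'(s)
        s = s - f / fp
    return s

c = 1.0
s_real = char_root(c, 0.5)           # the single real root, s ~ 0.567
s_cplx = char_root(c, -1.5 + 4.4j)   # one of the infinitely many complex roots
print(abs(s_real - c * cmath.exp(-s_real)) < 1e-10,
      abs(s_cplx - c * cmath.exp(-s_cplx)) < 1e-10)   # True True
```

Starting points near other branches yield further complex roots, consistent with the remark that the transcendental characteristic equation has infinitely many of them.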

URL: https://www.sciencedirect.com/science/article/pii/B9780123956514500222

FORMS OF DIMENSIONLESS RELATIONS

In Applied Dimensional Analysis and Modeling (Second Edition), 2007

The Heuristic Reasoning Method.

Very often the constant exponents, and rarely even the constant coefficient (Example 13-16), of a monomial can be found merely by heuristic reasoning. This way, at most we only have to determine the single constant coefficient, an activity which requires only one measurement, regardless of the number of dimensionless variables in the relation. It must be mentioned, though, that this reasoning does sometimes call for rather high levels of sophistication, imagination, an inquisitive if not iconoclastic mind, and perhaps some knowledge of the relevant physical laws.

The following examples demonstrate this process and its beneficial and powerful results.

URL: https://www.sciencedirect.com/science/article/pii/B9780123706201500198

Functional Equations in Applied Sciences

In Mathematics in Science and Engineering, 2005

3.8.1 Solution of the homogeneous equation

In order to find the general solution of the constant-coefficient homogeneous equation (Eq. (3.30) with h(x) = 0), we solve its associated characteristic equation

(3.31) $\sum_{k=0}^{n} a_k z^{n-k} = 0,$

and we distinguish the following four cases:

Single real roots:

If $z_1$ is a single real root of (3.31), then its additive contribution to the general solution of (3.30) is $C_1z_1^x$.

Multiple real roots:

If $z_1$ is a multiple root with multiplicity index p, then its additive contribution to the general solution is $(C_1x^{p-1} + C_2x^{p-2} + \cdots + C_p)z_1^x$.

Pairs of conjugate complex roots:

If $z_1 = \rho(\cos\alpha + i\sin\alpha)$ is a single complex root of (3.31), then the additive contribution of the pair of $z_1$ and its conjugate to the general solution of (3.30) is $\rho^x[A\cos(\alpha x) + B\sin(\alpha x)]$.

Pairs of multiple conjugate roots:

If z 1 = ρ(cos α + i sin α) is a multiple complex root of (3.31) with multiplicity index p, then the additive contribution of the pair of z 1 and its conjugate to the general solution of (3.30) is

(3.32) $\rho^x\left[(A_1x^{p-1} + A_2x^{p-2} + \cdots + A_p)\cos(\alpha x) + (B_1x^{p-1} + B_2x^{p-2} + \cdots + B_p)\sin(\alpha x)\right].$
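The single-real-roots case can be illustrated on a familiar difference equation (our example, not from the text): $y(x+2) - y(x+1) - y(x) = 0$ has characteristic equation $z^2 - z - 1 = 0$ with two single real roots, and fitting $C_1, C_2$ to the initial values of the Fibonacci sequence recovers its closed form:

```python
import math

# Characteristic roots of z^2 - z - 1 = 0 (the golden ratio and its conjugate).
z1 = (1 + math.sqrt(5)) / 2
z2 = (1 - math.sqrt(5)) / 2

# General solution C1*z1^x + C2*z2^x, with C1, C2 fitted to y(0) = 0, y(1) = 1.
C1, C2 = 1 / math.sqrt(5), -1 / math.sqrt(5)
y = lambda x: C1 * z1 ** x + C2 * z2 ** x
print([round(y(x)) for x in range(8)])   # [0, 1, 1, 2, 3, 5, 8, 13]
```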

URL: https://www.sciencedirect.com/science/article/pii/S0076539205800062

Differential Equations

S.M. Blinder , in Guide to Essential Math (Second Edition), 2013

8.4 Second-Order Differential Equations

We will consider here linear second-order equations with constant coefficients, in which the functions p ( x ) and q ( x ) in Eq. (8.2) are constants. The more general case gives rise to special functions, several of which we will encounter later as solutions of partial differential equations. The homogeneous equation, with f ( x ) = 0 , can be written

(8.55) d 2 y dx 2 + a 1 dy dx + a 2 y = 0 .

It is convenient to define the differential operator

(8.56) D d dx

in terms of which

(8.57) Dy = dy dx and D 2 y = d 2 y dx 2 .

The differential equation (8.55) is then written

(8.58) D 2 y + a 1 Dy + a 2 y = 0 ,

or, in factored form,

(8.59) ( D - r 1 ) ( D - r 2 ) y = ( D - r 2 ) ( D - r 1 ) y = 0 ,

where $r_1, r_2$ are the roots of the auxiliary equation

(8.60) r 2 + a 1 r + a 2 = 0 .

The solutions of the two first-order equations

(8.61) $(D - r_1)y = 0 \quad\text{or}\quad \frac{dy}{dx} - r_1y = 0$

give

(8.62) y = const e r 1 x ,

while

(8.63) $(D - r_2)y = 0 \quad\text{or}\quad \frac{dy}{dx} - r_2y = 0$

gives

(8.64) y = const e r 2 x .

Clearly, these are also solutions to Eq. (8.59). The general solution is the linear combination

(8.65) y ( x ) = c 1 e r 1 x + c 2 e r 2 x .

In the case that r 1 = r 2 = r , one solution is apparently lost. We can recover a second solution by considering the limit:

(8.66) $\lim_{r_2 \to r_1} \frac{e^{r_2x} - e^{r_1x}}{r_2 - r_1} = \frac{\partial}{\partial r}e^{rx} = xe^{rx}.$

(Remember that the partial derivative / r does the same thing as d / dr , with every other variable held constant.) Thus the general solution for this case becomes

(8.67) y ( x ) = c 1 e rx + c 2 x e rx .

When r 1 and r 2 are imaginary numbers, say ik and - ik , the solution (8.65) contains complex exponentials. Since, by Euler's theorem, these can be expressed as sums and difference of sine and cosine, we can write

(8.68) y ( x ) = c 1 cos kx + c 2 sin kx .

Many applications in physics, chemistry, and engineering involve a simple differential equation, either

(8.69) $y''(x) + k^2y(x) = 0 \quad\text{or}\quad y''(x) - k^2y(x) = 0.$

The first equation has trigonometric solutions $\cos kx$ and $\sin kx$, while the second has exponential solutions $e^{kx}$ and $e^{-kx}$. These results can be easily verified by "reverse engineering." For example, assuming that $y(x) = \cos kx$, then $y'(x) = -k\sin kx$ and $y''(x) = -k^2\cos kx$. It follows that $y''(x) + k^2y(x) = 0$.
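The "reverse engineering" check can be carried out symbolically for all four solutions; a short SymPy sketch (ours):

```python
import sympy as sp

x, k = sp.symbols('x k')

# Trigonometric solutions of y'' + k^2*y = 0:
for y in (sp.cos(k * x), sp.sin(k * x)):
    assert sp.simplify(y.diff(x, 2) + k ** 2 * y) == 0

# Exponential solutions of y'' - k^2*y = 0:
for y in (sp.exp(k * x), sp.exp(-k * x)):
    assert sp.simplify(y.diff(x, 2) - k ** 2 * y) == 0

print("all four solutions verified")
```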

URL: https://www.sciencedirect.com/science/article/pii/B9780124071636000084

Linear Filters—General Properties with Applications to Continuous-Time Processes

Lambert H. Koopmans , in The Spectral Analysis of Time Series, 1995

4.4 INVERTING LINEAR FILTERS

A commonly occurring question in time series analysis is the following: A linear filter L with transfer function B(λ) has been applied to a time series X(t), to which it was matched, resulting in an output Y(t),

(4.25) L ( X ( t ) ) = Y ( t )

When is it possible to apply another linear transformation, say L *, to Y(t) and exactly reproduce X(t)? This question has a simple answer with important consequences. Again we restrict attention to weakly stationary time series.

Note that if

$B(\lambda) \neq 0 \quad \text{for all } \lambda,$

then, taking L * to be the filter with transfer function B *(λ) = 1/B(λ), it follows that

$\int |B(\lambda)|^2\, F_X(d\lambda) < \infty,$

and

$\int |B^*(\lambda)B(\lambda)|^2\, F_X(d\lambda) = \int F_X(d\lambda) < \infty.$

Consequently, the filter L * L is well defined and is identical to the do-nothing filter I which has transfer function identically equal to 1. By (4.21) it follows that L * is matched to Y(t) and

X ( t ) = L * L ( X ( t ) ) = L * ( Y ( t ) )

The spectrum of X(t) can be recovered from that of Y(t) by the expression

(4.26) F X ( d λ ) = ( 1 / | B ( λ ) | 2 ) F Y ( d λ )

Thus, the invertibility of L follows immediately from the condition B(λ) ≠ 0 for all λ.

If B(λ) = 0 on a set of frequencies A, then, necessarily, the output spectrum will have zero power on A, since

$F_Y(A) = \int_A |B(\lambda)|^2\, F_X(d\lambda).$

Thus, any power the X(t) process might have had for frequencies in A will be lost. Once lost, this power can never be recovered by an inverting filter, since a linear filter cannot create power or transfer it from another frequency range. However, if L * is taken to be the linear filter with transfer function

$B^*(\lambda) = \begin{cases} 1/B(\lambda), & \lambda \notin A, \\ 0, & \lambda \in A, \end{cases}$

say, then L *(Y(t)) will be well defined and the spectrum of L *(Y(t)) will agree with that of X(t) on A c.

There is an important application of these results to the statistical estimation of spectra. As mentioned before, it is often necessary to prefilter a time series before estimating the spectrum. The prefiltering can be carried out by a (compound) linear filter L and the transfer function B(λ) of L can be computed by the techniques described above. The spectrum of interest is that of the input X(t). However, estimates will actually be obtained for the output spectrum. If fX (λ) and fY (λ) denote the spectral density functions of input and output, and f ˆ X ( λ ) and f ˆ Y ( λ ) denote statistical estimates of these parameters, then it is reasonable to take

(4.27) f ˆ X ( λ ) = ( 1 / | B ( λ ) | 2 ) f ˆ Y ( λ )

That is, relation (4.26) is used to define a procedure for correcting the estimates for filter bias. This method is quite successful except at frequencies for which |B(λ)|2 is near zero. For such frequencies very small variations in f ˆ Y ( λ ) , due, for example, to noise or the inherent variability of the estimation process, are greatly magnified leading to essentially useless estimates.
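In code, the correction (4.27) is a pointwise division of the estimated output spectrum by the squared gain of the prefilter; the sketch below (the filter and the synthetic estimate are illustrative assumptions, not from the text) recovers a flat input spectrum from a prefiltered output estimate:

```python
import numpy as np

# Frequency grid and the transfer function of a hypothetical prefilter.
lam = np.linspace(-np.pi, np.pi, 513)
B = 1 - 0.9 * np.exp(-1j * lam)            # |B| stays well away from zero here
f_hat_Y = np.abs(B) ** 2 / (2 * np.pi)     # synthetic output estimate: prefiltered flat spectrum

# Correction (4.27) for filter bias: divide by the squared gain |B(lambda)|^2.
f_hat_X = f_hat_Y / np.abs(B) ** 2
print(np.allclose(f_hat_X, 1 / (2 * np.pi)))   # True: flat input spectrum recovered
```

Where $|B(\lambda)|^2$ approaches zero, the same division magnifies noise in the estimate, which is exactly the failure mode described above.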

A second question, similar to the one stated above, but not to be confused with it, can be posed as the following problem: Given a linear filter L with transfer function B(λ) and a time series X(t), find a linear filter L * which matches X(t) such that the time series

(4.28) Y ( t ) = L * ( X ( t ) )

satisfies the equation

(4.29) X ( t ) = L ( Y ( t ) )

This is a stochastic functional equation which is to be solved for Y(t). There can be a variety of solutions or no solution depending on the interval of time for which the equation is to be satisfied, the properties of L and of X(t). We will be concerned here only with weakly stationary (steady state) solutions which are defined on the interval (− ∞, ∞). We now indicate the conditions under which (4.28) will provide such a solution to the equation.

Note that there is only one function B *(λ) which can qualify as the transfer function of L * and produce the result

L ( Y ( t ) ) = L ( L * ( X ( t ) ) ) = X ( t )

This function is

B * ( λ ) = 1 / B ( λ )

Thus, the key condition for the existence of the solution to (4.29) is the matching condition for L* and X(t);

(4.30) $\int \left(1/|B(\lambda)|^2\right) F_X(d\lambda) < \infty.$

For integrals, we assume that the convention 0/0 = 0 holds. Thus, this condition can be satisfied even when B(λ) = 0 on a set A provided it is also true that FX (A) = 0. If this condition is satisfied, the solution to (4.29) is given by

(4.31) $Y(t) = \int \left(1/B(\lambda)\right) e^{i\lambda t}\, Z_X(d\lambda).$

To see this, suppose $A = \{\lambda : B(\lambda) = 0\}$. Then, since we have assumed that $F_X(A) = 0$, we will have $E|Z_X(d\lambda)|^2 = F_X(d\lambda) = 0$, thus $Z_X(d\lambda) = 0$ for all $\lambda \in A$. It follows that

$Y(t) = \int_{A^c} \left(1/B(\lambda)\right) e^{i\lambda t}\, Z_X(d\lambda).$

Then, as before, L matches Y(t) and

$L(Y(t)) = \int_{A^c} B(\lambda)\left(1/B(\lambda)\right) e^{i\lambda t}\, Z_X(d\lambda) = \int_{A^c} e^{i\lambda t}\, Z_X(d\lambda) = \int e^{i\lambda t}\, Z_X(d\lambda) = X(t).$

If X(t) has continuous spectrum and the spectral density function fX (λ) is bounded, then (4.30) will be satisfied if the following condition holds;

$\int \left(1/|B(\lambda)|^2\right) d\lambda < \infty.$

The solution can then be put in the form of a convolution integral, since 1/B(λ) has a square integrable Fourier transform h(u);

$\frac{1}{B(\lambda)} = \int h(u) e^{-i\lambda u}\, du.$

It follows that

(4.32) $Y(t) = \int \left(\int h(u) e^{-i\lambda u}\, du\right) e^{i\lambda t}\, Z_X(d\lambda) = \int h(u) \left(\int e^{i\lambda(t-u)}\, Z_X(d\lambda)\right) du = \int h(u) X(t - u)\, du.$

This yields an explicit time domain representation for the solution.

If Y(t) is to depend only on the present and past of the X(t) process, i.e., if the filter is to be realizable, additional conditions will have to be imposed on L. We will see an example of this shortly. Solution (4.31) is unique for every forcing function X(t) satisfying (4.30) if B(λ) ≠ 0 for all λ.

If B(λ) is zero on a set A of positive measure, then many solutions (no longer of the form (4.28)) can exist. For example, suppose that X(t) has a continuous spectrum with spectral density fX (λ) and let U(t) be any weakly stationary process with continuous spectrum for which the spectral density satisfies

f U ( λ ) = 0 on A c

By the Schwarz inequality, $|EZ_X(d\lambda)\overline{Z_U(d\lambda)}|^2 \leq f_X(\lambda)f_U(\lambda)(d\lambda)^2 = 0$, since we must have $f_X(\lambda) = 0$ on A in order to satisfy (4.30). Thus, U(t) and X(t) are uncorrelated processes. Let

$Y(t) = \int_{A^c} \left(1/B(\lambda)\right) e^{i\lambda t}\, Z_X(d\lambda) + U(t).$

This process has spectral density

f Y ( λ ) = ( 1 / | B ( λ ) | 2 ) f X ( λ ) + f U ( λ )

and, since |B(λ)|2 fU (λ) = 0, we obtain

| B ( λ ) | 2 f Y ( λ ) = f X ( λ )

It follows that Y(t) matches L. Moreover, since B(λ)ZU () = 0 for all λ,

$L(Y(t)) = \int_{A^c} e^{i\lambda t}\, Z_X(d\lambda) + \int B(\lambda) e^{i\lambda t}\, Z_U(d\lambda) = \int e^{i\lambda t}\, Z_X(d\lambda) = X(t).$

Thus, Y(t) is a weakly stationary solution of (4.29). Needless to say, the important situation is the one in which the solution is unique and can be expressed as a convolution integral. We give an illustration of the application of these results in an important special case.

Example 4.5

Linear Constant Coefficient Differential Equations

In the notation of Example 4.4, an nth order, linear, constant coefficient differential equation with forcing function X(t) can be written in the form

(4.33) a_n D^n(Y(t)) + a_{n−1} D^{n−1}(Y(t)) + ⋯ + a_1 D(Y(t)) + a_0 Y(t) = X(t)

We assume that X(t) has continuous spectrum with bounded, nonzero spectral density function. In particular, X(t) could be taken to be the continuous-time white noise process ε(t). The condition under which a unique weakly stationary solution Y(t) exists is extremely simple and pleasant, as we now show.

The characteristic equation of (4.33) is

P(z) = 0,

where P(z) is the characteristic polynomial defined in Example 4.4. A unique solution of the differential equation exists if none of the roots of the characteristic equation lie on the imaginary axis (i.e., have zero real parts). The solution can always be expressed as a convolution filter operating on X(t),

Y(t) = ∫_{−∞}^{∞} h(u) X(t−u) du,

and the filter is always stable. Moreover, if all of the roots of the characteristic equation have negative real parts, the solution only depends on the past and present of the X(t) process, i.e., the convolution filter is realizable.

The first two statements follow immediately from the fact that if P(z) ≠ 0 for z on the imaginary axis, then the transfer function of the differential operator, B(λ) = P(iλ), is bounded away from zero for all λ. Thus, 1/|B(λ)|² is bounded from above and goes to zero as |λ| → ∞ at least as fast as 1/λ². It follows that

∫_{−∞}^{∞} (1/|B(λ)|²) dλ < ∞,

and (4.31) and (4.32) apply. Note that if the real part of any root of P(z) = 0 is zero, then (4.30) cannot hold, since we have assumed f_X(λ) to be nonzero. Thus, the condition that the real parts of all roots be nonzero is necessary as well as sufficient for the existence of a solution of the differential equation.
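The root criterion is easy to apply in practice. A sketch (the helper `classify` and its example coefficients are ours, not the text's), using numpy to find the characteristic roots:

```python
import numpy as np

# Hypothetical helper (not from the text): classify the equation
# a_n D^n(Y) + ... + a_0 Y = X by the roots of its characteristic polynomial.
def classify(coeffs):
    """coeffs = [a_n, ..., a_1, a_0], highest power first."""
    roots = np.roots(coeffs)
    if np.any(np.isclose(roots.real, 0.0)):
        return "no weakly stationary solution: root on the imaginary axis"
    if np.all(roots.real < 0):
        return "unique solution; convolution filter is realizable"
    return "unique solution; filter depends on future values of X(t)"

print(classify([1.0, 3.0, 2.0]))   # z^2 + 3z + 2 = (z + 1)(z + 2)
print(classify([1.0, 0.0, 1.0]))   # z^2 + 1: roots on the imaginary axis
```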

The argument demonstrating the realizability and stability of the solution will be sketched for the second-order equation (n = 2). This derivation contains all of the salient features of the general case.

By dividing both sides of the differential equation by the coefficient of D 2(Y(t)), we obtain an (equivalent) equation of the form

D²(Y(t)) + a D(Y(t)) + b Y(t) = X(t)

with characteristic equation

P(z) = z² + az + b = 0

Let z_j = α_j + iβ_j, j = 1, 2, denote the two roots of this equation. The characteristic polynomial can also be written in factored form as P(z) = (z − z_1)(z − z_2).

Suppose, first, that z_1 ≠ z_2. Then, by a partial fraction expansion,

1/P(z) = 1/((z − z_1)(z − z_2)) = A_1/(z − z_1) + A_2/(z − z_2)

The constants are easily computed by putting both sides of this equation over a common denominator and equating the coefficients of like powers of z. In this case it can be shown that A_2 = −A_1. Then,

1/B(λ) = 1/P(iλ) = A_1/(i(λ − β_1) − α_1) + A_2/(i(λ − β_2) − α_2)

This is the transfer function B*(λ) of L*.
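The partial-fraction constants can be verified numerically. A sketch (the polynomial z² + 3z + 2 and the test points are our own choices, not the text's):

```python
import numpy as np

# Sketch (assumed example): partial-fraction constants for 1/P(z)
# with P(z) = z^2 + 3z + 2 = (z + 1)(z + 2), distinct roots.
z1, z2 = np.roots([1.0, 3.0, 2.0])
A1 = 1.0 / (z1 - z2)        # residue of 1/P at z = z1
A2 = 1.0 / (z2 - z1)        # = -A1, as claimed in the text
assert np.isclose(A2, -A1)

# Spot-check A1/(z - z1) + A2/(z - z2) = 1/((z - z1)(z - z2)).
for z in (1.0 + 0.5j, -0.3j, 2.0):
    lhs = 1.0 / ((z - z1) * (z - z2))
    rhs = A1 / (z - z1) + A2 / (z - z2)
    assert np.isclose(lhs, rhs)
```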

Let

B_j(λ) = 1/(i(λ − β_j) − α_j), j = 1, 2

Then, the impulse response function h(u) of L * will be

h(u) = A_1 h_1(u) + A_2 h_2(u),

where

B_j(λ) = ∫_{−∞}^{∞} h_j(u) e^{−iλu} du

Now, a simple integration establishes that

(4.34) h_j(u) = e^{(α_j + iβ_j)u} for u > 0 and 0 for u ≤ 0, if α_j < 0;  h_j(u) = −e^{(α_j + iβ_j)u} for u < 0 and 0 for u ≥ 0, if α_j > 0

It is immediate from this and the expression for h(u) that the filter will be realizable, i.e., h(u) = 0 for u ≤ 0, if and only if the real parts α_1 and α_2 are negative. If one root has positive real part and the other negative real part (which can only happen if both roots are real), h(u) will be nonzero for all u ≠ 0 and the convolution filter will depend on both the future and the past of the process. If both real parts are positive, the filter will depend only on the future.
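The Fourier pair in (4.34) for α_j < 0 can be checked by direct numerical integration; this sketch uses arbitrary parameter values of our own:

```python
import numpy as np

# Numerical check (assumed parameters) of the Fourier pair in (4.34),
# alpha < 0 case:
#   integral_0^inf e^{(alpha + i*beta)u} e^{-i*lam*u} du
#     = 1 / (i*(lam - beta) - alpha).
alpha, beta = -1.5, 2.0
du = 1e-4
u = np.arange(0.0, 40.0, du)
hj = np.exp((alpha + 1j * beta) * u)        # h_j(u) on u > 0

for lam in (-1.0, 0.0, 3.0):
    Bj_num = np.sum(hj * np.exp(-1j * lam * u)) * du   # Riemann sum
    Bj = 1.0 / (1j * (lam - beta) - alpha)
    assert abs(Bj_num - Bj) < 1e-3
```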

Note that although h_1(u) and h_2(u) are complex-valued, h(u) will always be real-valued, since A_2 = −A_1 and complex roots appear in conjugate pairs for algebraic equations with real coefficients. Moreover, since h_j(u) in (4.34) is nonzero only when α_j u < 0, the exponential factors always decay, and we always have

∫_{−∞}^{∞} |h(u)| du < ∞

Thus, in all cases, the convolution filter is stable.
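Both properties, real-valuedness and stability, can be seen concretely for a conjugate pair of roots. In this sketch (the polynomial z² + 2z + 5 is our own example) the combination A_1 h_1 + A_2 h_2 collapses to the real function (1/2) e^{−u} sin(2u) on u > 0:

```python
import numpy as np

# Sketch (assumed example): P(z) = z^2 + 2z + 5 has roots -1 +/- 2i,
# so both real parts are negative and h is causal by (4.34).
z1, z2 = np.roots([1.0, 2.0, 5.0])
A1, A2 = 1.0 / (z1 - z2), 1.0 / (z2 - z1)

du = 1e-3
u = np.arange(0.0, 20.0, du)
h = A1 * np.exp(z1 * u) + A2 * np.exp(z2 * u)   # h = A1 h1 + A2 h2 on u > 0

assert np.max(np.abs(h.imag)) < 1e-9            # real-valued, as claimed
assert np.allclose(h.real, 0.5 * np.exp(-u) * np.sin(2.0 * u), atol=1e-9)
assert np.sum(np.abs(h)) * du < 0.5             # integrable: filter stable
```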

In the case of a repeated root (z_1 = z_2), necessarily β_1 = β_2 = 0. Then, B*(λ) is simply

1/B(λ) = 1/(iλ − α_1)²

This is the transfer function of two applications of the convolution filter with impulse response function (4.34). Since a composition of convolution filters is realizable when the component filters are, the condition α_1 < 0 is again necessary and sufficient for realizability. The filter is also stable, since the impulse response function (4.34) is integrable and the convolution of two integrable functions is again integrable.

For a general, nth order differential equation, the partial fraction expansions are longer and may contain higher powers of the terms B_j(λ), but otherwise the argument is the same. The impulse response function of the convolution filter will always be composed of linear combinations of the functions h_j(u) given by (4.34) and higher-order convolutions h_j^{(k)}(u) as defined in the last section. Thus, it is possible to obtain an explicit time-domain solution to any linear, constant coefficient differential equation excited by a weakly stationary forcing function which satisfies the conditions given above.

If the forcing function is a continuous-time white noise process and the zeros of P(z) have negative real parts, the resulting weakly stationary process is called a continuous-time autoregression for reasons that will become clear in Chapter 7. These processes depend on only a finite number of parameters and have been the subject of a number of interesting applications of spline functions to time series. An expository paper which discusses these applications and provides a good bibliography is by Davis (1972).
