In mathematics, a Carleman matrix is a matrix used to convert function composition into matrix multiplication. It is often used in iteration theory to find the continuous iteration of functions which cannot be iterated by pattern recognition alone. Other uses of Carleman matrices occur in the theory of probability generating functions, and Markov chains.

Definition


The Carleman matrix of an infinitely differentiable function f(x) is defined as:

M[f]_{jk} = \frac{1}{k!} \left[ \frac{d^k}{dx^k} \bigl(f(x)\bigr)^j \right]_{x=0},

so as to satisfy the (Taylor series) equation:

\bigl(f(x)\bigr)^j = \sum_{k=0}^{\infty} M[f]_{jk} x^k.

For instance, the computation of f(x) by

f(x) = \sum_{k=0}^{\infty} M[f]_{1,k} x^k

simply amounts to the dot product of row 1 of M[f] with the column vector \left[1, x, x^2, x^3, \ldots\right]^{T}.

The entries of M[f] in the next row give the 2nd power of f(x):

\bigl(f(x)\bigr)^2 = \sum_{k=0}^{\infty} M[f]_{2,k} x^k,

and also, in order to have the zeroth power of f(x) in M[f], we adopt row 0, which contains zeros everywhere except in the first position, such that

\bigl(f(x)\bigr)^0 = 1 = \sum_{k=0}^{\infty} M[f]_{0,k} x^k = 1 + \sum_{k=1}^{\infty} 0 \cdot x^k.

Thus, the dot product of M[f] with the column vector \left[1, x, x^2, \ldots\right]^{T} yields the column vector \left[1, f(x), \bigl(f(x)\bigr)^2, \ldots\right]^{T}, i.e.,

M[f] \begin{pmatrix} 1 \\ x \\ x^2 \\ x^3 \\ \vdots \end{pmatrix} = \begin{pmatrix} 1 \\ f(x) \\ \bigl(f(x)\bigr)^2 \\ \bigl(f(x)\bigr)^3 \\ \vdots \end{pmatrix}.
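
As a concrete numerical illustration (a sketch added here, not part of the standard presentation), a finite truncation of the matrix can be built from Taylor coefficients with sympy; the helper name carleman_matrix and the sample function sin x are arbitrary choices.

import sympy as sp

x = sp.symbols('x')

def carleman_matrix(f, N):
    # N x N truncation of M[f]: entry (j, k) is the k-th Taylor coefficient of f(x)**j at 0.
    M = sp.zeros(N, N)
    for j in range(N):
        series_j = sp.series(f**j, x, 0, N).removeO()
        for k in range(N):
            M[j, k] = series_j.coeff(x, k)
    return M

N = 5
M = carleman_matrix(sp.sin(x), N)
powers = sp.Matrix([x**k for k in range(N)])   # column vector (1, x, x^2, ...)
print(M)            # row 1 holds the Taylor coefficients of sin(x)
print(M * powers)   # truncations of (1, sin(x), sin(x)**2, ...)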

Generalization


A generalization of the Carleman matrix of a function can be defined around any point, such as:

M[f]_{x_0} = M_x[x - x_0] \, M[f] \, M_x[x - x_0]^{-1}

or M[f]_{x_0} = M[g], where g(x) = f(x + x_0) - x_0. This allows the matrix power to be related as:

(M[f]_{x_0})^n = M_x[x - x_0] \, M[f]^n \, M_x[x - x_0]^{-1}.
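
As a small worked illustration (added here, not taken from the article), take f(x) = x^2 and expand around x_0 = 1, which is a fixed point of f:

g(x) = f(x + 1) - 1 = (x + 1)^2 - 1 = x^2 + 2x,

so M[f]_{1} = M[x^2 + 2x], and row 1 of this matrix, (0, 2, 1, 0, \ldots), lists the coefficients of f(x_0 + x) - x_0 in powers of x, i.e. the expansion of f about x_0 = 1 shifted back to the origin.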

General Series

Another way to generalize it even further is to think about a general series in the following way:

Let f(x) = \sum_n c_n[f] \, \psi_n(x) be a series approximation of f(x), where \{\psi_n(x)\} is a basis of the space containing f(x). Assuming that \{\psi_n(x)\} is also a basis for g(x), we can define G[f]_{mn} = c_n[\psi_m \circ f], so that g \circ f = \sum_m c_m[g] \, (\psi_m \circ f) = \sum_m \sum_n c_m[g] \, G[f]_{mn} \, \psi_n(x). We can then prove that c[g \circ f] = c[g] \cdot G[f], provided that \{\psi_n(x)\} is also a basis for h(x) = g(f(x)).

Let h(x) = g(f(x)) be such that h(x) = \sum_n c_n[h] \, \psi_n(x). Now

\sum_n c_n[h] \, \psi_n(x) = h(x) = g(f(x)) = \sum_m c_m[g] \, \psi_m(f(x)) = \sum_m \sum_n c_m[g] \, G[f]_{mn} \, \psi_n(x) = \sum_n \Bigl( \sum_m c_m[g] \, G[f]_{mn} \Bigr) \psi_n(x).

Comparing the first and the last term, and using that \{\psi_n(x)\} is a basis for f(x), g(x) and h(x), it follows that

c_n[h] = \sum_m c_m[g] \, G[f]_{mn}.
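
This identity can be checked numerically in a non-monomial basis. The sketch below (an added illustration; the basis choice \psi_n = T_n of Chebyshev polynomials, the sample polynomials f and g, and the helper names are all assumptions made here) builds G[f] with numpy and confirms c_n[h] = \sum_m c_m[g] G[f]_{mn}.

import numpy as np
from numpy.polynomial import chebyshev as C
from numpy.polynomial import polynomial as P

N = 8                                    # truncation order
f = np.array([0.0, 0.5, 0.25])           # f(x) = 0.5 x + 0.25 x^2   (power basis)
g = np.array([1.0, -1.0, 0.0, 2.0])      # g(x) = 1 - x + 2 x^3      (power basis)

def compose(outer, inner):
    # Power-basis coefficients of outer(inner(x)), built by Horner's scheme.
    result = np.array([0.0])
    for c in outer[::-1]:
        result = P.polymul(result, inner)
        result[0] += c
    return result

def cheb_coeffs(power_coeffs, n):
    # First n Chebyshev coefficients of a polynomial given in the power basis.
    c = C.poly2cheb(power_coeffs)
    out = np.zeros(n)
    m = min(len(c), n)
    out[:m] = c[:m]
    return out

# G[f]_{mn} = coefficient of T_n in the Chebyshev expansion of T_m(f(x))
G = np.zeros((N, N))
for m in range(N):
    T_m = C.cheb2poly(np.eye(N)[m])          # power-basis coefficients of T_m
    G[m] = cheb_coeffs(compose(T_m, f), N)

c_g = cheb_coeffs(g, N)                      # c[g]
c_h = cheb_coeffs(compose(g, f), N)          # c[h] computed directly from h = g(f(x))
print(np.allclose(c_g @ G, c_h))             # True: c_n[h] = sum_m c_m[g] G[f]_{mn}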

Examples

Rederive (Taylor) Carleman Matrix

If we set \psi_n(x) = x^n we have the Carleman matrix. Because

\sum_n c_n[f] \, x^n = f(x),

the n-th coefficient c_n[f] must be the n-th coefficient of the Taylor series of f. Therefore

c_n[f] = \frac{1}{n!} \left[ \frac{d^n}{dx^n} f(x) \right]_{x=0}

and hence

G[f]_{mn} = c_n[f^m] = \frac{1}{n!} \left[ \frac{d^n}{dx^n} \bigl(f(x)\bigr)^m \right]_{x=0},

which is the Carleman matrix given above. (It is important to note that this is not an orthonormal basis.)

Carleman Matrix For Orthonormal Basis

If \{e_n(x)\} is an orthonormal basis for a Hilbert space with a defined inner product \langle f, g \rangle, we can set \psi_n = e_n(x), and then c_n[f] = \langle f, e_n \rangle. Then G[f]_{mn} = \langle e_m \circ f, e_n \rangle.

Carleman Matrix for Fourier Series

If \psi_n(x) = e^{inx} we have the analogue for Fourier series. Let c_n[f] and F[f] denote the Carleman coefficient and matrix in the Fourier basis. Because the basis is orthogonal, we have

c_n[f] = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(x) \, e^{-inx} \, dx.

Therefore F[f]_{mn} = c_n[e^{imf}], which is

F[f]_{mn} = \frac{1}{2\pi} \int_{-\pi}^{\pi} e^{imf(x)} \, e^{-inx} \, dx.
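
A small numerical sketch (added here; the sample map f(x) = 2x, the grid size, and the helper name fourier_carleman are assumptions) approximates these integrals with a rectangle rule on a periodic grid. For f(x) = 2x one expects F[f]_{mn} = 1 exactly when n = 2m and 0 otherwise, within the truncated mode window:

import numpy as np

def fourier_carleman(f, modes, samples=2048):
    # F[f]_{mn} = (1/(2*pi)) * integral over [-pi, pi) of exp(i*m*f(x)) * exp(-i*n*x) dx,
    # approximated by averaging the integrand on a uniform periodic grid.
    xs = np.linspace(-np.pi, np.pi, samples, endpoint=False)
    F = np.zeros((len(modes), len(modes)), dtype=complex)
    for a, m in enumerate(modes):
        for b, n in enumerate(modes):
            F[a, b] = np.mean(np.exp(1j * m * f(xs)) * np.exp(-1j * n * xs))
    return F

modes = list(range(-4, 5))
F = fourier_carleman(lambda x: 2 * x, modes)
print(np.round(F.real, 6))   # the row for mode m has a single 1, in the column for n = 2m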

Properties


Carleman matrices satisfy the fundamental relationship

M[f \circ g] = M[f] M[g],

which makes the Carleman matrix M a (direct) representation of f(x). Here the term f \circ g denotes the composition of functions f(g(x)).

Other properties include:

  • M[f^n] = M[f]^n, where f^n is an iterated function, and
  • M[f^{-1}] = M[f]^{-1}, where f^{-1} is the inverse function (if the Carleman matrix is invertible).
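
The representation property can be verified exactly on finite truncations whenever g(0) = 0, since column k of M[g] then involves only rows i ≤ k. A minimal sympy sketch (added here; the sample pair exp x and sin x is arbitrary):

import sympy as sp

x = sp.symbols('x')

def carleman_matrix(f, N):
    # N x N truncation of M[f]: entry (j, k) is the k-th Taylor coefficient of f(x)**j at 0.
    M = sp.zeros(N, N)
    for j in range(N):
        series_j = sp.series(f**j, x, 0, N).removeO()
        for k in range(N):
            M[j, k] = series_j.coeff(x, k)
    return M

N = 6
f, g = sp.exp(x), sp.sin(x)                  # g(0) = 0, so the truncated identity is exact
lhs = carleman_matrix(f.subs(x, g), N)       # M[f o g], i.e. the matrix of f(g(x))
rhs = carleman_matrix(f, N) * carleman_matrix(g, N)
print(lhs == rhs)                            # True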

Examples


The Carleman matrix of a constant is:

M[a] = \begin{pmatrix} 1 & 0 & 0 & \cdots \\ a & 0 & 0 & \cdots \\ a^2 & 0 & 0 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}

The Carleman matrix of the identity function is:

M_x[x]_{jk} = \delta_{jk}

The Carleman matrix of a constant addition is:

M_x[a + x]_{jk} = \binom{j}{k} a^{j-k}

The Carleman matrix of the successor function is equivalent to the Binomial coefficient:

M_x[1 + x]_{jk} = \binom{j}{k}

The Carleman matrix of the logarithm is related to the (signed) Stirling numbers of the first kind scaled by factorials:

M_x[\log(1 + x)]_{jk} = s(k, j) \, \frac{j!}{k!}

The Carleman matrix of the logarithm is related to the (unsigned) Stirling numbers of the first kind scaled by factorials:

M_x[-\log(1 - x)]_{jk} = |s(k, j)| \, \frac{j!}{k!}

The Carleman matrix of the exponential function is related to the Stirling numbers of the second kind scaled by factorials:

M_x[\exp(x) - 1]_{jk} = S(k, j) \, \frac{j!}{k!}

The Carleman matrix of exponential functions is:

M_x[\exp(ax)]_{jk} = \frac{(ja)^k}{k!}

The Carleman matrix of a constant multiple is:

M_x[cx]_{jk} = c^j \delta_{jk}

The Carleman matrix of a linear function is:

M_x[a + cx]_{jk} = \binom{j}{k} a^{j-k} c^k

The Carleman matrix of a function f(x) = \sum_{k=1}^{\infty} f_k x^k is:

M[f]_{jk} = \sum_{\kappa_1 + \kappa_2 + \cdots + \kappa_j = k} f_{\kappa_1} f_{\kappa_2} \cdots f_{\kappa_j}

The Carleman matrix of a function f(x) = \sum_{k=0}^{\infty} f_k x^k is:

M[f]_{jk} = \sum_{\kappa_1 + \kappa_2 + \cdots + \kappa_j = k} f_{\kappa_1} f_{\kappa_2} \cdots f_{\kappa_j}
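
Two of the closed forms above can be spot-checked on a small truncation with sympy; the sketch below (an added illustration, not part of the article) compares the Carleman entries of 1 + x and exp(x) − 1 with the binomial coefficients and the Stirling numbers of the second kind.

import sympy as sp
from sympy.functions.combinatorial.numbers import stirling

x = sp.symbols('x')
N = 6

def carleman_entry(f, j, k):
    # M[f]_{jk}: the k-th Taylor coefficient of f(x)**j at x = 0.
    return sp.series(f**j, x, 0, k + 1).removeO().coeff(x, k)

ok_binomial = all(carleman_entry(1 + x, j, k) == sp.binomial(j, k)
                  for j in range(N) for k in range(N))
ok_stirling = all(carleman_entry(sp.exp(x) - 1, j, k)
                  == stirling(k, j, kind=2) * sp.factorial(j) / sp.factorial(k)
                  for j in range(N) for k in range(N))
print(ok_binomial, ok_stirling)   # True True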

Related matrices

The Bell matrix or the Jabotinsky matrix of a function f(x) is defined as[1][2][3]

B[f]_{jk} = \frac{1}{j!} \left[ \frac{d^j}{dx^j} \bigl(f(x)\bigr)^k \right]_{x=0},

so as to satisfy the equation

\bigl(f(x)\bigr)^k = \sum_{j=0}^{\infty} B[f]_{jk} x^j.

These matrices were developed in 1947 by Eri Jabotinsky to represent convolutions of polynomials.[4] The Bell matrix is the transpose of the Carleman matrix and satisfies

B[f \circ g] = B[g] B[f],

which makes the Bell matrix B an anti-representation of f(x).
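
A minimal sketch (added here; the helper name bell_matrix and the sample functions are arbitrary) checks this anti-representation property on truncations, which is exact here because the inner function vanishes at 0:

import sympy as sp

x = sp.symbols('x')

def bell_matrix(f, N):
    # N x N truncation of B[f]: entry (j, k) is the j-th Taylor coefficient of f(x)**k at 0,
    # i.e. the transpose of the corresponding Carleman truncation.
    B = sp.zeros(N, N)
    for k in range(N):
        series_k = sp.series(f**k, x, 0, N).removeO()
        for j in range(N):
            B[j, k] = series_k.coeff(x, j)
    return B

N = 5
f, g = sp.exp(x) - 1, sp.sin(x)    # g(0) = 0
print(bell_matrix(f.subs(x, g), N) == bell_matrix(g, N) * bell_matrix(f, N))   # True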


Notes

  1. Knuth, D. (1992). "Convolution Polynomials". The Mathematica Journal. 2 (4): 67–78. arXiv:math/9207221. Bibcode:1992math......7221K.
  2. Jabotinsky, Eri (1953). "Representation of functions by matrices. Application to Faber polynomials". Proceedings of the American Mathematical Society. 4 (4): 546–553. doi:10.1090/S0002-9939-1953-0059359-0. ISSN 0002-9939.
  3. Lang, W. (2000). "On generalizations of the Stirling number triangles". Journal of Integer Sequences. 3 (2.4): 1–19. Bibcode:2000JIntS...3...24L.
  4. Jabotinsky, Eri (1947). "Sur la représentation de la composition de fonctions par un produit de matrices. Application à l'itération de e^x et de e^x-1" [On representing the composition of functions by a product of matrices. Application to the iteration of e^x and e^x-1]. Comptes rendus de l'Académie des Sciences. 224: 323–324.
