Talk:Arnoldi iteration
This article is rated C-class on Wikipedia's content assessment scale.
"Typically, the Ritz eigenvalues converge to the extreme eigenvalues of A".
What does "extreme eigenvalues" mean?
- It means the eigenvalues of largest modulus. Look at the action of the matrix A on its eigenbasis: the component of b along the eigenvector corresponding to the largest eigenvalue is stretched more than the components along the others, so after a few applications of A to the initial vector b the resulting vector ends up pointing in the direction of that eigenvector. --IterateImprove (talk) 18:38, 27 September 2018 (UTC)
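A quick numerical illustration of this stretching argument (a minimal numpy sketch, not from the article; the matrix is built here with a prescribed spectrum so that the dominant eigenvalue is real and known):

import numpy as np

rng = np.random.default_rng(0)
m = 5
D = np.diag([5.0, 2.0, 1.0, 0.5, 0.1])     # prescribed eigenvalues; 5 dominates
U = rng.standard_normal((m, m))
A = U @ D @ np.linalg.inv(U)               # non-symmetric matrix with the spectrum of D

b = rng.standard_normal(m)
for _ in range(30):                        # repeatedly apply A, as in the argument above
    b = A @ b
    b /= np.linalg.norm(b)                 # renormalize to avoid overflow

v_dom = U[:, 0] / np.linalg.norm(U[:, 0])  # eigenvector of the dominant eigenvalue 5
print(abs(b @ v_dom))                      # close to 1: b now points along v_dom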
When A is symmetric (→ Lanczos), the answer is obvious, since the eigenvalues are real and can be ordered. What about the nonsymmetric case, where the eigenvalues are complex?
Largest/smallest absolute value, real part, imaginary part?
Well, I have no answer to this (yet).
- The eigenvalue that dominates is the one with the largest absolute value: suppose A can be diagonalized as
A = U D U^{-1},
- where D is diagonal. Call u_0 the column of U corresponding to the largest (in absolute value) eigenvalue in D. Then
A^k b = U D^k U^{-1} b,
- which shows that the component of b along u_0 grows the fastest. --IterateImprove (talk) 18:38, 27 September 2018 (UTC)
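Spelling out the last step of that argument (a short sketch in the same notation, writing b in the columns u_i of U with coefficients c_i):

% Expansion of b in the eigenbasis and the effect of k applications of A:
\[
  b = \sum_i c_i u_i
  \qquad\Longrightarrow\qquad
  A^k b = U D^k U^{-1} b = \sum_i c_i \lambda_i^k u_i ,
\]
% so the term whose eigenvalue \lambda_i has the largest modulus dominates for
% large k, i.e. the component along u_0 outgrows all the others.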
I think that m and n represent the same quantity here, but I'm not confident, so I'm leaving it. Would someone more math-inclined like to take a look?
--Meawoppl (talk) 23:55, 17 March 2010 (UTC)
- They are not the same thing. The symbol m is the dimension of the space A acts on, while n is the dimension of the Krylov space. If you choose n = m, the Krylov vectors and the columns of A span the same space (at least if A is full rank). --IterateImprove (talk) 18:38, 27 September 2018 (UTC)
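To make the two symbols concrete (an illustrative sketch only; the variable names are mine): for an m-by-m matrix A and n Krylov vectors b, Ab, ..., A^{n-1}b, the matrix collecting those vectors is m-by-n.

import numpy as np

m, n = 6, 3                       # m: dimension A acts on, n: Krylov dimension
rng = np.random.default_rng(1)
A = rng.standard_normal((m, m))
b = rng.standard_normal(m)

# Krylov matrix [b, Ab, A^2 b]: n columns, each of length m
K = np.column_stack([np.linalg.matrix_power(A, j) @ b for j in range(n)])
print(K.shape)                    # (6, 3), i.e. m-by-n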
How to select the Ritz values as the wanted eigenvalues?
I computed the Ritz values from H (the upper Hessenberg matrix) and noticed that not all of the Ritz values are eigenvalues of the initial matrix A. With the QR method, all the Ritz values can be computed at the same time, but how do I choose the ones that correspond to the wanted eigenvalues? —Preceding unsigned comment added by 158.132.95.219 (talk) 08:39, 6 October 2010 (UTC)
- I'm not sure exactly how you set up your calculation, but you should be able to recover all the eigenvalues if you use the full Krylov space (i.e. set n = m), and a good approximation to the first few largest ones if n < m. --IterateImprove (talk) 18:38, 27 September 2018 (UTC)
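A concrete sketch of that reply (not the article's code; the plain Gram-Schmidt loop and variable names below are only illustrative): run n < m Arnoldi steps, take the eigenvalues of the square Hessenberg block H_n as the Ritz values, and keep the ones of largest modulus as approximations to the extreme eigenvalues of A.

import numpy as np

rng = np.random.default_rng(2)
m, n = 100, 10                              # A is m-by-m, Krylov dimension n < m
A = rng.standard_normal((m, m))
b = rng.standard_normal(m)

Q = np.zeros((m, n + 1))
H = np.zeros((n + 1, n))
Q[:, 0] = b / np.linalg.norm(b)
for k in range(n):
    v = A @ Q[:, k]
    for j in range(k + 1):                  # orthogonalize against previous vectors
        H[j, k] = Q[:, j] @ v
        v -= H[j, k] * Q[:, j]
    H[k + 1, k] = np.linalg.norm(v)
    Q[:, k + 1] = v / H[k + 1, k]

ritz = np.linalg.eigvals(H[:n, :n])         # Ritz values: eigenvalues of the square H_n
ritz = ritz[np.argsort(-np.abs(ritz))]      # sort by modulus, largest first
print(ritz[:3])                             # candidates for the extreme eigenvalues of A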
The Arnoldi iteration
I am not sure if your version of the Arnoldi algorithm is correct. If I am right, when I implement this it misses the first column, so it does not result in an upper Hessenberg matrix.
- The Python code for the Arnoldi iteration is wrong; I'll fix it. The pseudocode is correct. --IterateImprove (talk) 18:38, 27 September 2018 (UTC)
I think something is wrong with the number... Wait, give me a minute... I must figure something out — Preceding unsigned comment added by Zwep (talk • contribs) 16:11, 7 November 2012 (UTC)
82.139.124.93 (talk) 10:52, 8 November 2012 (UTC)
I think it would be right to give the algorithm an extra stop condition: when the norm that normalizes the new vector is zero. If you do not implement this, large errors will occur.
- You are right. If that happens, it means that the Krylov vectors already span an invariant subspace of A, so we should check for it. In practical implementations one is interested in the case where the dimension of the Krylov space is much smaller than the dimension of the space on which A acts, so it usually does not happen. I think it's better to include this observation; I'll do it. --IterateImprove (talk) 18:38, 27 September 2018 (UTC)
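A minimal sketch of such a check (illustrative only, not the article's listing; the function name and tolerance are mine): stop when the normalizing norm is numerically zero, which signals that the Krylov vectors already span an invariant subspace.

import numpy as np

def arnoldi(A, b, n, tol=1e-12):
    """Bare-bones Arnoldi iteration that stops on breakdown."""
    m = len(b)
    Q = np.zeros((m, n + 1))
    H = np.zeros((n + 1, n))
    Q[:, 0] = b / np.linalg.norm(b)
    for k in range(n):
        v = A @ Q[:, k]
        for j in range(k + 1):               # orthogonalize against previous vectors
            H[j, k] = Q[:, j] @ v
            v -= H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(v)
        if H[k + 1, k] < tol:                # breakdown: invariant subspace found, stop
            return Q[:, :k + 1], H[:k + 1, :k + 1]
        Q[:, k + 1] = v / H[k + 1, k]
    return Q, H

# Starting from an eigenvector of A, the iteration breaks down immediately.
A = np.diag([3.0, 2.0, 1.0])
Q, H = arnoldi(A, np.array([1.0, 0.0, 0.0]), n=3)
print(Q.shape, H.shape)                      # (3, 1) (1, 1): stopped after one step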
dimensions of the Q matrix not correct?
In "Properties of the Arnoldi iteration", either Q has dimensions n-by-m, or the equality is
- rather than
which one is it? YetAnotherBunny (talk) 04:26, 23 August 2013 (UTC)
- If we are considering the full Krylov space, those are all square matrices, n = m; if instead we are considering only a subspace, then neither. The correct equation is
A Q_n = Q_{n+1} \tilde{H}_n,
- which is consistent: the two matrices on the left-hand side are m-by-m and m-by-n, while the ones on the right are m-by-(n+1) and (n+1)-by-n. --IterateImprove (talk) 18:38, 27 September 2018 (UTC)
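The dimension bookkeeping, written out in the same notation (a short LaTeX sketch; \underbrace needs amsmath):

\[
  \underbrace{A}_{m \times m}\;\underbrace{Q_n}_{m \times n}
  \;=\;
  \underbrace{Q_{n+1}}_{m \times (n+1)}\;\underbrace{\tilde{H}_n}_{(n+1) \times n},
\]
% both products are m-by-n matrices, so the identity is dimensionally consistent.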
Matrix Dimension
editI think, must by n by n-1, not n+1 by n. Because using algorithm for finding , you can evaluate only if k = n+1
- The matrix is (n+1)-by-n: the result of A q_n can have a component along q_{n+1}. --IterateImprove (talk) 18:38, 27 September 2018 (UTC)
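In equation form (same notation as above; this is just the standard Arnoldi recurrence for the n-th column):

\[
  A q_n \;=\; \sum_{j=1}^{n+1} h_{j,n}\, q_j ,
  \qquad
  h_{n+1,n} \;=\; \Bigl\| A q_n - \sum_{j=1}^{n} h_{j,n}\, q_j \Bigr\| ,
\]
% so the last column needs an (n+1)-th row entry, which is why \tilde{H}_n is (n+1)-by-n.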