Talk:Eigenvalues and eigenvectors/Archive 1


Relation to broader principles

Eigenfunctions could be given more attention (beyond the standing-wave problem) somewhere in this page, given their immense importance (e.g., Schrödinger's equation yielding eigenfunctions for discrete energy eigenvalues). But given the quality of this page, it is appropriate to discuss such an addition first.

Fixing up this page

Now that we have all this information in one place we need to figure out the correct structure. --MarSch 3 July 2005 12:20 (UTC)

I agree, this page is too haphazard and it was hard to find what I was looking for (specifically how to find them). Fresheneesz 19:15, 10 December 2005 (UTC)


When I try to print the main article page, the first instances of 'eigenvector' and 'eigenvalue', in the opening paragraph, do not appear on the hardcopy (nor do the speaker-like symbols that precede these word/links). Is this an error, or am I using an inapt encoding to view, please? (Otherwise, it's a great article.) pellis@london.edu 12:11, 28 April 2006

Infinite-dimensional spaces

The concept of eigenvectors can be extended to linear operators acting on infinite-dimensional Hilbert spaces or Banach spaces.

Why is this called an extension? It looks like a straightforward application of the definition. Also, is there really a restriction to Banach spaces in infinite-dimensional cases? In any case, "Hilbert spaces or Banach spaces" is redundant since all Hilbert spaces are Banach spaces. Josh Cherry 3 July 2005 17:17 (UTC)

You're right, the definition of eigenvectors and eigenvalues does not depend on the dimension. But the spectrum of an operator is an extension of the set of eigenvalues.

Power method

For the matrix having 2 on the diagonal and 0 off diagonal, the power method as explained in this article will fail to converge to any eigenvector, since applying the matrix is the same as multiplying by two, and this is done an infinite number of times. Am I getting something wrong? Oleg Alexandrov 4 July 2005 00:06 (UTC)

Uhm, yes, well, I have to admit that "converge" is used in a very loose sense here. I tried to amend the text. It is still work in progress anyway. Somewhere on Wikipedia it should also be mentioned that the method fails with matrices like
[[0, 1], [1, 0]]
which has two eigenvalues of equal magnitude. Sigh, so much to do ... -- Jitse Niesen (talk) 4 July 2005 00:37 (UTC)
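For anyone who wants to see this failure mode concretely, below is a minimal power-iteration sketch in Python/numpy. The matrix is the swap matrix suggested above (itself a reconstruction, since the original formula did not survive archiving), but any matrix with two eigenvalues of equal magnitude, here +1 and -1, behaves the same way: the normalized iterates keep alternating between two directions instead of converging.

    import numpy as np

    def power_method(A, v, steps):
        # Repeatedly apply A and renormalize the iterate.
        for _ in range(steps):
            v = A @ v
            v = v / np.linalg.norm(v)
        return v

    # Swap matrix: eigenvalues +1 and -1 have equal magnitude.
    A = np.array([[0.0, 1.0],
                  [1.0, 0.0]])
    v0 = np.array([1.0, 0.3])
    print(power_method(A, v0, 50))  # one direction
    print(power_method(A, v0, 51))  # a different direction: no convergence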


The claim that insolvability of the quintic by radicals implies no algorithm to solve polynomials in general is wrong. I have rephrased that section. Stevelinton 09:56, 29 August 2005 (UTC)

Physical significance of eigenvectors

I find the article to be fairly accessible to those who are not math gurus, but too much emphasis in the article is on the abstruse language generally employed by mathematicians to describe what I colloquially term "eigenstuff". What may help is if someone adds some of the physical significances of eigenvalues and eigenvectors, such as:

  • The moment of inertia tensor always produces orthogonal eigenvectors, which correspond to principal rotational axes about which the equations of rotational motion become very simple. The eigenvalues are the scale factors of the moment of inertia equation in such a case. (See the sketch after this list.)
  • A matrix which describes an approximation of a conjugated pi system (the Hückel approximation, if anyone is curious) often appears as a determinant, and setting it equal to zero yields the eigenvalues (the allowed energies), while the eigenvectors (normalized but not necessarily mutually orthogonal in whatever vector space they span, not that chemists care about the vector space involved) give the contribution of each atom in the pi system to the molecular orbital associated with that eigenvalue.
  • In heat transfer the eigenvectors of a matrix can be used to show the direction of heat flow. I am not entirely sure in this case what the eigenvalues represent. My thermodynamics didn't extend to matrix versions of the Fourier equation, so... (???)
  • Someone mentioned stress-strain systems where the eigenvectors represent the direction of either greatest or least strain or stress, depending, IIRC, on the way the matrix is set up. Again, not having done engineering in detail I can't be more specific.
  • More generally someone mentioned to me once that in spectroscopy you can set up a matrix which represents the molecule under study, and in Infrared in particular, because it effectively acts as a series of coupled springs which vibrate at discrete energies, the eigenvalues here give the energies (wavenumbers) for the peaks and the eigenvectors tell what kind of motion is happening. Again, without quantum spectroscopy in detail I can't be more specific than this.
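To make the first bullet concrete, here is a minimal numpy sketch with a made-up symmetric inertia tensor (the numbers are arbitrary, purely for illustration): diagonalizing it yields the principal moments as eigenvalues and mutually orthogonal principal axes as eigenvectors.

    import numpy as np

    # A made-up symmetric inertia tensor (arbitrary values, kg*m^2).
    I_body = np.array([[ 4.0, -1.0,  0.0],
                       [-1.0,  3.0,  0.5],
                       [ 0.0,  0.5,  5.0]])

    # eigh is for symmetric matrices: it returns real eigenvalues (the
    # principal moments of inertia) and orthonormal eigenvectors (the
    # principal axes).
    moments, axes = np.linalg.eigh(I_body)
    print(moments)        # the three principal moments
    print(axes.T @ axes)  # ~ identity: the principal axes are orthogonal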

I would like to encourage Wikipedians in general, if they are editing math-specific entries, to try and relate the significance, either mathematically or physically, of the concept or method in less abstruse language than is typical in linear algebra texts. :)

--142.58.101.46 4 July 2005 16:23 (UTC)

Well, I'm a math person, and I find the majority of what you just said far more "abstruse" than the math jargon. It's all subjective!! 216.167.142.30 20:40, 23 October 2005 (UTC)
  • Be bold and start editing. I agree that sometimes mathematicians push abstraction to the point where nobody else understands what is going on. Oleg Alexandrov 5 July 2005 02:36 (UTC)

I think the mathematics and the physics belong in separate articles. The scope of this one is just too large. Perhaps the maths could go in "Spectrum (Linear Algebra)", the physics applications in "Eigenvalues and Eigenvectors in Physics", and other applications in "Applications of eigenspaces outside physics"? From a mathematical point of view, the chain of associations sort of goes Linear Operators (-> Spectra) -> Differential Operators -> Differential Equations (-> Solutions) -> Applications, while in physics it goes Eigenstuff -> {Energy states, Principal axes, Modes, Observables, ...}. I think that combining these into one article will make it hard for physics or mathematics readers (or both) to navigate and stay motivated.

Also, the categories and names in physics and mathematics are quite different. Physicists think of rotations in R^3 and observables in Hilbert space as very different things, but to mathematicians they're all linear operators, though the observables are a special type called Hermitian. R. E. S. Polkinghorne

Eigenfunctions belong here too?

Yes? No? --HappyCamper 06:12, 11 August 2005 (UTC)

Singular Values ?

A reference to singular values and singular value decomposition could be of interest too? --Raistlin 16:16, 13 August 2005 (UTC)

Table of contents

Oleg Alexandrov's edit summary (full disclosure: complaining about my subsubsubsection):

I'd rather not have this as a subsubsubsection. That could be correct, but does not look good in the toc

I agree that it looks bad in the TOC, but it seems wrong that we should be compromising the article in order to improve the æsthetics of a standardised TOC. Has this problem been solved satisfactorily elsewhere on WP? (The obvious suggestion would be an option for subsubsubsections (or whatever) not to be mentioned in the TOC at all.) —Blotwell 09:26, 18 August 2005 (UTC)

There's a (very short) discussion about the possibility of limiting the table of contents to display only up to a certain level of headings at Wikipedia talk:Section#Choosing TOC level, but not much more than wishful thinking at the moment. I agree with User:Blotwell that we shouldn't disrupt structural tags in the article in order to make the table of contents look prettier. —Caesura(t) 18:05, 7 December 2005 (UTC)

Projections

projection onto a line: vectors on the line are eigenvectors with eigenvalue 1 and vectors perpendicular to the line are eigenvectors with eigenvalue 0.

Isn't that true only of orthogonal projections? Consider

P = [[1, 1], [0, 0]]  (the map (x, y) -> (x + y, 0))

Doesn't that qualify as a projection onto a line? Yet it has no eigenvectors perpendicular to the line. Josh Cherry 14:35, 20 August 2005 (UTC)

You are right, I fixed it.--Patrick 09:16, 21 August 2005 (UTC)
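For the record, a quick numerical check of the counterexample (the matrix shown above is itself a reconstruction of the standard oblique projection, since the original formula was lost in archiving):

    import numpy as np

    # Oblique projection onto the x-axis: (x, y) -> (x + y, 0).
    P = np.array([[1.0, 1.0],
                  [0.0, 0.0]])
    assert np.allclose(P @ P, P)  # idempotent, so it is a projection

    vals, vecs = np.linalg.eig(P)
    print(vals)  # 1.0 and 0.0 (order may vary)
    print(vecs)  # (1, 0) for eigenvalue 1; (1, -1)/sqrt(2) for eigenvalue 0
    # The eigenvector for eigenvalue 0 is not perpendicular to the line,
    # so the original wording indeed holds only for orthogonal projections.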

intuitive definition

The definition as given is, I think, unnecessarily hard to visualize. I would say that an eigenvector is any unit vector which is transformed to a multiple of itself under a linear transformation; the corresponding eigenvalue is the value of the multiplier. That is really clear and intuitive. The fact that a basis can be formed of eigenvectors, and that the transformation matrix is diagonal in that basis, is a derived property. Starting off with diagonal matrices and basis vectors is unnecessarily complicated. ObsidianOrder 22:49, 21 August 2005 (UTC)

P.S. That way, eigenvalues and eigenvectors in infinite-dimensional spaces are not an "extension" either; they just follow from the exact same definition. ObsidianOrder

I agree. Josh Cherry 01:26, 22 August 2005 (UTC)
Yes, the idea of forming a basis of eigenvectors does NOT belong in the intro. Here's an idea -- perhaps there should not be an intro? Perhaps the first part of the definition, the non-formal, geometrical part that you describe, can serve as the intro? Or, if the format that needs to be followed is that there must be an intro, then it should nevertheless go from less technical to more technical: the intro should have your non-formal geometric definition, and in the definition section we can put the formal definition. The whole thing about diagonalizing a matrix should be mentioned, but LATER. Kier07 03:00, 22 August 2005 (UTC)
I think the most intuitive idea of eigenvalues and eigenvectors comes from principal component analysis, and is reflected in the main applications. You want to express the transformation in terms of invariant axes, which correspond to eigenvectors, and how these contract or expand corresponds to the eigenvalues. --JahJah 02:56, 22 August 2005 (UTC)
Perhaps. Here's another idea... would it be possible to have a visual for the intro, instead of this verbal nonsense? We could give a linear transformation from R^2 --> R^2, and show in a picture what this does to a sampling of vectors -- including invariant axes. Kier07 20:35, 22 August 2005 (UTC)
At the moment we're giving the mathematical coördinate-free definition in terms of linear transformations, which obscures the connection with many of the applications. Often matrices don't represent transformations, but bilinear forms—this is when they have two covariant/two contravariant indices rather than one of each, if you're into tensors—and we should point out that eigenvectors are physically meaningful in this case too. This is where all the applications that give you symmetric matrices come from. (There is a relation between transformations and bilinear forms, of course, but it wouldn't be helpful to go into it in the introduction.)
Following on from this, here's the idea I had been thinking about for an introductory visual: if you have a symmetric matrix and apply it to the unit sphere, then you get an ellipsoid whose principal axes have direction corresponding to the eigenvectors and magnitude corresponding to the eigenvalues. (Check: the product of the magnitudes is proportional to the volume, which is right because the product of eigenvalues is the determinant which is the volume scale factor.) This should be easy/fun/illuminating to illustrate. —Blotwell 05:50, 23 August 2005 (UTC)
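A minimal numpy sketch of that idea, with an arbitrary symmetric positive-definite matrix standing in for whatever illustration eventually gets drawn: the image of the unit circle is an ellipse whose longest radius equals the largest eigenvalue and points along the corresponding eigenvector.

    import numpy as np

    # An arbitrary symmetric positive-definite matrix.
    S = np.array([[3.0, 1.0],
                  [1.0, 2.0]])
    vals, vecs = np.linalg.eigh(S)  # eigh: for symmetric matrices

    # Map the unit circle through S and measure the resulting radii.
    theta = np.linspace(0.0, 2.0 * np.pi, 2000)
    ellipse = S @ np.vstack([np.cos(theta), np.sin(theta)])
    radii = np.linalg.norm(ellipse, axis=0)

    print(radii.max(), vals.max())       # both ~ 3.618
    far = ellipse[:, radii.argmax()] / radii.max()
    print(far, vecs[:, vals.argmax()])   # same direction, up to sign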

motivation?

A little while ago I suggested a sentence in the introduction to give some motivation, something really simple to tell the reader why they should care. It was along the lines of:

eigenvectors/eigenvalues etc allow us to replace a complicated object (a linear transformation or matrix) with a simpler one (a scalar).

This simple idea, expressed in very plain English, explains why we care about eigenvalues/vectors, without confusing the uninitiated with terminology like bases, diagonal entries, etc. The sentence was deleted at some point, and I feel the omission with some pain. Does anyone else agree that such a sentence would be a Good Thing? Dmharvey Talk 02:30, 23 August 2005 (UTC)

Me, I agree. Though I'm no pedagogue. —Blotwell 05:52, 23 August 2005 (UTC)

Where eigenvectors come from

In the section on "Identifying eigenvectors", it was not immediately clear where the vector (1, 1, -1) came from. It turned out it was the solution of (A - λI)v = 0 for the given eigenvalue, and this turned out to be pretty easy. But it would be nice to explicitly state where to find the "sets of simultaneous linear equations" to be solved.

WillWare 23:14, 25 August 2005 (UTC)
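A small illustration of the mechanics, in Python with scipy. The matrix below is hypothetical (the article's actual example is not quoted here); it is built so that (1, 1, -1) is an eigenvector for the eigenvalue 4. The "sets of simultaneous linear equations" are exactly (A - λI)v = 0, i.e. the null space of A - λI.

    import numpy as np
    from scipy.linalg import null_space

    # Hypothetical matrix with eigenvalue 4 and eigenvector (1, 1, -1).
    A = np.array([[ 2.0,  1.0, -1.0],
                  [ 1.0,  2.0, -1.0],
                  [-1.0, -1.0,  2.0]])
    lam = 4.0

    # Solve the simultaneous equations (A - lam*I) v = 0.
    v = null_space(A - lam * np.eye(3))
    print(v.ravel())  # proportional to (1, 1, -1)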

Improving further

I have done a lot of work on the intro and definition. I think some of the things I wrote are not optimal and are redundant with information elsewhere on the page. Now I am tired and would like to see other authors contribute to a better structured article. 131.220.68.177 09:31, 8 September 2005 (UTC)

nonsense

Do we want to make any mention here about the fact that many disciplines have started to use "eigenvalue" as a buzz word? I was talking to a philosophy professor recently, a man with numerous published books, and when he began talking about "eigenvalues" I asked him how he was using the term. He was completely taken aback, and finally admitted he had no idea what the term meant, but it was in common use in his field.

Please do, with examples. The abuse of "parameter" may finally be dying; let's kill this one now. Septentrionalis 02:11, 14 September 2005 (UTC)
You're kidding. Wow. I've never heard of arts-type disciplines using the term "eigenvalue". I'll have to ask the philosophy students I know if their professors do this. I think the overuse of the term may come from the fact that it shows up so much in many sciences (for valid reasons). --24.80.119.229 18:43, 18 September 2005 (UTC)
Eigenwert, which is German for eigenvalue, has a broader sense than in mathematical English. It can also be translated as distinct value (see discussion in German at [1]) because eigen is a very common prefix in German and can be used in a very broad sense. In that sense one could say The eigenvalue of the president's speech was more formal than objective. See for example Eigenwert des Ästhetischen at [2]. So sometimes the term eigenvalue can be used in this sense, for example:
Thus one can say that research and therefore science fulfills a function and thereby reproduces a stable eigenvalue of modern society. One cannot simply refrain from research without triggering catastrophic consequences -- catastrophe understood here as the reorientation towards other eigenvalues.[3]
I am no philosopher, but as far as I understand this, it means that science is a value of a modern society which makes it clearly distinct from a more archaic society. Vb 09:58, 19 September 2005 (UTC)
The paragraph above is a translation into very unidiomatic, and almost unreadable, English; the whole text almost reads like a Babelfish exercise. The use of the claque "eigenvalue" in this context is simply an error, as is "reproduces". Eigenwert would have been acceptable; so would "distinctive value". In short, Vb has given a diagnosis, not a justification. Septentrionalis 17:28, 19 September 2005 (UTC)
I presume that you meant calque. Josh Cherry 00:06, 20 September 2005 (UTC)
I utterly agree this is a diagnosis, not a justification. One needs more examples of this kind of usage to begin some article/section/disambig on this. Vb 81.209.204.158 16:32, 20 September 2005 (UTC)

a duplicate/related article

There's a recently spawned article Symbolic computation of matrix eigenvalues which I don't much like; it seems to be a spinoff of this project, maybe? It needs attention or cleanup or something. linas 01:01, 14 September 2005 (UTC)

Yes indeed. This info was previously within this article. I put it in a separate article because I thought this info may be interesting for grad students fighting with matrix diagonalization but not for someone interested in the general topic of this article. I think the same holds for eigenvalue algorithm. Maybe a good idea would be to merge Symbolic computation of matrix eigenvalues with eigenvalue algorithm. Vb 10:13, 14 September 2005 (UTC)

Featured article?

I think the article is getting quite good now. However I don't think this is enough to get featured. I think the following points have to be addressed before submitting it for peer review:

  1. Copy edit: I think the English is not the best. I am not a native speaker and I cannot manage it. That's the reason why I put the copy-edit flag on the page.
  2. Too technical: Some of the information on the page is too technical. In particular, many properties are stated without explaining to the reader why they are important and in which context they are used. In particular, the sections Decomposition theorem, Other theorems, Conjugate eigenvector, Generalized eigenvalue problem and Eigenvalues of a matrix with entries from a ring should be rewritten to be more accessible to a broader audience.

I think much of the information here is good and interesting but sounds a bit too much like a grad textbook for mathematicians. Vb 16:16, 21 September 2005 (UTC)

Problem with one of the examples

The following passage occurs as an explanatory example:

If one considers the transformation of the rope as time passes, its eigenvectors (or eigenfunctions if one assumes the rope is a continuous medium) are its standing waves well known to musicians, who label them as various notes. The standing waves correspond to particular oscillations of the rope such that the shape of the rope is scaled by a factor (the eigenvalue) as the time evolves.

A string of arbitrary length stretched at arbitrary tension can produce a sound that corresponds to a particular frequency. But depending on various factors the string can perform a simple vibration in which the entire length of the string moves in the same direction at the same time, or it can produce more complex vibrations (like the one illustrated in the movie). The musical qualities of these various possible vibrations are emphasized in the reader's mind by the mention of musicians, because a pure sine-wave sound vibration is not musically beautiful. Not only will the reader's mind potentially be sidetracked by that line of thought, but it is not exactly true that a musician will label any frequency as a "note" -- particularly if the musician has perfect pitch and the frequency being produced is somewhere midway between an A and an A flat. I don't want to change this passage without being aware of what the original writer was trying to accomplish. P0M 05:37, 23 September 2005 (UTC)

I am no musician. This example I wrote is a typical physicist's example which has not much to do with music. Here the objective was only to provide an example of an eigenvector in an infinite-dimensional space which could be understood by nonspecialists. If someone is able to make a better link with music, I would be very happy if he could do so. I would learn something! However I think it is important to keep in mind that one should not go into deep detail here. This is not the place to present the whole theory of vibrating strings. Vb 07:51, 23 September 2005 (UTC)

If you weren't trying to say something tricky about higher harmonics, Martin guitar vs. Sears guitar, etc., then how about:

If one considers the transformation of the rope as time passes, its eigenvectors (or eigenfunctions if one assumes the rope is a continuous medium) are its standing waves -- the things that, mediated by the surrounding air, humans can experience as the twang of a bow string or the plink of a guitar. The standing waves correspond to particular oscillations of the rope such that the shape of the rope is scaled by a factor (the eigenvalue) as the time evolves.

P0M 15:11, 23 September 2005 (UTC)

Yes, that's true, I also had higher harmonics in mind. However, I believe it is not worth telling more than a line about it, or maybe just giving a link. What you wrote is, from my point of view, well done. I would replace "the things that," by "which", but my English is for sure not as good as yours. My problem is also that I am a bit afraid of telling nonsense about music: I have no idea about Martin and Sears guitars. But if you do, why not make a footnote about such details: they are for sure interesting (at least to me) even if they would distract a bit from the main topic of this article. Vb 09:04, 24 September 2005 (UTC)

I made the basic change. If something were to be added maybe we could say something like, "Waves of various degrees of complexity may be present, analogous to anything from the dull plunk of a cigar box string instrument to a chord from a Stradivarius violin"? (Martin guitars are not quite that good, and certainly not that expensive anyway. Sears used to sell cheap guitars that hardly "sang" at all -- sorry, that is definitely POV. ;-) How do the higher harmonics get set up in quantum situations? P0M 01:48, 27 September 2005 (UTC)

  • In general, different harmonics are characterized by their number of nodes. The first has no node. In the example of the movie above this is the fourth harmonic. In the hydrogen atom problem this is similar: for quantum number l = 0 (label s), each quantum number n counts the nodal surfaces and therefore corresponds to a higher harmonic. For more details look at spherical harmonics. The complexity comes from the fact that the wave is 3D, but I guess this must be a bit like the patterns formed within a spherical wind instrument. Vb 09:30, 30 September 2005 (UTC)

Involutions and projections

I am very sorry but I don't see why all that info about involutions and projections belongs here. OK, they have eigenvalue -1, 0 or 1. And?... Do you really think it must be put on the same level as Spectral theorem or Applications? I believe not. If nobody provides here any convincing argument, I'll remove this section quite soon. Vb 17:57, 10 October 2005 (UTC)

Linear involutions and projections

Involutions, that is transformations T such that T^2 = I, and projections, that is transformations P such that P^2 = P, if they are linear, have the following simple properties:

  • The eigenvalues of an involution can only be 1 or −1.
  • The eigenvalues of a projection operator can only be 0 or 1.

Such transformations are ubiquitous in mathematics and applied disciplines.

Projections exist in pairs of which the sum is the identity, with a corresponding pair of involutions: plus and minus their difference. Conversely, the mean value of an involution and the identity is a projection.

The simplest examples are the identity (T(v) = v), which is an involution as well as a projection, the inversion (T(v) = -v), which is an involution, and the zero operator (T(v) = 0), which is a projection.

If the elements v are functions defined on a set A, then other examples of involutions are those that, for a given subset B of A, negate the function value at B, and keep it fixed at the rest of A. Similarly, examples of projections are those that, for a given subset B of A, make the function value at B zero, and keep it fixed at the rest of A.

In the finite-dimensional case involutions and projections are represented by diagonalizable matrices, similar to diagonal matrices where, as follows from the above, all the values on the diagonal (the eigenvalues) are 1 and/or −1 for involutions and 1 and/or 0 for projections.
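Whatever the section's fate, the pairing claims above are easy to check numerically; here is a minimal sketch with a randomly generated (oblique) projection:

    import numpy as np

    rng = np.random.default_rng(0)
    S = rng.standard_normal((3, 3))  # almost surely invertible
    P = S @ np.diag([1.0, 1.0, 0.0]) @ np.linalg.inv(S)  # a projection

    I = np.eye(3)
    Q = I - P    # the complementary projection of the pair
    J = P - Q    # plus/minus their difference are involutions

    assert np.allclose(P @ P, P)
    assert np.allclose(Q @ Q, Q)               # Q is again a projection
    assert np.allclose(J @ J, I)               # J is an involution
    assert np.allclose((J + I) / 2, P)         # mean of involution and I
    print(np.sort(np.linalg.eigvals(P).real))  # [0., 1., 1.]
    print(np.sort(np.linalg.eigvals(J).real))  # [-1., 1., 1.]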

change of title

"Eigenvalue, eigenvector, and eigenspace" has been moved to "Eigenvalue, eigenvector and eigenspace". I don't understand why. I am not a native speaker but I know that BibTeX use this rule for the coma when citing a paper with more than one author and that the editors of Phys. rev. A always use that rule for a list of more than two items. Though this change may be correct I believe it was not necessary. This change has some influence because many links to this page are getting now grandsons of the article instead of sons and this has influence! Could someone tell me whether one should keep the current spelling. If noobdy can I will make the mv back. Vb 09:00, 11 October 2005 (UTC)

Why don't you ask User:MarSch who did the move (if I read the history correctly)? The second comma in "Eigenvalue, eigenvector, and eigenspace" is a serial comma, and, as that article says, opinion is divided on whether one needs it or not (I am not a native speaker either). -- Jitse Niesen (talk) 12:56, 11 October 2005 (UTC)
Thank you Jitse for your explanation. I understood something: English is not that easy! :-) I won't do anything and will let the native speakers quarrel about that! Vb 09:55, 12 October 2005 (UTC)

Mona Lisa

But what does that make the blue vector?

The blue vector is not an eigenvector. Vb 09:54, 14 October 2005 (UTC)

I think the last line of the text describing the Mona Lisa should read: "They, along with all vectors with the same scale value parallel to the red vector, form the eigenspace for this eigenvalue." --69.118.239.204 03:18, 24 November 2005 (UTC)

Separating different parts

Could the mathematical and physical sections be a little more distinct? And also the linear/nonlinear. Much of what holds in the linear case fails when you move to nonlinear differential operators, for example, and it seems ridiculous to worry about qualifying everything. While technically the linear case is just a "special case" of the nonlinear case, this is a bit like saying analytic functions are a "special case" of R^2 -> R^2 real-differentiable functions, so why waste time on them. Most people will mean the linear case when they think of "eigenvector", and there is so much you can say in just the mathematical context of vector spaces and linear transformations. 216.167.142.30 20:40, 23 October 2005 (UTC)

I don't understand your point. Many applications of the concept are nonlinear, and the concept of eigenvectors can easily be introduced to lay persons in the most general context. I believe the choice which has been made is more pedagogic. Many who know what an eigenvector is think it appears in linear algebra only (I thought this after my first lecture in algebra): this is untrue, and I think this is one of the reasons why the article was recognized as a featured article. You are speaking about a clearer distinction of physics and math. Physics appears in the examples and the applications only. So why do you bother? Vb 20:51, 23 October 2005 (UTC)
Well, I have a Ph.D. in math and had never heard of "nonlinear eigenvalues", so don't feel bad. My point concerns the moment when the article attempts to present a formal definition of things. It's one thing to introduce a touchy-feely idea of the concept to a general audience; it's another to try to introduce a formal definition later. My gripe is that the general intuitive concept is introduced as being inclusive of both linear and nonlinear, but then when the formal definition comes around, the linear definition is given, and then later on down it's mentioned "oh, yes, it can be nonlinear" but no definition of what this might mean is given. So, as it stands, as a mathematician, I have been made aware of the fact that such a thing as "nonlinear eigenvalue problems" exists, but I have no idea what they are or what they mean. Does this make sense? 216.167.142.30 21:02, 23 October 2005 (UTC)
I'm beginning to wonder if the article isn't becoming too technical altogether. Compare it to fractal. At that article, after a great deal of work and compromise, the article settled into a form generally readable by non-math or non-science types, but without avoiding or sidestepping important issues relating to math or science. Most of the technical details about theory and applications got shuttled off to other articles (e.g. those defining dimension, relationship to chaos theory and dynamical systems, applications to science, etc., etc.). The current article for EEE seems to start off attempting to emulate the fractal article, but then quickly goes into a myriad of technical issues, even algorithmic ones (how to compute complex eigenvalues), and the general big picture seems to be lost. There is more than enough relatively non-technical stuff to talk about to fill out a whole article, without avoiding technical considerations, and there is certainly enough to fill out several technical articles (e.g. the theoretical presentation of EEE in the sole context of vector spaces and linear transformations; the practical and matrix algorithms in the finite-dimensional case; the application to linear differential equations; the nonlinear eigenvalue problem; applications to physics; etc., etc.). ALL of these could form separate articles. As it is, each of these has a "stub" here, so the article reads like several stubs following each other. 216.167.142.30 20:58, 23 October 2005 (UTC)
I suppose my point is, the term "EEE" has probably achieved a certain cultural currency outside of mathematics proper, in the way that "fractal" has. So, in these cases, the main article for such a term (such as "fractal") will have to address the large non-math audience, while other articles address other things. This article will have to address the large non-math audience, but as it is, there is no article set up anywhere written in a style for a mathematical audience. At the least, the article starts talking to one audience, quickly starts talking to another, and ends up succeeding with neither. 216.167.142.30 21:06, 23 October 2005 (UTC)

Well, I have a PhD in physics, but I hate arguments from authority. The point is not whether nonlinear transformations are important but whether one needs linearity to define the concept of eigenvalue. I think not. In fact I believe the picture of Mona Lisa is enough to explain what it is, and that the definition T(v_l) = l*v_l is general enough without assuming anything about the nature of T. However, what you say is not true: many nonlinear transformations are common and their eigenvalues are important quantities. In the example of the vibrating string, the equations of motion need not be linear. If one considers transformations corresponding to time-evolution operators in dynamical systems, the equations of motion are in general not linear. In the case of quantum mechanics, studying the motion of an open subsystem also leads to nonlinear equations of motion. The study of the eigenvalues of the corresponding operators is a very broad field of research. About the level of mathematical and technical detail: I think many students are looking for exactly this kind of information. Your next criticism is about the stub character of the article. You forget that the article has some daughter pages like eigenvalue algorithm, eigenface, etc. However, if you want to rewrite this article, please be bold! But please have a look at the comments on the FAC page and try to take the remarks expressed there into account. Neither of us is a good judge for deciding what laymen in maths expect from this article; the remarks expressed there could provide a kind of guiding thread for this. Vb 16:00, 25 October 2005 (UTC)

Vb, it's also not quite clear to me what is meant with nonlinear eigenvalues. I suppose that even if T is a nonlinear operator, you can define λ to be an eigenvalue whenever T(v) = λv. But can you explain in more detail why such a definition would be useful? -- Jitse Niesen (talk) 16:46, 25 October 2005 (UTC)

Something missing

The article currently has a group of words that appears to be intended as a sentence:

Sometimes, when \mathcal{T} is a nonlinear transformation, or when it is not possible or difficult to provide a basis set.

All that is there is a "when" clause.

Assuming that this is the desired clause and it's the end of the sentence that is missing, then it would be better to change "or difficult to" to "or is difficult to." P0M 06:21, 26 October 2005 (UTC)

blue arrow in mona lisa

I would prefer if it was horizontal in the left picture and then in the transformed picture it would be parallel to the top and bottom sides still. This would be slightly clearer since it is easier to see what exactly happened. --MarSch 09:51, 26 October 2005 (UTC)


perhaps include normal modes

I think one of the best examples of the use of eigenvectors and eigenvalues is in finding normal modes for complex oscillations. It's a decently intuitive use of linear algebra. What do you think?

True. Be bold : do it! Vb 09:34, 1 November 2005 (UTC)


notation

The use of bolding and script is not useful. Vectors are not bolded, except by some physicists sometimes, but even they tire of it. Furthermore, the script T for a simple transformation is odd. Notation should be as simple as possible, without frills that don't mean anything. What do you think? --MarSch 11:33, 1 November 2005 (UTC)

As far as I know (I am a physicist), bolded vectors v are still standard, even if I prefer the notation with an arrow over the letter. Here in this article, the notation v has been used for the vectors and bold v for their representation as vertical arrays in a particular basis set. The same has been done for the transformations 𝒯, as opposed to their matrix representations T in a particular basis set. I am well aware one can forget about this nuance. I am however persuaded this is not pedagogic. In particular I think it is important to use distinct notations for the general definition 𝒯(v_λ) = λv_λ and its representation in the linear and finite-dimensional case Tv_λ = λv_λ. The two objects are really different: 𝒯 is a transformation, a function or a functional, while T is a two-dimensional array of scalars. Using a common notation for both would IMO be misleading. Vb 12:43, 1 November 2005 (UTC)

I agree with MarSch.

Crust 21:21, 1 November 2005 (UTC)


There is no reason to go out of our way and use a calligraphic T here. The transformation and the matrix representation are equivalent (in certain basic circumstances), and the non-calligraphic T is used only once, which makes the notational distinction rather useless. I am going to change these back when I have the time. Dysprosia 12:31, 16 April 2006 (UTC)

footnotes

I think the bottom two footnotes should be in the text and not be footnotes at all. --MarSch 11:35, 1 November 2005 (UTC)

There are three footnotes. The first one is a citation which has nothing to do with the article, only with the source of one picture. The second is there to specify which kind of transformation we intend to speak about. This clarification is very technical and IMO does not belong in the lead. The third one is about the fact that zero vectors are not considered eigenvectors. Both of these very technical statements must be said, but not so early in the article. Historically they replaced parentheses, because some reviewers in the FAC pointed out that these comments were diminishing the text flow. Vb 12:58, 1 November 2005 (UTC)
In parentheses they would also be clumsy, but at least they would be in the right place. I think the fact that the transformation is from a vector space to itself is pretty essential to understanding the concept of an eigenvector. A transformation from one space to another would land you in nonsense. The non-zero claim should be explained: since the zero vector is always an eigenvector of a linear transformation, it is excluded explicitly. The flow is now very bad, since when you see the note, you have to click it to see what it says and then click back again, only to discover that the note should really be in the text. --MarSch 13:20, 1 November 2005 (UTC)
Well, it breaks the flow, you are right. But the ones who look at these are usually assumed to be experts and therefore able to cope with this inconvenience. But specifying things which are relevant only to experts in the body of the text also breaks the flow. I think the Mona Lisa picture shows which kind of transformations we are speaking about. Moreover, in the nonlinear case zero vectors could also be eigenvectors, even if one does not really need to mention this in this article. -- Vb 14:03, 1 November 2005 (UTC)
Aah, the nonlinear zero eigenvector izzz convincing. Thanks. I'm sticking in the other though. --MarSch 13:02, 3 November 2005 (UTC)

german term

This sentence seems odd: "Today the more distinctive term "eigenvalue" is standard in the field, even in German, which used "eigenwert" in the past." If you click on the German version of this article, you discover that it never uses the term "eigenvalue", only "eigenwert". So is the German version out of date, or is the English version wrong in claiming that Germans actually say "eigenvalue"?

I have changed this. Eigenwert is still the correct German term for eigenvalue. Some Germans may use the word "eigenvalue", but this is definitely not correct. Vb 12:49, 1 November 2005 (UTC)
You are both right to some degree. Eigenwert is the correct term in German and this will not change in the future AFAIK. However, the huge majority of serious scientific publications in Germany (above undergraduate level) are written in English nowadays.

eigenvalues of non-linear transformations?

The article defines eigenvalues for an arbitrary transformation T that is not necessarily linear. Is this a common usage? I have only ever encountered the terms eigenvalue, spectrum, etc. with linear operators. It looks like all the examples in the article refer to linear operators.

Crust 15:18, 1 November 2005 (UTC)

The point is not whether eigenvalues of nonlinear transformations are important but whether one needs linearity to define the concept of eigenvalue. I think not. However, many nonlinear transformations are common and their eigenvalues are important quantities. In the example of the vibrating string, the equations of motion need not be linear. If one considers transformations corresponding to time-evolution operators in dynamical systems, the equations of motion are in general not linear. In the case of quantum mechanics, studying the motion of an open subsystem also leads to nonlinear equations of motion. The study of the eigenvalues of the corresponding operators is a very broad field of research. Vb 15:35, 1 November 2005 (UTC)
Can you expand a bit on this subject? Take for instance the string. It can indeed be modeled by a nonlinear equation. But what do you mean by eigenvalue in this case and why is it important? It would be very helpful if you could give some references.
By the way, I don't understand the caption of the picture of the standing wave: "… In this case the eigenvalue is time dependent." Why is this? What is the equation you are using to model the rope? -- Jitse Niesen (talk) 17:44, 1 November 2005 (UTC)


Vb, sure the definition of eigenvalue is coherent even if T is non-linear, but that doesn't necessarily mean that it makes sense to allow T non-linear (or more to the point, that at least a non-trivial minority of mathematicians allow this). It seems to me that in the nonlinear case, very little of interest is going to carry over. For instance, let R be the real numbers and consider T:R->R given by T(x) = x^2. Then T has a continuum of eigenvalues, one for each positive real, which seems pathological for an operator on a one-dimensional space.
By the way, the time translation operator in the string example is linear. We actually have a time translation operator T_s for each real number s with T_s(f) for f=f(x,t) given by T_s(f)(x,t) = f(x,t+s), which is obviously linear in f. The answer to Jitse Niesen's question is that the eigenvalues of the operators T_s do depend on s (which is not at all surprising).
Crust 19:50, 1 November 2005 (UTC)
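To spell out the T(x) = x^2 example: the eigenvalue equation T(x) = λx reads x^2 = λx, and any nonzero x = λ satisfies it. So every nonzero real λ (in particular every positive one) would count as an "eigenvalue", with "eigenvector" x = λ, even though the space is one-dimensional -- which is the pathology described above.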
I am very sorry, Crust, but I don't understand what you mean. In my opinion, in the string case T_t is nonlinear if T_t(f+g) ≠ T_t(f) + T_t(g). This is usually the case for real ropes. One example: high harmonics are damped much faster than low harmonics; thus, after a long enough time, T_t(f+g) = T_t(f) if f and g are low and high harmonic signals respectively. This is the reason why when you pluck a guitar string you usually produce only the lowest harmonic vibration (all higher harmonics present in the initial signal are damped very early). Vb 12:26, 2 November 2005 (UTC)
Crust, I see what you mean. The eigenvalues of T_s indeed depend on s, but not on t. The operator T_s is linear if one assumes that the string is modelled by the wave equation
∂^2u/∂t^2 = c^2 ∂^2u/∂x^2
However, more realistic models yield nonlinear equations, and hence a nonlinear time-translation operator. -- Jitse Niesen (talk) 14:07, 2 November 2005 (UTC)
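For concreteness, under this (standard, linear) wave equation a standing-wave mode can be written u(x,t) = sin(kx) e^{iωt} with ω = ck, and then (T_s u)(x,t) = sin(kx) e^{iω(t+s)} = e^{iωs} u(x,t). Each mode is thus an eigenfunction of the linear operator T_s with eigenvalue e^{iωs}, which depends on the shift s but not on t.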

Well, Jitse, I have thought a lot about nice examples of nonlinear cases. I think two examples of eigenvectors are the molecular orbitals (eigenvectors of the Fock operator) and resonances as defined by Feshbach and Fano (eigenvectors of the effective Hamiltonian in Feshbach-Fano partitioning). In both cases the nonlinear problem can be reduced to an iterative sequence of linear problems (the SCF algorithm). I think eigenvectors of nonlinear operators are also important properties of nonlinear dynamical systems in the neighbourhood of stability islands in phase space. However, the main reason why I introduced the concept of eigenvectors in the most general case is not these advanced topics but that I didn't want to explain to the reader what a linear transformation or a basis set is before explaining the basic concept of an eigenvector. The usual definition of eigenvalues and eigenvectors comes as an advanced topic in linear algebra, after defining all the concepts like basis sets, matrices, etc. Coming from such a definition, eigenfunctions would have appeared as a generalization and not as a direct application of the concept. Vb 09:11, 2 November 2005 (UTC)

I am not familiar with either the Fock operator or the Feshbach-Fano partitioning. However, as far as I can see from the Wikipedia articles on this topic, all operators are still linear. I think most of the foundations of quantum mechanics would break down if the Hamiltonian were not a linear (and even self-adjoint) operator. Are you talking about the Hamiltonian being nonlinear, or some other operator? -- Jitse Niesen (talk) 14:07, 2 November 2005 (UTC)
First, both operators are not fundamental to quantum mechanics. The Fock operator is used as a first approximation, introduced to produce a basis set (the Slater determinants made of molecular orbitals). The Feshbach-Fano Hamiltonian is introduced to deal with quantum subsystems. The Fock operator depends explicitly on the function it is applied to: F(phi) is not simply a matrix multiplication. In the Feshbach-Fano partitioning case, H_eff depends explicitly on the eigenvalue E, and the equation H_eff(E)ψ = Eψ is a nonlinear equation. Vb 14:46, 2 November 2005 (UTC)
I will assume that you're right for the Fock operator. However, in the equation H_eff(E)ψ = Eψ, something else is happening, in that the operator depends on the eigenvalue. Such equations are indeed studied, the simplest one being the so-called quadratic eigenvalue problem (λ^2 A + λB + C)v = 0, where A, B, and C are matrices. But these equations are not of the form T(v) = λv. In dynamical systems, one looks at eigenvalues of the linearized operator (at least, in all cases I'm familiar with). -- Jitse Niesen (talk) 20:51, 2 November 2005 (UTC)
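As a side note, the quadratic eigenvalue problem mentioned above can be reduced to an ordinary one by the standard companion linearization; here is a minimal sketch with random matrices, assuming A is invertible:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 3
    A, B, C = (rng.standard_normal((n, n)) for _ in range(3))

    # With w = lam*v, (lam^2 A + lam B + C)v = 0 becomes L [v; w] = lam [v; w].
    Z, I = np.zeros((n, n)), np.eye(n)
    Ainv = np.linalg.inv(A)
    L = np.block([[Z, I],
                  [-Ainv @ C, -Ainv @ B]])

    for lam in np.linalg.eigvals(L):
        # Each lam makes lam^2 A + lam B + C singular (det ~ 0).
        print(abs(np.linalg.det(lam**2 * A + lam * B + C)))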
I can see your pedagogical point about not wanting to explain the concept of "linear transformation" (by the way, I don't see why you need to talk about "basis sets"). But that would also argue against mentioning "nonlinear". I don't understand what you mean with "Coming from such a definition, eigenfunctions would have appeared as a generalization and not like a direct application of the concept." If you come from the definition: an eigenvalue is a λ for which there is a nonzero v such that Tv = λv, then eigenfunctions are a direct application, aren't they? -- Jitse Niesen (talk) 14:07, 2 November 2005 (UTC)
When I was preparing the Mona Lisa picture I tried the different transformations available in my graphics program (xv). I found that most of the available transformations were nonlinear, so I think nonlinear transformations are something pupils can meet simply by trying out special graphic effects. Of course we can omit nonlinear from the list of examples of transformations, but I don't know whether this helps anybody. Of course you are right when you say Tv = λv can be used just as well for functions -- if T is not defined as a matrix, as it usually is in standard introductions to the topic. Vb 14:46, 2 November 2005 (UTC)
Omitting nonlinear would mean that Wikipedia does not contradict all the standard texts; it seems obvious to me that that is a good thing. Of course, readers can try out nonlinear transformations, but I still think that "eigenvectors" for nonlinear transformation are by far less useful than in the linear case, and that even calling them eigenvectors is an abuse of language. -- Jitse Niesen (talk) 20:51, 2 November 2005 (UTC)

Vb, in your guitar string example I think T_t(g) is very close to zero for large t. So the fact that T_t(f+g) is very close to T_t(f) for large t is a consequence of linearity, not a counterexample. I don't know much physics, so I certainly can't speak with authority, but it looks like the Fock operator is also linear (the fact that Fock operator is a redirect to Fock matrix would certainly seem to suggest this). Crust 22:04, 2 November 2005 (UTC)
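To put that point in symbols: in a linear damped-string model with mode-dependent damping rates γ_n, an initial shape f = Σ_n c_n sin(nπx) evolves, schematically, to T_t(f) = Σ_n c_n e^{-γ_n t} cos(ω_n t) sin(nπx). Higher modes dying out faster (γ_n growing with n) is built into this formula, yet T_t is exactly linear: T_t(f+g) = T_t(f) + T_t(g) always holds; it merely happens that T_t(g) ≈ 0 for large t when g contains only high harmonics.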

Does anyone (other than Vb) think it is appropriate to allow non-linear functions in the definition of eigenvalue? I think that Jitse Niesen's comment above is exactly right; I have never seen this elsewhere. Can anyone find a source that defines eigenvalue this way? As an aside, let me say thanks to Vb for putting a lot of work into this article. Crust 15:45, 3 November 2005 (UTC)

I thought about that, and I think Crust and Jitse are right. Let's get rid of nonlinear. But I also think it would be a bad idea to limit the concept to linear transformations from the beginning, because, in principle, we don't need the concept of linearity to define eigenvalues. Vb 12:45, 4 November 2005 (UTC)

Now that even Vb semi-agrees, I've gone ahead and cleaned up the article on this point. Crust 16:04, 4 November 2005 (UTC)

No, I don't agree with Crust's change. Though I really understand that eigenvectors of nonlinear transformations don't need to be discussed here, there is also no reason to introduce the concept of linearity before it is necessary to do so. Most basic concepts related to eigenvalues and eigenvectors can be understood without understanding what a linear transformation is. Vb 16:48, 4 November 2005 (UTC)

Vb, the point isn't that "eigenvectors of nonlinear transformations" don't need to be discussed; the point is that the phrase is an oxymoron. I agree that for a non-mathematical reader linear transformation is a little more intimidating than transformation (which is itself probably already intimidating), but I think the first priority needs to be to write an accurate article. Readability and accessibility to a general audience are important, but we have no business writing things that are just plain wrong. An alternate way to phrase it might be to say "In linear algebra, a branch of mathematics, ..." and then use the word transformation with the link actually pointing to "linear transformation". I'm not sure which is the less intimidating way to do it.

Crust 18:33, 4 November 2005 (UTC)

OK, I've tried another wording in an attempt to please Vb (= 131.220.68.177). It occurs to me that readers who find linear transformation intimidating will probably also be intimidated by vector space, so I've avoided both. Crust 19:37, 4 November 2005 (UTC)

As you've seen, I reverted your last edits. I don't think linear transformation is intimidating; I simply think it is useless for defining what an eigenvector is. The article is not a subarticle of linear algebra. If you look at the history of this article you will see that the original featured article referred to vector space in the lead through a footnote only. The original idea of this article was (see discussion above) to merge eigenvectors, eigenfunctions, eigenstates and other things into one single article. My idea was to merge all those things under the abstract concept of eigenvector by referring to general transformations only, because I think this abstract concept is in fact easier to explain than the concept of matrix diagonalization. Well, all in all this article is not mine, and if you really want to change it utterly from its featured version, I will not begin an edit war and, well, will let the article die. That's it: Wikipedia is a living organism; its articles may live but also die. Vb 08:42, 7 November 2005 (UTC)

Vb, please note that eigenfunction and eigenstate are just special words for eigenvector in specific contexts (an "eigenfunction" is an eigenvector of a linear transformation on a function space, and "eigenstates" are eigenvectors of certain linear operators in quantum mechanics). In all cases, the operators involved are linear. When we talk about the continuous spectrum, again we are talking about a linear operator. E.g. d/dx is a linear transformation that acts on the vector space of infinitely differentiable functions. The eigenvectors e^(λx) are also called eigenfunctions, since they are functions, and the spectrum of d/dx is continuous (it is the whole real line). There is nothing nonlinear about this situation (perhaps you are somehow confusing the terms "nonlinear" and "infinite-dimensional"?). Vb, you seem to have your own personal definition of eigenvalue, spectrum, etc. that includes nonlinear operators. If I (and Jitse) are wrong and there is some non-trivial community that uses these words the same way you do, OK, but please provide references (and even if that is the case, we will need to note that most authors do not do this). It would be pretty crazy to label a math article (and a featured article to boot) with a Wikipedia:Accuracy dispute tag, but if we can't come to agreement on this I don't know what else to do. Crust 22:05, 7 November 2005 (UTC)

It seems that Vb agrees with not mentioning nonlinear operators, as s/he wrote in an edit comment today "I really agree with not mentioning nonlinear". Vb does not want (and neither do I or you) to restrict to finite dimensions. -- Jitse Niesen (talk) 22:44, 7 November 2005 (UTC)
I hope you're right that Vb now agrees with restricting to the linear case. I fixed the definition in the first sentence of the article to reflect this. Crust 23:06, 7 November 2005 (UTC)

Well, I think I have an argument which could make you understand my physicist's point of view. When you look at the spectrum of an atom, or more simply a vibrating rope or a musical instrument, you observe peaks in the cross section, an oscillating shape scaled by a factor as time evolves, or the production of particular sound frequencies (Helmholtz's Eigentoene). You don't mind whether the response function of your atom, rope or instrument is linear. You simply observe that -- and this is particularly clear in the case of the rope -- the produced signal is simply scaled by the time evolution operator and is therefore an eigenfunction of it. This operator is in general nonlinear. Of course it can usually be linearized in the neighbourhood of its eigenfrequencies, but I think this is very technical. I don't claim that nonlinear aspects should be discussed in this article; I just claim that linearity has nothing to do with the definition of eigenvalue, eigenvector, eigenspace, and geometric multiplicity. Vb 10:14, 8 November 2005 (UTC)

Please provide a reference that defines eigenvalues for nonlinear operators. If you can find such a reference (preferably by a link to a website, otherwise by a page reference to a standard physics text, e.g. Cohen-Tannoudji), then we can note both definitions, i.e. that some authors allow T to be nonlinear, but most do not. Otherwise, we must stick to the standard definition. Sure, you would personally like to allow T to be nonlinear. But what would you say to someone else who might want to allow λ to be an operator instead of just a scalar, etc., etc.? This is supposed to be an encyclopedia, not a personal webpage. Crust 17:16, 8 November 2005 (UTC)

I had not yet looked seriously for nonlinear eigenvalue problems because I think they don't belong in such an introductory article. My point was just a point of hierarchy of concepts: the concept of eigenvalue is in my opinion not dependent on the concept of linearity and therefore doesn't need to refer to it. Of course Cohen-Tannoudji's book does not speak about it, because such theories are far more advanced. However, in order to answer your request, I searched Google for nonlinear operator eigenvalue and found many links, including http://etd.caltech.edu/etd/available/etd-09252002-110658/ and http://arxiv.org/abs/physics/9611006. Well, I agree those references are not authoritative, but I really think that is not the point. I don't want to do research on this. I just want to point out that linearity is a concept which has nothing to do with eigenvalues and that both concepts should not be mixed. If you are interested in this, try some Google searches as I did. You will be astonished how many references you'll find. Vb 10:14, 9 November 2005 (UTC)
The concepts of eigenvalues, eigenvectors and eigenspaces are really concepts from linear algebra. The related articles in all other languages (that I can read) mention linear algebra right at the beginning, and these [4][5][6][7] don't mind mentioning linear transformations explicitly. Hilbert's title [8] makes it clear that his usage is in a linear context. Hamiltonian (quantum mechanics) makes it clear that "H is a Hermitian operator." The sinusoidal standing waves in the rope example are eigenfunctions of the time differential operator, which is linear. Vb seems to be confusing this point: the "response functions" of the oscillator may be nonlinear, but the eigen- terms arise in this context specifically because the differential operator is linear. While there are undoubtedly uses of the term eigenvalue applied to nonlinear operators in the literature, I would guess they are mostly novel extensions of the concept. To exclude the mention of linear algebra and linear transformations in this article based on a dedication to these generalizations is hubris and/or original research. I also think it is silly to exclude linear algebra and linear transformation based on hypothetical readers being unfamiliar with the terms, and yet to include vector space, which is clearly defined as "the basic object of study in the branch of mathematics called linear algebra." Zander 13:34, 9 November 2005 (UTC)
Vb, I had a quick look at the two references you cited. The first (a PhD thesis from 1968) does define the terms eigenvector and eigenvalue for an arbitrary (not necessarily linear) operator on a Banach space (although it restricts the term "spectrum" to linear operators). For the second, I think you again made the mistake discussed by Zander above of confusing what is nonlinear: for that paper, the eigenvalues are eigenvalues of the (linear) Hamiltonian. But all this isn't really the point. Finding a paper or two that (in my view anyway) abuses or extends conventional terminology for the author's convenience is not sufficient. If this really is a standard usage, it should appear in some standard reference. (Sure, Cohen-Tannoudji is an undergraduate text. I just mentioned it because you cited it as a reference and also because I am/was familiar with it. If you find some advanced graduate text that defines eigenvalue this way, that's obviously fine.) Crust 14:55, 9 November 2005 (UTC)

OK, 3 vs 1: I guess I am in the minority, if everybody finds it silly to introduce only the necessary concepts before stating a definition. However, I would like you to search Google for "nonlinear eigenvalue problem", "nonlinear eigenvector", "nonlinear operator eigenvalue", etc. It is very surprising to discover how many authors are interested in this question. I have even found a workshop which looks very serious, with the title "Mini-Workshop on Nonlinear Spectral and Eigenvalue Theory with Applications to the p-Laplacian", and they mention that one of the topics will be "During the Mini-Workshop we will discuss recent progress and open problems in the theory, methods, and applications of spectra and eigenvalues of nonlinear operators." This sentence is also interesting: "Although eigenvalue and spectrum analysis for nonlinear operators have been studied by many researchers in mathematics literature, singular value ..." (Kenji Fujimoto, Nagoya University). "A new spectral theory for nonlinear operators and its applications" (W. Feng, Abstr. Appl. Anal. 2, no. 1-2 (1997), 163-183). "We study the asymptotic behaviour of the eigenvalues of a family of non-linear monotone elliptic operators of the form Ae = -div(ae(x, ∇u)) with oscillating coefficients", from "HOMOGENIZATION OF A CLASS OF NON-LINEAR EIGENVALUE PROBLEMS", presented at the VII French-Latin American Congress on Applied Mathematics by Rajesh Mahadevan and Carlos Conca. I could go on like that for a whole page. Of course I agree with Zander and Crust that the concept of eigenvalue comes from Hilbert's work on linear algebra. I also agree that students are taught about it in the course on linear algebra. But I have to oppose on one point: you don't need linearity to define eigenvectors. You need it when you talk about the spectral theorem, but not to define the basic concepts. Vb 15:09, 9 November 2005 (UTC)

Once again: I have googled a bit for "nonlinear Schroedinger equation" and "nonlinear eigenstate". Those things exist, in the context of Bose-Einstein condensates. Look at this http://aleph.physik.uni-kl.de/~korsch/preprints/Graefe_quantph_0507185.pdf and Nonlinear Schrödinger Equation at EqWorld: The World of Mathematical Equations. Vb 17:22, 9 November 2005 (UTC)
Your most recent two physics examples appear off the mark. The BEC paper is the same issue as the Fock matrix discussion above. The Nonlinear Schrodinger Equation site doesn't even use any word starting with "eigen-"! However, your previous applied math examples look legit. Perhaps we could include this generalization as a new section/subsection (similar to entries from a ring, generalized eigenvalue problem, neither of which are really specific to matrices by the way). While we're at it, there are several other, I think more common, generalizations that we perhaps should mention (other types of generalized eigenvalue problems; pseudo-eigenvalues; generalized eigenvectors e.g. for eigenvalue 0, vectors that are annihilated by some power of an operator but not by the operator itself). Crust 18:07, 9 November 2005 (UTC)
Well, I don't think those two physics examples are off the mark. But from googling around I noticed that nonlinear eigenvalue problems are in fact so common that they could require a section of their own. However, I really don't feel competent for this. Mathematicians specializing in the topic would do much better than I would. Vb 09:51, 10 November 2005 (UTC)
Vb, you've got a great sense of humor. You cite an article that doesn't even contain the string "eigen" and, when challenged, continue to insist that it involves eigenvalues of non-linear operators. Hilarious. The BEC situation is slightly more subtle in that it uses the phrase "nonlinear eigenstate", but if you actually look at the paper, the "nonlinear eigenstates" are eigenvectors of linear operators; it's just a generalized eigenvalue problem (similar to the situation discussed by Jitse with reference to the Fock operator). I think it's pretty clear that the applied math literature on this is, as discussed by Zander, very much a novel generalization of the concept by a relatively small community. One symptom of this is that they don't have a standard definition of "spectrum"; there are many competing definitions that give different answers for the same operator.
I'm going to change "transformation" to "linear transformation" one last time. Feel free to research and start a generalization section covering non-linear operators (though, like I said, there are several other, probably more important, generalizations we don't currently cover). If you're still unhappy, I don't see any other solution than a disputed-accuracy tag. Let me close by saying that you have put a lot of work into this article, and I think it is therefore reasonable to show some deference to you on matters of taste, but not accuracy. This will likely be my last substantive post on this; I just don't have the bandwidth to keep chasing these things down. Crust 18:56, 10 November 2005 (UTC)

Well, we don't agree. I still believe we disagree on the form, not on the content. I don't want to discuss nonlinear transformations either. If you wanted to explain to a friend in five minutes what an eigenvector or an eigenvalue is, would you begin by explaining what a linear transformation is? I don't think so: you would explain what a transformation is, and that eigenvectors are vectors which are just scaled during the transformation. I also have other things to do. Vb 13:28, 11 November 2005 (UTC)

Who is the audience for this article?

This is yet another example of an article on a mathematical topic written by mathematicians for mathematicians.

Why not write in such a way that readers with the necessary pre-requisite knowledge can be led into the topic in a natural and friendly way?

The professional mathematicians have their own forums - typical Wiki readers come here to learn, not to be confronted with a highly technical approach which can be appreciated only by those who know it all anyway.

203.0.223.244 23:05, 1 November 2005 (UTC)

In my opinion, this article provides just the sort of introduction to the general reader that the above comment says it fails to provide. I am not a mathematician, but I am interested in math and science and read non-professional (i.e., popular) literature in these fields. I have seen eigenvalue and eigenvector in my physics reading, but had no understanding of these terms. I found this article because it was featured on the WP Main Page (the editors of which obviously thought that it would interest a broad readership). Now I have some understanding, consistent with my limited mathematical education; those with more education will understand more. Great article! Finell (Talk) 01:44, 2 November 2005 (UTC)
I find this article interesting precisely because it has the problem of explaining something so "hard-core math". I think the ideal in dealing with a deep specialized topic is to naturally sift people toward basic texts that are at the level they can understand. Expecting the article to explain eigen[X]s to people who aren't familiar with algebra would be impossible, but there should be some way of pointing people in the direction of more basic articles that they can read to build up the requisite knowledge. Sending them towards ever-increasingly stratospheric articles is no help at all.
The introduction is the place to really see if consensus can hash out something that fits the encyclopedic tone and yet isn't inaccessible to someone who has only a high-school level math education. I think this article is trying to do too much by lumping all the eigens into one; it's an interesting experiment nonetheless, but each term might have to be defined in its own sentence (especially to avoid a vague definition). The reliance on the figure in the intro is not good, and I'm definitely opposed to referring to it from the text. Metaeducation 14:06, 2 November 2005 (UTC)
You are right. Before it got featured on the main page, the article had reached the following equilibrium (see the version before the 1st of November and the discussion at the Featured Article Candidate page of this article): don't define the eigenXs at all, say only that they are important quantities in math, and refer to the picture for an informal definition, with the exact definition coming as the first section. Yesterday, as the article came on the front page, one new editor said: "the lead doesn't define anything; this is bad style; I will provide an informal definition" - and she did. OK, why not; in my opinion this was a good edit. You seem to disagree and prefer three distinct definitions in the lead. I think this is too technical and doesn't correspond to the editorial line for math articles. Quotation: "It is a good idea to also have an informal introduction to the topic, without rigor, suitable for a high school student or a first-year undergraduate, as appropriate. For example, 'In the case of real numbers, a continuous function corresponds to a graph that you can draw without lifting your pen from the paper, that is, without any gaps or jumps.' The informal introduction should clearly state that it is informal, and that it is only stated to introduce the formal and correct approach. If a physical or geometric analogy or diagram will help, use one: many of the readers may be non-mathematical scientists." Defining a complicated math thing in one sentence and without a picture is a very hard task. I thought the figure was explicit enough and was a good informal and intuitive definition. From reading the positive comments on the FAC page I had the feeling I was not alone. If you have another opinion, don't hesitate: be bold! Vb 15:02, 2 November 2005 (UTC)

Old English?

Is the relation of the German word eigen to Old English really in the scope of this article? To me it's distracting and just a piece of trivia in this context. --doerfler 15:35, 10 November 2005 (UTC)

I don't mind if you remove this. Vb 16:07, 10 November 2005 (UTC)

Linear eigenfunction operators

Hello! I've studied mathematical physics and encountered some problems about   (which appears in the chapter on eigenfunction methods).[9] I wish someone could tell me why the following hold.

Now we have a linear eigenfunction that gives

 
 

This is the first question. The second one is

 

where

 is the weight function. What role (or physical meaning) does   play?

PS: I'm not quite sure about what I wrote. If there is any mistake, correct me! ^^

Connection between spectral radius and matrix norm for normal matrices

The reasons for my reverts are as follows:

  1. Only square matrices have eigenvalues.
  2. The operator norm is not the least upper bound for the moduli of its eigenvalues; the spectral radius is the least upper bound.
  3. Even for normal matrices, different vector norms give different operator norms. For instance, the matrix
$$\frac{1}{\sqrt{2}}\begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix}$$
(rotation over 45 degrees) is orthogonal, hence normal. Its spectral norm is $1$ and its maximum column sum norm is $\sqrt{2}$.

-- Jitse Niesen (talk) 02:13, 5 April 2006 (UTC)
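A quick numerical check of point 3, for anyone who wants to verify it (a numpy sketch of my own, not part of the original discussion):

```python
import numpy as np

# Rotation over 45 degrees: orthogonal, hence normal.
R = (1 / np.sqrt(2)) * np.array([[1.0, -1.0],
                                 [1.0,  1.0]])

print(np.linalg.norm(R, 2))                  # spectral norm: 1.0
print(np.linalg.norm(R, 1))                  # max column sum norm: ~1.4142 (sqrt(2))
print(np.max(np.abs(np.linalg.eigvals(R))))  # spectral radius: 1.0
```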

I assume the maximum column sum norm is the operator norm induced by the L^1 norm on C^2; which unit vector do you then take to achieve sqrt(2)? Mct mht 02:22, 5 April 2006 (UTC)

Shoot, you're right, sorry Jitse. Something might be wrong with your notation there, though. The operator norm induced by the L^{\infty} norm of that matrix is indeed sqrt(2). Thanks. Mct mht 02:33, 5 April 2006 (UTC)

Statement about 2-norm of a normal matrix added back. Mct mht 02:46, 5 April 2006 (UTC)

Eigenvalues & Eigenvectors of matrices

Under the eigenvalues & eigenvectors of matrices section, could the part regarding "Finding eigenvectors" be expanded? From a limited understanding of eigenvectors, it is unclear to me how to actually find an eigenvector from an eigenvalue. Hope someone would be able to expand on this part of the article.
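Until the section is expanded, here is the recipe in miniature (my own toy example, not from the article): for each eigenvalue λ, the eigenvectors are the nonzero solutions v of $(A - \lambda I)v = 0$, found by Gaussian elimination. For instance:

$$A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix},\quad \lambda = 3:\qquad (A - 3I)v = \begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix}\begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \;\Rightarrow\; v_1 = v_2,$$

so every nonzero multiple of $(1, 1)^\top$ is an eigenvector for λ = 3 (and similarly $(1, -1)^\top$ for the other eigenvalue, λ = 1).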

Normal matrix

Not a bad article, but I am surprised that Wikipedia calls it "a featured article, (...) one of the best articles produced by the Wikipedia community." Just looking at it for one minute one can spot a serious error: The statement that "a matrix is diagonalizable if and only if it is normal" is absolutely false. I did not read the rest of the article after a mistake of this caliber. Grand Vizier 20:46, 18 April 2006 (UTC)

You're right, it should be "a matrix is unitarily diagonalizable if and only if it is normal". Thanks for mentioning this. However, I hope that in the future, you will be more constructive and correct it yourself; otherwise it might appear that you just want to show off how smart you are. -- Jitse Niesen (talk) 03:33, 19 April 2006 (UTC)

No need to feel threatened. It is wrong if it is wrong. Grand Vizier 20:51, 19 April 2006 (UTC)

I typed that sentence. I thought it would seem obvious that unitary equivalence is meant. It would be clearly wrong otherwise; the SVD says every matrix is "diagonalizable", even non-square ones. Mct mht 17:12, 19 April 2006 (UTC)

"Diagonalizable" has a specific meaning in matrix theory, which is explained in diagonalizable matrix. With this meaning, not every matrix is diagonalizable. As it says in the article, "a matrix is diagonalizable if and only if the algebraic and geometric multiplicities coincide for all its eigenvalues". -- Jitse Niesen (talk) 05:28, 20 April 2006 (UTC)

Error in the generalized eigenvalue problem

It looks like an error got inserted. Take for example matrices   and  , then  ,  , and the solutions to the generalized eigenvalue problem   are the pure imaginary eigenvalues   and  .

Therefore, there exists a pair of real symmetric matrices such that the solutions of the corresponding eigenvalue problem are not real ... the contrary is stated on the page ... and I'm about to take it out ... Actually, what is stated there is true for each of the matrices independently, for its own simple eigenvalue problem, but for the generalized eigenvalue problem it is not true. tradora 00:51, 9 May 2006 (UTC)
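A minimal instance of the phenomenon (my own choice of matrices; the poster's originals did not survive archiving): both A and B below are real symmetric, but B is indefinite, so the usual reality theorem (which assumes B positive definite) does not apply:

$$A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},\quad B = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}:\qquad \det(A - \lambda B) = \det\begin{pmatrix} -\lambda & 1 \\ 1 & \lambda \end{pmatrix} = -\lambda^2 - 1 = 0 \;\Rightarrow\; \lambda = \pm i.$$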

Vibrational modes - erroneous treatment in the article

The current version of the article claims that the eigenvalue for a standing-wave problem is the amplitude. This is an absurd and totally nonstandard way to formulate the problem, if it even can be made to make sense at all. For vibrational modes, one writes down the equation for the acceleration in the form:

$$\frac{d^2 u}{dt^2} = -A u$$

where $u$ is the amplitude and $A$ is the operator giving the acceleration = force/mass (i.e. from the coupled force equations). In a linear, lossless system, this is a real-symmetric (Hermitian) linear operator. (More generally, e.g. if the density is not constant, one writes it as a generalized Hermitian eigenproblem.) Ordinarily, $A$ is positive semi-definite too.

To get the normal modes, one writes $u$ in the form of a harmonic mode: $u(t) = u_0 e^{i\omega t}$ for a frequency ω. (Of course, the physical solution is the real part of this.) Then one obtains the eigenequation:

$$A u_0 = \omega^2 u_0$$

and so one obtains the frequency from the eigenvalue. Since $A$ is Hermitian and positive semi-definite, the frequencies are real, corresponding to oscillating modes (as opposed to decaying/growing modes for complex ω). Furthermore, because $A$ is Hermitian the eigenvectors are orthogonal (hence, the normal modes), and form a complete basis (at least, for a reasonable physical $A$). Other nice properties follow, too (e.g. the eigenvalues are discrete in an infinite-dimensional problem with compact support).
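As a concrete miniature of this recipe (my own sketch, not from the thread: two unit masses tied to the walls and to each other by unit springs), the eigenpairs of the Hermitian operator A give the frequencies and the orthogonal mode shapes directly:

```python
import numpy as np

# Stiffness operator for two unit masses coupled by three unit springs
# (wall-mass-mass-wall); real-symmetric and positive definite.
A = np.array([[ 2.0, -1.0],
              [-1.0,  2.0]])

# Substituting u(t) = u0 * exp(i*w*t) into u'' = -A u gives  A u0 = w^2 u0.
evals, evecs = np.linalg.eigh(A)   # eigh is for Hermitian/symmetric operators
freqs = np.sqrt(evals)             # real frequencies, since A is positive definite

print(freqs)   # [1.0, 1.732...]: in-phase and out-of-phase normal modes
print(evecs)   # orthogonal mode shapes, as guaranteed for Hermitian A
```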

If losses (damping) are included, then $A$ becomes non-Hermitian, leading to complex ω that give exponential decay.

Notice that the amplitude per se is totally arbitrary, as usual for eigenvalue problems. One can scale $u_0$ by any constant and still have the same normal mode.

User:Jitse Niesen claimed at Wikipedia:Featured article review/Eigenvalue, eigenvector and eigenspace that the article was talking about the eigenvalues of the "time-evolution operator" $u(0) \mapsto u(t)$. The first problem with this is that such a map is not a well-defined time-evolution operator, because this is a second-order problem and $u(t)$ is not determined by the initial value $u(0)$. You could convert it to a set of first-order equations of twice the size, but then your eigenvector is not the mode shape any more; it is the mode shape along with the velocity profile. Even then, the eigenvalue is only a factor multiplying the amplitude, since the absolute amplitude is still arbitrary. Anyway, it's a perverse approach; the whole point of working with harmonic modes is to eliminate t in favor of ω.

Note that the discussion above is totally general and applies to all sorts of oscillation problems, from discrete oscillators (e.g. coupled pendulums) to vibrating strings, to acoustic waves in solids, to optical resonators.

As normal modes of oscillating systems, from jumpropes to drumheads, are probably the most familiar example of eigenproblems to most people, and in particular illustrate the important case of Hermitian eigenproblems, this subject deserves to be treated properly. (I'm not saying that the initial example needs the level of detail above; it can just be a brief summary, with more detail at the end, perhaps for a specific case. But it should summarize the right approach in any case.)

—Steven G. Johnson 15:29, 24 August 2006 (UTC)

I readily agree that my comment at FAR was sloppy, and I'm glad you worked out what I had in mind. Originally, I thought that the "perverse" approach was not a bad idea to explain the concept of eigenvalues/functions, but I now think that it's too confusing for those that have already seen the standard approach. -- Jitse Niesen (talk) 12:14, 25 August 2006 (UTC)
Thanks for your thoughtful response. Let me point out another problem with saying that the "amplitude" is the eigenvalue. Knowing the amplitude at any given time is not enough to know the behavior or frequency. You need to know the amplitude for at least two times. By simply calling the eigenvalue "the" amplitude, you've underspecified the result. —Steven G. Johnson 15:23, 25 August 2006 (UTC)


Request for Clarificaion of the Standing wave example for eigen values

The standing wave example of eigenvalues isn't very clear. It is just stated that the standing waves are the eigenvalues. Why is this the case? How do they fit the definition / satisfy the criteria for being an eigenvalue? —The preceding unsigned comment was added by 67.80.149.169 (talkcontribs) .

Actually, the wave is the eigenfunction, not the eigenvalue. Did you not see this part?:

The standing waves correspond to particular oscillations of the rope such that the shape of the rope is scaled by a factor (the eigenvalue) as time passes. Each component of the vector associated with the rope is multiplied by this time-dependent factor. This factor, the eigenvalue, oscillates as time goes by.

I don't see any way to improve that. —Keenan Pepper 03:10, 7 September 2006 (UTC)
Except that calling the amplitude the "time-dependent eigenvalue" is horribly misleading and bears little relation to how this problem is actually studied, as I explained above. Sigh. —Steven G. Johnson 20:54, 7 September 2006 (UTC)

I think the new version of the rope example is exactly what I was trying to avoid! I think the time-evolution operator is something anybody can understand. This is also the only operator which is interesting from an experimental point of view. It doesn't require any math knowledge. It doesn't even require the rope to be a Hamiltonian system. The shape of the rope is an eigenfunction of this operator if it remains proportional to itself as time passes. That it is an eigenvector of the frequency operator (the Hamiltonian) is irrelevant, and also valid only if the system has a Hamiltonian! Vb

As I explained above, the shape of the rope isn't the eigenvector of this operator. Because it is second order in time, you would need the shape of the rope plus the velocity profile. And there are other problems as well, as I explained above. —Steven G. Johnson 12:48, 8 September 2006 (UTC)
Sorry for the delay. Perhaps then the statement "the standing wave is the eigenfunction" could be elaborated on a little more. I'm having trouble visualising what that means, and how the notion of a vector whose direction remains unchanged by the transformation applies to this example. My apologies for the confusion. --165.230.132.126 23:35, 11 October 2006 (UTC)

Orthogonality

When are eigenvectors orthogonal? Symmetric matrix says "Another way of stating the spectral theorem is that the eigenvectors of a symmetric matrix are orthogonal." So A is symmetric => A has orthogonal eigenvectors, but does that relation go both ways? If not, is there some non-trivial property of A such that A has orthogonal eigenvectors iff ___? —Ben FrantzDale 20:36, 7 September 2006 (UTC)

Eigenvectors corresponding to distinct eigenvalues are orthogonal iff the matrix is normal. In general, eigenvectors corresponding to distinct eigenvalues are linearly independent. Mct mht 22:31, 7 September 2006 (UTC)
btw, "symmetric matrix" in the above quote should mean symmetric matrix with real entries. Mct mht 22:34, 7 September 2006 (UTC)
Thanks. Normal matrix does say this; I'm updating other places which should refer to it such as symmetric matrix, spectral theorem, and eigenvector. —Ben FrantzDale 13:30, 8 September 2006 (UTC)
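A small numerical sketch of the iff (my own illustration, not from the thread): a rotation matrix is normal and its eigenvectors come out orthogonal, while a non-normal matrix's do not:

```python
import numpy as np

def eigvec_inner(M):
    """Inner product of the two unit eigenvectors of a 2x2 matrix."""
    _, V = np.linalg.eig(M)
    return np.vdot(V[:, 0], V[:, 1])   # 0 means orthogonal

normal = np.array([[0.0, -1.0],
                   [1.0,  0.0]])       # 90-degree rotation: M M^T = M^T M
non_normal = np.array([[1.0, 1.0],
                       [0.0, 2.0]])    # upper triangular, not normal

print(abs(eigvec_inner(normal)))       # ~0: orthogonal eigenvectors
print(abs(eigvec_inner(non_normal)))   # ~0.707: not orthogonal
```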

An arrow from the center of the Earth to the Geographic South Pole would be an eigenvector of this transformation

I think the geographic pole is different from the magnetic pole... -- Boggie 09:57, 21 November 2006 (UTC)

You are correct that the geographic south pole is different from the magnetic south pole. However, it is the geographic pole that we need here, isn't it? -- Jitse Niesen (talk) 11:33, 21 November 2006 (UTC)
Yes, you are right; I just mixed them up ;-) -- Boggie 16:51, 22 November 2006 (UTC)

Existence of Eigenvectors, eigenvalues and eigenspaces

Some confusion about the existence of eigenvectors, eigenvalues and eigenspaces for transformations can possibly arise when reading this article (as it did when I asked a question at the mathematics reference desk, after reading the article half-asleep - my bad!).

My question here would be: what could be done to correct it? The existence of eigenvectors is a property of the transformation, not a property of the eigenvector itself, making it unclear which of the pages needs revision. Does anybody have any ideas?

Also this article seems a bit long, I suggest placing the applications in a separate article (as are some of the theorems in the earlier sections).

    • Putting the applications elsewhere would be very damaging. Math without applications is useless, i.e. not interesting!
I disagree. All truly interesting math is useless.  ;) —Preceding unsigned comment added by Crito2161 (talkcontribs) 03:02, 8 March 2008 (UTC)

Continuous spectrum

Figure 3 in the article has a nice picture of the absorption spectrum of chlorine. The caption says that the sharp lines correspond to the discrete spectrum and that the rest is due to the continuous spectrum. Could someone explain some things?:

  • What operator is it that this is the spectrum of?
  • If the discrete spectrum corresponds to eigenvalues, are those eigenvalues shown on the x or y axis of the graph?
    • The x-values, i.e. the energies of the atomic eigenstates.
  • What would the corresponding eigenfunctions be for elements of the spectrum?

Thanks. —Ben FrantzDale 15:35, 20 December 2006 (UTC)

The article suggests that the term spectrum is used only for an infinite set of eigenvalues. Doesn't the phrase "matrix spectrum" mean the set of its eigenvalues, whether or not that set is infinite? See http://mathworld.wolfram.com/MatrixSpectrum.html. If you search Wikipedia for "spectrum of a matrix" you are redirected to this article, yet this article only talks about spectrum in the continuous sense. —Preceding unsigned comment added by 66.75.47.88 (talk) 17:14, 20 June 2009 (UTC)

Going back to featured status!

I am so sad this article is not featured anymore. I did the work to get it featured. I have no time to improve it now. I wish someone could do the job. The article is not pedagogic anymore. Such a pain! Vb 09:38, 28 December 2006 (UTC)

minor typo?

Excerpt from the text: "Having found an eigenvalue, we can solve for the space of eigenvectors by finding the nullspace of A − (1)I = 0."

Shouldn't this be ...finding the nullspace of A - λI = 0?

216.64.121.34 04:06, 8 February 2007 (UTC)

small technical question

It says in the article: A vector function A is linear if it has the following two properties:

   Additivity: $A(\mathbf{x}+\mathbf{y})=A(\mathbf{x})+A(\mathbf{y})$
   Homogeneity: $A(\alpha \mathbf{x})=\alpha A(\mathbf{x})$

where x and y are any two vectors of the vector space L and α is any real number.[10]

Doesn't α just have to be a scalar from the base field of the vector space, not necessarily a real number? LkNsngth (talk) 23:43, 11 April 2008 (UTC)

Thanks for the good and concrete question. I am checking it and will try to answer ASAP. --Lantonov (talk) 06:14, 14 April 2008 (UTC)
Seems that you are right. Korn and Korn (section 14.3-1) say that α is a scalar. I could not find it said there that α is necessarily real. I will look through some more books for this definition and change it accordingly. --Lantonov (talk) 10:24, 14 April 2008 (UTC)
On the other hand, Akivis, where I took this definition from, explicitly states that α is a real number. --Lantonov (talk) 15:11, 14 April 2008 (UTC)
Mathworld says that α is any scalar. --Lantonov (talk) 15:14, 14 April 2008 (UTC)
Shilov - any α --Lantonov (talk) 15:17, 14 April 2008 (UTC)
Strang - all numbers ... --Lantonov (talk) 15:24, 14 April 2008 (UTC)
Linear transformation - any scalar α --Lantonov (talk) 15:28, 14 April 2008 (UTC)
Kuttler - a and b scalars --Lantonov (talk) 15:34, 14 April 2008 (UTC)
Beezer -   --Lantonov (talk) 15:37, 14 April 2008 (UTC)
Now the last one really convinced me. Changing ... --Lantonov (talk) 15:37, 14 April 2008 (UTC)

Left and right eigenvectors

Where to put this in the article? Something to the effect: A right eigenvector v corresponding to eigenvalue λ satisfies the equation $A v = \lambda v$ for matrix A. Contrast this with the concept of a left eigenvector u, which satisfies the equation $u^\top A = \lambda u^\top$. ?? --HappyCamper 04:39, 15 March 2007 (UTC)
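A short numerical illustration of the two notions (my own sketch; for a real matrix, a left eigenvector of A is just a right eigenvector of A-transpose):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

w_r, V = np.linalg.eig(A)     # right eigenvectors: A v = lambda v
w_l, U = np.linalg.eig(A.T)   # left eigenvectors:  u^T A = lambda u^T

print(w_r, w_l)                                      # the same spectrum either way
print(np.allclose(A @ V[:, 0], w_r[0] * V[:, 0]))    # True
print(np.allclose(U[:, 0] @ A, w_l[0] * U[:, 0]))    # True
```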

It's briefly mentioned in the section "Entries from a ring". Are you saying that you want to have it somewhere else? -- Jitse Niesen (talk) 06:16, 15 March 2007 (UTC)
No no...just that I didn't see the note on the first reading. I think the little note is plenty for the article already. --HappyCamper 23:36, 16 March 2007 (UTC)

Just a Note

I think the Mona Lisa picture and its description help to clarify these concepts very much! Thanks to the contributor(s)! —Preceding unsigned comment added by 128.187.80.2 (talkcontribs) 14:23, 27 March 2007

Comment on level of presentation

As far as mathematics articles go, this is certainly one of the best. To the general reader, however, it remains quite arcane, but I'm not too sure how it could be improved, or whether a little dumbing down would be either possible or desirable.

To begin with the earliest eigenvalue problem, I understand, was that of the frequency of a stretched string. Pythagoras didn't solve it, but observed the simple relationship between string lengths, which formed harmonies, and also that the frequency depended on the parameters (string tension and length) and not on the excitation.

Using the examples of the Earth's axis and Schrodinger's equation endows the article with a certain ivory-tower feel, as if this subject were only relevant to the 'difficult' problems which 'clever' people preoccupy themselves with. Maybe the intention is to impress rather than inform, but I doubt it; the attempt to communicate with Joe Public appears sincere.

However, eigenvalue problems are to be found everywhere; they are not obscure mathematical concepts applicable only to arcane problems. They are to be found in stock market crashes and population dynamics. Even the 12-inch ruler on the desk, with its buckling load, illustrates an eigenvalue problem. They are to be found in every musical instrument from tin whistle to church organ. The swinging of a pendulum, the stability of a bullet, the automatic pilot of an aircraft. Such questions as: why does a slender ship turn turtle? Why does a train sway at high speed? Why is the boy racer's dream car a 'whiteknuckles special'? Why do arrows have flights? Why doesn't a spinning top fall over? All of these are much more immediate and relevant to the lay person, who is surely the one we seek to educate.

Some articles do indeed fall into the category of 'ignotum per ignotius': an explanation which is more obscure than the thing it purports to explain. This is not one of them. Gordon Vigurs 07:45, 10 April 2007 (UTC)

The introduction seems particularly opaque. I'm going to try to say something that a non-mathematician could understand. Rick Norwood 14:29, 27 April 2007 (UTC)
Finally, some Wikipedia mathematicians that are actually concerned about the general reading public. :) 70.187.40.16 (talk) 04:54, 29 June 2008 (UTC)

picture

The picture of the Mona Lisa appears only to have been turned a little, and it is not clear that the blue arrow has changed direction. A more extreme distortion, where the blue arrow was clearly pointing in a different direction, would make the point better. If the length of the red arrow were also changed, that would also be good. Rick Norwood 15:02, 27 April 2007 (UTC)

You misunderstand. It hasn't been turned at all. It's been sheared. There's a link in the caption. The change of the arrow direction is quite clear, and the length of the red arrow does not change in a shear. -- GWO

I understand that it has been sheared. That's why I said "appears" instead of "is". But the average viewer is going to assume that it has been turned, because that's what it "looks like". Also, the change in direction of the blue arrow is not clear; it is a few degrees at most. Which is why a more extreme distortion would better convey the idea that the red arrow keeps its direction while the blue arrow changes direction. Rick Norwood 12:42, 28 April 2007 (UTC)

I find the caption of the Mona Lisa picture a little misleading. The red vector is not an eigenvector because it hasn't been stretched or compressed (that would be the special case of its eigenvalue being equal to 1); it's an eigenvector because the axis it lies along hasn't changed direction. I feel that this should be clarified. 18.56.0.51 03:10, 25 May 2007 (UTC)

Illustration

I don't think the current Mona Lisa illustration is as helpful as it could be. In very many applications, the operator is symmetric, and so there is a set of n real orthogonal eigenvectors. In the case of an image transformation, that would be an orthotropic scaling, not necessarily axis-aligned. The current image shows a shear, which has only one independent eigenvector and so isn't representative of most "typical" eigenvector problems. I propose replacing the sheared Mona Lisa with an orthotropically stretched Mona Lisa with two eigenvectors. —Ben FrantzDale 07:14, 29 April 2007 (UTC)
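The defective nature of the shear is easy to check numerically (a sketch; the shear factor 0.5 is my arbitrary choice):

```python
import numpy as np

shear = np.array([[1.0, 0.5],
                  [0.0, 1.0]])   # horizontal shear, as in the Mona Lisa figure

w, V = np.linalg.eig(shear)
print(w)   # [1. 1.]: a repeated eigenvalue
print(V)   # both columns lie (numerically) along (1, 0): one eigendirection only
```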

Let me amend that. I think the example would do better with a Cartesian grid rather than a picture, since a Cartesian grid has a clear origin and so is easier for a novice to connect to linear algebra. Those new to linear algebra won't necessarily connect image transformation with matrix operations. —Ben FrantzDale 12:01, 25 May 2007 (UTC)
Perhaps a grid could be superimposed on the image? I think it is not such a bad idea to illustrate an "atypical" problem (e.g. to combat the misperception that there is always a basis of eigenvectors). On the other hand, I don't think it is a good lead image, and the text there should be used to annotate the copy of the image in the "Mona Lisa" example (which should be renamed to avoid OR). Something like the electron eigenstates image seems like a more appealing lead image to me. Geometry guy 16:54, 9 June 2007 (UTC)

Great contribution

The lead has been very much improved recently. The definition of linear transformation is now very clear! Thanks to the editors! Vb 12:50, 30 April 2007 (UTC)

Return the serial comma?

The article summary repeatedly uses the serial comma for lists — it seems to me that the title is the only place without it. Admittedly, I'd like to rename the article just for my love of commas; but, as a legitimate reason, shouldn't we rename it for consistency? ~ Booya Bazooka 15:43, 12 June 2007 (UTC)

Support. —Ben FrantzDale 16:59, 4 November 2007 (UTC)
Wikipedia on the whole uses the serial comma reasonably consistently. For this reason, and because the serial comma is used throughout the article, and is often needed for clarity, the title of the page, and any remaining outliers should be fixed, for consistency above all.--169.237.30.164 (talk) 11:05, 13 December 2007 (UTC)

Sense or direction?

"An eigenvalue of +1 means that the eigenvector stays the same, while an eigenvalue of −1 means that the eigenvector is reversed in direction."

Shouldn't it be "sense" instead of "direction"? By definition, the direction of an eigenvector doesn't change.

Somebody should fix it up.

Answer: I'm not sure what you mean here. "Sense" has the same meaning as "direction", but "direction" is clearer because "sense" has other meanings. And yes, the direction of an eigenvector can be reversed by a transformation. The basic equation is AX = λX, where A = transformation matrix, X = eigenvector and λ = eigenvalue. So if λ is negative, the transformed eigenvector AX points in the direction opposite to the original eigenvector X. I believe that this sentence is correct as is. Dirac66 22:01, 3 August 2007 (UTC)

I think "reversed in direction" makes excellent sense to a lay audience. The direction of an eigenvector is only unchanged if you believe that North is the same direction as South. -- GWO

Misleading colors

The colors used to highlight the text in the first picture are rather misleading - blue is usually used for links, while red usually indicates broken links. I think just bolding the terms would be sufficient; the colors are perhaps unnecessary.--Freiddie 18:55, 4 August 2007 (UTC)

Note of thanks and a brief suggestion regarding need for eigenvalues/eigenvectors

Just a quick note of thanks especially for the practical examples of eigenvectors/values such as the Mona Lisa picture and the description of the vectors of the earth and the rubber sheet stretching. I find that having a number of different descriptions helps me generalise the concept, much more so than the mathematical treatment.

I certainly don't imply that the rigorous mathematical treatment should be removed, as this is also important; however, I do feel that mathematical symbology (while very concise and important) often hides the meaning of what can actually be simple concepts (at least to the beginner). Thanks to all for providing both practical and theoretical aspects in this article.

As a suggestion, it would also be very helpful to have some simple examples of why eigenvalues/eigenvectors are useful in some different problem domains. For example, why would I want to find the eigenvectors in the first place? How do they help us?

131.181.33.112 07:12, 4 September 2007 (UTC)

Poor Example

The example of how to find an eigenvalue and eigenvector is far too simplistic. Steps are skipped or poorly explained so often that if the reader doesn't already have a good understanding of the material he will not be able to follow along. I think the whole thing needs to be redone, starting with a more typical problem: one with two distinct eigenvalues and two distinct eigenvectors. Because the example has only one eigenvector, it is misleading about what to expect when working with eigenvalues. Marcusyoder 05:35, 17 October 2007 (UTC)


Agreed, the example shown is a poor choice. It should be accompanied by another solution, possibly a 3x3 matrix which does not have arbitrary eigenvalues. At minimum, it should explain that x=y is not always the case and what to do if they get an exact eigenvector. 65.24.136.10 (talk) 00:56, 8 May 2009 (UTC)

Infinite dimensional spaces

I removed the following paragraph from the Infinite-dimensional spaces section.

The exponential growth or decay <!---of WHAT??--> provides an example of a continuous spectrum, as does the vibrating string example illustrated above. The hydrogen atom is an example where both types of spectra appear. The bound states of the hydrogen atom correspond to the discrete part of the spectrum while the ionization processes are described by the continuous part.

(I have also made visible a hidden note in the paragraph.) I am not sure this fits with the rest of the section, and I have a hard time seeing what it is driving at. Thoughts? --TeaDrinker 23:27, 10 November 2007 (UTC)

In answer to "of WHAT??", I suggest the sentence makes more sense if the first word "The" is deleted, leaving "Exponential growth or decay provides an example ..." which would refer to exponential growth or decay in general. Dirac66 00:03, 11 November 2007 (UTC)

I reintroduced the paragraph and tried to respond to your comments. Vb 15:24, 16 January 2008 (UTC) —Preceding unsigned comment added by 87.78.200.17 (talk)

Algebraic multiplicity

Redirects here, but was not mentioned in the text, so I put in a short definition. The formal definition is:

an eigenvalue λ of A has algebraic multiplicity $\mu$ if $p(t) = (t - \lambda)^{\mu}\, q(t)$, where $p(t)$ is the characteristic polynomial and $q(\lambda) \neq 0$.

Thomas Nygreen (talk) 00:10, 6 December 2007 (UTC)
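A standard illustration of the definition (my own example, for concreteness): for

$$A = \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix},\qquad p(t) = \det(tI - A) = (t - 2)^2,$$

the eigenvalue λ = 2 has algebraic multiplicity 2, while its geometric multiplicity is only 1, since the eigenspace is spanned by $(1, 0)^\top$ alone.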

Applications

Factor_analysis, Principal_components_analysis, Eigenfaces, Tensor of inertia, Stress_tensor.

To me at least, the first three are actually the exact same application, just with different types of data. Each of those three has a spread of data, looked at from its center of mass; in the case of the tensor of inertia, the data points are each differential element of mass. Then, to find the principal axes, if you didn't know about the eigenvalue/eigenvector shortcut, you'd ask what direction of a best-fit line would give you the least squared error / moment of inertia. The derivative of error with respect to direction will be zero at the local minima & maxima.

Once you have the components in the covariance matrix / tensor of inertia, you notice that stresses and shears behave exactly the same way as the variance and covariance components. I mean, I think it's no accident that people use sigma as the symbol for stresses. In fact, I've found it helpful to use Mohr's circle in those earlier cases of statistics and inertia.
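To make the parallel concrete, here is the statistics version in code (my own sketch; the data and the mixing matrix are arbitrary). The eigenvectors of the covariance matrix are the principal axes and the eigenvalues are the variances along them, exactly as the eigenvectors of the inertia tensor are the principal rotation axes:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2)) @ np.array([[3.0, 1.0],
                                          [0.0, 0.5]])   # correlated 2-D data

C = np.cov(X, rowvar=False)      # symmetric covariance matrix
var, axes = np.linalg.eigh(C)    # eigenvalues: variances along principal axes
print(axes)                      # columns: orthonormal principal directions
print(var)                       # smallest- and largest-variance axes
```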

This is a long way of saying that I don't think we should let terminology get in the way of the explanation. There seem to be a few people very interested in this article, so I won't just redo that section without your input. Sukisuki (talk) 14:58, 29 December 2007 (UTC)

I agree with these comments. However, I think this section should appear not as one more mathematical concept, but as examples of practical applications in utterly distinct domains. The stress should be on the applications, not on the theory behind them. This section must appeal to the users of math, not to the mathematicians! Vb 15:04, 16 January 2008 (UTC) —Preceding unsigned comment added by 87.78.200.17 (talk)

When is P, the matrix of the Eigenvectors, to be normed?

Is it right that P never has to be normalized, but that it is easier to calculate P⁻¹ if it is, since then, if P is orthogonal, $P^{-1} = P^{\top}$?
On my calculator (TI-89), however, P is always normalized when searching for the eigenvectors... --Saippuakauppias 09:14, 16 January 2008 (UTC)
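A quick numerical check of the orthogonal case (my own numpy sketch): for a symmetric matrix, a routine that returns normalized eigenvectors gives an orthogonal P, so the inverse comes for free:

```python
import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # symmetric matrix

w, P = np.linalg.eigh(S)     # eigh returns orthonormal eigenvector columns

print(np.allclose(P.T, np.linalg.inv(P)))     # True: P^-1 = P^T
print(np.allclose(P.T @ S @ P, np.diag(w)))   # True: diagonalization via P^T
```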

Vibrating System

I feel that this article should reflect the importance of the eigenvalue problem in solving for the frequencies and damping ratios of reciprocating systems. I think this should be in the 'applications' section. 130.88.114.226 (talk) 13:03, 23 January 2008 (UTC)

Structure

I think the definite turn-off point is the definitions. Do we have to make so many definitions in such a small space, even of things we haven't mentioned so far? MathStyle says that it is good to have two definitions for a thing: one formal and one informal, with many explanations around each definition. Changing the structure of the article to introduce everything one at a time will make it more accessible. The lead section and history are tolerable. My suggestion for the structure after them is:

  1. Introduce eigenvector by easy examples
  2. Make a formal definition
  3. Make informal definition or, alternatively
  4. Explain the formal definition with more examples
  5. Do 1-4 for eigenvalue
  6. Do 1-4 for eigenspace
  7. Introduce and/or define the characteristic equation
  8. Introduce and define eigenmatrix
  9. Finding eigens for the 2-case
  10. Finding eigens for the 3-case
  11. Introduce complications: zero eigenvectors and equal eigenvalues
  12. Calculations in the complicated cases: complex matrices
  13. Linking those concepts with matrix orthogonalization as it is where most applications come from
  14. Applications from simple to complex, starting with geometry on a plane, space, n-dim, then electrodynamics, relativity, etc.

Although I like the image of the Mona Lisa, having it twice is too much. Also, a heading "Mona Lisa" on a section which is an example of finding specific eigenvalues is just a little bit misleading. --Lantonov (talk) 13:33, 11 February 2008 (UTC)

Five of the 8 definitions in section "Definitions" are dealt with. The remaining 3 will go to characteristic equation when that section is repaired. --Lantonov (talk) 10:43, 13 February 2008 (UTC)

Concern over new "Eigenvalue in mathematical analysis" section

I see the section Eigenvalue, eigenvector and eigenspace#Eigenvalue in mathematical analysis has just been added. Now, I'm no expert mathematician, but I fail to see what this has to do with eigenvalues and eigenfunctions, in that the way this section defines them appears to have nothing to do with the "standard" definition, as used by the rest of this article.

The "standard" definition of "eigenvalue" is, roughly, the scaling that an eigenvector/eigenfunction undergoes when the transformation in question is applied. This new section appears to define it as, roughly, the value of the transformation parameter such that non-zero solutions are obtained when the transformation is inverted. I just don't see how this is in any way related!

Oli Filth(talk) 12:32, 13 February 2008 (UTC)

After some thought, I see what the example in that section is getting at, i.e. it may be rewritten as:
 
However, this connection is less than clear. Furthermore, the next example is  , which appears to be a non-linear transformation. Again, it is less than clear what this has to do with eigenvalues, etc., as the rest of the article defines them for linear transformations only.
Going back to the first example, even if the connection were explained more clearly, is this really anything more than a specific application of linear algebra, which is only possible because the RHS happened to be [0 0]T? In other words, is this section really imparting anything new to the reader, which they wouldn't have got already by reading the rest of the article? Oli Filth(talk) 12:55, 13 February 2008 (UTC)

I put this section under this provisional name because I was concerned with the definition, and I wanted to make it more general. Of course, it is out of place here. Most of the material will go to extend the "standard" definition, after it becomes clarified in the "standard" way by illustrating it with simple examples of linear transformations. After the basics of eigenvector scaling are introduced come the matrix notation of the definition of eigenvectors, matrix notation of systems of linear equations, and matrix methods for their solution (Cramer, Gram-Schmidt, etc.). The latter bear direct relationships to eigenvectors and eigenvalues. I put in the non-linear transformation in an attempt to extend the definition to non-linear transformations, but am hesitating whether to introduce it that early. I think better not. As you see, there is still much work on the definition and introductory material, so most of the effort goes there. --Lantonov (talk) 13:19, 13 February 2008 (UTC)

As for the notation, I prefer to use HTML wherever possible because I hate the way math symbols come out when transformed from TeX. --Lantonov (talk) 13:34, 13 February 2008 (UTC)

In general, the style should at least be consistent with the rest of the existing article. Oli Filth(talk) 13:43, 13 February 2008 (UTC)

Agreed, sure enough. --Lantonov (talk) 13:45, 13 February 2008 (UTC)

Scrapped the section. Some of the material in it can be fitted somewhere later. --Lantonov (talk) 14:08, 13 February 2008 (UTC)

Request for attention from an expert no longer needed.

At the top of this article there is a notice saying that "This article or section is in need of attention from an expert on the subject." In view of the very thorough revision by Lantonov over the last four weeks, I think that this notice is no longer necessary. Also this talk page does not indicate what is supposedly wrong with the article. Therefore I have decided to remove the notice, with thanks to Lantonov for much hard work well done. If anyone else wants to replace the notice, please indicate what you think should still be changed. Dirac66 (talk) 17:06, 11 March 2008 (UTC)

Thanks, Dirac. I am not yet half done. In the moment I am making figures with Inkscape to illustrate shear and rotation. Good program but not easy to work with. --Lantonov (talk) 17:14, 11 March 2008 (UTC)

Incorrect Rotation Illustration?

At first I thought I had found a nice illustration for complex linear maps, but then I had to find out that there seem to be several mistakes: here the author assigns u1 = 1 + i and calls this a vector. Also, I found the illustration very confusing: the complex plane is the Im(y) part, but it also has an x-component?? The "eigenvector" has one complex component, not two, as it should. I believe that's the reason for all this mess - the space is complex-2d, and thus real-4d, and so cannot be displayed... Flo B. 62.47.208.168 (talk) 21:07, 1 May 2008 (UTC)

I cannot understand the source of your confusion. Vectors on the complex plane correspond to complex numbers and have one real x component and one imaginary iy component. The complex plane itself is determined by one real X axis and one imaginary iY axis. The iY axis alone is one-dimensional and cannot determine a plane. The complex plane is NOT the Im(y) part; in fact, Im(y) for complex y is a REAL number. Real numbers are a subset of the complex numbers, and in the same way vectors that lie on the x axis are a subset of vectors in the complex plane for which Im(y) = 0. The two complex conjugated eigenvectors have a common real component and opposite-sign complex components, as they should. The X axis is the only geometric locus of points that are common to the real and complex planes (this is quoted from the text). The only difference between the complex and Euclidean planes is the definition of the point z = Infinity, so that the complex plane represents a projection of a sphere on a plane. See, e.g., Korn & Korn, section 1.3-2 and section 7.2-4, or some other more detailed book on complex geometry. Also, see the following link [10] for an animated illustration of the roots of a quadratic equation on the complex plane, or this one [11] for the roots of a cubic. --Lantonov (talk) 05:55, 7 May 2008 (UTC)
On thinking a bit more, I think I found where the confusion springs from. You are thinking about a general vector in the complex plane, which is determined by two points on the complex plane that are, in general, complex numbers. Instead, the eigenvectors u1 and u2 are radius vectors, respectively, of the points 1 + i and 1 - i, and as such are determined only by those two points (the other ends of those vectors are at the coordinate origin O, and that's why they are complex conjugates). --Lantonov (talk) 10:02, 7 May 2008 (UTC)
I have to admit there is one mistake, though, which I knew from the start but did not correct, for the sake of not burdening the exposition with details, because this requires a long explanation. It concerns the normalization of eigenvectors. As they are now in the picture, u1 and u2 are not normalized as required for eigenvectors. To be normalized, they must have moduli equal to 1. As it is now, their moduli are $\sqrt{2}$ ≈ 1.414 > 1. --Lantonov (talk) 06:00, 8 May 2008 (UTC)
There's no requirement for eigenvectors to be normalised. They frequently are, but it's not a requirement, and it's not part of the definition. (Not least because you can have eigenvectors and linear operators on spaces that don't have norms.) -- GWO (talk)
So it is; the accepted practice is to represent normalized eigenvectors in normalizable spaces. Anyway, I may leave the picture as it is for the time being, until it needs some more important change. --Lantonov (talk) 12:01, 8 May 2008 (UTC)
I don't think it is accepted practice, necessarily. It's done when it's convenient and useful to do so, and it's not done when it's not convenient or useful. In the case of certain eigenfunctions, for example, there might not be a convenient closed form for the normalisation constant. -- GWO (talk)
Ok, thanks for participating, and allaying my qualms. Cheers. --Lantonov (talk) 05:48, 9 May 2008 (UTC)
Lantonov -- what you've written above literally makes no sense. Please don't revert the page until you can explain what you think
The complex eigenvectors u1 = 1 + i and u2 = 1 − i are radius vectors of the complex conjugated eigenvalues and lie in the complex plane
means? How can an eigenvalue have a radius vector?
Do you not see that a 2D complex vector CANNOT be represented meaningfully in a 3D picture?
That arbitrarily adding the imaginary part of Y as a third dimension is not at all helpful to anyone's understanding, since without the complex part of X no-one can see how that causes the vector to be scaled. And since complex multiplication on the Argand plane looks like rotation, even if the picture made sense (which it doesn't) it wouldn't inform, because it STILL looks like rotation, not multiplication.
Seriously, trying to show 4D vectors rotating in a 3D picture is doomed to failure, and your picture, with its nonsense caption, does not aid understanding. -- GWO (talk)
See section "Complex plane" below. 4D plane? 2D vector? Where have you taken such notions from? --Lantonov (talk) 08:31, 14 May 2008 (UTC)

Complex plane

Hi, Gareth Owen. I reverted back your deletion of the rotation matrix illustration and text. As much as I appreciate the other corrections you did on the article, I am certain that you are wrong on this one.

First, the definition of the complex plane. A plane cannot be 4D; it is always 2D, be it real or complex. To quote the definition from the article Complex plane: "In mathematics, the complex plane is a geometric representation of the complex numbers established by the real axis and the orthogonal imaginary axis. It can be thought of as a modified Cartesian plane, with the real part of a complex number represented by a displacement along the x-axis, and the imaginary part by a displacement along the y-axis. The complex plane is sometimes called the Argand plane because it is used in Argand diagrams. ..." If you do not believe Wiki, look in the first convenient textbook of complex geometry. As a most widely used one, I can recommend Korn & Korn (see the References in the article). Section 1.3-2 there reads: "The complex number z = x + iy is conveniently drawn as a point z = (x, y) or the corresponding radius vector on the complex plane. Axes Ox and Oy (in the orthogonal Cartesian coordinates) are called, respectively, real and imaginary axes. The abscissa and the ordinate of each point z are drawn, respectively, as the real part x and the imaginary part y of the number z."

This also answers your second objection: a radius vector on the complex plane corresponds to a point (a complex number) on the complex plane in the same way as a radius vector on the real plane corresponds to a point (a real number) on the real plane. I can quote half a dozen more books I have at my disposal, but do not find it necessary, because they do not differ on these points. The diagram at the beginning of the article Complex plane is an almost exact copy of the same diagram in Alexandrov, Shilov, Pigolkina, and almost any other book of analytic geometry I leaf through. The same diagram is also part of the rotation picture here. Another good source is [12] and references therein.

Please do not mix up the dimension of an object with the number of coordinates that are used to represent it. Thus, a point in real 3D space has 0 dimensions, although it is represented with 3 coordinates (x, y, z). Its radius vector is one-dimensional, and it is also expressed with these 3 coordinates. In the same way, a complex radius vector, if it is on the complex plane, is represented with 2 coordinates (components): one real (X axis) and one imaginary (iY axis). A point in 3D projective space is represented with 4 homogeneous coordinates, and so on.

As for your third objection, about the Y axis (real) and the iY axis (imaginary): look in the rotation picture to see that they are not collinear. The Y and iY axes intersect in O, so they are at an angle different from 0, 180, 360, ... degrees. I use colors in diagrams in order for people (to be more precise, the majority of us who are not color-blind) to orient themselves better in 3D pictures drawn on a 2D sheet. Thus, in this picture, which depicts a 3D space, the real plane (the plane of rotation) is in blue, and the complex plane is in yellow. The 3 axes (basis) of this 3D space are the following: the real X axis, the real Y axis, and the imaginary iY axis that determines a complex dimension (one dimension, one axis). The real and complex planes are at an angle ≠ 0, 180, 360 degrees to each other and intersect at the X axis. I really cannot be more clear or explicit than this. If you have any objections, or read some different books than those listed here, please discuss this before deleting. I would especially like to see where it is written that a complex plane is 4D and a complex vector is 2D.
--Lantonov (talk) 07:02, 13 May 2008 (UTC)

Sorry, but when I wrote this you posted on the talk page before me, resulting in an edit conflict. I will not revert your changes today because of the 3RR rule. Read all of the above, also read the references listed, and convince yourself. I expect you to reinstate the rotation picture yourself. --Lantonov (talk) 07:06, 13 May 2008 (UTC)

There is one thing wrong in the caption, though, and I will correct it. The eigenvectors are radius vectors in the complex plane, all right, but they are not radius vectors of the eigenvalues; they are radius vectors of the complex numbers (points) u = 1 + i and ū = 1 - i (more precisely, the normalized u = $\frac{\sqrt{2}}{2} + \frac{\sqrt{2}}{2} i$ and ū = $\frac{\sqrt{2}}{2} - \frac{\sqrt{2}}{2} i$, as they are given in many books), respectively, while the eigenvalues are λ1 = cos φ + i sin φ and λ2 = cos φ - i sin φ. --Lantonov (talk) 08:00, 13 May 2008 (UTC) Anyway, thanks to this discussion, I got an idea for a simpler picture. I will change it and reinsert it again. --Lantonov (talk) 09:58, 14 May 2008 (UTC)

Really, it's hard to know where to begin...
First, the definition of complex plane. A plane cannot be be 4D, it is always 2D, be that real or complex.
That depends. In the vector space sense, C is a one-dimensional complex vector space, but it's isomorphic to R^2 - i.e. it requires two real numbers to pin down a location.
But you're dealing with C^2 -- that's a 2D complex space, but it's isomorphic to R^4, i.e. it requires 4 real numbers to pin down a single point. So when we're thinking about drawing a picture, we need four dimensions. Just like we need two dimensions to draw the complex plane (which is a 1D vector space over the complex numbers).
And that's why your diagram cannot help but be confusing. There's simply no way to draw a picture of those four real dimensions.
Let's imagine we're only interested in C, not C^2.
Considered as a vector space over the complex numbers, this is a 1D space, and every linear map is simply multiplication by a complex constant. Let's pick a simple T, corresponding to multiplication by the complex unit i. Now, every complex number z is an eigenvector with eigenvalue i, because Tz = iz, by definition.
So let's draw the Argand diagram and see what happens to 1+i under this map:
          |        o              o         |
          |                                 |
          |             maps to             |
          |                                 |
---------------------             --------------------
         1+i            maps to           -1+i
So, when we plot the complex numbers onto two real axes (i.e. a 1D complex space mapped onto a 2D real space), the transformation that we know to be a 1D scaling by i looks exactly like a rotation in the real plane. And that diagram could not convince a lay person that (in the complex space) our point is just being scaled, not rotated. It's simply not a helpful explanatory tool for someone who doesn't already know what's going on.
Now imagine what happens if you try to draw this diagram based on C^2, resulting in a picture that requires 4 real axes... Well, I hope you can see why I don't think your picture is very informative.
GWO (talk) 06:12, 15 May 2008 (UTC)


The eigenvectors are radius vectors in the complex plane, all right, but they are not radius vectors of the eigenvalues, they are radius vectors of the complex numbers (points) u = 1 + i and ū = 1 - i (more precisely, the normalized u = $\frac{\sqrt{2}}{2} + \frac{\sqrt{2}}{2} i$ and ū = $\frac{\sqrt{2}}{2} - \frac{\sqrt{2}}{2} i$, as they are given in many books), respectively, while the eigenvalues are λ1 = cos φ + i sin φ and λ2 = cos φ - i sin φ
I'm sorry, but your use of the standard terminology is all over the place here.
It simply makes no sense, when dealing with C^2 (as we are), to talk about "radius vectors of the complex number 1+i". Single complex numbers are not elements of C^2; they're elements of the scalar field. It's quite apparent that you're terribly confused about what's going on. The complex eigenvectors of a 2x2 rotation matrix each have two complex components. A 2D complex vector simply cannot be a "radius vector of a scalar".
In fact, that last phrase is literally meaningless. Restricting ourselves to a +90-degree rotation, please explain to me what you think it means to say that the vector $(1, -i)^\top$ is a radius vector of the scalar $i$.
-- GWO (talk)
"It simply makes no sense, when dealing with C^2 (as we are) to talk about "radius vectors of the complex number 1+i." I do not understand what you are talking about. No, we are not dealing with C^2 (whatever you mean by this). We are dealing with a rotation in a 2D real plane in the positive (counterclockwise) direction about the origin O.
"Single complex numbers are not elements of C^2, they're elements of the scalar field." Of course, as geometric elements, complex numbers are points that lie in a 2D complex plane, real numbers (Im(z) = 0) are points that lie on the real X axis, and purely imaginary numbers (no real part - Re(z) = 0) are points that lie on the imaginary iY axis. I do not understand where you get such strange notions as in the above quote. Scalars can be real or complex; algebraically they are numbers, and geometrically they are points. You can see this in every single geometry book that you care to open. Also, scalars are 0-dimensional, vectors and lines are 1-dimensional, and planes and surfaces are 2-dimensional, always. Elements with more than 2 dimensions are called hypersurfaces, and their dimensions are specified additionally. Thus, the sphere is a 3-dimensional hypersurface, and a tesseract is a 4-dimensional hypersurface. That's standard elementary geometry, which is all that is needed to understand rotation.
"The complex eigenvectors of a 2x2 rotation matrix each have two complex components." Now this is outright wrong. The 2x2 rotation matrix has 2 complex eigenvectors, and each of these eigenvectors has one real component which is measured on the real X axis and one imaginary component which is measured on the imaginary iY axis. It appears that you are mixing up the terms "complex" - "imaginary" and "component" - "dimension". The components of the eigenvector that is associated with the eigenvalue λ1 are cos φ (real component, on the X axis) and i sin φ (imaginary component, on the iY axis). The components of the eigenvector that is associated with the eigenvalue λ2 are cos φ (real component, on the X axis) and - i sin φ (imaginary component, on the iY axis). Please look at Fig. 1 in Complex plane to get visual input (you can see this also in the figure that you deleted). Geometrically, those eigenvectors are radius vectors of their corresponding eigenvalues and lie in the complex plane, which has 2 dimensions (basis (e1, e2)) with the 2 basis vectors lying respectively along the X and iY axes. Note that the real components of the two eigenvectors are the same: cos φ, and they both lie on the real X axis (they are congruent). This is why the 2 eigenvalues and their associated radius vectors (eigenvectors) are called complex conjugate - their real parts (components) are conjugated (fused, congruent, ...). No need for additional dimensions (basis vectors, coordinate axes), sorry. --Lantonov (talk) 10:12, 15 May 2008 (UTC)
I'm sorry, but your use of the standard terminology is all over the place here. No need to be sorry for this. This is an encyclopedia to be read by all, specialists and non-specialists alike, and this is why use of standard terminology is highly recommended. I always strive to use standard terminology in order to be better understood. Non-standard terminology is only to be used in highly specialised texts. --Lantonov (talk) 10:26, 15 May 2008 (UTC)
About your use of Argand diagrams. Here we are not talking about a rotation in the complex plane; we are talking about a rotation in the real plane. To obtain the complex plane, we do not map the whole real plane (2D). We map only the real Y axis into an imaginary iY axis, so that, e.g., the number 1 becomes i, and sin φ becomes i sin φ. --Lantonov (talk) 10:38, 15 May 2008 (UTC)
Restricting ourselves to a +90° rotation, please explain to me what you think it means to say that the vector $\begin{pmatrix}1\\ i\end{pmatrix}$ is a radius vector of the scalar $1+i$. Readily. It is easy to explain, although it takes too long (I lost the better part of the day explaining here). First, there is no such animal as a vector $\begin{pmatrix}1\\ i\end{pmatrix}$. Those are two vectors: u = 1 + i and ū = 1 - i. Geometrically, these are vectors with equal moduli (lengths), $\sqrt{2}$. They lie on the complex plane which has a real axis X and imaginary axis iY. The origin of both vectors is the point O which is the coordinate origin. The end of the first vector is the point (complex scalar) C(1, i), and the end of the second vector is the point (complex scalar) C*(1, −i) (BTW, there is no scalar  ). All 3 points (O(0,0), C(1, i), and C*(1, −i)) lie in the complex plane (which is 2D, determined by the 2 axes X (real) and iY (imaginary)). Therefore, the two vectors also lie in the complex plane. Moreover, they are radius vectors of the 2 said scalars in the complex plane because they originate at the origin of the coordinates. If we look at these 2 vectors as rotated relative to the X axis, then u is rotated counterclockwise by 45° and ū is rotated clockwise by −45°. Sorry, but I have no more time to lose in explanations. If you want to clear this up for yourself, please look in the references that I gave you above, as it appears that you have not done this effectively so far. --Lantonov (talk) 11:27, 15 May 2008 (UTC)
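
As an aside, the uncontroversial part of this, the modulus and argument of the scalars 1 + i and 1 − i, is easy to check numerically; a minimal Octave sketch:

 abs(1 + 1i)              % sqrt(2), approx. 1.4142
 angle(1 + 1i) * 180/pi   % +45 degrees
 angle(1 - 1i) * 180/pi   % -45 degrees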
Considered as a vector space over the complex numbers, this is a 1D space, and every linear map is simply multiplication by a complex constant. Let's pick a simple T, corresponding to multiplication by the complex unit i. Now, every complex number, z, is an eigenvector with eigenvalue i, because Tz = iz, by definition. I am not sure what you want to show with this, but if you want to present T as a linear transformation, it goes by the rules of all linear transformations. You say that T is simply the complex unit i (a number); therefore T is a scalar. This transformation transforms each vector z into the vector iz. Now let's consider dimensions. A 1D space has only one basis vector and therefore only one coordinate axis, so the vector z will have only one component (if it is not zero). It could be a real component (on the real X axis) or an imaginary component (on the imaginary iY axis). If it is imaginary, when multiplied with i, it will give i × i × z = − z. In your example it gives iz when multiplied by i; therefore z is a real number. Geometrically, z is a point on the X axis whose radius vector is ze1, with e1 (also designated i) being the single basis vector lying on the X axis. iz is a point on the iY axis, which is a purely imaginary number. Thus, the transformation T maps a point (or its corresponding radius vector) on the X axis onto a point (or its corresponding radius vector) on the imaginary iY axis.
Now let us consider the general case of a complex 2D plane. As repeated everywhere, this has one real X axis and one complex iY axis. Each vector not on the axes will have 1 real component and 1 imaginary component, the vector itself remaining 1-dimensional. Let the component on the X axis be x and the component on the iY axis be iy. The transformation is $T = \begin{pmatrix}0 & i\\ i & 0\end{pmatrix}$. This is not a homothety because, in general, x ≠ y. This transformation maps the complex vector $\begin{pmatrix}x\\ iy\end{pmatrix}$ onto the complex vector $\begin{pmatrix}-y\\ ix\end{pmatrix}$, so we have mapping of the X axis on the iY axis and simultaneous mapping of the iY axis on the X axis, and a corresponding change of components. This is not the case that we consider here. The above transformation can never be a proper rotation. For one thing, $A^T A = -I$ and not $I$. The matrix $\begin{pmatrix}0 & i\\ i & 0\end{pmatrix}$ is not in the rotation group. The complex rotation matrix is $\begin{pmatrix}e^{i\varphi} & 0\\ 0 & e^{-i\varphi}\end{pmatrix}$ and it acts on real vectors (all components of which are real). When such a rotation matrix is multiplied with its Hermitian transpose, it gives the identity matrix. The 2x2 rotation matrices considered here act on real vectors in the real space, and the eigenvalues and eigenvectors that are obtained are generally complex numbers and complex vectors in the complex 2D space (complex plane). --Lantonov (talk) 13:23, 15 May 2008 (UTC)
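
Assuming the matrix in question is $\begin{pmatrix}0 & i\\ i & 0\end{pmatrix}$ (my reading of the argument above), the claim that the plain transpose gives $A^T A = -I$ while the Hermitian product gives $I$ can be checked in Octave; note the distinction between .' and ':

 A = [0 1i; 1i 0];
 A.' * A   % plain transpose times A: gives -eye(2)
 A' * A    % conjugate (Hermitian) transpose times A: gives eye(2)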
Tell me then, what is $\begin{pmatrix}e^{i\pi/4} & 0\\ 0 & e^{-i\pi/4}\end{pmatrix}\begin{pmatrix}7\\ 5\end{pmatrix}$? Is it a vector of two real numbers? A single point in the real plane? A single complex number? A single point in the complex plane? Or none of these things? -- GWO (talk)
Well, now we get to the crux of the matter. The above is a rotation in the real plane (axes X and Y real) of a real vector with components 7 units on the X axis and 5 units on the Y axis. It is in the first quadrant. This vector rotates by 45° (π/4) counterclockwise in the real plane. The result will be a real vector in the second quadrant of the real plane whose components are real numbers. I can calculate the exact components of the rotated vector, but I really have no time now; I have to go. Listen to my advice, read the references I gave, and you will receive answers to your questions. --Lantonov (talk) 16:51, 15 May 2008 (UTC)
No. You see. It's not. I have a PhD in applied mathematics, and I don't need your references to do basic matrix algebra.
If you'll excuse the use of decimals:
i) $e^{i\pi/4} = \cos\frac{\pi}{4} + i\sin\frac{\pi}{4} \approx 0.7071 + 0.7071i$
ii) $e^{-i\pi/4} = \cos\frac{\pi}{4} - i\sin\frac{\pi}{4} \approx 0.7071 - 0.7071i$
Therefore:
$\begin{pmatrix}e^{i\pi/4} & 0\\ 0 & e^{-i\pi/4}\end{pmatrix}\begin{pmatrix}7\\ 5\end{pmatrix} = \begin{pmatrix}7e^{i\pi/4}\\ 5e^{-i\pi/4}\end{pmatrix}$
So
$\begin{pmatrix}7e^{i\pi/4}\\ 5e^{-i\pi/4}\end{pmatrix} \approx \begin{pmatrix}4.9497 + 4.9497i\\ 3.5355 - 3.5355i\end{pmatrix}$
Don't believe me: try running this code in GNU Octave
octave> m = [exp(i*pi/4) 0; 0 exp(-i*pi/4)];
octave> m * [7;5]
ans =

  4.9497 + 4.9497i
  3.5355 - 3.5355i
That's a complex vector. It's not a point on the real plane at all. -- GWO (talk) 17:09, 15 May 2008 (UTC)
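
One can also let Octave compute the eigensystem of the real 45° rotation matrix directly; a short sketch (the ordering and normalization of eig's output are Octave's choice):

 A = [cos(pi/4) -sin(pi/4); sin(pi/4) cos(pi/4)];
 [V, D] = eig(A);
 diag(D)      % 0.7071 -/+ 0.7071i, i.e. exp(-i*pi/4) and exp(i*pi/4)
 V            % columns proportional to (1, i) and (1, -i), in some order
 isreal(V)    % 0: the eigenvectors are not real vectors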

OK I've got it

"It simply makes no sense, when dealing with C^2 (as we are) to talk about "radius vectors of the complex number 1+i." I do not understand what you are talking about. No, we are not dealing with C^2 (whatever you mean by this). We are dealing with a rotation in a 2D real plane in the positive (counterclockwise) direction about the origin O.

I understand now. You have a serious miscomprehension of what it means to say that the matrix $\begin{pmatrix}\cos\varphi & -\sin\varphi\\ \sin\varphi & \cos\varphi\end{pmatrix}$ has complex eigenvalues and eigenvectors. You seem to believe that this means we can treat a planar rotation as multiplication by a complex number. It is true that we CAN do this, but this is not what we mean when we talk about the complex eigenvectors. What this means is that we can find a vector $\begin{pmatrix}z_1\\ z_2\end{pmatrix}$ where $z_1$ and $z_2$ are complex numbers. That's what "C^2" means. It means an ordered pair of complex numbers. Something like this: $\begin{pmatrix}1+i\\ 1-i\end{pmatrix}$

It's not a single complex number, and it doesn't exist on a single complex plane. It's a vector of complex numbers, and it exists in the Cartesian product of two complex planes (which we call C^2). Specifically, the vector $\begin{pmatrix}1\\ i\end{pmatrix}$ is NOT the same thing as the complex scalar 1+i.

I can add elements of C^2 together, $\begin{pmatrix}z_1\\ z_2\end{pmatrix} + \begin{pmatrix}w_1\\ w_2\end{pmatrix} = \begin{pmatrix}z_1 + w_1\\ z_2 + w_2\end{pmatrix}$, but I cannot add a complex vector to a complex scalar: $\begin{pmatrix}1\\ i\end{pmatrix} + (1+i)$ is undefined.

They lie on the complex plane which has a real axis X and imaginary axis iY

No no no no and no. The complex scalar 1+i lives on the complex plane. The complex vector $\begin{pmatrix}1\\ i\end{pmatrix}$, like the complex vector $\begin{pmatrix}1\\ -i\end{pmatrix}$, is an entirely different entity. You are confusing the two. -- GWO (talk)
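
The vector/scalar distinction being drawn here can be seen even in software; a trivial Octave illustration (the names are mine):

 v = [1; 1i];    % an element of C^2: an ordered pair of complex numbers
 s = 1 + 1i;     % an element of C: a single complex scalar
 size(v)         % 2 1
 size(s)         % 1 1
 isequal(v, s)   % 0: different kinds of object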

I do not think that "we can treat a planar rotation as multiplication by a complex", whatever you mean by this. By a complex number? By a complex matrix? You are right (and I am wrong) to say that $\begin{pmatrix}e^{i\pi/4} & 0\\ 0 & e^{-i\pi/4}\end{pmatrix}\begin{pmatrix}7\\ 5\end{pmatrix}$ will give a complex vector. However, this is not a vector function (transformation) if the vector $\begin{pmatrix}7\\ 5\end{pmatrix}$ is on the real plane. To quote from the article: "In a vector space L a vector function A is defined if for each vector x of L there corresponds a unique vector y = A(x) of L."
The vector transformation acts on a vector in L, and the result is also a vector in L. This is an endomorphism. Anything else is not a vector transformation. The implication is, e.g., that if a real vector is multiplied by a complex scalar, this is not a vector transformation. The elements of the transformation matrix must be scalars from the base field of the vector space. With this in mind, the above example $\begin{pmatrix}e^{i\pi/4} & 0\\ 0 & e^{-i\pi/4}\end{pmatrix}\begin{pmatrix}7\\ 5\end{pmatrix}$ can be a vector transformation iff the vector space has as its base field the complex scalars. The vector space must be defined by defining a basis in it in order that a transformation has a meaning as a vector transformation. This implies that the base field of C^2 consists of complex scalars, so that $\begin{pmatrix}7\\ 5\end{pmatrix}$ is taken as an element of C^2. With the deleted figure, I did not intend to show, e.g., that the real plane is transformed by rotation to a complex plane. This is very wrong. All that I was trying to show is to illustrate the eigenvalues and eigenvectors on the complex plane. I do not mix vectors $\begin{pmatrix}1\\ i\end{pmatrix}$ and scalars (1 + i) either. I think that they are different elements, the same as the point P(1,1) is different from the vector $\begin{pmatrix}1\\ 1\end{pmatrix}$ which is its radius vector in the real Cartesian plane. Could you explain in some more detail your strong objection to "They lie on the complex plane which has a real axis X and imaginary axis iY"? Maybe it is in using i in the column vector $\begin{pmatrix}1\\ i\end{pmatrix}$? Should this i become part of the basis (e1, i e2)? Or something else? --Lantonov (talk) 08:12, 17 May 2008 (UTC)
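
The narrower point here, that the diagonal complex matrix does not map the real plane into itself, can at least be checked numerically; a sketch:

 m = [exp(1i*pi/4) 0; 0 exp(-1i*pi/4)];
 x = [7; 5];      % a real vector
 isreal(x)        % 1
 isreal(m * x)    % 0: the image has left R^2, so m is not an endomorphism of R^2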


All that I was trying to show is to illustrate the eigenvalues and eigenvectors on the complex plane.
The problem with trying to show this is that it is not true. The eigenvalues are on the complex plane. The eigenvectors are elements of the Cartesian product of two complex planes, denoted $\mathbb{C}^2$ (just as the Cartesian product of two real lines is called $\mathbb{R}^2$). That's my strong objection.
The eigenvalues lie in the field; the eigenvectors lie in the vector space. OK?
The complex eigenvalues of a 2x2 matrix lie in $\mathbb{C}$.
The complex eigenvectors of a 2x2 matrix lie in $\mathbb{C}^2$.
You can plot the eigenvalues on $\mathbb{C}$. You can't plot the eigenvectors on $\mathbb{C}$, because they are not complex numbers; they are ordered pairs of complex numbers.
I'll demonstrate (again). Let A be a 45 degree rotation matrix

$A = \begin{pmatrix}\cos 45^\circ & -\sin 45^\circ\\ \sin 45^\circ & \cos 45^\circ\end{pmatrix} = \frac{\sqrt{2}}{2}\begin{pmatrix}1 & -1\\ 1 & 1\end{pmatrix}$

The eigenvalues of A are $e^{i\pi/4}$ and $e^{-i\pi/4}$. OK. They are complex scalars (i.e. elements of $\mathbb{C}$), and you can plot them on an Argand diagram.
The corresponding eigenvectors of A are $\begin{pmatrix}1\\ -i\end{pmatrix}$ and $\begin{pmatrix}1\\ i\end{pmatrix}$.
Now, I don't know how many times I need to keep saying it but THESE VECTORS ARE NOT THE COMPLEX SCALARS 1+i and 1-i. OK? Can you agree with that? They ARE NOT ELEMENTS OF THE SET OF COMPLEX NUMBERS, in exactly the same way that the vector $\begin{pmatrix}7\\ 5\end{pmatrix}$ is NOT AN ELEMENT OF THE SET OF REAL NUMBERS.
A vector of real numbers is not itself a real number. It is not an element of the real line $\mathbb{R}$, but it is in $\mathbb{R}^2$ (the Cartesian product of two copies of the real line).
A vector of complex numbers is not itself a complex number. It is not an element of the complex plane $\mathbb{C}$, but it is in $\mathbb{C}^2$ (the Cartesian product of two copies of the complex plane).
You cannot plot the vector $\begin{pmatrix}7\\ 5\end{pmatrix}$ on the real line, because it is not a real number.
You cannot plot the vector $\begin{pmatrix}1\\ i\end{pmatrix}$ on the complex plane, because it is not a complex number. If you try, and you have, you will fail, and you did.
The basis is still $e_1 = \begin{pmatrix}1\\ 0\end{pmatrix}$, $e_2 = \begin{pmatrix}0\\ 1\end{pmatrix}$. The difference is you make complex vectors by multiplying the basis vectors by complex scalars. So

$\begin{pmatrix}1\\ i\end{pmatrix} = 1\cdot e_1 + i\cdot e_2$

or

$\begin{pmatrix}1\\ -i\end{pmatrix} = 1\cdot e_1 + (-i)\cdot e_2$

or in general

$\begin{pmatrix}z_1\\ z_2\end{pmatrix} = z_1 e_1 + z_2 e_2$

where $z_1$ and $z_2$ are complex numbers.
Look, I'm fed up with politely correcting you, so I've asked for arbitration. I appreciate you're well-meaning, but your grasp of this subject leaves an awful lot to be desired. If you really attend VA Tech, please stroll over to the maths department and ask a complex variable professor whether they think the complex eigenvectors of a 2x2 matrix can be plotted on the complex plane, because I'm not getting paid to wade through any more of your half-informed screeds. -- GWO (talk) 09:40, 17 May 2008 (UTC)
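
For the record, the eigen-relations quoted above check out numerically in Octave (the tolerance test is mine):

 A = (sqrt(2)/2) * [1 -1; 1 1];          % 45-degree rotation matrix
 v1 = [1; -1i];  v2 = [1; 1i];           % the two eigenvectors above
 norm(A*v1 - exp( 1i*pi/4)*v1) < 1e-12   % 1: A v1 = e^( i pi/4) v1
 norm(A*v2 - exp(-1i*pi/4)*v2) < 1e-12   % 1: A v2 = e^(-i pi/4) v2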
Ok, thanks for the detailed explanation. Sorry for wasting your time. This explanation will help me edit and correct myself what I have written about rotation. Please accept my apology for offending you with the advice to read references. In fact, I am glad to have you here correcting mistakes. --Lantonov (talk) 10:06, 17 May 2008 (UTC)
See answers to your comments in the following section. --Lantonov (talk) 11:27, 21 May 2008 (UTC)

Complex vectors

These are point-by-point answers to the comments above:

  • The eigenvectors of rotation are $\begin{pmatrix}1\\ -i\end{pmatrix}$ and $\begin{pmatrix}1\\ i\end{pmatrix}$. True.
  • Now, I don't know how many times I need to keep saying it but THESE VECTORS ARE NOT THE COMPLEX SCALARS 1+i and 1-i. OK? Can you agree with that? True. Agreed.
  • They ARE NOT ELEMENTS OF THE SET OF COMPLEX NUMBERS, in exactly the same way that the vector $\begin{pmatrix}7\\ 5\end{pmatrix}$ is NOT AN ELEMENT OF THE SET OF REAL NUMBERS. True.
  • A vector of real numbers is not itself a real number. True.
  • It is not an element of the real line $\mathbb{R}$, but it is in $\mathbb{R}^2$ (the Cartesian product of two copies of the real line). False. A vector (directed intercept) can be an element of the real line. Namely, if we take a point with coordinates (a, 0), where a is a real number, as point 1, and a point with coordinates (b, 0), where b is a real number, as point 2, and build a vector between point 1 and point 2, this vector $\begin{pmatrix}b-a\\ 0\end{pmatrix}$ will be an element of the real line because it lies in the real line $\mathbb{R}$. This is so because in the underlying field of scalars of $\mathbb{R}$ there are scalars (a, 0). Vectors on the real line are a subset of vectors in $\mathbb{R}^2$. Furthermore, the real line can be a one-dimensional vector space because it is closed under addition and multiplication by a real scalar. Check: $\begin{pmatrix}a\\ 0\end{pmatrix} + \begin{pmatrix}b\\ 0\end{pmatrix} = \begin{pmatrix}a+b\\ 0\end{pmatrix}$ and $c\begin{pmatrix}a\\ 0\end{pmatrix} = \begin{pmatrix}ca\\ 0\end{pmatrix}$, and the resulting vectors are in $\mathbb{R}$.
  • A vector of complex numbers is not itself a complex number. True. However, we must be careful what we understand by a "vector of complex numbers". In the notation $z_1 e_1 + z_2 e_2$, where $z_1$ and $z_2$ are complex numbers (scalars) and where $z_k e_k$ is supposed to mean multiplication of a vector by a scalar and not a vector product, a vector for which $z_1$ is real and $z_2$ is complex is also a complex vector. Note that the 2 eigenvectors of rotation are exactly of this type: $z_1$ is real, and $z_2$ is complex. More specifically, $z_1 = 1$, and $z_2 = -i$ or $z_2 = i$.
  • It is not an element of the complex plane $\mathbb{C}$, but it is in $\mathbb{C}^2$ (the Cartesian product of two copies of the complex plane). False, if stated in this way. True, if it is supposed to mean that the complex plane is not a vector space of complex vectors. A complex vector can be an element of a complex plane $\mathbb{C}$ for a similar reason as the above. Such a plane is defined if one takes 2 lines (one-dimensional spaces) such that one line contains the whole set of real numbers (line X) while the other contains the whole set of purely imaginary numbers (line Y). Then make the Cartesian product $X \times Y$ to obtain a set of ordered pairs (x, iy). If these elements are points, they will form a two-dimensional space, in which each point has coordinates (x, iy) where x and y are real. If we take in this plane a point with coordinates (x1, iy1) as point 1 and a point with coordinates (x2, iy2) as point 2 and direct an intercept between these two points in such a way that point 1 is the origin and point 2 is the end, we will obtain (build) a vector in the complex plane which as a column matrix is $\begin{pmatrix}x_2 - x_1\\ i(y_2 - y_1)\end{pmatrix}$. This vector will be an element of the complex plane because it is built in this plane as a vector and lies in this plane. Whether such a plane is a vector space is another matter. It is NOT a vector space because it is not closed under multiplication by a complex number, and such numbers are in the set of the underlying field of scalars (see the sketch just after this list).
  • You cannot plot the vector $\begin{pmatrix}1\\ i\end{pmatrix}$ on the complex plane, because it is not a complex number. If you try, and you have, you will fail, and you did. False. I can plot the vector $\begin{pmatrix}1\\ i\end{pmatrix}$ on the complex plane. For this I must use the standard textbook definition of the complex plane: it is a plane of 2 real axes (rays, directed lines, composed of real points only) that are geometric sites of points corresponding to the real Re(z) and imaginary Im(z) parts of the complex number z = x + iy. Axis X contains the points for x (all real numbers), and axis Y contains the points for y (all real numbers). X and Y are orthogonal. The Cartesian product $X \times Y$ gives the set of ordered pairs (x, y). In order for this set to be a ring with a unit, we define two operations: addition (x1, y1) + (x2, y2) = (x1 + x2, y1 + y2) and multiplication (x1, y1) . (x2, y2) = (x1 . x2 + y1 . y2, x1 . y2 + x2 . y1). These operations satisfy the conditions of associativity, distributivity, and multiplicative closure. Also define the unit as (1,1) and zero as (0,0). We do not need to define an inverse element for the present purpose (we do not need a field) because a ring with a unit is all that is needed to define a vector space. Those elements (x, y) defined in such a way are the scalars. We define vectors as $\begin{pmatrix}x\\ y\end{pmatrix}$. To define a vector space over the ring of scalars, we define the following two operations: addition of vectors $\begin{pmatrix}x_1\\ y_1\end{pmatrix} + \begin{pmatrix}x_2\\ y_2\end{pmatrix} = \begin{pmatrix}x_1 + x_2\\ y_1 + y_2\end{pmatrix}$ and multiplication of a vector by a scalar $(a, b)\begin{pmatrix}x\\ y\end{pmatrix} = \begin{pmatrix}ax + by\\ ay + bx\end{pmatrix}$. These two operations should satisfy all the necessary conditions for a vector space: commutative addition, closure upon multiplication by a scalar, associativity for multiplication by a scalar, distributive laws, and so on (too lazy to check this). To cut it short, now we have a vector space which is one-dimensional relative to the scalars (x, y) but two-dimensional relative to the real numbers x and y (which are scalars (x, 0) and (0, y) on the X and Y axes). I need again to stress: this is a real vector space and the vectors and scalars in it are real. However, vectors $\begin{pmatrix}x\\ y\end{pmatrix}$ have all the properties of complex vectors $\begin{pmatrix}x\\ iy\end{pmatrix}$, and scalars (x, y) can be added and multiplied as complex scalars x + iy. Other properties of complex numbers can be easily obtained if we suitably define a field of (x, y) to have division and other operations with complex numbers. Now we have all that is needed to plot the eigenvalues x + iy and eigenvectors $\begin{pmatrix}x\\ y\end{pmatrix}$ on the complex plane. I did it, and I succeeded. Note that we need only 2 real numbers x and y, and this helps to plot eigenvalues and eigenvectors in two dimensions.
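
The closure problem conceded further below can be exhibited directly; an Octave sketch, with the set understood (my assumption) as vectors of the form (x, iy):

 u = [3; 2i]    % a vector of the proposed form (x, iy), with x = 3, y = 2
 w = 1i * u     % multiplying by the scalar i gives (3i, -2)
 % w has an imaginary first component and a real second one, so it is not
 % of the form (x, iy): the set is not closed under complex scalars.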

I understand what all those comments are driving at: to represent the diagonalized rotation matrix $\begin{pmatrix}e^{i\varphi} & 0\\ 0 & e^{-i\varphi}\end{pmatrix}$ as a homothety in the complex C^2 space. This can also be drawn on a two-dimensional sheet because we need only two real numbers. For it, X and Y can contain all complex numbers, but this is not good for further theorizing. Therefore, it is preferable that X contains only the complex numbers x + iy and Y contains their conjugates x - iy. In this way axis X can be represented by the upper half-(complex plane) + the real axis X, and Y can be represented by the lower half-(complex plane) + the real axis X, which, in fact, are the eigenspaces of the two eigenvectors of rotation. As a unit on X we will choose 1 + i, and as a unit on Y, 1 - i. All vectors on such a plane must be defined so that they form a vector space with vectors of the type   where x and y are real. We need only 2 real numbers, so we can draw this on a sheet. Then,   is a homothety with eigenvalue sin φ. I might be mistaken somewhere in the last paragraph, especially in the construction of this supposed complex space, and welcome any good-faith corrections. Also, I would welcome any arbitration or third-editor opinion on this. --Lantonov (talk) 11:14, 21 May 2008 (UTC)
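
For completeness, the diagonalization being discussed can be verified numerically; a sketch with the eigenvector matrix P assembled from the vectors given earlier (the tolerance test is mine):

 phi = pi/4;
 A = [cos(phi) -sin(phi); sin(phi) cos(phi)];
 P = [1 1; -1i 1i];                        % columns: eigenvectors (1,-i), (1,i)
 D = diag([exp(1i*phi), exp(-1i*phi)]);    % the diagonalized rotation matrix
 norm(A - P*D/P) < 1e-12                   % 1: A = P D P^(-1)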

I see the first mistake: not closed under multiplication by scalars. --Lantonov (talk) 13:04, 21 May 2008 (UTC)

You said "A vector (directed intercept) can be an element of the real line." That is true in a way: as you point out, some 2-D vectors lie on the x axis, so inasmuch as the x axis is isomorphic to the real number line, two-vectors of the form (x, 0) are isomorphic to the real numbers. However, arbitrary two-vectors cannot be considered elements of the real line. The thing is, vectors are first-order mathematical objects, even if we often write them in a coordinate system. Using vector operations alone, how can I write the two-vector v as an element on the real line? I can't: $\mathbb{R}^2$ is not isomorphic to $\mathbb{R}$. —Ben FrantzDale (talk) 21:37, 23 May 2008 (UTC)

I can add vectors in $\mathbb{R}$ space and I can multiply them by scalars of $\mathbb{R}$ space. The resulting vectors are in $\mathbb{R}$. Fill in the rest yourself, because it seems someone is trying to set limits on my discussion space. --Lantonov (talk) 12:24, 26 May 2008 (UTC)