Welcome!

Hello, Typeships17, and welcome to Wikipedia! Thank you for your contributions. I hope you like the place and decide to stay. Here are some pages that you might find helpful:

I hope you enjoy editing here and being a Wikipedian! Please sign your messages on discussion pages using four tildes (~~~~); this will automatically insert your username and the date. If you need help, check out Wikipedia:Questions, ask me on my talk page, or ask your question on this page and then place {{helpme}} before the question. Again, welcome!


Your reference desk question


Hello there. I thought I'd try to answer your question about orthogonals. Your exact question wasn't very clear, but I think I know what you meant; if this isn't what you meant then I'm sorry. I'll give a bit of theory first. If you don't understand any of it then try reading the articles, and/or send a message to me or to the reference desk. Then I'll carry on and try to answer your question.

Let <math>f : \mathbb{R}^m \to \mathbb{R}^n</math> be a smooth function. For a fixed point <math>x_0</math> in <math>\mathbb{R}^m</math>, the differential of <math>f</math> at <math>x_0</math>, denoted by <math>\mathrm{d}f_{x_0}</math>, is a linear map from the tangent space of <math>\mathbb{R}^m</math> at <math>x_0</math> to the tangent space of <math>\mathbb{R}^n</math> at <math>f(x_0)</math>; i.e. <math>\mathrm{d}f_{x_0} : T_{x_0}\mathbb{R}^m \to T_{f(x_0)}\mathbb{R}^n</math>. As with any linear map, if we take a basis then we can find a matrix representation of the linear map. In this case, the matrix is the Jacobian matrix. The differential <math>\mathrm{d}f</math> (not evaluating at a point anymore) gives a map from the tangent bundle of <math>\mathbb{R}^m</math> to the tangent bundle of <math>\mathbb{R}^n</math>, i.e. <math>\mathrm{d}f : T\mathbb{R}^m \to T\mathbb{R}^n</math>.
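
For example (this isn't the map from your question, just an illustration I'm adding): take <math>f : \mathbb{R}^2 \to \mathbb{R}^2</math> given by <math>f(x,y) = (x^2 + y,\ xy)</math>. With respect to the standard bases, the matrix representing <math>\mathrm{d}f_{(x,y)}</math> is the Jacobian matrix

<math>J_f(x,y) = \begin{pmatrix} 2x & 1 \\ y & x \end{pmatrix},</math>

so, for instance, <math>\mathrm{d}f_{(1,3)}</math> is the linear map represented by <math>\begin{pmatrix} 2 & 1 \\ 3 & 1 \end{pmatrix}</math>.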

Now, in your question you were talking about the case n = 1, and about the level sets of f, i.e. for some real number y in R

<math>L_y = \{ x \in \mathbb{R}^m : f(x) = y \}.</math>

In this case the Jacobian matrix of f is a 1 × m matrix, i.e. a vector. The Jacobian matrix, in the n = 1 case, is just the gradient vector of f. And you say that you understand why this vector is orthogonal to the level sets.
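
To make this concrete, here's a standard example (again, not from your question): take <math>f : \mathbb{R}^3 \to \mathbb{R}</math> with <math>f(x,y,z) = x^2 + y^2 + z^2</math>. The level set <math>L_1</math> is the unit sphere, and the gradient vector is

<math>\nabla f(x,y,z) = (2x,\ 2y,\ 2z),</math>

which points radially outward, so at each point of the sphere it is orthogonal to the sphere, i.e. to the level set.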

Now, in the case of general n, things become more complicated. The Jacobian matrix will be an n × m matrix, and it doesn't make sense to say that something is orthogonal to a matrix. (Orthogonal is a word used with vectors.) If we think of a vector as a 1 × m matrix, then another vector is orthogonal to it if and only if it lies in the kernel of the 1 × m matrix. And in the case of the Jacobian matrix the kernel also plays a key role. But we need to impose some conditions on f.

First of all we need that n ≤ m and that the Jacobian matrix has maximal rank, i.e. n. This just means that the differential at each point is a surjective map between tangent spaces; such an f is called a submersion. If n = m then the differential at each point is a bijection, so f is a local diffeomorphism (by the inverse function theorem). For the level set idea it's normal to have n < m.
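
Here's a quick illustration of the rank condition (my own example): take <math>f : \mathbb{R}^3 \to \mathbb{R}^2</math> with <math>f(x,y,z) = (x,\ x^2 + y^2 + z^2)</math>. Its Jacobian matrix is

<math>J_f(x,y,z) = \begin{pmatrix} 1 & 0 & 0 \\ 2x & 2y & 2z \end{pmatrix},</math>

which has the maximal rank 2 exactly when (y, z) ≠ (0, 0), i.e. away from the x-axis. So the restriction of f to the complement of the x-axis is a submersion.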

We can prove (using the implicit function theorem) that if the Jacobian matrix of f evaluated at a point <math>x_0</math> of <math>\mathbb{R}^m</math> has rank n and <math>f(x_0) = y_0</math>, then the level set <math>L_{y_0}</math> is a parametrisable p-dimensional manifold in a neighbourhood of <math>x_0</math>, where p = m − n. (In this case we say that <math>x_0</math> is a regular point of f; if every point of <math>L_{y_0}</math> is a regular point then we say that <math>y_0</math> is a regular value of f.) Now, here's the key point:

If f is a submersion, <math>x_0</math> is a regular point of f, and <math>y_0 = f(x_0)</math> is a regular value of f, then the kernel of the Jacobian matrix evaluated at <math>x_0</math> is exactly the tangent space of <math>L_{y_0}</math> at <math>x_0</math>.

(Note that in the case where n = 1, the submersion condition is just that the gradient vector is not the zero vector. In this case the gradient vector is perpendicular to the level sets, so its orthogonal complement, which is exactly the kernel of the 1 × m Jacobian matrix, is tangent to the level sets.)
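
Here's a worked example of the key point (an example I've chosen to keep the numbers simple): take <math>f : \mathbb{R}^3 \to \mathbb{R}^2</math> with <math>f(x,y,z) = (x^2 + y^2 + z^2,\ z)</math>. The level set <math>L_{(2,1)}</math> is the circle <math>x^2 + y^2 = 1,\ z = 1</math>, a 1-dimensional manifold (here p = 3 − 2 = 1). The Jacobian matrix is

<math>J_f(x,y,z) = \begin{pmatrix} 2x & 2y & 2z \\ 0 & 0 & 1 \end{pmatrix}, \qquad J_f(1,0,1) = \begin{pmatrix} 2 & 0 & 2 \\ 0 & 0 & 1 \end{pmatrix}.</math>

At the point (1, 0, 1) the kernel consists of the vectors (a, b, c) with 2a + 2c = 0 and c = 0, i.e. a = c = 0. That is the line spanned by (0, 1, 0), which is exactly the tangent line to the circle at (1, 0, 1).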

To see why this is true, we make calculations along the lines of the n = 1 case. Assume that f is a submersion. I'll drop any reference to points, but just keep in mind that everything needs to be evaluated at a point. We know that the level set is a parametrisable p-dimensional manifold sitting inside <math>\mathbb{R}^m</math>, say M. So let's parametrise it with <math>\varphi : U \to \mathbb{R}^m</math>, where <math>U \subseteq \mathbb{R}^p</math> is an open neighbourhood and

<math>\varphi(u_1, \ldots, u_p) = (\varphi_1(u_1, \ldots, u_p),\ \ldots,\ \varphi_m(u_1, \ldots, u_p))</math>

for smooth functions <math>\varphi_1, \ldots, \varphi_m : U \to \mathbb{R}.</math> To keep things simple let's assume that M is the zero level set of f, i.e.

<math>M = L_0 = \{ x \in \mathbb{R}^m : f(x) = 0 \}.</math>

This just makes the calculations look simpler. We would just need to add constants to all of the components of f if you wanted a different level set.

Since <math>\varphi</math> parametrises M we have <math>f \circ \varphi \equiv 0</math>, i.e.

<math>f(\varphi(u_1, \ldots, u_p)) = 0 \quad \text{for all } (u_1, \ldots, u_p) \in U.</math>
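
To see this in the circle example from above, rewritten so that the circle is a zero level set: let <math>g(x,y,z) = (x^2 + y^2 + z^2 - 2,\ z - 1)</math>, with components <math>g_1 = x^2 + y^2 + z^2 - 2</math> and <math>g_2 = z - 1</math>, so that M = L_0 is the circle <math>x^2 + y^2 = 1,\ z = 1</math> and p = 1. A parametrisation is

<math>\varphi : (0, 2\pi) \to \mathbb{R}^3, \qquad \varphi(u) = (\cos u,\ \sin u,\ 1),</math>

and indeed <math>g(\varphi(u)) = (\cos^2 u + \sin^2 u + 1 - 2,\ 1 - 1) = (0, 0)</math> for every u.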

Notice that f is a mapping from m dimensions to n dimensions, so we can break f down into components as

<math>f(x_1, \ldots, x_m) = (f_1(x_1, \ldots, x_m),\ \ldots,\ f_n(x_1, \ldots, x_m)),</math>

where for 1 ≤ i ≤ n we have <math>f_i : \mathbb{R}^m \to \mathbb{R}.</math>

Given that <math>f \circ \varphi \equiv 0</math>, we calculate the partial derivatives of both sides of this equality (using the chain rule). We get that for 1 ≤ i ≤ n and 1 ≤ j ≤ p

<math>\sum_{k=1}^{m} \frac{\partial f_i}{\partial x_k}\,\frac{\partial \varphi_k}{\partial u_j} = 0.</math>
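
As a check, in the circle example (take i = 1, the single parameter u, and evaluate everything along the curve) this identity reads

<math>\frac{\partial g_1}{\partial x}\frac{\mathrm{d}\varphi_1}{\mathrm{d}u} + \frac{\partial g_1}{\partial y}\frac{\mathrm{d}\varphi_2}{\mathrm{d}u} + \frac{\partial g_1}{\partial z}\frac{\mathrm{d}\varphi_3}{\mathrm{d}u} = 2\cos u \cdot (-\sin u) + 2\sin u \cdot \cos u + 2 \cdot 0 = 0,</math>

and for i = 2 we get <math>0 \cdot (-\sin u) + 0 \cdot \cos u + 1 \cdot 0 = 0</math>.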

Letting i vary from 1 to n we can put these equations into matrix form:

<math>\begin{pmatrix} \dfrac{\partial f_1}{\partial x_1} & \cdots & \dfrac{\partial f_1}{\partial x_m} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial f_n}{\partial x_1} & \cdots & \dfrac{\partial f_n}{\partial x_m} \end{pmatrix} \begin{pmatrix} \dfrac{\partial \varphi_1}{\partial u_j} \\ \vdots \\ \dfrac{\partial \varphi_m}{\partial u_j} \end{pmatrix} = \begin{pmatrix} 0 \\ \vdots \\ 0 \end{pmatrix}.</math>

Notice that the matrix on the left is exactly the Jacobian matrix of f. This last expression tells us that each of the partial derivatives of the parametrising functions, i.e. each vector <math>\partial\varphi/\partial u_j</math> for 1 ≤ j ≤ p, lies in the kernel of the Jacobian matrix. But these partial derivatives of the parametrisation of M are exactly what span the tangent space to M. So the tangent space to M is contained in the kernel; and since both subspaces have dimension p = m − n (the kernel by the rank-nullity theorem, since the Jacobian matrix has rank n), it follows that the kernel of the Jacobian matrix coincides with the tangent space to M. ~~ Dr Dec (Talk) ~~ 12:08, 10 September 2009 (UTC)