Wikipedia:Reference desk/Archives/Mathematics/2010 March 10

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


March 10

How do you prove that $x$ generates the multiplicative group of $\mathbb{F}_p[x]/(f(x))$ if $f(x)$ is a primitive polynomial?

How do you prove that $x$ (and any element in $\mathbb{F}_p[x]/(f(x))$ other than the additive identity) generates the multiplicative group of $\mathbb{F}_p[x]/(f(x))$ if $f(x)$ is a primitive polynomial?

If it is fairly easy to prove, I'd like to give it a try, but could use some hints. On the other hand, if it is difficult, could someone point me to a complete proof? Thanks. --98.114.146.242 (talk) 02:57, 10 March 2010 (UTC)

Well, x generates the field because it is a root of a primitive polynomial... However, for example, if the multiplicative group of the field has size 8 and z is a generator, then z^4 cannot possibly be a generator, because z^4 will have order 2. So it will not be true that "any element in $\mathbb{F}_p[x]/(f(x))$ other than the additive identity" will generate the multiplicative group. In general, a cyclic group of size n has only φ(n) generators. — Carl (CBM · talk) 03:04, 10 March 2010 (UTC)
You're right. I confused a special case with the general case. --98.114.146.242 (talk) 03:28, 10 March 2010 (UTC)
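To experiment with this concretely, here is a small sketch (not from the thread): it does arithmetic in F_2[x]/(f) for the primitive polynomial f = x^4 + x + 1, checks that x generates the multiplicative group of order 15, and confirms that only φ(15) = 8 elements are generators. The helpers gf_mul and order and the choice of polynomial are just for illustration.

    # Sketch: arithmetic in F_2[x]/(f), with polynomials stored as bit masks.
    # f = x^4 + x + 1 (0b10011) is primitive, so the field has 16 elements and
    # its multiplicative group is cyclic of order 15.

    def gf_mul(a, b, f=0b10011, deg=4):
        """Multiply two elements of F_2[x]/(f), reducing modulo f."""
        result = 0
        while b:
            if b & 1:
                result ^= a
            b >>= 1
            a <<= 1
            if a & (1 << deg):   # reduce as soon as the degree reaches deg
                a ^= f
        return result

    def order(a):
        """Multiplicative order of a nonzero element."""
        p, n = a, 1
        while p != 1:
            p = gf_mul(p, a)
            n += 1
        return n

    x = 0b10                     # the element x
    powers, p = {}, 1
    for k in range(15):
        powers[k] = p            # powers[k] = x^k
        p = gf_mul(p, x)

    print(order(x))                                            # 15: x generates the whole group
    print(order(powers[5]))                                    # 3: x^5 is not a generator
    print(sum(order(powers[k]) == 15 for k in range(1, 15)))   # 8 = phi(15) generators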

Project Euler problem

In this problem, how many simulations of random walks would you expect to have to do in order to get a result accurate to 6 decimal places? (and yes, I am aware this isn't a good way to solve the problem). 149.169.164.152 (talk) 07:11, 10 March 2010 (UTC)

That's easier to estimate once you've done a few simulations. You'll get very rough estimates of the mean $\mu$ and standard deviation $\sigma$. If you then do $n$ simulations and take their average, the result will be distributed normally with expectation $\mu$ and standard error $\sigma/\sqrt{n}$. Let $\mu \approx 10^a$, $\sigma \approx 10^b$. To be highly confident the result is accurate to 6 s.f., you'll want the SE to be about $10^{a-6}$, so $n \approx 10^{2(b-a)+12}$. If the distribution of the experiment result is roughly exponential, expect the required number to be in the trillions. -- Meni Rosenfeld (talk) 08:27, 10 March 2010 (UTC)
Oh, decimal places, not s.f.. Then what you really want is an SE of about $10^{-7}$, which means $n \approx 10^{2b+14}$, which is much higher. -- Meni Rosenfeld (talk) 08:35, 10 March 2010 (UTC)
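The arithmetic above can be packaged as a short pilot computation. A sketch (the simulate function below is a toy exponential stand-in, since the actual Project Euler problem isn't linked here; target_se is the standard error you decide to aim for):

    # Sketch: estimate how many simulation runs would be needed for a target accuracy,
    # using a small pilot sample to guess the mean and standard deviation.
    import random
    import statistics

    def simulate():
        # Placeholder for one run of the real random-walk simulation.
        return random.expovariate(1.0)

    pilot = [simulate() for _ in range(10_000)]
    m = statistics.mean(pilot)
    s = statistics.stdev(pilot)

    target_se = 1e-7                      # roughly what 6 decimal places demands
    n_needed = (s / target_se) ** 2
    print(f"pilot mean ~ {m:.3f}, sd ~ {s:.3f}, runs needed ~ {n_needed:.2e}")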

Is C^∞ dense among the measure-preserving transformations?

My problem is:

Suppose we have a diffeomorphism T which preserves the Lebesgue measure and is C^3. Is it possible to find a T' which is C^3-close to T, preserves the Lebesgue measure and is C^∞? Usually density arguments make use of convolution, but I guess this would destroy the property of preserving the measure. How can I proceed? --Pokipsy76 (talk) 10:39, 10 March 2010 (UTC)

At least in the case of a C^3 diffeo T: M → M on a C^∞ compact connected differentiable manifold M with a volume form α, I think this follows quite immediately from a celebrated theorem of Jürgen Moser [1], which states that if α and β are two volume forms with the same total mass on such an M, there exists a C^∞ diffeo f: M → M such that β = f*(α); in fact f can be chosen arbitrarily C^3-close to the identity provided α and β are suitably close. Now, take a C^∞ diffeo S which is C^3-close to T. Then S*(α) is a volume form close to α and has the same total mass. By Moser's theorem with β = S*(α) there exists a C^∞ diffeo f: M → M close to the identity such that S*(α) = f*(α), so (Sf^{-1})*(α) = α, which means that Sf^{-1} is a C^∞ diffeo C^3-close to T which preserves α, as you wanted. I strongly suggest you read Moser's beautiful proof, which may give you several hints. It has been generalized to various cases (manifolds with boundary, non-compact, &c). You should work out the details of what I sketched, but I think it's OK and I hope it's of help. It is possible that there is a more direct proof, but I guess in any case one has to pass close to some of the lemmas of Moser's proof. --pma 21:15, 11 March 2010 (UTC)
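For reference, the pullback computation at the heart of the argument above, written out (here f is the Moser diffeomorphism with f*(α) = S*(α); nothing beyond what was already said):

    \[
      (S f^{-1})^{*}\alpha
      \;=\; (f^{-1})^{*}\bigl(S^{*}\alpha\bigr)
      \;=\; (f^{-1})^{*}\bigl(f^{*}\alpha\bigr)
      \;=\; (f f^{-1})^{*}\alpha
      \;=\; \alpha ,
    \]

so S f^{-1} preserves α, and it stays C^3-close to T because S is C^3-close to T and f is close to the identity.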
Thank you very much, I think this article will be useful. --Pokipsy76 (talk) 13:10, 12 March 2010 (UTC)
Another thing. Let F : R×R^m → R^m be a time-dependent vector field with vanishing divergence (with respect to the x variable), and such that its integral flow

\[
  \partial_t\, g(t,x) = F\bigl(t, g(t,x)\bigr), \qquad g(0,x) = x,
\]

is defined at time 1 for any initial value x.
Then it's clear that the time-1 map, T(x) := g(1,x), is a diffeomorphism that preserves the Lebesgue measure. Conversely, any measure-preserving diffeo is of this form - I don't have a reference here, but it is a standard result (not completely elementary; it uses Moser's theorem and a few facts about fibrations); I am pretty sure that you can do it for any T of class C^3, like the ones you are treating. Of course, such a map T can be very easily approximated by a C^∞ measure-preserving diffeo: just take the integral flow of a regularized field with null divergence (convolution now works well, because it gives a field with null divergence). So if you find a ready reference, the approximation should follow very easily. Hope this helps. --pma 22:13, 12 March 2010 (UTC)
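For the first claim (that the time-1 map of a divergence-free field preserves Lebesgue measure), the standard computation, not spelled out above, is Liouville's formula for the Jacobian of the flow:

    \[
      \frac{\partial}{\partial t}\,\det D_x g(t,x)
      \;=\; \operatorname{div}_x F\bigl(t, g(t,x)\bigr)\,\det D_x g(t,x) ,
    \]

so if div_x F ≡ 0 then det D_x g(t,x) ≡ det D_x g(0,x) = 1 for all t, and each time-t map, in particular T = g(1,·), preserves Lebesgue measure.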

'Extension' of a linear transformation?

Say I have a linear transformation of R^2, $J$, and a linear transformation of R^4, $A = \begin{pmatrix} J & 0 \\ 0 & K \end{pmatrix}$ (in 2×2 blocks, with $K$ another linear transformation of R^2); is it fair to call A an 'extension' of J, since, considering R^2 a subset of R^4, J and A bring about the same transformation on this subset? Is there a proper term for this sort of situation? Thanks, Icthyos (talk) 17:03, 10 March 2010 (UTC)

You could say that J is the restriction of A to R^2. This term has pretty general application, see Domain of a function#Formal definition. I doubt the specific situation you describe has a name.--RDBury (talk) 03:03, 11 March 2010 (UTC)


Thanks, that makes sense. Restriction will do nicely! Icthyos (talk) 11:48, 11 March 2010 (UTC)
Actually, calling A an extension of J (or J a restriction of A, as said above) is fair, exactly with the justification you wrote, and it is indeed common language for linear operators too. Note that you can say the same even if there's only one zero block in the matrix (which one: NE or SW? ;-) ). If you want to describe the diagonal decomposition, you may also say that J is a factor of A, or that A splits into the direct sum of J and K, and similar expressions. --pma 16:46, 11 March 2010 (UTC)
Well, as it stands, A is an extension of both J and K - if only the NE block is zero, it remains an extension of J, but no longer of K, and if only the SW block is zero, it is still an extension of K, but not of J, correct? I suppose I'll just pick my favourite expression and run with it, thanks! Icthyos (talk) 21:57, 11 March 2010 (UTC)
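A small numerical check of the restriction/extension bookkeeping (not from the thread; it uses numpy and the column-vector convention, and the specific J and K are made up for illustration):

    # Sketch: a block-diagonal A = diag(J, K) restricts to J on the subspace R^2 x {0}.
    # Column-vector convention (A acts on the left); with row vectors the roles of the
    # NE and SW blocks swap, which is the convention question joked about above.
    import numpy as np

    J = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
    K = np.array([[5.0, 6.0],
                  [7.0, 8.0]])
    A = np.block([[J, np.zeros((2, 2))],
                  [np.zeros((2, 2)), K]])

    v = np.array([1.0, -2.0])
    embedded = np.concatenate([v, np.zeros(2)])   # view v in R^2 as (v, 0, 0, 0) in R^4

    print(A @ embedded)    # first two entries equal J @ v, last two stay zero
    print(J @ v)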

Complex Analytic Functions

Let's say I have a function f : C --> C, which is infinitely differentiable in a neighbourhood of $z_0$. I define a new sequence of polynomial functions $p_k$ given by

\[ p_k(z) = \frac{f^{(k)}(z_0)}{k!}\,(z - z_0)^k. \]

I want to consider the new sequence of functions $f_n$ given by $f_n = p_0 + p_1 + \cdots + p_n$. I want to know how to investigate the limit of the sequence $f_n$ as n tends to infinity. I think this is called pointwise convergence. Ultimately I'd like to investigate the uniform convergence. But I want some explicit methods. A nice worked example would be great. Let's say $f(z) = \sin z$ and $z_0 = 0$. I know that $\sin$ is complex analytic, a.k.a. holomorphic, and so I know that it's equal to its power series. I just use this as a familiar non-trivial example for the explicit computations. •• Fly by Night (talk) 19:53, 10 March 2010 (UTC)

It's not quite clear if you are talking of an f defined everywhere on C, but only C^∞ in a nbd of 0. If so, it is a rather strange assumption, and you should be aware that there is no relation at all between f and the f_n in the large (i.e. out of the neighborhood Ω you are talking of). Besides, it is not clear if you mean real or complex differentiability. In the first case, you can have a C^∞ function with f(x)≠0 for all x≠0, and all the f_n identically 0. If you mean infinitely differentiable in the complex sense, then complex differentiability (that is, f holomorphic) is sufficient (it's equivalent) and implies that the f_n converge to f uniformly on all closed disks centered at z_0 and contained in Ω. You can't have uniform convergence on C even if f is holomorphic on C (entire), unless f itself is a polynomial (in which case the sequence of the f_n stabilizes and the uniform convergence is trivially true). For instance, the function you wrote is bounded and nonconstant, whereas a nonconstant polynomial is not. So the uniform distance is infinite. --pma 16:36, 11 March 2010 (UTC) (PS: above I meant: bounded on R, sorry)
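To make the "uniformly on closed disks" claim quantitative, here is one standard estimate (a sketch, not from the thread): if f is holomorphic on a neighbourhood of the closed disk |z - z_0| ≤ 2R contained in Ω and M = sup_{|z-z_0|=2R} |f(z)|, the Cauchy estimates give |f^{(k)}(z_0)|/k! ≤ M/(2R)^k, hence for |z - z_0| ≤ R

    \[
      |f(z) - f_n(z)|
      \;\le\; \sum_{k=n+1}^{\infty} \frac{|f^{(k)}(z_0)|}{k!}\,|z - z_0|^{k}
      \;\le\; M \sum_{k=n+1}^{\infty} 2^{-k}
      \;=\; M\,2^{-n} ,
    \]

which tends to 0 independently of z, giving uniform convergence of f_n to f on the closed disk of radius R about z_0.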
I gave an explicit example: $f(z) = \sin z$ and $z_0 = 0$. I also said that I know that $\sin$ is complex analytic, a.k.a. holomorphic, and so I know that it's equal to its power series. I just used this as a familiar non-trivial example for the explicit computations. •• Fly by Night (talk) 21:47, 12 March 2010 (UTC)
Ok, then for this particular entire function the polynomials f_n converge to f uniformly on every disk, but do not converge uniformly on C, for the reason I wrote (it is bounded on R whereas the f_n are not). It was not clear to me what you were asking exactly. Is it all clear now? --pma 22:50, 12 March 2010 (UTC)
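A quick numerical illustration of both halves of that statement (a sketch; it assumes the example really is f(z) = sin z at z_0 = 0, as reconstructed above): compare the sup of |f - f_n| on a closed disk with its value far out on the real axis.

    # Sketch: Taylor partial sums of sin at 0. On the circle |z| = 2 (which dominates
    # the closed disk by the maximum principle) the error shrinks rapidly with n,
    # while at a large real point the polynomial is huge and sin stays bounded,
    # so there is no uniform convergence on all of C.
    import cmath
    import math

    def f_n(z, n):
        """Partial sum of the Taylor series of sin at 0, up to degree n."""
        total, sign_index = 0.0, 0
        for k in range(1, n + 1, 2):          # only odd powers appear
            total += (-1) ** sign_index * z ** k / math.factorial(k)
            sign_index += 1
        return total

    circle = [2 * cmath.exp(2j * math.pi * t / 200) for t in range(200)]
    for n in (5, 11, 21):
        sup_err = max(abs(cmath.sin(z) - f_n(z, n)) for z in circle)
        far_err = abs(math.sin(50.0) - f_n(50.0, n))
        print(f"n={n:2d}  sup error on |z|<=2 ~ {sup_err:.2e}   error at z=50 ~ {far_err:.2e}")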