Wikipedia:Reference desk/Archives/Mathematics/2006 September 24


Random Variable X and its Mean Cancelling

If I have a random variable X with a mean mu(X), and both are in the same equation, one being negative, can the two simply cancel out?

Basically, how do the two relate?

Like in X − mu(X)? No, they won't simply cancel out. This is a new random variable, say Z. Have you read the articles Random variable and Expected value? Take for example that X is: the result of throwing two dice and adding the numbers shown. The outcomes of X may range from 2 to 12, with an expected value (arithmetic population mean) of 7. Suppose you throw the dice 10 times and observe for X this sample: [3, 7, 10, 9, 9, 7, 2, 7, 2, 7]. (I actually threw two dice ten times here.) That means for Z = X − 7 this sample: [−4, 0, 3, 2, 2, 0, −5, 0, −5, 0]. Occasionally the outcome of Z is 0, but the random variable Z itself is clearly not the constant 0. However, mu(Z) = 0. To see this, we need three facts: mu(V) = E(V) for a random variable V (which is true by definition of mu(·)), E(X − Y) = E(X) − E(Y), and E(C) = C for a constant C. Then mu(Z) = E(Z) = E(X − mu(X)) = E(X) − E(mu(X)) = mu(X) − mu(X) = 0. Indeed, the sample mean for our sample of 10 outcomes of Z is −0.7 – not quite 0 due to random fluctuations, but fairly close. --LambiamTalk 01:13, 24 September 2006 (UTC)[reply]
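For anyone who wants to see this numerically rather than from ten hand-thrown dice, here is a small Python sketch (not part of the original exchange; the helper name throw_two_dice and the sample size are my own choices). It simulates many throws of two dice, forms Z = X − 7, and checks that the sample mean of Z sits near 0 even though most individual outcomes of Z are nonzero.

    import random

    random.seed(0)  # fixed seed so the illustration is reproducible

    def throw_two_dice():
        """One outcome of X: the sum of two fair six-sided dice."""
        return random.randint(1, 6) + random.randint(1, 6)

    mu_X = 7.0  # E(X) for the sum of two fair dice

    samples_X = [throw_two_dice() for _ in range(100_000)]
    samples_Z = [x - mu_X for x in samples_X]  # Z = X - mu(X)

    mean_Z = sum(samples_Z) / len(samples_Z)
    frac_zero = sum(1 for z in samples_Z if z == 0) / len(samples_Z)

    print(f"sample mean of Z: {mean_Z:+.4f}")                  # close to 0
    print(f"fraction of throws with Z == 0: {frac_zero:.3f}")  # about 1/6, not 1

With 100,000 throws the sample mean of Z typically lands within a few hundredths of 0, the simulated counterpart of mu(Z) = 0, while Z itself is 0 only about a sixth of the time.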

Okay, but in the long run, they will cancel? Like if I have a random variable Y equal to the expression you said, Y = X − mu(X), then E(Y) will be 0, simply due to the rule that X can be transformed into mu(X)?

Yes, E(Y) = 0, but I'm not sure what you meant by "X can be transformed into mu(X)". -- Meni Rosenfeld (talk) 16:38, 24 September 2006 (UTC)[reply]

Well, in order for E(Y) = 0, X must be equal to mu(X) to cancel... how does this come about? How do you show that E(Y) = 0?

That's what I showed above, except that I called it Z instead of Y. --LambiamTalk 16:59, 24 September 2006 (UTC)[reply]

Wow, sorry, you are completely correct; that makes perfect sense, thank you. Just as a final clarification, the E(mu(X)) is the part that pertains to E(C) = C for constant C?

Exactly. E(X) (aka μ(X)) is a constant, therefore E(E(X)) = E(X). -- Meni Rosenfeld (talk) 17:11, 24 September 2006 (UTC)[reply]
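For an exact check rather than a simulation, the 36 equally likely outcomes of two dice can simply be enumerated; the following is a hypothetical Python sketch of mine (not from the thread), using exact fractions so that E(X − E(X)) comes out as exactly 0.

    from itertools import product
    from fractions import Fraction

    # All 36 equally likely outcomes of two fair dice.
    outcomes = [a + b for a, b in product(range(1, 7), repeat=2)]
    p = Fraction(1, 36)  # probability of each outcome

    E_X = sum(p * x for x in outcomes)          # mu(X) = 7
    E_Y = sum(p * (x - E_X) for x in outcomes)  # E(X - mu(X))

    print(E_X)  # prints 7
    print(E_Y)  # prints 0, exactly

The second sum is 0 exactly, not just approximately, because E(X) is subtracted from every term as one and the same constant.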

the other delta function?

OK, so I am trying to understand how the Fourier transform of a pulse train in the time domain also gives a pulse train in the frequency domain. Actually I'm stuck on a little detail. I get that:  , that  , and that  . And that in the end, you make the jump and say that  ...

I don't see it. Did I miss something? Does anybody have some intuition (hopefully on physical grounds) on how the expression   is equivalent to a delta function? Please? --crj

What makes you think a pulse train in the time domain should give a pulse train in the frequency domain? On the contrary, it should be all over the place in the frequency domain (see "Localization property" in the article Continuous Fourier transform). This should be intuitively obvious: you can't assign any one frequency to a single spike. It also fits with your result: 1, which is hardly a spike. In the expression you give for  , there is an occurrence of the variable   (the traditional notation is  ). That can't be right; the result should only depend on   . This is not an indefinite integral (antiderivative; primitive function) but a definite integral for   from   to  . --LambiamTalk 21:30, 24 September 2006 (UTC)[reply]
Oops. I meant to type t instead of w after the integration. There are some notes on the Fourier transform of a pulse train in the article Dirac comb. Maybe I am doing the problem the wrong way, but something smells fishy here... because what happens at t = 0? Oh dear. Thanks anyway. --crj 00:48, 25 September 2006 (UTC)
Whoa! Careful there. The transform of a single impulse is a constant; the transform of a periodic sequence of impulses is again a periodic sequence of impulses, whose period is inversely proportional to the original period. --KSmrqT 22:34, 24 September 2006 (UTC)[reply]
So the Dirac delta function is a tiny bit confusing? Since it's not really a normal function at all, that's not surprising. One way we approach it is through its properties within an integral. That is,
\int_{-\infty}^{\infty} f(t)\, \delta(t - t_0)\, dt
selects the value f(t0). We can be more formal by using a limiting process. For example, we know that a Gaussian bell curve,
\frac{1}{\sigma \sqrt{2\pi}}\, e^{-t^2 / (2\sigma^2)}
integrates to 1, and can be made as narrow as we like by letting σ approach zero. We also know (and can easily verify) that the Fourier transform of a Gaussian is again a Gaussian, but with width inversely proportional to σ. As we pinch the Gaussian to its delta function limit, its transform spreads and flattens to a constant.
A delta function cannot be written as you propose, as a simple exponential, which may account for your confusion.
If making a single impulse requires some "funny business", making an impulse train requires more. But perhaps we should stop here for now. --KSmrqT 23:24, 24 September 2006 (UTC)[reply]
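Here is a rough numerical illustration of that limiting picture (my own Python sketch, not from the discussion; the grid limits, the probe frequencies and the values of sigma are arbitrary choices). It approximates the continuous Fourier transform of the unit-area Gaussian by a Riemann sum and shows that, as sigma shrinks, the magnitude of the transform approaches 1 at every probed frequency, i.e. it flattens toward a constant.

    import numpy as np

    def gaussian(t, sigma):
        """Unit-area Gaussian of width sigma."""
        return np.exp(-t**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

    def fourier_at(f, sigma, t, dt):
        """Riemann-sum approximation of F(f) = integral g(t) exp(-i 2 pi f t) dt."""
        return np.sum(gaussian(t, sigma) * np.exp(-2j * np.pi * f * t)) * dt

    t = np.linspace(-8.0, 8.0, 400_001)  # grid wide and fine enough for sigma <= 1
    dt = t[1] - t[0]
    freqs = [0.0, 0.5, 1.0, 2.0]

    for sigma in [1.0, 0.3, 0.1, 0.03]:
        row = "  ".join(f"|F({f})| = {abs(fourier_at(f, sigma, t, dt)):.4f}" for f in freqs)
        print(f"sigma = {sigma:4.2f}:  {row}")
    # As sigma shrinks, the Gaussian pinches toward a delta and |F| -> 1 at every f.

Analytically the magnitudes follow exp(-2 pi^2 sigma^2 f^2), so each row of output creeps toward a flat line of 1s as sigma goes to 0.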
Let me rephrase the question:
According to the Dirac comb article, the Fourier transform of an impulse train is:
\mathcal{F}\!\left\{ \sum_{n=-\infty}^{\infty} \delta(t - nT) \right\} \;=\; \frac{1}{T} \sum_{k=-\infty}^{\infty} \delta\!\left( f - \frac{k}{T} \right) \;=\; \sum_{n=-\infty}^{\infty} e^{-i 2\pi n f T} .
My question is, on what grounds are the last two terms (sum with delta function in the frequency domain, sum with complex exponential) equivalent? -- Crj 02:34, 25 September 2006 (UTC)[reply]
Let's look at the partial sum \sum_{n=-N}^{N} e^{-i 2\pi n f T}. Now let N approach infinity. It's possible to check that this sum approaches infinity if f = k/T (with integer k), approaches zero otherwise, and that its integral over any interval containing exactly one point where f = k/T approaches 1. So it's a sum of delta functions. Conscious 11:53, 25 September 2006 (UTC)[reply]
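A quick numerical check of this limiting behaviour (my own sketch; setting T = 1 and probing the off-grid frequency 0.37/T are arbitrary choices): the partial sum grows without bound at f = k/T, stays bounded at other frequencies, and its integral over one period in f stays fixed as N grows.

    import numpy as np

    T = 1.0  # assumed period

    def partial_sum(f, N):
        """S_N(f) = sum over n = -N..N of exp(-i 2 pi n f T)."""
        n = np.arange(-N, N + 1)
        return np.sum(np.exp(-2j * np.pi * n * f * T)).real  # imaginary parts cancel

    f_grid = np.linspace(-0.5 / T, 0.5 / T, 20_000, endpoint=False)  # one period in f

    for N in [10, 100, 1000]:
        on_grid = partial_sum(1.0 / T, N)     # at f = 1/T the sum equals 2N + 1
        off_grid = partial_sum(0.37 / T, N)   # at a generic f it stays bounded
        integral = np.mean([partial_sum(f, N) for f in f_grid]) / T  # Riemann sum over one period
        print(f"N={N:5d}  S_N(1/T)={on_grid:8.1f}  S_N(0.37/T)={off_grid:7.2f}  "
              f"integral over one period = {integral:.3f}")

Peaks that grow without bound while the area per period stays fixed are exactly the behaviour of a train of spikes, which is why the limit is read as a sum of delta functions.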
The question is based on an elementary slip. If the sum 1+2+3 equals the sum 2+2+2, we have no grounds for assuming equality between respective terms. This is true for infinite sums as well. --KSmrqT 14:41, 25 September 2006 (UTC)[reply]
By the way, you are wrong in saying (in your initial post) that  . You have to use the definite integral, i.e. \int_{-\infty}^{\infty} \delta(t)\, e^{-i 2\pi f t}\, dt, to obtain the result. Conscious 17:02, 25 September 2006 (UTC)[reply]
The question of equivalence of the sum of deltas and the sum of exponentials is discussed at the Nyquist-Shannon sampling theorem (in the Mathematical basis for the theorem section). There, it is explained that the sum of exponentials only agrees with the sum of deltas in the sense of tempered distributions. This is slightly weaker than pointwise equality. -- Fuzzyeric 02:26, 28 September 2006 (UTC)[reply]
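To make "agreement in the sense of tempered distributions" a bit more concrete, here is a hypothetical Python sketch (T = 1 and the Gaussian test function are my own choices; the delta-train normalisation 1/T follows the formula quoted above). It pairs the truncated exponential sum with a smooth test function phi by numerical integration and compares the result with what the delta train (1/T) sum_k delta(f − k/T) assigns to phi, namely (1/T) sum_k phi(k/T).

    import numpy as np

    T = 1.0  # assumed period

    def phi(f):
        """Smooth, rapidly decaying test function (a Gaussian, my choice)."""
        return np.exp(-f**2)

    # What the delta train (1/T) * sum_k delta(f - k/T) assigns to phi.
    ks = np.arange(-8, 9)
    delta_side = phi(ks / T).sum() / T

    # Pair the truncated exponential sum with phi by numerical integration.
    f = np.linspace(-8.0, 8.0, 200_001)
    df = f[1] - f[0]

    for N in [2, 8, 32]:
        S_N = np.zeros_like(f)
        for n in range(-N, N + 1):
            S_N += np.cos(2 * np.pi * n * f * T)  # imaginary parts cancel in +/- n pairs
        exp_side = np.sum(S_N * phi(f)) * df
        print(f"N={N:3d}  <exponential sum, phi> = {exp_side:.6f}  "
              f"<delta train, phi> = {delta_side:.6f}")
    # The pairings agree even though the partial sums never equal the delta
    # train pointwise -- this is equality as tempered distributions.

Any rapidly decaying smooth phi gives the same agreement, while pointwise the partial sums keep oscillating between the spikes, which is why the equality only holds in this weaker sense.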

Wow, this turned out to be a funny business. My difficulty really was in recovering the delta function intact after I take the Fourier transform of an impulse train: "delta functions in... delta functions out", or so I thought. Thanks for all the responses! -- Crj 14:17, 26 September 2006 (UTC)[reply]