Wikipedia:Reference desk/Archives/Mathematics/2009 June 30

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


June 30

Linear functionals

Another qual problem:

Let 1 < p < ∞, and suppose φ is a continuous linear functional on L^p(R) such that φ(f_a) = φ(f) for every f in L^p(R) and every real a, where f_a(x) = f(x + a). Show that φ is the zero functional. Show that this is false if p = 1.

I'm not sure how to do this, obviously, or I wouldn't ask. But I have gotten somewhere, whether it is helpful or not, I do not know. First, by the Riesz Representation Theorem, there exists g in L^q(R), where 1/p + 1/q = 1, such that

φ(f) = ∫_R f(x) g(x) dx

for all f in L^p(R). In particular, let f_a(x) = f(x + a) for any real a. Then, using the condition on φ and using a change of variables gives

∫_R f(x) g(x) dx = φ(f) = φ(f_a) = ∫_R f(x + a) g(x) dx = ∫_R f(x) g(x - a) dx

for any real a. Here I am stuck. My thought was perhaps I could show this implies g is periodic... I could use a = 1 instead maybe. I don't know. Then, if I get that g is periodic, perhaps I could pick another choice of f that gives a contradiction. Any suggestions? Thanks. StatisticsMan (talk) 02:20, 30 June 2009 (UTC)[reply]

This is basically fine. L^p(R) for finite p does not contain any nonzero periodic functions. For every epsilon > 0, there is some interval outside of which |g| is less than epsilon. On the other hand, its integral is constant on same-size intervals, a big no-no for functions with a finite integral. This breaks down when p is infinity, and you can have periodic g. In terms of f, this is when p=1. JackSchmidt (talk) 04:21, 30 June 2009 (UTC)[reply]
That's substantially correct, but notice that the integral of g a priori could be constant and zero on same-size intervals; and also that a g in L^q(R) for 1 < q need not be integrable (so |g| may have infinite integral on R). So it's safer and more direct to use |g(x)|^q in your argument, when showing that there are no periodic nonzero functions in L^q(R). --pma (talk) 06:05, 30 June 2009 (UTC)[reply]
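To spell out why a nonzero 1-periodic g cannot lie in L^q(R) for finite q (a sketch along the lines described above): if g is 1-periodic, then

∫_R |g(x)|^q dx = Σ_n ∫_n^{n+1} |g(x)|^q dx = Σ_n ∫_0^1 |g(x)|^q dx,

where the sum runs over all integers n and each term is the same by periodicity. The sum is infinite unless ∫_0^1 |g|^q = 0, i.e. unless g = 0 a.e.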
Notice that g 1-periodic also follows from φ(f) = φ(f_1) for all f, whence g(x + 1) = g(x) a.e. --pma (talk) 06:16, 30 June 2009 (UTC)[reply]
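To spell out the "whence" (a sketch, using the notation above and writing f_1(x) = f(x + 1)): set h(x) = g(x) - g(x - 1), which is in L^q(R). A change of variables in φ(f_1) = ∫_R f(x + 1) g(x) dx gives

φ(f) - φ(f_1) = ∫_R f(x) (g(x) - g(x - 1)) dx = ∫_R f(x) h(x) dx = 0

for every f in L^p(R). Since the Riesz representation of a functional on L^p(R) is unique, h can represent the zero functional only if h = 0 a.e., i.e. g(x) = g(x - 1) a.e., which is the 1-periodicity.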
By the way, notice that the fact mentioned by JackSchmidt (there is no nonzero periodic function in L^p for finite p) also gives a negative answer to the question: can a nonzero function in L^p for a finite p have zero integral over all unit intervals? This extends the analogous result in L^1 mentioned around your preceding post. The reason is that the integral function of such a function g (i.e. f(x) = ∫_0^x g(t) dt) would be 1-periodic, so g(x) itself would be 1-periodic, being the derivative a.e. of f(x), therefore necessarily g = 0, as we know. pma. --131.114.72.186 (talk) 11:24, 30 June 2009 (UTC)[reply]
Thanks a lot. I read through what you all said and it made sense to me after some thought and I was able to complete the proof. I have not come up with an example to show it is not true for p=1, but I understand what you are saying and I will think about that some. StatisticsMan (talk) 19:35, 30 June 2009 (UTC)[reply]
Hint: the p=1 case is really easy. Algebraist 19:42, 30 June 2009 (UTC)[reply]
So easy, the answer has already been given. Let g(x) = sin(2pi x). Then g(x) is in L^\infty(R). By a Proposition in Royden,

φ(f) = ∫_R f(x) g(x) dx

defines a bounded linear functional on L^1(R). And then I can use an f which is in L^1 and for which the integral is not 0, so phi is not the 0 functional. Thanks! StatisticsMan (talk) 21:04, 30 June 2009 (UTC)[reply]
I was thinking just set g(x)=1. Algebraist 23:49, 30 June 2009 (UTC)[reply]
Good point, thanks! StatisticsMan (talk) 13:42, 1 July 2009 (UTC)[reply]
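For completeness, a quick sketch of why the simplest choice g(x) = 1 works for p = 1 (assuming the translation condition stated above): the functional

φ(f) = ∫_R f(x) dx

satisfies |φ(f)| ≤ ||f||_1, so it is a bounded linear functional on L^1(R); it is unchanged when f is replaced by any translate f_a, because Lebesgue measure is translation invariant; and it is not the zero functional, since applying it to the indicator function of [0, 1] gives 1.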

Unemployment vs. application rate

What is the theoretical relationship between the unemployment rate and the number of applicants per job opening? NeonMerlin 03:17, 30 June 2009 (UTC)[reply]

In basic theory, unemployment is positively correlated with the number of applicants per job opening. If you look specifically at a single sector such as accounting jobs, it's easier to think of the basic principles. If there are 900 accounting jobs for 1000 accountants, then I would imagine that if I were an unemployed accountant, I would broaden my job search, since there are more hungry accountants pursuing a limited number of jobs. The more unemployed accountants relative to me, the more broadly I would apply to accounting jobs. I think there are two positive effects at work: in addition to every unemployed accountant expanding his job search, applying more aggressively and exercising less discretion, you also have more unemployed accountants who all think the same way. So I think it is a squared (quadratic) relationship: when the unemployment rate was 5%, nobody panicked the way they do facing 10%. Also, humans are a species driven by exuberance, fear, and emotional reasoning, and lots of theoretical relationships only apply to rational decision makers acting in their best interest. I think a theoretical relationship exists, but it's probably not an economics question--perhaps a consumer psychology model would best explain it, using "unemployed people" as the "consumers" of scarce new jobs. See also Bigger fool theory, which suggests that a stable equilibrium may not exist, making it hard to build a reasonable model for your question. If you would like more help, try the articles on Financial modeling and Econometrics. 74.5.237.2 (talk) 09:06, 30 June 2009 (UTC)[reply]
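If it helps to see the "squared" intuition in numbers, here is a purely illustrative toy calculation (the functional forms and parameter values below are my own assumptions, not anything from the discussion above):

```python
# Toy sketch (my assumptions, not the poster's): openings shrink as the
# unemployment rate u rises, while each unemployed worker broadens their
# search roughly in proportion to u. Applications per opening then grow
# faster than linearly in u.

def applications_per_opening(u, labor_force=1000.0, base_openings=900.0, breadth=50.0):
    """u is the unemployment rate as a fraction (e.g. 0.05 for 5%)."""
    unemployed = u * labor_force
    openings = max(base_openings * (1.0 - u), 1.0)    # fewer openings in a slump
    apps_per_person = 1.0 + breadth * u               # broader search when jobs are scarce
    return unemployed * apps_per_person / openings

for u in (0.05, 0.10, 0.20):
    print(f"u = {u:.0%}: {applications_per_opening(u):.1f} applications per opening")
```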

"Scoring" a product, correcting for incomplete scores. edit

Please could you check that my thinking on this problem is mathematically sound. The company that I work for tenders for various products and services. As part of the evaluation process we come up with a "scoring" matrix, which scores each company against a number of categories. The points in each category are set to give a weighting. A simplified example might be:

                          COMPANY 1    COMPANY 2    COMPANY 3
PRICE (0-10)
FUNCTIONALITY (0-20)
SUPPORT (0-10)
COMPANY STABILITY (0-20)

In practice there would be many functional areas and criteria, i.e. many rows. Each scorer would potentially put a score in the range given against each company. If everyone filled in the table entirely then scoring would be easy - just totalling the scores for each company. In practice two things happen:

Some people only score certain areas. This is expected: technical people might only be able to answer questions about functionality and not (for example) company stability. The thing is that if we just add up the scores, some areas would not get the weighting they need; for example, only three people might score company stability but ten might score functionality. I figure that the way to cope with this is to give the people who have not scored in an area a score equal to the average of the scores that have been given.

Some people do not score all companies. Ideally this would not happen, but due to ongoing work, unexpected calls, sickness, etc., some people may miss some of the presentations. Obviously it would be wrong to mark a company down because fewer people attended its presentation, so I figure that a solution to this is to give the companies they missed an average of the scores they gave to the ones they attended. I thought that this is better than giving an average of the scores given by other people, because some people seem to mark high and some low.

Is this mathematically sound? I can see no reason why I should apply the correction for missing rows before the correction for missing columns (or the other way around), but that order feels right. In practice we have used these corrections and they do tend to come out with results that match the "general feelings" that people had. Thanks in advance. -- Q Chris (talk) 09:36, 30 June 2009 (UTC)[reply]
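For what it's worth, here is a minimal sketch in code of the two corrections exactly as described above (the data layout, names, and the order of the two fill passes are my own choices, purely for illustration):

```python
# Illustrative sketch of the two corrections described above (my own code and
# data layout, not from the original question). scores[scorer][company][category]
# holds a number, or None if that scorer left the cell blank.

def fill_and_total(scores, companies, categories):
    # Make sure every scorer has a (possibly blank) cell for everything.
    for s in scores.values():
        for c in companies:
            s.setdefault(c, {})
            for cat in categories:
                s[c].setdefault(cat, None)

    # Missed companies: a scorer who skipped a company gets, per category, the
    # average of the scores they gave that category for companies they did see.
    for s in scores.values():
        for cat in categories:
            seen = [s[c][cat] for c in companies if s[c][cat] is not None]
            if seen:
                avg = sum(seen) / len(seen)
                for c in companies:
                    if s[c][cat] is None:
                        s[c][cat] = avg

    # Unscored areas: anything still blank gets the average of what the other
    # scorers gave that company in that category.
    for c in companies:
        for cat in categories:
            seen = [s[c][cat] for s in scores.values() if s[c][cat] is not None]
            if seen:
                avg = sum(seen) / len(seen)
                for s in scores.values():
                    if s[c][cat] is None:
                        s[c][cat] = avg

    # Total score per company.
    return {c: sum(s[c][cat] or 0 for s in scores.values() for cat in categories)
            for c in companies}

# Example with made-up names and numbers:
scores = {
    "alice": {"Acme": {"price": 8, "support": 7}},            # scored only Acme
    "bob":   {"Acme": {"price": 6}, "Zenith": {"price": 9}},  # never scored support
}
print(fill_and_total(scores, ["Acme", "Zenith"], ["price", "support"]))
```

Doing the per-scorer fill for missed companies first, and only then the per-category averages, matches the order described above; swapping the two passes can give slightly different totals, which is exactly the ordering question raised in the post.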

It's a complicated topic. We have an article, imputation (statistics), but I think it really just scratches the surface. --Trovatore (talk) 09:45, 30 June 2009 (UTC)[reply]

I think about it this way. The strength of a category of a company should be a number between 0 and 1, like a probability, P, of success. This number P is unknown, but some knowledge of P is gained by scoring. The number of times it is scored is n, and the number of successes in scoring is i. So the score i is an integer between 0 and n, while the strength P is a real between 0 and 1. The likelihood function of P is the beta distribution having mean value m = (i+1)/(n+2) and variance s^2 = m(1-m)/(n+3). So if a category is not scored, simply put n = i = 0 in the above formula and get m = 1/2 and s^2 = 1/12. Bo Jacoby (talk) 06:27, 1 July 2009 (UTC).[reply]
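A tiny illustration of these formulas in code (my own sketch; it just evaluates the mean and variance quoted above):

```python
# Mean and variance of the estimate described above: with i "successes" out of
# n scorings, m = (i+1)/(n+2) and s^2 = m(1-m)/(n+3).

def strength_estimate(i, n):
    m = (i + 1) / (n + 2)
    variance = m * (1 - m) / (n + 3)
    return m, variance

print(strength_estimate(0, 0))    # unscored category: (0.5, 0.0833...)
print(strength_estimate(7, 10))   # 7 successes out of 10 scorings
```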

Expected value of the reciprocal of the gcd

Our greatest common divisor article says that the expected value E(k) of the gcd of k integers is E(k) = ζ(k-1)/ζ(k) for k > 2. Does anyone have a reference for this? (While you're at it, if you're knowledgeable, you can try to fix up the sentences following this statement in the article which currently don't make any sense to me.) My real question: Is a similar formula known for the expected value of the reciprocal of the gcd? Staecker (talk) 14:46, 30 June 2009 (UTC)[reply]

Assuming the stuff in the article is correct, then the same argument shows that the expectation of the reciprocal of the GCD is ζ(k+1)/ζ(k). Algebraist 14:51, 30 June 2009 (UTC)[reply]
OK- I agree. I should've read the derivation more closely. Thanks- Staecker (talk) 23:19, 30 June 2009 (UTC)[reply]
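For anyone who wants to sanity-check the ζ(k+1)/ζ(k) answer numerically, here is a rough Monte Carlo sketch (my own code; the range N, the sample count, and the partial-sum zeta are arbitrary choices):

```python
# Monte Carlo check that the mean of 1/gcd of k random integers approaches
# zeta(k+1)/zeta(k) as the range N grows.
import random
from functools import reduce
from math import gcd

def zeta(s, terms=200_000):
    # Direct partial sum; accurate enough here for s >= 2.
    return sum(n ** -s for n in range(1, terms + 1))

def mean_reciprocal_gcd(k, N=10**6, samples=200_000):
    total = 0.0
    for _ in range(samples):
        g = reduce(gcd, (random.randint(1, N) for _ in range(k)))
        total += 1.0 / g
    return total / samples

k = 2
print(mean_reciprocal_gcd(k))   # empirical estimate
print(zeta(k + 1) / zeta(k))    # about 0.7308 for k = 2
```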

Calculating residues

Hi. I just made an edit to the section of Residue (complex analysis) on calculating residues. I wonder if someone could have a look at it to double-check that I didn't say anything false, or otherwise break the article. Thanks in advance. -GTBacchus(talk) 15:27, 30 June 2009 (UTC)[reply]

I think it's a good idea to have the special case of a simple pole mentioned explicitly. This should probably be discussed at Wikipedia_talk:WikiProject Mathematics, though. --Tango (talk) 18:19, 30 June 2009 (UTC)[reply]
Ah yes, that would be a better forum. I'll head there now. -GTBacchus(talk) 18:22, 30 June 2009 (UTC)[reply]
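For reference, the simple-pole special case being discussed is presumably the standard formula

Res_{z = z_0} f(z) = lim_{z → z_0} (z - z_0) f(z),

valid when f has at most a simple pole at z_0; and when f = g/h with g and h holomorphic near z_0, g(z_0) ≠ 0, and h having a simple zero at z_0, this reduces to g(z_0)/h'(z_0).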

Double integral = 0 over all rectangles

Suppose f(x, y) is a bounded measurable function on R^2. Show that if for every a < b and c < d

∫_c^d ∫_a^b f(x, y) dx dy = 0

then f = 0 a.e.

In our study group, we looked at a similar problem we have yet to figure out: the integral over each open disk in R^2 is 0, and we are supposed to show f = 0 a.e. [with respect to Lebesgue measure on R^2].

Can any one help us figure these out? Thanks! StatisticsMan (talk) 20:47, 30 June 2009 (UTC)[reply]

A naive attempt, but did you try proof by contradiction? Assume that the function f is nonzero on a set of nonzero measure. Then you show that f is strictly positive (or negative) on an open set, so its integral over that open set will also be nonzero. Notice that being nonzero is not enough, because nonzero functions on a set of nonzero measure CAN have zero integral (for example, when the volume above the plane is the same as the volume below the plane, they cancel). So you need a set where the function is always positive or always negative. You fill in the details using all the assumptions you are given. I hope this will be enough.-Looking for Wisdom and Insight! (talk) 21:13, 30 June 2009 (UTC)[reply]
Alternatively, just check the answers to the last question of this type you asked. Some of them are straight-up applicable, others require only minimal amounts of adaptation. RayTalk 21:15, 30 June 2009 (UTC)[reply]
Well, the two hints I suggested for the 1-dimensional version of your question here easily generalize to any dimension. Alternatively, you can reduce the problem to 1 variable using Fubini's theorem. For fixed a and b consider the function g_{a,b}(y) = ∫_a^b f(x, y) dx. It has vanishing integral over all intervals [c, d]. Therefore it is zero a.e., according to the 1-dimensional case. This means that for a.e. y, the function x ↦ f(x, y) has vanishing integral over all intervals [a, b], and you conclude. PS: can you see how to prove in an elementary way that if a function f in L^1(R^2) has zero integral over all unit squares, then it is zero a.e.? Note that this also works for the analog with unit cubes in dimension three, &c. However, if you replace unit squares with unit disks, the thing is still true but (as far as I know) no elementary proof is available (you can do it via Fourier transform as I mentioned) --pma (talk) 22:08, 30 June 2009 (UTC)[reply]
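A sketch expanding this reduction, including the countability step that the follow-up question below asks about (my own write-up of the argument): for each pair of rationals a < b, put

g_{a,b}(y) = ∫_a^b f(x, y) dx.

The hypothesis gives ∫_c^d g_{a,b}(y) dy = 0 for all c < d, so by the one-dimensional result g_{a,b}(y) = 0 for all y outside a null set N_{a,b}. Since there are only countably many rational pairs, the union N of all the N_{a,b} is still a null set. For y not in N we have ∫_a^b f(x, y) dx = 0 for all rational a < b, hence for all real a < b (the integral is continuous in its endpoints because f is bounded), and the one-dimensional result applied to x ↦ f(x, y) gives f(x, y) = 0 for a.e. x. By Fubini, f = 0 a.e. on R^2.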
I haven't thought very long about it, but can't you just use Lebesgue's density theorem, which works either for squares or balls? If a function f is positive on a set of positive measure then there is a set P of positive measure on which f is uniformly bounded away from 0. Also f is bounded overall. So if you find a ball or square in which P has sufficiently high density, then the integral of f over that ball or square will be positive. — Carl (CBM · talk) 22:37, 2 July 2009 (UTC)[reply]
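To make the last step quantitative (a sketch of the computation behind this remark): suppose f ≥ δ > 0 on P, |f| ≤ M everywhere, and Q is a square (or ball) in which P has density at least 1 - ε, i.e. |P ∩ Q| ≥ (1 - ε)|Q|. Then

∫_Q f ≥ δ |P ∩ Q| - M |Q \ P| ≥ (δ(1 - ε) - Mε) |Q| > 0

once ε < δ/(δ + M), and such a Q exists around any density point of P, contradicting the hypothesis that the integral over every square (or disk) vanishes.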
well, if you are talking of f(x, y) having vanishing integral on disks, or squares, of all sizes, then yes; but then you can do it in even more elementary ways. I was talking of unit disks and unit squares.--pma (talk) 06:42, 3 July 2009 (UTC)[reply]
I have tried using Fubini unsuccessfully. I end up showing that ∫_a^b f(x, y) dx is 0 a.e. as you suggested. But I don't see how to use the result again to then show that f(x, y) is 0 a.e., because what I have shown is that for almost all y, the integral over [a, b] of f(x, y) is 0, whereas the previous result requires that the integral be 0 for every interval, not just some of them. Also, I am not seeing how to generalize your first suggestion on this previous problem, though I believe I understand the method as it relates to that problem. StatisticsMan (talk) 22:04, 15 August 2009 (UTC)[reply]