Wikipedia:Reference desk/Archives/Mathematics/2008 July 3

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


July 3

Factorials

What am I misunderstanding? There is a graph shown in Gamma function and also in Factorial that purports to be the gamma function. Reading a few integer results off the graph, I would guess at (x=1, y=1) (x=2, y=1) (x=3, y=3), yet if these were factorials I would expect (x=1, y=1) (x=2, y=2) (x=3, y=6). -- SGBailey (talk) 08:59, 3 July 2008 (UTC)

It looks like you missed the "-1" part of the equivalence formula in the article:
Γ(n) = (n − 1)!
So Γ(1) = 0! = 1, Γ(3) = 2! = 2, ... Your (x=3, y=3) is just misreading the graph; zoom in and you'll see it's actually (3, 2). The point (4, 6) is just outside the boundaries of the picture. --tcsetattr (talk / contribs) 09:07, 3 July 2008 (UTC)
Thanks -- SGBailey (talk) 16:07, 3 July 2008 (UTC)
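For anyone reading the archive, the equivalence is easy to check numerically; a quick sketch using Python's standard library (math.gamma implements the gamma function):

```python
import math

# Gamma(n) equals (n - 1)! at the positive integers, which is why the graph
# passes through (1, 1), (2, 1), (3, 2) rather than (1, 1), (2, 2), (3, 6).
for n in range(1, 5):
    print(n, math.gamma(n), math.factorial(n - 1))
```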

Pochhammer Symbol - Inconsistencies between articles

The Hypergeometric series article repeatedly refers to (a)_n as the Pochhammer symbol for the rising factorial, whereas the Pochhammer symbol article states that the (x)_n notation specifically refers to the falling factorial and that x^(n) is correct notation for the rising factorial. Which is correct? —Preceding unsigned comment added by 92.22.67.157 (talk) 11:23, 3 July 2008 (UTC)

(x)_n can mean either the rising factorial or the falling factorial depending on context. MathWorld describes it as "an unfortunate notation". Gandalf61 (talk) 20:58, 3 July 2008 (UTC)
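A pair of throwaway helpers makes the ambiguity concrete; the two conventions disagree for any n > 1 (function names here are my own, not from either article):

```python
def rising_factorial(x, n):
    """x (x+1) ... (x+n-1): the rising-factorial convention."""
    result = 1
    for k in range(n):
        result *= x + k
    return result

def falling_factorial(x, n):
    """x (x-1) ... (x-n+1): the falling-factorial convention."""
    result = 1
    for k in range(n):
        result *= x - k
    return result

# The same symbol (3)_3 means 3*4*5 = 60 under one convention
# and 3*2*1 = 6 under the other.
print(rising_factorial(3, 3), falling_factorial(3, 3))
```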

Economic capital

Okay, banks and insurers are required to hold a certain minimum amount of capital (please see Capital requirement) to protect themselves against adverse experience (i.e. so that they can still pay their liabilities). As an example, adverse mortality experience will have a negative impact on a life insurer, so let's say that the insurer estimates that the capital it needs to protect itself against losses arising out of 99.9% of all possible mortality outcomes is m. Say that the insurer also sells health insurance and the amount of capital for morbidity is n.

To combine these two, one needs to make an assumption regarding the correlation between losses from adverse mortality and losses from adverse morbidity. If they are perfectly correlated, the total capital requirement is m + n. But if they are independent (so that losses from one may be offset by gains in the other), the requirement decreases to √(m² + n²). My question is: why does independence imply this formula? Btw, I know that this is a very crude way to combine these factors. Zain Ebrahim (talk) 18:38, 3 July 2008 (UTC)

You are essentially looking for propagation of uncertainty, though that article may be too technical for you. In general the point is that if total losses = losses from morbidity + losses from mortality, e.g. L_T = L_m + L_n, then the variance in L_T is σ_T² = σ_m² + σ_n² + 2·COV_mn, assuming that L_m and L_n are normally distributed random variables and where COV is the covariance. In the limits, COV_mn ranges from 0 (if independent) to σ_m·σ_n (if totally correlated), which shows how the variance reduces to those in your limit cases.
Perhaps someone may also provide a less technical explanation for the independence case. Dragons flight (talk) 20:05, 3 July 2008 (UTC)[reply]
Thanks for that. Actually, I think I may have understood that article (except the stuff on partial derivatives, which was total Greek) :). From the looks of it, in that article the measure of uncertainty is variance. In my case, it's capital. I should probably have explained further.
L_m and L_n follow unknown distributions, but we can estimate m and n and assume that the estimates are the true values. Now m is the difference between the 99.9th percentile and the expected value of L_m. So m = L_m,0.999 − E[L_m] (not sure if "percentile" is the right word, but I'm trying to say that 99.9% of the distribution of L_m lies below L_m,0.999). So how do we propagate this measure of risk? Why is t = √(m² + n²) if L_m and L_n are independent? Hope I've clarified things. Zain Ebrahim (talk) 20:40, 3 July 2008 (UTC)
If you assume that the distributions for L_m and L_n are normal, then
m = L_m,0.999 − E[L_m] = A·σ_m
where A satisfies 0.999 = (1 + erf(A/√2))/2, and erf is the error function. The inverse of the error function is available in many numerical packages, and so in this case I can tell you that A is approximately 3.090.
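For what it's worth, the 3.090 figure can be reproduced with Python's standard library (NormalDist.inv_cdf is the inverse normal CDF, which is equivalent to inverting the erf expression):

```python
from statistics import NormalDist

# A is the number of standard deviations between the mean
# and the 99.9th percentile of a normal distribution.
A = NormalDist().inv_cdf(0.999)
print(round(A, 3))
```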
In the case of independence it follows therefore that
σ_T = √(σ_m² + σ_n²), and hence
t = A·σ_T = √((A·σ_m)² + (A·σ_n)²) = √(m² + n²).
As I said, this is based on the assumption of a normal distribution. The central limit theorem suggests that many real-world situations are approximately normal (and many that aren't are log-normal), so this is often a good approximation. For specific applications you may be able to improve on this estimate, but only if you know more about the specific distribution functions underlying L_m and L_n. Dragons flight (talk) 22:07, 3 July 2008 (UTC)
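A quick Monte Carlo check of the √(m² + n²) rule for independent risks; the standard deviations below are made-up illustrations, not real insurance figures:

```python
import math
import random

random.seed(1)
N = 200_000
q = 0.999
sd_mort, sd_morb = 3.0, 4.0   # hypothetical standard deviations of the two losses

def capital(samples, p):
    """Difference between the p-th quantile and the mean, i.e. the m or n above."""
    s = sorted(samples)
    return s[int(p * len(s))] - sum(s) / len(s)

L_mort = [random.gauss(0, sd_mort) for _ in range(N)]
L_morb = [random.gauss(0, sd_morb) for _ in range(N)]
m = capital(L_mort, q)
n = capital(L_morb, q)
t = capital([a + b for a, b in zip(L_mort, L_morb)], q)
print(t, math.hypot(m, n))   # the two agree to within sampling error
```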
Thanks very much! That was very helpful. One more question. How would you show that m is equal to a constant times the standard deviation? Zain Ebrahim (talk) 22:31, 3 July 2008 (UTC)
Just wanted to point out that Dragons Flight's answer is actually general and extends to distributions that are not normal, since it depends only on basic properties of the variance and covariance. --Prestidigitator (talk) 07:10, 4 July 2008 (UTC)
Yes and no. Much of it is general, but the poster asked about specific probability thresholds (i.e. 99.9%) and the relationship between the variance and such thresholds will depend on the details of the probability distribution function (possibly quite strongly in the case of asymmetric distributions). Dragons flight (talk) 21:16, 4 July 2008 (UTC)

Why don't casinos advertise (and make) slot machines with expected return of $thousands+ for every $1 played?

Why doesn't a casino add a 10^(10^100) dollar payout with a probability of 1/10^(10^98) of being achieved? They can be CERTAIN no one would ever hit something that had a 1/10^(10^98) chance of being hit (indeed, they could bet their whole SOLVENCY on it), so it doesn't matter what reward they put on that; it could approach infinity. In my example, it would drive up the expected return from, say, $0.85 on each $1 played (because of the returns on actually achievable results and their probabilities) to $99.85 on each dollar played -- basically, now a really good deal to try, because the expected return is $100 for each $1 you put in!

Why don't casinos do that? —Preceding unsigned comment added by 83.199.125.34 (talk) 21:42, 3 July 2008 (UTC)

I don't know the relevant laws and regulations, but I suspect no sensible country would allow casinos to offer a prize they can't pay, no matter how improbable. Algebraist 21:46, 3 July 2008 (UTC)
Casinos are required by law to keep enough cash on their premises to be able to pay out the potential winnings from any bet that they allow people to make on their premises. In other words, they would need to have 10^(10^100) dollars on hand before they would be allowed to offer a wager where paying the winner that much was a potential outcome (no matter how improbable). Dragons flight (talk) 22:12, 3 July 2008 (UTC)
May I ask what jurisdiction you're referring to? Algebraist 22:15, 3 July 2008 (UTC)
Nevada for sure. Probably most of the rest of the US as well. Dragons flight (talk) 22:20, 3 July 2008 (UTC)
Thanks. Algebraist 22:25, 3 July 2008 (UTC)

really? CASH? What about like an insurance policy? I've heard of very large bets being insured. There was a search engine that gave some small chance you would win $1 billion each time you searched; of course they didn't have it, they had an insurance policy (and made this clear...)... —Preceding unsigned comment added by 83.199.125.34 (talk) 22:35, 3 July 2008 (UTC)

I imagine they could do something like the Megabucks slots -- the multi-million dollar payoffs are made over 20 years or so, with the first payment upon verification of win. --jpgordon∇∆∇∆ 23:42, 3 July 2008 (UTC)
Also, I expect that every great once in a while somebody has to win. That keeps all the poor suckers thinking they're only a handle pull or two away. People are often smart enough to see that something never happens, even if they may often be stupid enough not to grasp how often it really happens within a few orders of magnitude. --Prestidigitator (talk) 07:20, 4 July 2008 (UTC)

Here in Norway, a country that has quite strict regulation of gambling and consumer information, there seems to be no law against the flyers I get in my mail that assure me that "You have already won!" with a picture of a bag of money. The actual prize turns out to be one lottery ticket. Cuddlyable3 (talk) 08:30, 4 July 2008 (UTC)


Since this is the math refdesk, and no one else has mentioned it, I suppose I should mention that the expected payout on one dollar, in the problem as posed, is not 100 dollars, but 10^(10^100 − 10^98) dollars. --Trovatore (talk) 08:46, 4 July 2008 (UTC)
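Working in base-10 exponents (the amounts themselves won't fit in any numeric type), the expected value in the problem as posed comes out as:

```python
# payout = 10**payout_exp dollars, probability of winning = 10**(-prob_exp)
payout_exp = 10**100
prob_exp = 10**98

# expected value = payout * probability = 10**(payout_exp - prob_exp) dollars
ev_exp = payout_exp - prob_exp
print(ev_exp == 99 * 10**98)  # True: the EV is 10**(99 * 10**98) dollars, nowhere near $100
```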

Besides, there's a flaw in the logic. Yes, long long long odds make it incredibly unlikely that anyone will score. On the other hand, it could happen on the very first pull of the virtual handle. Casinos don't stay in business by taking bets they can't cover. --jpgordon∇∆∇∆ 19:30, 4 July 2008 (UTC)
Well, it could happen. Much more likely, though, is that every atom of potassium-40 in the Earth's crust will decay in the same millisecond, putting an abrupt end to all life. You don't stay in business by trying to insure against that sort of thing. --Trovatore (talk) 22:28, 4 July 2008 (UTC)
Continuing that line of thought, is this proposed slot machine physically realizable? I kind of doubt it. The obvious way to try to do it would be to generate roughly 3×10^98 independent random bits, and pay off if they all happen to come up zero. Clearly that's not going to work. Is there a non-obvious way that will work? --Trovatore (talk) 22:52, 4 July 2008 (UTC)
You can reduce the number of bits required by not having them be 50/50 - decrease the chance of each bit being zero and you decrease the chance of them all being zero. You could use radioactive nuclei with long half-lives - Uranium-238, say. According to my calculations, you would need five and a bit nuclei to get the probability of them all decaying within a given second (you would need to replace any that decayed early, so there are always five present) to be 10^-98, which I think was the probability you were aiming for (the OP actually asked for 10^(-10^98), which would require rather more nuclei, or a much shorter time period, and isn't really practical). --Tango (talk) 00:19, 5 July 2008 (UTC)
No, I was aiming for 10^(-10^98). 10^-98 is much easier; it doesn't require anything really special at all. Just take some diode noise, use a good hash function to compress the entropy and generate a 98-digit random number; this is child's play. --Trovatore (talk) 04:19, 5 July 2008 (UTC)
Another option is to not do it all at once. Calculate, say, 10^6 random bits at a time and if they are all 0 repeat up to 10^92 times. You'd have to keep a counter of the number of iterations performed, but it only requires log(10^92)/log(2) ≈ 306 bits to store the necessary counter. Dragons flight (talk) 01:31, 5 July 2008 (UTC)
Hmm--I think I like this one. Of course it would take longer than the time to whatever version of the end of the Universe you believe in to find out you had won. But that doesn't seem to be a worse problem than the facts that the casino couldn't pay off anyway, and that even if they did, you couldn't spend it. And essentially every time, the process would stop after a single iteration, so it would look instantaneous to the marks. (However I think you've made a small miscalculation -- it should take about 3*10^92 iterations). --Trovatore (talk) 04:14, 5 July 2008 (UTC)
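The back-of-envelope numbers in this sub-thread can be checked directly (taking 10^6 bits per batch, as suggested above):

```python
import math

# bits needed so that 2**(-total_bits) == 10**(-10**98)
total_bits = 10**98 * math.log2(10)   # about 3.32e98 bits
iterations = total_bits / 10**6       # in batches of 10**6 bits: about 3e92
print(f"{iterations:.1e}")

# a counter for 10**92 iterations needs ceil(log2(10**92)) bits
counter_bits = math.ceil(92 * math.log2(10))
print(counter_bits)  # 306
```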

This reminds me of a reverse martingale. With a martingale, a player with infinite money can always make infinite+1 money. However, with finite martingales, devastating ruin can occur. No gambler would possibly play this game because the advertised pot is impossible to pay out. You end up with maybe 10^100 dollars for a 1:10^(10^98) gamble, an incredibly small payout. So either the casino will lie about the jackpot and nobody will use the machine, or it won't and will risk catastrophic failure for chicken feed. Indeed123 (talk) 17:03, 8 July 2008 (UTC)

have a question about functions

hi, I've been trying to figure out what is the difference between a domain and a range. Aren't they both parts of one function? —Preceding unsigned comment added by Lighteyes22003 (talkcontribs) 23:51, 3 July 2008 (UTC)

Consider the function which assigns to every person their name. Its domain is the set of things it acts on, i.e. the set of all people. Its image (also called range) is the set of values it takes, i.e. all names of people (its codomain (also called range) might be something else, unfortunately). Does that help? Algebraist 23:56, 3 July 2008 (UTC)

So what you're saying is the domain is like one person and the range is how many people have the same name? —Preceding unsigned comment added by Lighteyes22003 (talkcontribs) 00:24, 4 July 2008 (UTC)

Both the domain and the range are sets. The domain is a set of people, the range is a set of names. They are totally different things. Algebraist 00:25, 4 July 2008 (UTC)
In functions, the domain is x and the range is y in an ordered pair (x, y). The point (5,4) isn't the same as (4,5) or (5,3).--68.231.202.21 (talk) 01:36, 4 July 2008 (UTC)
Every function has both a domain and a range. If we denote y = f(x), then the domain of the function f is the set of all inputs x for which the function is defined, and the range is the set of all outputs y that the function can equal. For instance, suppose f(x) = √x + 1. Assuming that we are only dealing with real numbers, then the domain is x ≥ 0 and the range is y ≥ 1. (To note, though, the domain and range are sets, so I should technically say that the domain is the set of all x such that x ≥ 0, etc.) 99.139.120.212 (talk) 04:34, 4 July 2008 (UTC)
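A quick Python sketch of a function with that domain and range (√x + 1 is just one choice consistent with the bounds stated above):

```python
import math

def f(x):
    """f(x) = sqrt(x) + 1: domain is x >= 0, range is y >= 1."""
    return math.sqrt(x) + 1

print(f(0), f(4))   # 1.0 at the bottom edge of the range, 3.0 inside it
# f(-1) would raise ValueError: the domain excludes negative inputs
```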
I'm not sure why it is so easy to get confused over those (and believe me I do too all the time). A function acts on any value in its domain (like a king), and ranges over.... For example, sine acts on the domain that includes all angles, and it ranges from -1 to +1. Maybe thinking of phrasing things like that will help make it more intuitive. --Prestidigitator (talk) 07:30, 4 July 2008 (UTC)

Is it always unimodal

Is the product of a monotonically increasing function and a monotonically decreasing function always unimodal? --Masatran (talk) 23:52, 3 July 2008 (UTC)

What do you mean by unimodal? Assuming I've guessed right, the pair of functions (on the positive reals) x → x and x → 1/x are a counterexample: their product is constantly 1. Algebraist 23:59, 3 July 2008 (UTC)

I got a counter-example just now:

  x   -2  -1   0  +1  +2
  f   10   5   2   1   0
  g    0   1   2   5  10
f*g    0   5   4   5   0

--Masatran (talk) 00:26, 4 July 2008 (UTC)
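The counterexample is easy to verify mechanically (the lists below are the table's rows):

```python
f = [10, 5, 2, 1, 0]          # monotonically decreasing
g = [0, 1, 2, 5, 10]          # monotonically increasing
prod = [a * b for a, b in zip(f, g)]
print(prod)  # [0, 5, 4, 5, 0]: two local maxima, so the product is not unimodal
```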