Talk:Gambling and information theory

log_2 ?

It seems to me this article is misinformed about several aspects of the subject matter, but let's start with the easy part: I think the correct formula for the capital growth rate needs to use natural logarithms, not base-2 log. —Preceding unsigned comment added by Kotika98 (talk • contribs) 16:36, 29 September 2008 (UTC)
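
For reference, a sketch of the point at issue (the notation b_i, o_i, p_i is assumed here, not taken from the article): in Kelly-style treatments the doubling rate at payoff odds o_i with bet fractions b_i is

```latex
W(b, p) = \sum_i p_i \log_2 (b_i o_i) , \qquad \ln x = (\ln 2)\, \log_2 x
```

so switching from log_2 to natural logarithms only rescales the rate by a constant factor (nats per bet rather than doublings per bet) and leaves the maximizing strategy b_i = p_i unchanged.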

Ill-gotten gains

The ill-gotten gains equation in the article actually describes the expected payoff from Kelly gambling in the long run. That long-run payoff is naturally expressed as a logarithmic growth rate, which is why the equation is written in logarithmic terms. This connection is not apparent in the article and should be explained better.
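
As a sketch of why the payoff is logarithmic (standard Kelly notation assumed here, not the article's): after n independent bets the capital is a product of random growth factors S_k, so by the law of large numbers it is the exponent, not the payoff itself, that stabilizes:

```latex
V_n = V_0 \prod_{k=1}^{n} S_k , \qquad
\frac{1}{n} \log_2 \frac{V_n}{V_0} \;\longrightarrow\; E[\log_2 S] = W
```

That is, V_n grows roughly like V_0 * 2^(n W), so the long-run payoff can only be summarized through the logarithmic growth rate W.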

It should also be made clear from the beginning of the article that the "information entropy" mentioned is actually the mutual information between the gambler's side information and the outcome of each bettable event. This is, as explained in the information theory article, the expected Kullback–Leibler divergence of the posterior probability distribution over outcomes (conditioned on the side information obtained) from the prior distribution (the odds without the side information).
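
For concreteness, the identity being appealed to (standard notation, with X the outcome and Y the side information, assumed here rather than taken from the article): mutual information is the expected Kullback–Leibler divergence of the posterior from the prior,

```latex
I(X;Y) \;=\; \sum_y p(y)\, D_{\mathrm{KL}}\!\bigl( p(x \mid y) \,\Vert\, p(x) \bigr)
        \;=\; \sum_{x,y} p(x,y) \log_2 \frac{p(x \mid y)}{p(x)} .
```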

I hope someone can rewrite this article to better clarify these things. 198.145.196.71 (talk) 01:04, 31 December 2007 (UTC)

Surprisal

'Surprisal' is actually an old-fashioned term for self-information. I am not sure that the section on Coin Tosses and Surprisal at the end adds anything to the article. It has no references except for the definition of surprisal and may be original research.
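
For readers arriving here, the definition in question, with an illustrative example not taken from the article: the surprisal, or self-information, of an outcome x with probability p(x) is

```latex
I(x) = -\log_2 p(x) \ \text{bits}
```

so a fair coin toss carries -log_2(1/2) = 1 bit, while heads on a coin biased to p = 0.9 carries only about 0.152 bits.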

I was opening only one of several possible new threads on gambling and information theory, namely the use of simple logarithmic measures for discussing risk and evidence. Dr. Jaynes expanded somewhat more on the latter in his book (footnote 1). The insights contained in the approach are very old and rather simple, so I agree that the section certainly adds no new math. I guess I was thinking of it as a placeholder section for the moment. If you don't think it can be put into a form that highlights connections of interest to newcomers, it's OK with me to pull it until something demonstrably useful is ready to go in. I suspect that pieces will fall into place in the days ahead to make this possible, but that's only a hunch. Thermochap (talk) 22:41, 13 March 2008 (UTC)
Indeed, surprisal is old-fashioned. I use it because it fits nicely with other recent applications I know of, but it's certainly OK to use something else if more readers would relate to it. This may or may not include the recent work by Eric Horvitz at Microsoft on surprise modeling in community data streams, as I'm not sure about the mathematical underpinnings of that work. Thermochap (talk) 22:41, 13 March 2008 (UTC)

I would like to better explain in the article Kelly's principle that the rate of side information (more properly, the mutual information relative to the outcome of each bettable event) obtained by a gambler equals the expected exponential growth rate of the gambler's capital, under the assumption of an optimal betting strategy and no transaction costs such as the "track take". Kelly's paper was actually the first application of information theory to gambling, or to any area other than coding theory. Deepmath (talk) 21:15, 13 March 2008 (UTC)
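
A minimal numerical sketch of that claim (illustrative names and parameters, not Kelly's own notation): for a fair binary event at fair odds, the gain in the optimal Kelly doubling rate from side information Y comes out equal to the mutual information I(X;Y).

```python
# Minimal sketch (illustrative parameters, not from Kelly's paper): for a
# fair binary outcome X at fair odds, the increase in the optimal Kelly
# doubling rate provided by side information Y equals I(X; Y).
from math import log2

p_x = [0.5, 0.5]        # prior on the outcome X (fair coin)
odds = [2.0, 2.0]       # fair payoff odds: a winning unit bet returns odds[x]
q = 0.75                # P(Y = X): accuracy of the side-information channel

def doubling_rate(bets, probs):
    """Expected log2 growth per bet when fraction bets[x] is staked on x."""
    return sum(probs[x] * log2(bets[x] * odds[x]) for x in range(2))

# Without side information: Kelly says bet in proportion to the prior.
w_no_info = doubling_rate(p_x, p_x)   # = 0 at fair odds

# With side information: bet in proportion to the posterior p(X | Y = y).
w_info = 0.0
for y in range(2):
    p_y = sum(p_x[x] * (q if x == y else 1 - q) for x in range(2))
    post = [p_x[x] * (q if x == y else 1 - q) / p_y for x in range(2)]
    w_info += p_y * doubling_rate(post, post)

# Mutual information I(X; Y), computed directly from the joint distribution.
mi = 0.0
for x in range(2):
    for y in range(2):
        p_xy = p_x[x] * (q if x == y else 1 - q)
        p_y = sum(p_x[xx] * (q if xx == y else 1 - q) for xx in range(2))
        mi += p_xy * log2(p_xy / (p_x[x] * p_y))

print(f"growth-rate gain = {w_info - w_no_info:.6f} bits/bet")  # ~0.188722
print(f"I(X;Y)           = {mi:.6f} bits")                      # matches
```

With q = 0.75 both numbers come out to about 0.1887 bits per bet, which is the point of the principle: the side-information channel's capacity is exactly the extra exponential growth it buys.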

Reminds me a bit of some of the discussions surrounding the St. Petersburg game, which many of the best statisticians got into big debates about. I don't know much about it yet. I seem to remember that information theory also had early applications in physics (assuming we can't count J. W. Gibbs since his work was done before Shannon was born), although the key papers on that (written also by Jaynes) came out around 1957. Thermochap (talk) 22:41, 13 March 2008 (UTC)

Author vs. discoverer

Kelly didn't make this up. He discovered it. He was the author of a paper about it. Deepmath (talk) 21:34, 31 July 2009 (UTC)