Wikipedia:Reference desk/Archives/Mathematics/2012 May 19

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is a transcluded archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


May 19

Uniform Convergence of Fourier Series

In Fourier_series#Convergence, a theorem states that uniform convergence for absolutely summable Fourier series follows from the following inequality:

\sup_x \left| f(x) - S_N(f)(x) \right| \;\le\; \sum_{|n| > N} \left| \hat{f}(n) \right|.

Can you provide a reference for this inequality? I don't see how to prove it myself and would really appreciate a proof. Bb8fran (talk) 00:50, 19 May 2012 (UTC)[reply]

I don't know the answer, but I just want to note that I've reformatted your question into proper Wikipedia math syntax. Looie496 (talk) 01:43, 19 May 2012 (UTC)[reply]
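For what it's worth, here is a brief sketch of why a bound of this kind gives uniform convergence, assuming the inequality in question is the standard partial-sum estimate and that the Fourier series is already known to converge to f pointwise (as it does when f is continuous and its coefficients are absolutely summable):

  f(x) - S_N(f)(x) = \sum_{|n| > N} \hat{f}(n) e^{inx}
  \quad\Longrightarrow\quad
  \sup_x \left| f(x) - S_N(f)(x) \right| \le \sum_{|n| > N} \left| \hat{f}(n) \right|,

and the right-hand side is independent of x and tends to 0 as N \to \infty because \sum_n |\hat{f}(n)| converges, which is exactly uniform convergence of S_N(f) to f.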

Mnemonic equation finding

Does a field of research exist concerned with automatically discovering memorable relationships between inputs and outputs? I mean memorable in the sense that they are designed to focus on relationships consisting of elementary mathematical operations using easy-to-work-with numbers that someone with reasonable mathematical skill could carry out quickly in their head. This is in contrast to, say, a multilayer perceptron, for which any relationship found will be complicated and any interpretive meaning won't be obvious. I imagine such a technique would involve genetic algorithms / programming. On a semi-related note, I remember reading about some software a few years ago that was able to 'rediscover' famous equations such as Kepler's laws when given some basic facts and mappings to work from. Does anyone know what this software might have been? I have a feeling it was an acronym of some kind. Thanks. --Iae (talk) 12:40, 19 May 2012 (UTC)[reply]

The software may have been Formulize, and if not it may still be of interest. 131.111.255.9 (talk) 15:27, 19 May 2012 (UTC)[reply]
It wasn't Formulize, but Formulize looks much more sophisticated and modern than what I was remembering anyway. Thanks! --Iae (talk) 10:08, 20 May 2012 (UTC)[reply]

Calculations involving large integers

Is it possible, in a programming environment where the largest integers I can represent are 32-bit, to do calculations involving much larger integers, and if so, how would I do this? I guess yes, since software such as the GIMPS client can perform calculations involving very large numbers. How is this done? I want to do some computations involving large integers where I have 32-bit integers and 32-bit floats. -- Toshio Yamaguchi (tlkctb) 15:09, 19 May 2012 (UTC)[reply]

Yes. In principle, you could just implement the rules you were taught in primary school, but better algorithms are available. You're looking for a 'bignum' or 'arbitrary precision' library such as GNU GMP. 131.111.255.9 (talk) 15:24, 19 May 2012 (UTC)[reply]
(edit conflict) Is Arbitrary-precision arithmetic#Arbitrary-precision software any help? Qwfp (talk) 15:26, 19 May 2012 (UTC)[reply]
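For illustration, a minimal sketch of what using such a library might look like in C, assuming GNU GMP is installed (built with something like gcc example.c -lgmp); it computes 100! exactly:

  /* Minimal GMP sketch: compute 100! as an arbitrary-precision integer. */
  #include <stdio.h>
  #include <gmp.h>

  int main(void) {
      mpz_t result;
      mpz_init(result);               /* initialise an arbitrary-precision integer */
      mpz_fac_ui(result, 100);        /* result = 100! */
      gmp_printf("100! = %Zd\n", result);
      mpz_clear(result);              /* release the number's storage */
      return 0;
  }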
Are such calculations also possible in an environment where such a library does not exist, i.e. how could I mimic the functionality of such a library or create it from scratch? -- Toshio Yamaguchi (tlkctb) 16:10, 19 May 2012 (UTC)[reply]
Yes, but it's painful. You could store your numbers as character strings and then manually multiply them, if that was your goal (multiply each digit in character string 1 by each digit in character string 2, being sure to carry). Addition and subtraction would be similar. Division has the ugly possibility of producing infinitely repeating fractions, which means you will lose some accuracy unless you keep the result in fractional form. More complex mathematical operations are, of course, more complex to implement. StuRat (talk) 17:26, 19 May 2012 (UTC)[reply]
Oh come on. You would store the numbers as arrays, and you would have to write functions to perform the operations you need, taking arrays as inputs and producing arrays as outputs. If all you need is addition and multiplication, it isn't even much work. If you need division, or roots, or powers, or special functions, it's considerably more work, although you can find algorithms for all of them with a bit of searching. Looie496 (talk) 22:33, 19 May 2012 (UTC)[reply]
If you mean one-dimensional arrays of characters, then that's exactly what I said. If you mean two-dimensional or multi-dimensional arrays, or integers or real numbers in those arrays, I don't see how that's any easier, given that the array elements are not large enough for the numbers they need to handle. StuRat (talk) 22:47, 19 May 2012 (UTC)[reply]
The reason for using arrays of integers rather than strings is that it is vastly more efficient. 32 bits will (typically, depending on what encoding you are using) give you four characters. If you are working in decimal, this lets you store integers from 0 to 9999. Using a 32-bit integer (or two 16-bit integers, etc.) will let you store integers from 0 to 4294967295. It is also far easier to code arithmetic operations, and they will be a lot faster than if you try to work with strings. A lot of popular programming languages have bigints built in: bc, Common Lisp, C#, Erlang, Go, Haskell, J, Java, OCaml, Mathematica, Perl, PHP, Python, Ruby, Scheme, and Scala all have this functionality. 130.88.73.65 (talk) 12:39, 24 May 2012 (UTC)[reply]
Sure, you can store a bigger integer in a 4-byte integer than in a 4-byte string, but is that relevant, say, for a 100-digit integer? It's not like those 100 bytes are going to take down your computer, after all. And being able to do digit-wise (base 10) operations rather than base-2^32 operations makes for much simpler programming. StuRat (talk) 16:34, 24 May 2012 (UTC)[reply]
Usually, when arbitrary-precision arithmetic is needed (for example in cryptography), efficiency is very important, and you need to deal with really big numbers. If anything, it is easier to code with 32-bit chunks rather than string representations of decimal integers. The algorithms are exactly the same, except in the latter case you need to convert to and from your string encoding all the time. 130.88.73.65 (talk) 10:57, 25 May 2012 (UTC)[reply]
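To make the "array of 32-bit chunks" idea concrete, here is a minimal from-scratch sketch in C, assuming 16-bit limbs stored least-significant first so that the product of two limbs always fits in an unsigned 32-bit intermediate; capacity is fixed and overflow beyond it is silently truncated:

  /* Schoolbook bignum arithmetic on arrays of 16-bit limbs. */
  #include <stdint.h>

  #define LIMBS 64                  /* fixed capacity: 64 * 16 bits */
  #define BASE  65536u              /* 2^16 */

  typedef struct { uint16_t limb[LIMBS]; } BigNum;   /* little-endian limbs */

  /* r = a + b, propagating carries limb by limb */
  static void big_add(BigNum *r, const BigNum *a, const BigNum *b) {
      uint32_t carry = 0;
      for (int i = 0; i < LIMBS; i++) {
          uint32_t sum = (uint32_t)a->limb[i] + b->limb[i] + carry;
          r->limb[i] = (uint16_t)(sum % BASE);
          carry      = sum / BASE;
      }
  }

  /* r = a * b, schoolbook "each digit times each digit, with carries";
     the result is accumulated in a temporary so r may alias a or b */
  static void big_mul(BigNum *r, const BigNum *a, const BigNum *b) {
      uint16_t tmp[LIMBS] = {0};
      for (int i = 0; i < LIMBS; i++) {
          uint32_t carry = 0;
          for (int j = 0; i + j < LIMBS; j++) {
              uint32_t cur = (uint32_t)tmp[i + j]
                           + (uint32_t)a->limb[i] * b->limb[j] + carry;
              tmp[i + j] = (uint16_t)(cur % BASE);
              carry      = cur / BASE;
          }
      }
      for (int i = 0; i < LIMBS; i++) r->limb[i] = tmp[i];
  }

Printing such a number in decimal then needs a repeated divide-by-ten (or divide-by-10000) routine, which is where the string representation discussed above comes back in, and real libraries replace the quadratic schoolbook multiplication with faster algorithms (Karatsuba, FFT-based) for very large operands.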
The easiest way to create a from-scratch, efficient implementation of bignum arithmetic would probably be to look at the source code for an arbitrary-precision arithmetic library and understand and reimplement its algorithms. Here are two independent implementations of Java's BigInteger class, for example. Neither is particularly easy to understand, but I think we can be confident that both are lightning-fast. (The actual easiest way to create an efficient arbitrary-precision library in a language that doesn't have one would be to bind against GMP with the language's foreign function interface. I've actually tried to do this in Java in the past, to get GMP's fast big-rational support.) « Aaron Rotenberg « Talk « 20:35, 20 May 2012 (UTC)[reply]

Download the J (programming language) and try extended precision (x):

  29^39x
1080244137479689290215446159447411025741704035417740877269

Why bother to do it yourself? Bo Jacoby (talk) 12:05, 20 May 2012 (UTC).[reply]

I seem to remember that earlier versions of Mathematica work on 32-bit systems. So yes, you can do it by purchasing a software program. 220.239.37.244 (talk) 14:03, 20 May 2012 (UTC)[reply]


Last year I did such computations for a project, using modular arithmetic. I also used Mathematica, but only to compute the last few steps involving the Chinese Remainder Theorem and some further trivial processing. The issue here was that Mathematica is very slow. By writing a program, I could do the computations much faster, but then that became difficult for large integers. So, the obvious solution was to do the computations modulo some prime numbers (which is quite easy to program; the only thing is that division becomes multiplication by the inverse, and the inverse is easily computed using Euclid's algorithm).

While in my case the result was an integer reconstructed from the integers modulo prime numbers, this would also work for cases where the result is a rational number. Even though computing modulo prime numbers only gives you an integer modulo the product of all these primes, you can still recover the desired rational number, provided you are computing modulo a large enough number. You can do this using a process known as "rational reconstruction", which amounts to running Euclid's algorithm. So, this would then also allow you to compute real numbers, as you can approximate them using rational numbers.

Another advantage of this method is that the computations modulo different prime numbers can be carried out in parallel. You can start up different instances of the same program (on multi-core processors they will then run in parallel), or run them on different machines. Count Iblis (talk) 03:22, 24 May 2012 (UTC)[reply]
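A minimal sketch, in C, of the per-prime building blocks described above, assuming each prime is below 2^16 so that all intermediate products fit in an unsigned 32-bit integer; the final Chinese Remainder Theorem recombination still needs multiprecision arithmetic (or a system such as Mathematica) and is not shown:

  #include <stdint.h>

  /* (a * b) mod p; safe because p < 2^16 keeps the product below 2^32 */
  static uint32_t mul_mod(uint32_t a, uint32_t b, uint32_t p) {
      return ((a % p) * (b % p)) % p;
  }

  /* Modular inverse of a mod p (p prime, a not divisible by p),
     via the extended Euclidean algorithm; with p < 2^16 every
     intermediate value stays well within a signed 32-bit integer. */
  static uint32_t inv_mod(uint32_t a, uint32_t p) {
      int32_t old_r = (int32_t)(a % p), r = (int32_t)p;
      int32_t old_s = 1, s = 0;
      while (r != 0) {
          int32_t q = old_r / r;
          int32_t t = old_r - q * r; old_r = r; r = t;
          t = old_s - q * s;         old_s = s; s = t;
      }
      return (uint32_t)((old_s % (int32_t)p + (int32_t)p) % (int32_t)p);
  }

  /* "Division" x / y mod p becomes multiplication by the inverse of y. */
  static uint32_t div_mod(uint32_t x, uint32_t y, uint32_t p) {
      return mul_mod(x, inv_mod(y, p), p);
  }

Since each prime's computation is completely independent, different primes can be handled by separate processes or separate machines, as described above.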

Analyticity

I have the function   where capital gamma denotes the gamma function, and I have to determine where in the complex plane it is analytic. How do I go about this? I believe that the numerator of the fraction is analytic for Re(s)>-1. That's all I have so far. Can anyone help? Thanks. meromorphic [talk to me] 16:35, 19 May 2012 (UTC)[reply]

It's only analytic in Re(s) > 0. (It's enough to check absolute convergence of the integral.) Sławomir Biały (talk) 18:45, 19 May 2012 (UTC)[reply]
Thanks for your help. (It's slightly unfortunate then that my days of non-trivial analysis are behind me...) meromorphic [talk to me] 08:47, 20 May 2012 (UTC)[reply]
But also note that, although the integral only converges for Re(s) > 0, your function can be analytically extended to an entire function. I'll add the details on request. One finds
 --pma 16:07, 20 May 2012 (UTC)[reply]
You have an excellent eye for where a problem might lead because that was the next part of the question (which I managed to do). Quite a nice result. Just to check, I had to confirm that I(s) is an entire function; is this simply because it is the product of two entire functions? meromorphic [talk to me] 18:50, 20 May 2012 (UTC)[reply]
Almost: the zeta function has a pole at s=1, but fortunately the other function is zero there, so you can remove the singularity and the product becomes an entire function. —Kusma (t·c) 18:54, 20 May 2012 (UTC)[reply]
Exactly: s=1 is a simple pole for the zeta function, with residue 1, so for   we have
  whence the value   for the function   (or its extension) at s=0.--pma 20:07, 21 May 2012 (UTC)[reply]

What's a proof worth if it needs more than elementary algebra

This might have been asked numerous times, but I couldn't find it. For example, take this puzzle and suppose I want to prove that piece X can never be placed at C3. For this proof I need to say somewhere something like "We see that for X to be at C3, it also needs to be in H5, so X can never be in C3. QED." So to prove that X could never be at C3, I would be using the "everyone knows" axiom "no piece can be at two places at the same time". No computer program would be able to prove that without being told that pieces can only be placed at one position. This may be a bad example, but I wonder how hard a proof is if freshly made-up axioms that "everyone knows" are needed. If someone in 1800 had proved that an oxygen atom could not have 6 electrons because there are exactly 5 places available (I'm obviously making this up), everyone would have said he had a solid proof, but now I would not be sure, because electrons apparently can be in two places at the same time. Joepnl (talk) 23:53, 19 May 2012 (UTC)[reply]

It isn't easy. Euclid, for instance, missed a few obvious facts which need extra axioms, for example that if a straight line cuts two sides of a triangle then it can only cut the third side extended outside the triangle. You must admit that's pretty obvious, but people missed it. In fact, for arithmetic we can't ever have a complete set of axioms; Gödel showed there are always things which are true but can't be proved from the axioms. By the way, oxygen needs eight electrons to be neutral; it's best thought of as an inner complete shell containing two and an outer one of six needing two extra to form a complete shell - and none of that business can be explained without invoking all the basics of quantum theory. In physics, what one would say about things like Newton's theory of gravity being replaced by Einstein's is that his theory was superseded by a better one. It's not quite the same in mathematics, where one doesn't throw away Euclidean geometry just because one can also deal with curved spaces. Dmcq (talk) 09:34, 20 May 2012 (UTC)[reply]
But note that the same applies to "elementary algebra", which, IIRC, came about 600-1000 years after Aristotle and Euclid had logic and geometry kinda sorted out... --Stephan Schulz (talk) 09:52, 20 May 2012 (UTC)[reply]
Newton's gravity also involves a hell of a lot more than elementary algebra. Some of the most elementary problems in Newtonian gravity (the three body problem for instance) still lack a satisfactory solution using all of the modern methods of mathematics, like KAM theory. Sławomir Biały (talk) 12:29, 20 May 2012 (UTC)[reply]
What mathematicians think of as a "proof" is somewhat informal. It is possible to write fully formal proofs, with every inference attributed to a specific axiom or rule, but they are almost always very long and nearly impossible for humans to read. The Principia Mathematica of Russell and Whitehead gives formal proofs for many basic mathematical facts. If mathematicians tried to communicate with each other using formal proofs, though, they would never get anywhere. Looie496 (talk) 18:08, 20 May 2012 (UTC)[reply]
There is, nowadays, plenty of fully formalised mathematics. See e.g. the Mizar system and the MML. Similarly, there are significant corpora for Isabelle and Coq. --Stephan Schulz (talk) 20:52, 20 May 2012 (UTC)[reply]
I asked the question because I had heard of automatic proof programs and couldn't imagine how they'd be able to prove, for instance, Pythagoras' theorem without some "obviously..." things that a program can't think of. Thanks for all your answers! Now it's time for Google to translate politicians' talk into X->Y so an independent program can give them a lying rank. :). Joepnl (talk) 00:58, 22 May 2012 (UTC)[reply]
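As an illustration of what the fully formalised proofs mentioned above look like, here is a tiny sketch in Lean (a proof assistant in the same family as Mizar, Isabelle and Coq) of the "no piece can be in two places at once" fact from the original question; modelling each piece's position as a single value is, of course, an assumption of the sketch:

  -- If a piece's position is modelled as one value `location`, then
  -- "X is at c3" and "X is at h5" together force c3 = h5; the
  -- "everyone knows" rule is a consequence of how the state is modelled,
  -- and every step is checked mechanically rather than assumed.
  theorem one_place_at_a_time {Square : Type} (location : Square)
      {c3 h5 : Square} (h1 : location = c3) (h2 : location = h5) :
      c3 = h5 :=
    h1.symm.trans h2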