Wikipedia:Reference desk/Archives/Mathematics/2008 October 23

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


October 23


Hi, I have a little bit of doubt about the collocation method. If I use it to solve a nonlinear two-point boundary value problem, will I get a set of nonlinear algebraic equations? Is there any approximate method of solving such a differential equation and getting a set of linear equations of the form Ax = b? The equation I need to solve is t'''(x) = a*cos(x), with t(0) = t0, t(1) = t1. Thanks, deeptrivia (talk) 01:10, 23 October 2008 (UTC)[reply]

For solving nonlinear algebraic equations, see root-finding algorithms. For solving
t'''(x) = a*cos(x),
integrate three times, getting
t''(x) = a*sin(x) + b,
t'(x) = −a*cos(x) + bx + c,
t(x) = −a*sin(x) + bx²/2 + cx + d.
Find two equations for the three constants b, c and d, by substituting x=0 and x=1.
t0 = d
t1 = −a*sin(1) + b/2 + c + d
You need a third condition for determining the solution.
Probably I did not understand your problem completely.
Bo Jacoby (talk) 15:09, 23 October 2008 (UTC).[reply]
Sorry, I made a mistake. The problem is t'''(x) = a*cos(t(x)). t(0) = t0, t(1) = t1. deeptrivia (talk) 18:21, 24 October 2008 (UTC)[reply]
The special case, a=0, gives t'''(x)=0, which integrates to t''=b, t'=bx+c, t=bx²/2+cx+d. The two boundary conditions t(0) = t0, t(1) = t1 are still not sufficient to determine the three integration constants b, c and d. Bo Jacoby (talk) 15:10, 26 October 2008 (UTC).[reply]
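For what it's worth, here is a minimal sketch of how the corrected nonlinear problem could be handed to a collocation-based solver, assuming a third boundary condition is supplied (hypothetically t'(0) = s0, since, as noted above, t(0) and t(1) alone do not determine the solution). SciPy's solve_bvp uses a collocation scheme, and its Newton iterations solve exactly the kind of nonlinear algebraic system the original question asks about; the parameter values below are made up.

import numpy as np
from scipy.integrate import solve_bvp

a, t0, t1, s0 = 1.0, 0.0, 0.5, 0.0   # made-up parameters; s0 = t'(0) is the assumed extra condition

def rhs(x, y):
    # rewrite t'''(x) = a*cos(t(x)) as a first-order system y = (t, t', t'')
    return np.vstack([y[1], y[2], a * np.cos(y[0])])

def bc(ya, yb):
    # residuals of the boundary conditions t(0) = t0, t(1) = t1, t'(0) = s0
    return np.array([ya[0] - t0, yb[0] - t1, ya[1] - s0])

x = np.linspace(0.0, 1.0, 11)          # initial mesh
y_init = np.zeros((3, x.size))         # crude initial guess
sol = solve_bvp(rhs, bc, x, y_init)    # Newton iterations on a nonlinear algebraic system
print(sol.status, sol.y[0, [0, -1]])   # status 0 means converged; endpoints should match t0, t1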

Math(s)?


Back in my college days, I studied math. In fact I excelled in math. I still like a challenging math question. Math appeals to me. Yet from scanning the above topics, I see that there are others who have studied maths, have excelled in maths, and like a challenging maths question. Maths appeal to them. Is this an American English vs British English thing? I also studied science and history. Do others study sciences and histories? Just wondering. -- Tcncv (talk) 02:00, 23 October 2008 (UTC)[reply]

Yes, it's an American/British thing, or more precisely American/Commonwealth. Though I think Canadian English may say math even though they're part of the Commonwealth. I estimate that Canadian is roughly 55% American, 35% British, and 20% Canadian. --Trovatore (talk) 02:06, 23 October 2008 (UTC)[reply]
Recursive, eh? —Tamfang (talk) 02:34, 23 October 2008 (UTC)[reply]
Yes. And it also gives 110 percent, which I'm told is a good thing. --Trovatore (talk) 02:37, 23 October 2008 (UTC)[reply]
And how does the plural vs singular thing work in other languages? In France it is "mathématiques", but Bourbaki introduced the singular "mathématique" to emphasize the unity of this science. --PMajer (talk) 06:56, 23 October 2008 (UTC)[reply]
Well, you can't always go by whether the word has a plural ending; in English it's mathematics, but a singular word, on both sides of the pond. (Mathematics is rather than are. I think maths is as well; maybe a Commonwealth speaker can confirm.)
The only other language I can attest to is Italian, in which it's singular both grammatically and in the form of the word ("la matematica"). --Trovatore (talk) 07:15, 23 October 2008 (UTC)[reply]
Yes, maths is singular, just like physics and statistics (the discipline, not the numbers). But I was wondering what sort of demographic would use "mathematic" as a metric? Maybe those who hope to become learned in the fields of statistic or physic.  :) -- JackofOz (talk) 07:18, 23 October 2008 (UTC)[reply]
According to the OED, 'mathematic' for 'mathematics' "had become rare by the early 17th cent., but was revived in the later 19th cent. (perh. after German Mathematik) for use instead of 'mathematics' in contexts in which the unity of the science is emphasized." Algebraist 07:30, 23 October 2008 (UTC)[reply]
Well, it wasn't my intention to have one form or the other declared more correct. I believe both are considered short forms of, and are synonymous with, "mathematics". I don't think it's a case of singular vs plural. -- Tcncv (talk) 05:06, 24 October 2008 (UTC)[reply]
I think in other languages it's called maths also. --71.106.183.17 (talk) 05:12, 27 October 2008 (UTC)[reply]

Stat(s)?


An aside: what's the shortened form of "Statistics" in the US? Is it "stat"? Zain Ebrahim (talk) 07:17, 23 October 2008 (UTC)[reply]

I’ve usually heard it called “stats” GromXXVII (talk) 10:40, 23 October 2008 (UTC)[reply]
I have always heard it being called stats also.130.166.126.80 (talk) 16:39, 23 October 2008 (UTC)[reply]
I've heard both, with "stat" perhaps even being slightly more common, at least when referring to the subject ("My stat class is at 1pm"). The other meaning is always "stats" when shortened (as discussed below). kfgauss (talk) 04:04, 29 October 2008 (UTC)[reply]

This is a little different, though, in that the full word statistics is sometimes construed as plural, whereas mathematics never is (well, hardly ever). It's singular when you say "statistics is my favorite subject" but plural when you say "the statistics argue otherwise". --Trovatore (talk) 18:37, 23 October 2008 (UTC)[reply]

There is such a thing as one statistic, which is one (of many) measures of a population. Mean, for example, is one statistic. There is no such thing as one mathematic.
In order to more accurately describe a population, though, most of the time you need more than one statistic. So, to learn about those many and varied measurements and descriptors, you take a class which is quite properly called statistics.
Or stats.--DaHorsesMouth (talk) 23:59, 24 October 2008 (UTC)[reply]

The proper term is sadistics. DOR (HK) (talk) 05:47, 28 October 2008 (UTC)[reply]

Bounds on the Real Zeros of a Polynomial


On Sturm's theorem, one of the applications (due to Cauchy) says that all real zeros of a polynomial $p(x) = a_n x^n + a_{n-1}x^{n-1} + \cdots + a_1 x + a_0$ are in the interval [-M, M], where M is defined as

$M = 1 + \max_{0 \le i \le n-1} \frac{|a_i|}{|a_n|}$

with $a_n \neq 0$.

In my algebra book, I have a theorem which says that,

let

$f(x) = x^n + a_{n-1}x^{n-1} + \cdots + a_1 x + a_0$

be a polynomial with one as its leading coefficient; then a bound M on the real zeros of f is

$M = \min\left\{\max\{1,\ |a_0| + |a_1| + \cdots + |a_{n-1}|\},\ \ 1 + \max\{|a_0|, |a_1|, \ldots, |a_{n-1}|\}\right\}$.

My question is, are both of these bounds the same? Can one be derived from another? Is one the corollary of another? If they are not the same, then is one better than the other? Thanks.130.166.126.80 (talk) 16:49, 23 October 2008 (UTC)[reply]

The second is at least as good as the first, and is presumably better in some cases. The 2nd part of the 2nd one is the same as the first one (just divide the polynomial by $a_n$, which won't change its zeros), so assuming the 1st part is sometimes smaller (I haven't checked), you will sometimes get a tighter bound, and you'll certainly never get a worse one. --Tango (talk) 17:00, 23 October 2008 (UTC)[reply]
Actually, it's clearly better when the sum of the coefficients is less than one. --Tango (talk) 17:02, 23 October 2008 (UTC)[reply]
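As a concrete (made-up) illustration of this point: for $f(x) = x^3 + 0.1x + 0.2$, the Cauchy bound gives $M = 1 + 0.2 = 1.2$, while the book's bound gives $\min\{\max\{1,\ 0.1 + 0.2\},\ 1.2\} = 1$, which is tighter.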


Yes... let's say that, in fact, the second is just obtained by putting together two simpler bounds:

$|x| \le 1 + \max\{|a_0|, |a_1|, \ldots, |a_{n-1}|\}$

and

$|x| \le \max\{1,\ |a_0| + |a_1| + \cdots + |a_{n-1}|\}$.

The first is the one quoted in the article (the polynomial being monic is no loss of generality), and of course sometimes the first one is better, sometimes the second one, as observed. Notice that they also hold for complex zeroes. To obtain both, and other variants if you like, start with some zero x of P(x), isolate the leading term $x^n$ in the equation P(x)=0, take the absolute value, then use Hölder's inequality; for the first bound, it is useful to recall what the sum of a geometric progression is. Also note that these are examples of a priori bounds, that is, they state: if there exists a solution, then it is within these bounds, but they still do not say: there exists a solution (like when you claim, at a party: "if this is a murder, the murderer is still in this room": but sometimes it's just a suicide). However, a priori bounds are often the key point in existence results, whenever general theorems can be applied to conclude (usually, compactness theorems): this is indeed the case for a complex root of a polynomial: essentially every proof of the Fundamental theorem of algebra uses a bound like the ones above. --PMajer (talk) 18:08, 23 October 2008 (UTC)[reply]
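To spell out the geometric-progression step for the first bound (a sketch, assuming as above that the polynomial is monic): if $P(x) = 0$ and $|x| > 1$, then

$|x|^n = \left|\sum_{i=0}^{n-1} a_i x^i\right| \le \max_i |a_i| \sum_{i=0}^{n-1} |x|^i = \max_i |a_i|\, \frac{|x|^n - 1}{|x| - 1} < \max_i |a_i|\, \frac{|x|^n}{|x| - 1},$

so $|x| - 1 < \max_i |a_i|$, that is, $|x| < 1 + \max_i |a_i|$; for $|x| \le 1$ the bound holds trivially.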

So, when you say that these bounds can be applied to complex polynomials, do you mean that they give a real bound on the real zeros of a complex polynomial (which is possible because we take the modulus of complex coefficients, if any) or would this give the bound on ALL roots of a complex polynomial where M would be the radius of the disc around the origin in the complex plane containing all of the roots?69.106.106.183 (talk) 04:50, 24 October 2008 (UTC)[reply]

The second one: you stated it very well: any complex root z satisfies $|z| \le M$. By the way, using in the same way Hölder's inequality (try it) for other values of p, one gets a family of bounds $M_p$ in terms of the p-norm of the n-vector of the coefficients, $\|a\|_p = \left(|a_0|^p + \cdots + |a_{n-1}|^p\right)^{1/p}$, and the q-norm of $(1, z, \ldots, z^{n-1})$ (q being the conjugate exponent of p):

$|z| \le M_p := \left(1 + \|a\|_p^q\right)^{1/q}$.

The case p=2 looks not bad. Actually one can also consider the infimum of all the $M_p$ (over all p from 1 to infinity).--PMajer (talk) 12:27, 24 October 2008 (UTC)[reply]
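A quick numerical sanity check of this family of bounds (a sketch; the helper m_p and the example polynomial are made up, and the p = 1 case is handled as the limit max{1, ||a||_1}):

import numpy as np

def m_p(coeffs, p):
    # Bound M_p = (1 + ||a||_p^q)^(1/q) for the monic polynomial
    # x^n + a_{n-1} x^{n-1} + ... + a_0, with q the conjugate exponent of p.
    a = np.abs(np.asarray(coeffs, dtype=float))
    if p == np.inf:
        norm, q = a.max(), 1.0
    else:
        norm = (a ** p).sum() ** (1.0 / p)
        q = p / (p - 1) if p > 1 else np.inf
    if q == np.inf:                  # p = 1: interpret as the limit max(1, ||a||_1)
        return max(1.0, norm)
    return (1.0 + norm ** q) ** (1.0 / q)

# made-up example: x^3 - 2x^2 + 0.5x + 3, so (a_0, a_1, a_2) = (3, 0.5, -2)
coeffs = [3.0, 0.5, -2.0]
roots = np.roots([1.0, -2.0, 0.5, 3.0])
print("max |root| =", max(abs(roots)))
for p in (1, 2, np.inf):
    print("M_%s = %.4f" % (p, m_p(coeffs, p)))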

efficient algorithm for converting an integer to a sum of four squares


Since there is already an efficient algorithm for converting a 4n+1 prime into a sum of 2 squares (by van der Poorten, I guess), is there an analog for 4 squares?

It should be something like this:

given an integer $n$, find $a$, $b$, $c$, $d$ such that:

$n = a^2 + b^2 + c^2 + d^2$

Is there an efficient (sub-exponential time) algorithm for that? Mdob | Talk 21:24, 23 October 2008 (UTC)[reply]

Our article, Lagrange's four-square theorem, doesn't mention an algorithm, but it does link to a website that does it for you, and they describe their algorithm here. It doesn't specify how fast it is, though (and I haven't read it, so I can't even guess). --Tango (talk) 21:33, 23 October 2008 (UTC)[reply]
Thanks!Mdob | Talk 21:42, 23 October 2008 (UTC)[reply]

On the practical side, another description can be found in [1]. On the theoretical side, randomized polynomial-time algorithms for solving the four square problem were found by Rabin and Shallit [2] (two unconditional, one depending on the ERH). As far as I am aware, no deterministic subexponential algorithm is known. — Emil J. 11:38, 24 October 2008 (UTC)[reply]
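Not an answer to the efficiency question, but for readers who just want something runnable for small inputs, here is a minimal brute-force sketch (this is not the Rabin–Shallit algorithm, and its running time is exponential in the bit length of n):

import math

def four_squares(n):
    # Brute-force search for a, b, c, d with n = a^2 + b^2 + c^2 + d^2.
    # Exponential in the bit length of n, unlike the randomized
    # polynomial-time algorithms of Rabin and Shallit mentioned above.
    for a in range(math.isqrt(n), -1, -1):
        r1 = n - a * a
        for b in range(math.isqrt(r1), -1, -1):
            r2 = r1 - b * b
            c = math.isqrt(r2)
            while 2 * c * c >= r2:          # only need to try c >= d
                d2 = r2 - c * c
                d = math.isqrt(d2)
                if d * d == d2:
                    return a, b, c, d
                c -= 1
    return None  # never reached: Lagrange's theorem guarantees a representation

print(four_squares(310))   # e.g. (17, 4, 2, 1): 289 + 16 + 4 + 1 = 310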