Wikipedia:Reference desk/Archives/Mathematics/2008 November 24

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


November 24

Difficulty while integrating

I'm doing a problem on my calculus homework, and I'm having significant trouble with it:

∫ dx/(√x − 3)

I don't know where to start. Integration by substitution doesn't seem to work, and the answer according to my calculator involves ln(√x − 3). I have no idea how it got there.

Could someone help, please? Thanks! —Preceding unsigned comment added by CalcBCStudent (talkcontribs) 00:37, 24 November 2008 (UTC)[reply]

This BEGS for a rationalizing substitution:

u = √x, x = u², dx = 2u du

∫ dx/(√x − 3) = ∫ 2u/(u − 3) du

= 2 ∫ (u − 3 + 3)/(u − 3) du

= 2 ∫ (1 + 3/(u − 3)) du

= 2u + 6 ln|u − 3| + C

= 2√x + 6 ln|√x − 3| + C

etc. Michael Hardy (talk) 02:57, 24 November 2008 (UTC)[reply]
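If the integral in question was ∫ dx/(√x − 3) (consistent with the ln(√x − 3) the calculator reported), the substitution u = √x gives 2√x + 6 ln|√x − 3| + C. A quick numeric sanity check of that antiderivative in Python (the integrand here is my assumption, not stated explicitly in the thread):

```python
import math

def F(x):
    # candidate antiderivative from the substitution u = sqrt(x):
    # F(x) = 2*sqrt(x) + 6*ln(sqrt(x) - 3), valid where sqrt(x) > 3
    return 2 * math.sqrt(x) + 6 * math.log(math.sqrt(x) - 3)

def integrand(x):
    return 1 / (math.sqrt(x) - 3)

# midpoint-rule approximation of the integral over [16, 25]
a, b, n = 16.0, 25.0, 200_000
h = (b - a) / n
approx = h * sum(integrand(a + (i + 0.5) * h) for i in range(n))

print(approx, F(b) - F(a))  # both ≈ 6.158883
```

The two numbers agree to many decimal places, which is good evidence (though not a proof) that the antiderivative is correct.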

Factorising polynomials

A couple of questions:

I'm trying to factorise 6x² + 7x + 1. My usual approach is a 'substitution' method where I take the leading coefficient and multiply it through the whole expression to obtain a result:

36x² + 42x + 6 = (6x)² + 7(6x) + 6

I then substitute z for 6x, which removes the leading coefficient and makes the expression easier to factorise:

z² + 7z + 6 = (z + 6)(z + 1) = (6x + 6)(6x + 1)

All that remains is to 'extract' a factor of 6 from the expression to produce a factorised form of the original: 6x² + 7x + 1 = (1/6)(6x + 6)(6x + 1)

Which leaves me with (6x + 6)(x + 1/6).

However, the problem is that the answer I was given is (x + 1)(6x + 1), and I am not sure how to arrive at this conclusion; this factorisation is a lot "nicer" than mine because it involves integer coefficients and leading terms only. Is there another strict "method" for obtaining a result like this? Secondly, what about factorising polynomials of nth degree? Is there a general method? I can imagine it gets tough pretty fast as the degree increases. 86.139.230.229 (talk) 07:26, 24 November 2008 (UTC)[reply]

1. Your computation is OK, and the other too... Just have a closer look at both (remember that a factorization of a polynomial into linear factors is unique up to nonzero constants, which you can transfer from one factor to another).
2. You can find some information here: polynomial factorization, of course. --PMajer (talk) 07:44, 24 November 2008 (UTC)[reply]
Extract the factor of 6 from the first paren, and use the associativity of multiplication..... --CiaPan (talk) 08:13, 24 November 2008 (UTC)[reply]

Since the comments above are less than completely explicit about this point, I'll say it here. The following two expressions are exactly the same thing (and it's pretty easy to see why):

(6x + 6)(x + 1/6)
(x + 1)(6x + 1)

Michael Hardy (talk) 01:38, 25 November 2008 (UTC)[reply]

If you want an easier way to do ones like that, just multiply the coefficient of x² by the constant; find two numbers which multiply to give this and add to give the middle coefficient, then factor out. It sounds hard, but when you get used to it it's a lot easier. For example:

6x² + 7x + 1

6 × 1 = 6, so we want two numbers that multiply to give 6 and add to give 7, so obviously 6 and 1 work here.

Rewrite the middle term as a sum: 6x² + 6x + x + 1

Factorise out: 6x(x + 1) + 1(x + 1) (as a check, the two brackets should be the same!)

Rewrite the expression like this: (x + 1)(6x + 1), and we're done! Hope this helps, 194.80.32.9 (talk) 23:02, 25 November 2008 (UTC)[reply]
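The "ac method" above can be turned into a short brute-force routine. A sketch in Python (the function name and return convention are mine; it assumes a nonzero product a·c):

```python
from math import gcd

def ac_factor(a, b, c):
    """Factor a*x^2 + b*x + c over the integers by the 'ac method':
    find m, n with m*n == a*c and m + n == b, split the middle term,
    and factor by grouping.  Returns ((p, q), (r, s)), meaning
    (p*x + q)*(r*x + s), or None if no integer split works."""
    ac = a * c
    for m in range(-abs(ac), abs(ac) + 1):
        if m == 0 or ac % m != 0:
            continue
        n = ac // m
        if m + n != b:
            continue
        # a*x^2 + m*x + n*x + c  =  x*(a*x + m) + (n*x + c)
        g1 = gcd(a, m)                              # pull g1*x out of the first pair
        g2 = gcd(n, c) if n > 0 else -gcd(n, c)     # sign chosen to match n
        if (a // g1, m // g1) == (n // g2, c // g2):
            # both pairs share the bracket ((a//g1)*x + (m//g1))
            return (g1, g2), (a // g1, m // g1)
    return None

print(ac_factor(6, 7, 1))  # ((1, 1), (6, 1)), i.e. (x + 1)(6x + 1)
```

For 6x² + 7x + 1 it finds the split m = 1, n = 6 and returns the same (x + 1)(6x + 1) as the grouping above; when no integer split exists (e.g. x² + x + 1) it returns None.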

negative fractional powers

Thirty-some years ago, I was told to write √2/2 rather than 1/√2. I don't remember why, but I guess because it's easier to divide by an integer than by a real. Since nobody computes such a thing by hand anymore, is √2/2 still preferred? —Tamfang (talk) 08:47, 24 November 2008 (UTC)[reply]

If someone says that √2/2 is preferred, they're wrong. If for no other reason than when students take the reciprocal, they don't seem to understand that 2/√2 = √2. Also, because rationalizing the denominator is idiotic. 76.126.116.54 (talk) 09:04, 24 November 2008 (UTC)[reply]
The modern way of representing an algebraic number is to state the irreducible polynomial of which the number is a root. In general the root cannot be expressed explicitly anyway, but is easily computed numerically on a PC. The roots of 2x²−1 are x = 0.70711 and x = −0.70711, and explicit expressions like x = 1/√2 or x = √2/2 are not shortcuts any longer. Bo Jacoby (talk) 11:27, 24 November 2008 (UTC).[reply]
What? People use the root notation all the time. It's often not convenient to write everything in terms of polynomials. (And your decimal expressions are simply incorrect, if you're going to write it like that you either need to say either x≈0.70711 or x=0.70711..., writing that the square root of two exactly equals a terminating decimal is nonsense.) I was also taught, only 4 or 5 years ago, to rationalise denominators, so some people still prefer it. I never do so now (unless I actually need to in order to do a computation), and I rarely see anyone else doing so. --Tango (talk) 11:46, 24 November 2008 (UTC)[reply]
Agreed. There are times when we need to refer to, say, the specific root of x⁴−10x²+1 that lies between 3 and 4; no other root (even though they are algebraically indistinguishable) will do, nor will a numerical approximation. And the simplest way to refer to this root is √2 + √3.
As regards the technique of rationalising denominators, this is useful when simplifying expressions such as
1/(√3 − √2) = √3 + √2. Gandalf61 (talk) 12:00, 24 November 2008 (UTC)[reply]

Indeed. If you just have to express a single number, as in the sentence "the ratio between the edge and the diagonal of a square is etc.", I would say that √2/2 and 1/√2 are much the same; but if you have more numbers to make computations with, then a certain uniformity in notation may be required: better to avoid mixed notations like 1/√2 − √2/2, which are ugly and unclear (this could be 0, and I'd not see it); in this case rationalizing denominators is certainly a good way to get a uniform notation.--PMajer (talk) 12:49, 24 November 2008 (UTC)[reply]

The integral-denominator version is helpful when giving the sines of 0, π/6, π/4, π/3 and π/2 in an easily-memorable form, though of course the first, second and fifth would be simplified before actually using the value:
√0/2, √1/2, √2/2, √3/2, √4/2. 81.132.235.175 (talk) 16:34, 24 November 2008 (UTC)[reply]
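The memorable pattern here — sin of 0, π/6, π/4, π/3, π/2 coming out as √k/2 for k = 0, 1, 2, 3, 4 — is easy to confirm:

```python
import math

angles = [0, math.pi / 6, math.pi / 4, math.pi / 3, math.pi / 2]
for k, theta in enumerate(angles):
    # the memorable form: sin(theta) = sqrt(k)/2 for k = 0..4
    assert math.isclose(math.sin(theta), math.sqrt(k) / 2, abs_tol=1e-12)
```

The first, second and fifth entries simplify to 0, 1/2 and 1 respectively, as the comment above notes.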

Yes, people use root notation all the time even though it has been known since 1823 that it is a blind alley. That "the root of x⁴−10x²+1 that lies between 3 and 4" can be written √2 + √3 is merely a stroke of luck. "The root of x⁵−10x³+1 that lies between 3 and 4" cannot be written like that, but it can be written "the root of x⁵−10x³+1 that is approximately 3.15724974". Bo Jacoby (talk) 17:49, 24 November 2008 (UTC).[reply]

It is not a blind alley, and I really don't know what you mean when you say it is. The fact that some algebraic numbers cannot be expressed in terms of radicals is not a reason to abandon radicals, any more than the fact that some reals cannot be described is a reason to abandon descriptions. Algebraist 17:55, 24 November 2008 (UTC)[reply]

Radicals were a help when computations were done by table look-up, but it is not faster to compute radicals than to solve (other) algebraic equations. Bo Jacoby (talk) 18:05, 24 November 2008 (UTC).[reply]

So what? There are more things you can do with radicals than numerically approximate them, as various people have pointed out above. Algebraist 18:08, 24 November 2008 (UTC)[reply]
It's far more precise to write √2 than the decimal expansion; it's also far shorter. Sure, you can write "the positive root of x²−2=0", but why would you want to when √2 is so much simpler? And you can't do arithmetic with such a statement, whereas you can easily do arithmetic with radicals. --Tango (talk) 18:09, 24 November 2008 (UTC)[reply]
It is far easier to manipulate polynomials than radical expressions. I never said that the numerical approximation was the only thing you could do. I did say that solving equations in terms of radical expressions is not a shortcut any more. Bo Jacoby (talk) 23:23, 24 November 2008 (UTC).[reply]
What kind of manipulations are you doing with them? Let's take something simple: if I want to know the product of the positive square roots of 2 and 3, I just say √2·√3 = √(2·3) = √6, done. How would you do that simply with polynomials? And what do you mean by "shortcut"? A shortcut to what? I would say the solution in terms of radicals is a far better answer than an approximate decimal expansion, which seems to be what you are proposing. --Tango (talk) 11:43, 25 November 2008 (UTC)[reply]
"How would you do that simply with polynomials?" Like this: y2−2 = z2−3 = xyz = 0 → x2−6 = 0, done. Numerical evaluation of an expression like (1+√5)/2 involves the evaluation of √5, meaning solving the equation x2 = 5, which is not simpler than the original equation x2 = x+1. The radical expression is useful if you are limited to using tables of square roots, or a pocket calculator with a botton, or an old-fashioned programming language with a sqrt subroutine, but if you use some modern programming language, such as J, then there is no value in trying to express the solution in terms of radicals. (Type p. 1 1 _1 and receive an answer including 1.61803399). I was not proposing an approximate decimal expansion, but I was proposing to state the irreducible polynomial of which the number is a root. Bo Jacoby (talk) 16:47, 25 November 2008 (UTC).[reply]
I would generally consider (1+√5)/2 to already be evaluated, there is rarely any need in mathematics (there is in the sciences, admittedly) to convert it to a decimal approximation, which is what you keep coming back to. While it may be possible to do the kind of computations I actually do using polynomials, it seems far simpler to do them using radicals. The radical explicitly states which root you are talking about, your method doesn't, you would need to state it in addition to giving the polynomial. --Tango (talk) 16:59, 25 November 2008 (UTC)[reply]
I would generally consider x² = x+1 to already be solved; there is no need in mathematics (nor in the sciences) to convert x to a radical expression, which is what you keep coming back to. If you are used to computations with radicals and not used to computations with polynomials, it might be a matter of taste to decide which of the two methods is the simpler. But it is not a matter of taste which method is the more powerful: a polynomial with integer coefficients defines (conjugate) algebraic numbers which generally cannot be expressed by radicals. Conjugate algebraic numbers cannot be distinguished algebraically (using the equality sign = only), but can be distinguished analytically (using inequality signs > or <). Bo Jacoby (talk) 21:53, 25 November 2008 (UTC).[reply]
Indeed, but analysis is used rather extensively in mathematics, so that's not really relevant. Being able to specify a specific root is useful and radicals are often the best way to do it (sure, it's not always possible, but why not use it when it is?). --Tango (talk) 23:02, 25 November 2008 (UTC)[reply]
Being able to specify a specific root is useful and decimal fraction approximations are often the best way to do it (it's always possible). Bo Jacoby (talk) 08:43, 26 November 2008 (UTC).[reply]
Most mathematicians do not care about decimal approximations. Radicals reduce these approximations to "positive" or "negative". You actually do need to specify a specific root more often than you appear to believe; just consider (root of f(x)) − (root of f(x)), which may or may not be zero depending on whether you choose the same root twice.--80.136.177.218 (talk) 18:20, 26 November 2008 (UTC)[reply]
An approximation is rarely better than a precise value (in maths, anyway, in the sciences, sure). If the polynomial is solvable in radicals (many are - anything with deg<5 and quite a few with higher degree), then go for it (admittedly, the algorithms for general equations of deg 3 and 4 are pretty ugly, but in a specific case it's often easier). --Tango (talk) 19:53, 26 November 2008 (UTC)[reply]
"An approximation is rarely better than a precise value". Agreed. "If the polynomial is solvable in radicals then go for it". Why? What is the use of a solution expressed in radicals? All you need is the exact equation and some rude approximation to tell which one of the conjugate solutions you are talking about. This approximation may sometimes simply be a sign, but sometimes the sign is insufficient. (x5−10x3+1 = 0 has two positive solutions, for example. Do you want to go for a solution in radicals?) Bo Jacoby (talk) 23:41, 26 November 2008 (UTC).[reply]
A solution expressed in radicals is one thing, a polynomial together with an approximation is two. One thing is generally simpler than two things. I don't know if that polynomial can be solved in radicals (if I'd paid attention in Galois Theory last year I could probably work it out, but...), but I never claimed all could. Many can, though. --Tango (talk) 23:55, 26 November 2008 (UTC)[reply]
Long ago, when the equation y² = a²+b was easier to solve than the equation x² = 2ax+b , because of help from tables of square roots or logarithms, the solution in terms of radicals, x = a±√(a²+b), was valuable, but this is simply no longer the case. Galois theory says that only in exceptional cases can solutions be expressed in terms of radicals. This was surprising at the time, because it was conjectured that equations could in principle be solved in terms of radicals, even though it was difficult in practice, and so mathematicians tried to solve equations in terms of radicals, but it turned out to be a dead end: It is neither useful (due to Gauss' fundamental theorem of algebra), nor possible in general (due to Abel and Galois). But the inertia of mathematical teaching makes it take some centuries to be understood. Bo Jacoby (talk) 11:18, 27 November 2008 (UTC).[reply]

"Simplest radical form" as it is sometimes called, requires no radicals in denominators nor denominators in radicals, and also no square factors under square-root signs, or cube factors under cube-root signs, etc. There's nothing necessarily incorrect about other forms—which form one should use depends on context and purpose. But "simplest radical form" is a canonical form that enables you to tell whether two expressions are equal: they're equal if they're the same when put into "simplest radical form". Michael Hardy (talk) 23:13, 24 November 2008 (UTC)[reply]

Derivative and product

Hello all, can anybody think of a pair of functions (non-trivial, of course) where the product of the derivatives is equal to the derivative of the product? Obviously this is not generally true, so can anybody think of a pair where this case works? Thanks, Fbv65edeltc // 16:02, 24 November 2008 (UTC)[reply]

There are many solutions. All we need is that 1 = f/f' + g/g' (wherever this makes sense), so e.g. f=g=e^(2x) works. Algebraist 16:07, 24 November 2008 (UTC)[reply]
And if the functions have to be different, e^(ax) and e^(bx), where b=a/(a-1) →81.132.235.175 (talk) 16:45, 24 November 2008 (UTC)[reply]
Or f=x, g=1/(1-x). For any (nice) f, there is a g such that this holds. Algebraist 16:48, 24 November 2008 (UTC)[reply]
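Algebraist's condition 1 = f/f′ + g/g′ and the example pairs above can be checked symbolically (a SymPy sketch; the e^(3x), e^(3x/2) pair is the a = 3 instance of the e^(ax), e^(bx) family with b = a/(a−1)):

```python
import sympy as sp

x = sp.symbols('x')

pairs = [
    (sp.exp(2 * x), sp.exp(2 * x)),                  # f = g = e^(2x)
    (sp.exp(3 * x), sp.exp(sp.Rational(3, 2) * x)),  # a = 3, b = 3/2
    (x, 1 / (1 - x)),                                # Algebraist's rational pair
]
for f, g in pairs:
    # (f*g)' should equal f' * g' for each pair
    assert sp.simplify(sp.diff(f * g, x) - sp.diff(f, x) * sp.diff(g, x)) == 0
```

Each pair passes, confirming that the product of the derivatives equals the derivative of the product.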

The differential equation

(f(x)g(x))′ = f′(x)g′(x)

can be solved routinely if you specify what one of the two functions is. For example, if you want

f(x) = x³

then the differential equation above becomes

(x³g(x))′ = 3x²g′(x)

So apply the product rule to the left side to get

3x²g(x) + x³g′(x) = 3x²g′(x)

and then cancel the square of x from both sides:

3g(x) + xg′(x) = 3g′(x)

This is a pretty commonplace textbook-style ODE that you can solve for g. Michael Hardy (talk) 18:08, 24 November 2008 (UTC)[reply]
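Taking f(x) = x³ as the specified function (a choice under which a factor of x² cancels after the product rule), the reduced ODE 3g + xg′ = 3g′ is separable, with solution g = C/(3 − x)³. A SymPy check:

```python
import sympy as sp

x = sp.symbols('x')
g = sp.Function('g')

# the reduced equation: 3*g + x*g' = 3*g'
ode = sp.Eq(3 * g(x) + x * g(x).diff(x), 3 * g(x).diff(x))
print(sp.dsolve(ode, g(x)))  # a constant multiple of (x - 3)**(-3)

# direct check: f = x**3 and g = 1/(3 - x)**3 satisfy (f*g)' = f'*g'
f_expr = x ** 3
g_expr = 1 / (3 - x) ** 3
assert sp.simplify(sp.diff(f_expr * g_expr, x)
                   - sp.diff(f_expr, x) * sp.diff(g_expr, x)) == 0
```

Both sides come out to 9x²(3 − x)⁻⁴, so the pair really does satisfy (fg)′ = f′g′.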

I can't put my finger on a reference, but I think there's something showing Gottfried Leibniz thought (fg)′ = f′g′ for a short while when developing calculus. So you're following in famous footsteps. Dmcq (talk) 20:44, 24 November 2008 (UTC)[reply]
Our article on the product rule says so, but it's marked as "citation needed", so if you do find a reference please add it to the article! --Tango (talk) 21:17, 24 November 2008 (UTC)[reply]

Indeed, there was a time in the history of Wikipedia's product rule article when it asserted that "most people think" what Leibniz is here reported to have thought for a time. Michael Hardy (talk) 23:15, 24 November 2008 (UTC)[reply]

I found a reference to him doing it and added it. Dmcq (talk) 23:19, 24 November 2008 (UTC)[reply]

To all: thanks for your help! Both the answers and the solution/method were very interesting. Best, Fbv65edeltc // 04:43, 25 November 2008 (UTC)[reply]

Test for vertex transitivity

I've been researching on Google, and I cannot find the answer to this question: is there a quick, polynomial-time algorithm for determining if a graph is vertex transitive? Indeed123 (talk) 16:18, 24 November 2008 (UTC)[reply]
No polynomial-time algorithm is known, but I don't know whether this problem has been proved equivalent to graph isomorphism. In practice you can do it fairly quickly with nauty. McKay (talk) 10:25, 26 November 2008 (UTC)[reply]
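For small graphs, the definition itself gives a (hopelessly exponential) test one can run directly, which makes the contrast with nauty's canonical-labelling approach concrete. A sketch, with vertices 0..n−1 and edges given as pairs:

```python
from itertools import permutations

def is_vertex_transitive(n, edges):
    """Brute force: a graph is vertex-transitive iff the orbit of one
    vertex under the automorphism group is the whole vertex set."""
    edge_set = {frozenset(e) for e in edges}
    orbit_of_0 = set()
    for p in permutations(range(n)):
        # p is an automorphism iff it maps the edge set onto itself
        if {frozenset((p[a], p[b])) for a, b in edges} == edge_set:
            orbit_of_0.add(p[0])
    return len(orbit_of_0) == n

print(is_vertex_transitive(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # True  (4-cycle)
print(is_vertex_transitive(3, [(0, 1), (1, 2)]))                  # False (path)
```

Checking only the orbit of vertex 0 suffices because the automorphism group's orbits partition the vertices, so the action is transitive exactly when one orbit is everything.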

statistics on dually diagnosed patients in-patient stays

I'm trying to find the percentage of dually diagnosed patients who return to the hospital, and how long between stays. —Preceding unsigned comment added by 134.241.28.252 (talk) 19:15, 24 November 2008 (UTC)[reply]

You might have better luck posting this on the Science Desk. StuRat (talk) 21:23, 24 November 2008 (UTC)[reply]