Wikipedia:Reference desk/Archives/Science/2013 January 27

Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


January 27

Does the dog recognize me?

I have a business associate whom, for the past 2 years, I meet with exactly once a month, and not more. About half the times I meet him, he has his dog with him. I always greet the dog, and the dog nuzzles me and licks my hand (the dog does this with everyone it meets). So this is a dog whom I see on a roughly regular basis once every 2 months (on average) for just a brief moment over the past 2 years. I'm just curious: does the dog recognize me from these short 1 to 2 minute encounters, or am I just some random stranger to the dog every time? Put another way, if I happened to run into this dog at random in a park without its owner, would the dog recognize me? I know that the reverse is not true (if I ran into this dog at random in the park, I could not be certain that it was this dog and not another dog of the same breed and coloring). —SeekingAnswers (reply) 01:00, 27 January 2013 (UTC)[reply]

If you asked this about a person who has encountered you twice a month for the past two years (say, a waiter), it would be impossible to say, right? Well, it's no more possible to say for the dog. They are very good at distinguishing people by smell, but unless the dog shows some overt sign of recognition, some behavior that distinguishes you from other people, there can't be a solid answer. Looie496 (talk) 01:09, 27 January 2013 (UTC)[reply]
It would be a pretty poor waiter who would not recognise a regular customer. In a quality restaurant, waiters will check the bookings to refresh their memory, so that when you arrive roughly on time, they can say something like "So nice to see you again, Mr StuRat", while faking their sincerity with a big smile. However, as far as the dog is concerned, he might or he might not - it depends on how different you are, in the dog's perception, from other humans he meets. Some humans will greet a dog, some won't, so just greeting it probably won't be enough. If your business associate treats you differently to other people, the dog might cue on that, but it seems unlikely. It's easier to tell with certain breeds of dog that have clearly different sorts of barks for different purposes. I visit my cousin about every 2 to 3 months or so. She has a labrador. It barks (once) whenever it detects someone coming to the door. If it is someone in my cousin's immediate family, in other words people who are there nearly every week, the dog announces them with a certain soft bark. If it is a stranger, it gives a louder, harsher bark. If the dog knows the person, it gives another sort of bark - when I arrive it gives that sort of bark. My cousin calls it the "family bark". Incidentally I pretty much ignore the dog. Another way you can tell if a dog recognises you as a friend is if he approaches you with ears lowered (sometimes ears and head lowered), and walks 360 degrees around you. The more intelligent breeds do this. Poodles are useless - they just bark and don't use dog greeting etiquette - which is why other breeds of dog despise them. Wickwack 121.215.68.88 (talk) 03:13, 27 January 2013 (UTC)[reply]
It would be strange to the point of pathology if the dog didn't recognize your smell the second time he met you. That's the whole reason for the sniff-the-hand greeting (to get to know your smell), not to check if you have bacon between your fingers. There are too many books on dogs to list, like Cziksentmihly's and The Dog Whisperer's. You might try Temple Grandin's works, and the articles animal intelligence and dog behavior. But the answer is yes. μηδείς (talk) 03:36, 27 January 2013 (UTC)[reply]
I would bet that the dog recognizes you. I have five dogs, so this throws off my example a bit but hear me out. One of the dogs is more... hostile, I'll use that word for now, to strangers than the rest. So we have a method of introducing him to new people who we want him to trust in his house. We don't get a lot of guests and so when we do, they've normally been away for some time. Long enough to be similar in scope to your example. When those people come over, we don't have to go through the introduction every time. Between the members of my pack, they somehow know who is okay to let in and who isn't, which is handy when we need someone to feed the pack on some day that we might be gone. Dismas|(talk) 04:05, 27 January 2013 (UTC)[reply]
It took me two or three readings of your last sentence to realize that the preposition on went with some day and not with feed. --Trovatore (talk) 06:10, 27 January 2013 (UTC) [reply]
Yes, and by recognize all that may be meant is an association of your smell with the tag friend or foe. It's not like they have to remember names, phone numbers, etc. Olfaction is powerful and primitive. μηδείς (talk) 04:18, 27 January 2013 (UTC)[reply]
Yeah, the recognition of individuals is really a different thing from memorizing the 12 times table. I'd say the best estimate for the average ability and amount of variation of dogs in recognizing people is that it's equal to the ability of humans recognizing individual dogs. I.e., it's probably easier for either to recognize members of their own species, but there are still plenty of cues provided by individuals of a different mammalian species. Gzuckier (talk) 06:11, 27 January 2013 (UTC)[reply]
Humans, unless they are dog trainers or the like, are definitely not as good at recognizing and remembering dogs as dogs are at recognizing humans. For most people, other people's dogs are simply not an important aspect of their lives. For a dog, relatively little is more important than those other dogs and people it has met directly. Humans are distracted by the abstract, the conceptual, the verbal, the alpha-numeric, the future. Dogs are here now in the percept and the concrete existent. Humans have to be told to wake up to smell things. μηδείς (talk) 20:44, 28 January 2013 (UTC)[reply]
Dogs are wolf-descendent pack animals - and their eyesight isn't that great. They mostly rely on their incredible sense of smell to recognise pack members and bond with them. That's why they sniff friends and strangers alike. So I think it's reasonable to assume that the dog recognises you - but there is no easy way to know whether it cares or not. So it could be: "<sniff>...not a pack member...<Meh>"...or "<sniff>...oh yeah...that guy who was nice to me a couple of weeks ago <happy>". In my experience, a dog never forgets a free snack...so if you're prepared (and the owner doesn't mind) with a small doggy treat every time you meet the dog - I'm 100% sure you'll get a wild greeting each time! SteveBaker (talk) 15:52, 27 January 2013 (UTC)[reply]
Our article Gray_wolf#Intelligence says that a trained wolf recognized its master after a three-year absence. (I haven't looked into this further) Wnt (talk) 19:08, 27 January 2013 (UTC)[reply]

Interchangeable parts?

  Resolved
 

I recently got a new HP laptop and was wondering whether or not I ought to use the AC charger from my previous Dell for my new HP. I read the details on the backs of the AC chargers and they both say 90V 90W, and everything else seems to be pretty much the same; the male inserts for both the HP and the old Dell charger appear identical, and both fit into the female port on my new HP. As an aside, when I say "old Dell" I don't mean that it's 15 years old, but rather perhaps 5 years old. DRosenbach (Talk | Contribs) 01:08, 27 January 2013 (UTC)[reply]

maybe ;/ - assuming the voltage output (not input!), wattage, polarity of connector, size of connector, and AC or DC are identical, then possibly. However, some newer models have special circuitry that can identify the charger that has been plugged in, ostensibly to prevent damage to the laptop (though cynics might say it's to keep a nice tight grip on the market for replacements). Output voltage is very unlikely to be 90V - it's usually somewhere between 9 and 19 volts - and it is critical that you identify this correctly before you plug it in. 90W of power seems plausible. There are so many variables that, having had a number of laptops over the years, I've never yet seen one whose charger was interchangeable with another manufacturer's. If you plug it in and it wrecks your laptop, it won't be covered by the warranty. ---- nonsense ferret 01:18, 27 January 2013 (UTC)[reply]
Yes -- you got me. It was 90W, not 90V. But it does say V85 on one of them. DRosenbach (Talk | Contribs) 04:02, 27 January 2013 (UTC)[reply]
There are probably other ways to invalidate the warranty if you really wanted to, but your method should do exactly that just fine. --Jayron32 01:19, 27 January 2013 (UTC)[reply]
If both input and output voltage are the same and the connectors are the same, they should be interchangeable, and it should be okay swapping them. —SeekingAnswers (reply) 01:24, 27 January 2013 (UTC)[reply]
If the power and voltage output match and the connectors fit, then you should be OK. That will not invalidate the warranty (even if it did, how would they know?). Dauto (talk) 02:12, 27 January 2013 (UTC)[reply]

Electronics Engineer here: Please post exactly what each power supply says on it, including a description of any graphics that have a "+" "-", "~" or "- - - - -" as part of the diagram. "Pretty much the same" is not good enough to risk your laptop over. --Guy Macon (talk) 03:30, 27 January 2013 (UTC)[reply]

OP here -- OK, I didn't realize what was necessary to make them compatible. The new HP AC charger reads "Output: 19.5V --- 4.62A" and the old Dell AC charger reads the same. The only difference is that the old Dell reads "Input: 100-240 V ~ 1.5A 50-60Hz" and the new HP reads "Input: 100-240V ~ 1.6A 50-60Hz." Other than the difference between 1.5A and 1.6A, the only other difference is that the new HP includes the term "wide range input" whereas the old Dell charger does not. DRosenbach (Talk | Contribs) 03:38, 27 January 2013 (UTC)[reply]
Your original post said "both say 90V". Do they still both say this, or neither of them still say it, or one doesn't? --Demiurge1000 (talk) 03:43, 27 January 2013 (UTC)[reply]
Ha! Yes, they still both state everything I've already included above -- this last post was merely added because someone made a comment about specifics regarding input and output. They are both 90W chargers. One of them says V85. I'm lost because I don't know the difference between watts, volts and amps. Here's a photo of the two. DRosenbach (Talk | Contribs) 03:48, 27 January 2013 (UTC)[reply]
This is a good topic to provide information about, but please note that the Wikipedia Refdesk does not give advice, and will not be offering compensation should your toasted laptop be found in the ruins of your burned-out house. Pointing people to sources about how laptop connectors are identified and categorized, of course, is within the purview. Also note that the Computing Refdesk might have additional experts. Wnt (talk) 03:40, 27 January 2013 (UTC)[reply]
Now that I see the stats, I should follow up what I was saying more specifically: is it possible that the greater power consumption (1.6 A) will mean there's an increased risk that the old laptop adaptor will overheat and set itself, or the surface it's lying on, on fire? Wnt (talk) 03:50, 27 January 2013 (UTC)[reply]
I agree with the previous respondents - there is a lamentable lack of standardization between laptops - so you have to use an abundance of caution. Even if the voltage, current, frequency and everything else are the same - and the connector physically fits, there is also the issue of whether the inner pin is positive or negative...generally there is a little diagram that shows you that. If you're 100% sure they are identical - then it should work - but it's a definite risk, so if you aren't 100% sure, then don't do it. I'd want to check at least voltage and polarity with a meter before plugging it in. SteveBaker (talk) 03:44, 27 January 2013 (UTC)[reply]
The above advice is spot on. It was pretty much what I would have started my answer with before posting the following:
The "Output: 19.5V --- 4.62A" is your DC output voltage and current. Same is good, but having the plus and minus reversed would be very bad. Most power supplies tell you which goes where. For example you will see a little circle inside a circle with one or the other labeled "+" or perhaps a dashed line above a solid line. You didn't tell me about any of those, so I am not 100% sure that they are not reversed. One solution would be to find a friend who owns a voltmeter and knows how to use it and have him check the supply.
"Input: 100-240 V ~ " is the input voltage. Like most modern laptop power supplies it runs from pretty much any voltage (Japan is 100V, parts of Europe are 240V). The "~" means "AC".
1.5A or 1.6A is the input current, but that's the highest it can get (laptop using the full 4.62A and input at 100VAC). It will usually be a lot lower. The 1.5 and 1.6 are almost certainly just slight variations on those assumptions and can be ignored. "50-60Hz" is the input frequency (US is 60, Europe is 50).
If it were my laptop and I were sure about plus and minus not being switched, I would try powering up my old laptop with the new power supply first (less valuable) and keep my hand on the supply to see if it is getting hot. If I didn't have the old one any longer, I personally would try it on the new, but as was detailed above, that's your decision, and you know what they say about following advice you got from strangers on the Internet...
EDIT: after writing the above but before posting, I saw the pictures. There is that circle in a circle I was talking about, and they match. In my opinion, this is less risky than using a no-name replacement power supply that claims to be compatible, but again, it is your decision to make. --Guy Macon (talk) 04:20, 27 January 2013 (UTC)[reply]
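For what it's worth, the arithmetic behind those label figures can be sketched in a few lines. The 90% conversion efficiency below is an assumed illustrative value; neither label states it:

```python
# Sketch of the arithmetic behind the adapter's label figures.
# The 90% efficiency is an assumption for illustration, not a value
# printed on either adapter.

V_OUT = 19.5        # rated DC output voltage, volts (from the label)
I_OUT = 4.62        # rated DC output current, amps (from the label)
EFFICIENCY = 0.90   # assumed AC-to-DC conversion efficiency

p_out = V_OUT * I_OUT        # output power, matching the "90W" rating
p_in = p_out / EFFICIENCY    # input power the adapter must draw

# Worst-case input current occurs at the lowest rated mains voltage.
i_in_max = p_in / 100.0      # at 100 VAC (Japan)

print(f"Output power: {p_out:.1f} W")
print(f"Worst-case input current: {i_in_max:.2f} A")
```

On these assumptions the adapter delivers roughly 90 W and draws about 1 A worst-case; the 1.5-1.6 A printed on the labels is presumably a conservative rating with extra margin, which is why the 0.1 A difference between them doesn't matter.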
Thanks! DRosenbach (Talk | Contribs) 04:26, 27 January 2013 (UTC)[reply]
Getting here late, but as somebody who started out by destroying tube type radios and just recently melted a 9 volt wall wart by pulling too much current, I'd say that based on the two pictures above the two power supplies are absolutely interchangeable. I wouldn't be surprised if a lot of the ICs inside were the same. The V85 means nothing, that's just some manufacturer's particular ID; the .1 Amp difference on the input ratings is unimportant, given that the minimum wall outlet has 15 amps to offer, and is just as likely to be a difference from one sample of the same item to another as between the two manufacturers. What's important regarding meltdown of the power supply is the output Amp rating, and that's the same; as the above poster implied, given that both manufacturers can be considered reliable, it's likely that both supplies meet their specs. Gzuckier (talk) 06:02, 27 January 2013 (UTC)[reply]
Agreed, I'd be willing to take the risk with my own laptop (after having made all the checks above). Most of the control circuitry is within the laptop, so it just needs an appropriate voltage and current. I have seen two pieces of expensive equipment ruined by plugging in the wrong power supply (even though the plugs were a perfect fit), so it is wise to check very carefully before "trying out". (And, as mentioned above, none of us can be sued if we are all wrong!) Dbfirs 09:18, 27 January 2013 (UTC)[reply]
Thanks to all who contributed. I'm posting this with my Dell working off of the new HP charger and so far, there's been no fire. DRosenbach (Talk | Contribs) 18:22, 27 January 2013 (UTC)[reply]

Wooden coffer

I have a small antique wooden coffer that I need to reinforce. The problem is that the bottom panel is simply nailed in, and I fear that it is not designed to support 70 kg of weight. It measures ~36 x ~27 x ~17 cm, and is constructed from ~13 mm thick wood. How should I reinforce the chest in a way that preserves its 1910 style? Perhaps iron straps? Plasmic Physics (talk) 06:58, 27 January 2013 (UTC)[reply]

Iron straps (with big, protruding rivets) would indeed be best -- two L-section straps, one at each end, should be able to hold the weight. 24.23.196.85 (talk) 07:29, 27 January 2013 (UTC)[reply]
As a note, the iron fixtures, such as the hinges are attached via flat head, slot screws. Plasmic Physics (talk) 07:44, 27 January 2013 (UTC)[reply]
In that case, matching screws would be better stylistically (not to mention more practical to install). 24.23.196.85 (talk) 07:57, 27 January 2013 (UTC)[reply]
Wouldn't 4 smaller straps, two a side, distribute the load more evenly? Or is there a reason for only using two? Plasmic Physics (talk) 08:16, 27 January 2013 (UTC)[reply]
Yes, four straps is also an option -- two is just the minimum. 24.23.196.85 (talk) 00:25, 28 January 2013 (UTC)[reply]
If it's antique - I'd want to do a non-destructive "fix" - or at least one that didn't affect the outside appearance of the thing. So how about a box-within-a-box? Construct a strong container (from metal, plastic, heavier-grade wood, whatever) that fits tightly inside the original box but which can better take the weight and connect to whatever is going to be used to lift it. The antique becomes more of a decorative 'skin' around the "real" box which does all of the work of containing and distributing the load. SteveBaker (talk) 15:42, 27 January 2013 (UTC)[reply]
Along that train of thought (that any alteration would reduce its value as an antique), another option would be to support the bottom externally. I'm assuming here that it has legs which slightly lift the bottom. In this case, you could place, underneath it, plywood of the proper thickness and cut to the same width and length as the bottom, so it would be supported. However, you'd need to be careful never to lift the coffer while loaded. If you can do that, you can both use it, and preserve its value, in this manner. StuRat (talk) 17:50, 27 January 2013 (UTC)[reply]
It's not particularly valuable as an antique; it has more of a sentimental value. It doesn't have legs, and the whole idea is to make it liftable when loaded. I need to stop the bottom panel from either cracking, or the nails pulling out. Plasmic Physics (talk) 00:09, 28 January 2013 (UTC)[reply]
There is a problem with using a box within a box: the internal box is still resting on the bottom, so you haven't really changed anything. Plasmic Physics (talk) 00:09, 28 January 2013 (UTC)[reply]
You mention 70 kg of weight. Is this 70 kg of weight that you are planning on putting into the box? Does it look like any of the "wooden coffers" when doing a Google image search for "wooden coffer"? Bus stop (talk) 00:23, 28 January 2013 (UTC)[reply]
Yes, I'm planning to put it in the box. Putting it on the box seems a bit absurd, as it would defeat its purpose. No, those coffers are too ornate. My dad told me that as far as he knows, it was made by his grandfather, although it looks suspiciously like a Boer ammunition box that's received a coat of wood stain. Plasmic Physics (talk) 01:08, 28 January 2013 (UTC)[reply]
Is it similar to one of these? Bus stop (talk) 01:48, 28 January 2013 (UTC)[reply]
Yes, it is. Plasmic Physics (talk) 02:19, 28 January 2013 (UTC)[reply]
I guess one consideration is replacing the bottom panel, perhaps with strong plywood, such as aircraft plywood. Perhaps screws will hold sufficiently well, depending on the side panels to which they would have to be attached. Retaining the original appearance would depend on choice of materials obviously. The metal straps idea sounds sound. Bus stop (talk) 03:55, 28 January 2013 (UTC)[reply]
Another thought. Metal sheathing could be secured all around the bottom. If the thin metal sheathing were cut to length and bent lengthwise at a right angle, it could probably then be tacked in place with a sequence of relatively small nails or tacks, relatively closely spaced. Four such sheaths might reinforce the four angles at which the sides of the box adjoin the bottom of the box. Locating the materials as well as the tools for working the materials may present an initial challenge but it may be doable and worth it. I would think the sheet metal used need not be particularly strong as its functionality would be continuous along all edges. Whether to replace the bottom or not would of course depend on the strength of the original bottom. Bus stop (talk) 04:21, 28 January 2013 (UTC)[reply]
Some interesting images related to this under Google image search wooden shipping crate reinforced. Bus stop (talk) 20:05, 28 January 2013 (UTC)[reply]
  • Since your concern seems to be transport, why not just make a good fitted tray with handles to sit it upon, one that can be lowered along with it into a bigger box if need be? That way you get what you need without alteration to the item. μηδείς (talk) 20:36, 28 January 2013 (UTC)[reply]

Thanks all for your ideas. Plasmic Physics (talk) 06:23, 30 January 2013 (UTC)[reply]

Stone Age Malthusianism and Cornucopianism

I've read some articles suggesting that resource peaks, climate change and unsolved substitution problems would often have been the issue of the day for preagricultural tribes. Has any work been done on the fitness and popularity of Malthusian and Cornucopian memes (population control, food rationing, exploratory migration etc.) in such an environment, and the difference instinct and cognitive biases would have made? NeonMerlin 14:19, 27 January 2013 (UTC)[reply]

About migration: It's apparent that migrations took place, particularly between remote Pacific Islands (like to Easter Island), which would have had a high risk associated with them (if a major storm hit them in transit, they would all be dead). The only reason I can see to take such a risk is if there was no other option. That is, those people would have died had they stayed put, quite possibly because the resources there were stretched to the limit. However, I don't know if those migrations took place before or after agriculture developed. StuRat (talk) 17:43, 27 January 2013 (UTC)[reply]
Well the Polynesian / Malay migrations happened after agriculture was developed, and they brought pigs, chickens, and other crops where they travelled. Graeme Bartlett (talk) 04:50, 28 January 2013 (UTC)[reply]

Proton Density of Various Materials

  Resolved

I'm wondering what the lower and upper bounds of average proton density for various materials and substances might be? I realize this is going to depend a lot on the type of molecules present (eg: density of fruit is much less than that of lead) - I'm just interested in ballpark figures anyway. 75.228.159.2 (talk) 15:13, 27 January 2013 (UTC)[reply]

Very roughly speaking, most atomic nuclei contain approximately equal numbers of protons and neutrons. Since protons and neutrons have (again, approximately) the same mass, about half the mass of most solids is down to the protons, so about half of the total mass density is the proton (mass) density. (The contribution by electrons is negligible.)
Two important caveats. First, heavier nuclei have proportionately more neutrons (to take an extreme example, uranium-238 nuclei contain 92 protons and 146 neutrons), so if you're looking at elements further down the periodic table you'll want to account for that. Second, the most abundant isotope of hydrogen (hydrogen-1) is a single proton and no neutrons, so pure hydrogen is essentially all protons by mass; extremely hydrogen-rich compounds like methane (1 atom carbon-12 and 4 hydrogen atoms gives 10 protons and just 6 neutrons) will also have a proton-enriched composition.
As an aside, the term 'proton density' is also used in magnetic resonance imaging to refer to the abundance of hydrogen-1 only, and not to protons that may be part of other heavier nuclei. TenOfAllTrades(talk) 16:25, 27 January 2013 (UTC)[reply]
Ballpark figures: The atomic mass unit is 1.66×10−27 kg, which is approximately the mass of one proton. With the crude assumption that protons represent half of the total mass, you get: lead, at about 11000 kg/m3, would have 11000/(2×1.66×10−27) or about 3×1030 protons per m3; water has about 3×1029 protons per cubic metre... Should be correct within 30% I think. Unless I made a mistake somewhere... When you're talking about MRIs, this answer is likely not helpful... Ssscienccce (talk) 16:52, 27 January 2013 (UTC)[reply]
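Those ballpark numbers can be reproduced in a few lines. The proton mass fraction for lead (about 0.40) follows from its 82 protons out of roughly 207 nucleons, per the heavy-nuclei caveat above:

```python
# Ballpark proton densities, assuming a fixed fraction of each material's
# mass is protons (about 0.5 for light elements, less for heavy nuclei).

M_PROTON = 1.66e-27  # kg, approximately one atomic mass unit

def proton_density(mass_density_kg_m3, proton_mass_fraction=0.5):
    """Protons per cubic metre for a given mass density."""
    return mass_density_kg_m3 * proton_mass_fraction / M_PROTON

water = proton_density(1000)   # ~3e29 protons/m^3
# Lead: Z=82, A~207, so protons are ~40% of its mass.
lead = proton_density(11300, proton_mass_fraction=0.40)

print(f"Water: {water:.1e} protons/m^3")
print(f"Lead:  {lead:.1e} protons/m^3")
```

Note that applying the neutron-excess correction for lead brings its figure slightly below the naive half-mass estimate, as the caveat about heavier nuclei would predict.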

Contributing to medical knowledge

Last year I had a run-in with a prescription medicine which resulted in my emergency admission to hospital. I am wondering if there is a medical database in existence that I can contribute my experience of this drug to? (not medical advice of course) --TammyMoet (talk) 16:54, 27 January 2013 (UTC)[reply]

I'd actually like to see a "Rate My Drug" site where people rate drugs based on side effects, effectiveness, etc. This would have the potential to let people know of problems with drugs far earlier than the normal regulatory process, which unfortunately is filled with conflicts of interest, at least in the US. I wonder, could drug makers sue such a site, even if they post disclaimers that "These are only the opinions of our members". StuRat (talk) 17:36, 27 January 2013 (UTC)[reply]
  • Here's a real answer: FDA's MedWatch is a great tool, just click the "Report a serious medical product problem" link under "Resources for you" on the left. -- Scray (talk) 17:42, 27 January 2013 (UTC)[reply]
Thanks Scray for that link, however it won't work with my version of Firefox (!) and I guess I should have mentioned, I'm in the UK. Is there a similar site for the UK? --TammyMoet (talk) 18:17, 27 January 2013 (UTC)[reply]
TammyMoet: sorry that wasn't quite what you needed. I don't see a similar tool for UK/EMEA consumers. -- Scray (talk) 22:59, 27 January 2013 (UTC)[reply]
There's the yellow card scheme Jebus989 11:48, 28 January 2013 (UTC)[reply]
I wouldn't be surprised if on the whole this is discouraged by the powers that be, as it might not be as useful as you would think at first. Patients on the whole lack the knowledge and jargon to accurately describe what is happening to them, leading to possible confusion and contradictory evidence. If this goes via health professionals, at least (one would hope) it gets reported in an accurate manner. Fgf10 (talk) 18:39, 27 January 2013 (UTC)[reply]
I'm sure the medical professionals involved reported the adverse event at the time. However, there were symptoms which have disappeared since I stopped taking the tablets which I didn't report at the time because I didn't associate them with the drug in question. It would complete the information around the incident if I could report this cessation of symptoms somewhere. --TammyMoet (talk) 19:33, 27 January 2013 (UTC)[reply]
Well, the "few experts" versus "many novices" paradigms come up often, including here at Wikipedia. The idea here is that many novices can provide more, and hopefully better, info, than a few experts. There are other examples, like stock markets being used to "rate" companies, compared with expert opinions. One problem with "experts" is, that since only a few control the data, they can be bribed or otherwise have a conflict of interest, while it's impossible to do so with millions of people. For example, avoiding damage to the company's profits may also be a concern for the experts, while the novices are only concerned with the health of the patients. StuRat (talk) 04:29, 28 January 2013 (UTC)[reply]
The whole issue of the international development and marketing of therapeutic drugs raises some serious concerns. The commercial drugs world is driven much more (perhaps exclusively) by commercial gain than by any other motive. Information about the effects of medicines is frequently withheld by manufacturers. It is a murky world. Bad Pharma by Dr. Ben Goldacre lays out clearly and authoritatively the shortcomings of the medicines market, a frightening but essential read if you are concerned about this topic. Richard Avery (talk) 07:44, 28 January 2013 (UTC)[reply]

Good grief. Some of you have some very unrealistic ideas of why drug reactions are not often "reported" to the FDA, or to the medical literature. By far the LEAST likely explanation is that a doctor whose patient has a reaction to a drug wants to conceal it to protect company profits (Stu: "well if there really was a conspiracy you would say that, wouldn't you?"). The real explanations: (1) Reporting is just more unreimbursable paperwork. Doctors in the US spend hours of each day on data collection and entry that patients and payors demand but do not expect to pay for. If it takes 15 minutes to do a drug reaction report on Medwatch, would you pay your doctor to do it? (2) Uncertainty. The patient may be certain that the daily fukitol pill she has been taking for 2 years caused her hair to turn green last week, but the doctor will not be sure that it is not simply coincidence unless the same effect has happened to multiple people taking it. (3) Diffused responsibility. These days, at least in the US, the doctor who prescribed the drug and the one who handled the side effect problem are often different. (4) Ignorance of process. Not sure how many doctors even know about Medwatch. Certainly not all. The percentage of US doctors who have had a patient suffer a side effect from a medication: 99%. Percentage of US doctors who have reported even one reaction to Medwatch: I doubt it's 5%. (5) Pointlessness. Most doctors are aware that nothing happens after a Medwatch report except paperwork. If the drug has been widely used for years, the side effect is already known. If the side effect is an unknown one, especially if trivial or self-limited, like a headache, it's unlikely due to the drug. Your doctor may not spend as many hours as you imagine pondering your problems when you are not in front of him, but I promise you he does not get paychecks from pharmaceutical companies to keep you ignorant or deny problems. alteripse (talk) 12:38, 28 January 2013 (UTC)[reply]

This could be a broader def of "conflict of interest". The doctor has more of an interest in getting home for dinner than improving the health of patients who aren't even paying him. I still say the cure for most of those issues is for the patients to self-report their own problems:
1) The patient is also not reimbursed, but the desire to "tell their story" may make them more willing to report it than the doctor. Also consider that for the doctor to report it, the patient has to have already reported it to them.
2) The uncertainty exists in either case. However, I'd prefer to see a count of all reported cases of hair turning green after taking a med, so I can decide for myself if I see a pattern there. A single doctor may only have one such patient, but, around the world, there might be thousands, making the trend far clearer.
3) Yes, lack of one doctor responsible for a person's health is a problem. However, I'd restate this as "the patient is ultimately responsible for their own health".
4) Ignorance of process will also be a problem with patients, but, with a much larger pool, hopefully enough will self-report incidents for patterns to emerge.
5) Here's the main benefit of self-reporting. If there are two meds for a given condition, and one is rated much higher by patients than the other, then this is valuable info a patient can take to the doctor, when asking for a prescription. StuRat (talk) 22:15, 28 January 2013 (UTC)[reply]

Sounds good but think a little. Common side effects are already known. Uncommon or previously unreported symptoms are statistically more likely to be unrelated to the drug under suspicion. The only way to solve that is to have a systematic collection of data to determine whether more events than would be expected by chance are happening. There already are internet discussion areas for patients to do exactly what you are suggesting. Most diseases now do have a active support and discussion groups where one could ask exactly the kinds of questions you propose: "has anyone else taking medication X had this symptom?", and "for those of you who have tried both X and Y, which worked better for you". Those are important functions of those types of groups and websites. I am not sure what you have in mind beyond that. As you perhaps have surmised my explanation of what is involved in reporting, what is useful and useless, etc, is derived from actual experience of reporting potential side effects of a specific medication and trying to ascertain what reports had already been made. alteripse (talk) 22:39, 28 January 2013 (UTC)[reply]

Well, a discussion group only works with small numbers of people, not millions. I'd have a list of possible side effects, and each person could enter their med and the side effect they experienced. For hair turning green, for example, there might be a section titled "changes to hair", then a subsection "hair on the head", then a sub-subsection "color changes", and then finally they could select "green" from the list of colors. The software would then note when a statistically significant number of people taking the same med report the same symptom, rather than relying on a discussion group stumbling upon it. If they wished, the patients could also leave their e-mail address, so anyone investigating this side effect could contact them and ask follow-up questions. And note that there would be no Latin ! StuRat (talk) 07:41, 29 January 2013 (UTC)[reply]
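The tally-and-flag scheme StuRat describes could be sketched as follows. This is a toy illustration, not any real pharmacovigilance system: the medication names, symptom paths, report data, and assumed background rate are all hypothetical, and it flags symptoms with a simple exact binomial test against that assumed rate.

```python
from collections import Counter
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): chance of seeing k or more
    events among n people if the true background rate is p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical self-reports: (medication, hierarchical symptom path),
# mirroring the section/subsection/sub-subsection drill-down described above.
reports = [
    ("med_X", "hair/head/color_change/green"),
    ("med_X", "hair/head/color_change/green"),
    ("med_X", "nausea"),
    ("med_Y", "hair/head/color_change/green"),
]

counts = Counter(reports)                       # per (med, symptom) tallies
n_per_med = Counter(med for med, _ in reports)  # reports per medication

# Assumed rate of the symptom among untreated people; a real system would
# need the true number of patients taking each med, not just report counts.
background_rate = 1e-4

for (med, symptom), k in counts.items():
    n = n_per_med[med]
    p_value = binom_sf(k, n, background_rate)
    if p_value < 0.05:
        print(f"flag: {symptom} reported {k}/{n} times for {med} (p={p_value:.2g})")
```

With a tiny assumed background rate, even one or two reports get flagged, which illustrates why the denominator (how many people actually take the med) matters so much.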
Just to inform you all, Jebus's Yellow Card link was the one I was looking for. It seems to be doing what StuRat is talking about above. Thanks to Jebus for that. --TammyMoet (talk) 11:46, 29 January 2013 (UTC)[reply]
There is no statistical software that would be able to analyze the reports for a voluntary database with no control group. And NO LATIN? How ever would we communicate precisely?! alteripse (talk) 12:20, 30 January 2013 (UTC)[reply]
The control group is the general population. If 1% of the people who take a med have hair that spontaneously turns green, compare this with the portion of the general population whose hair spontaneously turns green. StuRat (talk) 05:07, 1 February 2013 (UTC)[reply]
The matched control group is not the general population, but people of similar age, sex, ethnicity, culture, etc. who have the disease but have not been treated with the agent in question. A perfect example of the uselessness of uncontrolled data for infrequent risk has been going on for the last few years regarding the possible effect of growth hormone treatment in childhood on cancer risk in adult life. This data is scarce but quite important to parents and doctors making treatment decisions. The French tracked down some middle-aged adults who had been enrolled in a registry of treated children back in the 70s-80s, asked about 10-20 cancers, and compared the reported rates to the national cancer rates for adults of the same age range and sex. They then discovered that for a couple of the less common types of cancer there was a higher rate among the former GH patients than current data suggests would be expected in the general population. There are many problems with the data. The first and most obvious is data dredging across multiple variables: if you test 20 associations, each at a 5% significance threshold, the likelihood that you find an association that looks real in isolation but is really chance becomes high. The second problem is that even if the association is real, if the control population is the general population, maybe the condition is a result of the disease itself, rather than the treatment. If we didn't have plenty of evidence that retinopathy is a complication of diabetes, your proposed system would be likely to suggest it is a complication of taking insulin. This is the fatal flaw in unsolicited side effect reports. 
If a couple of patients in an online discussion forum discover they have the same uncommon complication, the next step is that one of their doctors attempts to survey a larger number of patients and if even a couple more cases are turned up they are reported as a possible association. It can live like that for many years as "lore" among doctors who treat the disease and patients who have it, and may or may not be true. I can give you examples of both. Or some doctor can decide to try to systematically study it epidemiologically to settle the question and quantify the risk. Bottom line: I can see no difference between the kind of voluntary "database" you propose and posing the question on a patient website and asking a doctor who treats many cases if he has seen one--- all simply raise the question and have about the same chance that someone will say "hm i know of another case. maybe its real. lets try to find out..." alteripse (talk) 11:47, 1 February 2013 (UTC)[reply]
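The data-dredging point above is easy to make concrete with standard probability (nothing specific to the GH registry): if you run many tests, each with a 5% chance of a fluke, the chance of at least one fluke overall grows quickly.

```python
# Probability of at least one false positive among m independent tests,
# each run at significance level alpha, when no association is real.
def family_wise_error(m, alpha=0.05):
    return 1 - (1 - alpha) ** m

print(family_wise_error(20))  # ~0.64: 20 tests at 5% give a ~64% chance of a fluke
```

This is why a "significant" finding pulled from a survey of 10-20 cancer types cannot be taken at face value without correction for multiple comparisons.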

If they give an IQ test to someone who's had a coma, do they subtract the coma length from the chronological age?

What if it's only 0.1 years? (I would still do it) They never asked me if I'd had a coma when they did it in school, though. Come to think of it, are things like brain damage from hits and hypothermia accounted for in the bell curves? Sagittarian Milky Way (talk) 18:31, 27 January 2013 (UTC)[reply]

I don't think anyone can tell exactly how much of an IQ score is a result of "mental age", experience and knowledge, and how much of it is a result of the physical age of the brain. Subtracting the duration of the coma would raise the score for a child but lower the score for an adult, so an adult would have to perform better on the test if he had been in a coma, which seems unlikely. Ssscienccce (talk) 23:47, 27 January 2013 (UTC)[reply]
Adult IQ test results are compared to adults of all ages, so chronological age doesn't really come into play. With children, adjusting the chronological age (say, pretending that a ten-year-old was only five after waking up from a five-year coma) would render IQ testing completely ineffective at measuring the effects of this long hypothetical coma. I'm not sure I completely understand the rest of your question though. EricEnfermero Howdy! 00:09, 28 January 2013 (UTC)[reply]
Our article Intelligence quotient discusses IQ and age. The gist of it is that once upon a time there was a test for children that computed IQ = (mental age / chronological age) * 100. That computation is no longer in use. Also, all adults are usually lumped together; a 25-year-old who answered all the questions the same way as a 50-year-old would be assigned the same IQ score. Jc3s5h (talk) 00:04, 28 January 2013 (UTC)[reply]
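The old ratio formula quoted above is just arithmetic, which makes the coma question concrete: shrinking the chronological age inflates the quotient.

```python
def ratio_iq(mental_age, chronological_age):
    # Historical "ratio IQ": mental age over chronological age, scaled by 100.
    return 100 * mental_age / chronological_age

print(ratio_iq(12, 10))  # 120.0: a 10-year-old performing like a typical 12-year-old
print(ratio_iq(12, 9))   # ~133.3: the same child if one year of coma were subtracted
```

Under this formula, subtracting coma time always raises the score, which is one reason the ratio definition was abandoned in favor of comparing scores to a same-age norm group.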
Are you sure? He showed me dividing the mental age by the actual, and it wasn't *that* many years ago. And yes, age isn't involved for adults. Sagittarian Milky Way (talk) 03:34, 28 January 2013 (UTC)[reply]
Those bell curves of IQs. Are they raw data, or do they adjust them for anything? Like is 100 the average of everyone, or the average of everyone if no one had their intelligence artificially altered by things such as brain damage from accidents, thalidomide, lead paint, crack babies and other drug abuse (maternal and personal), their time in a coma, and possibly other things? Shouldn't 50 be commoner than 150 in real world data? For some things you'd want the raw data (like calculating how many mentally retarded need care), for others you'd want the cleaned up data. Sagittarian Milky Way (talk) 03:34, 28 January 2013 (UTC)[reply]
The bell curves are the raw data -- if the researchers want to compare IQs of selected subgroups of people, they essentially have to select scores from just those subgroups and compile the data for those groups from scratch. 24.23.196.85 (talk) 07:28, 28 January 2013 (UTC)[reply]
Are there any studies that correlate hours of sleep per day to IQ?165.212.189.187 (talk) 20:12, 28 January 2013 (UTC)[reply]
Sleep deprivation lowers IQ μηδείς (talk) 23:00, 28 January 2013 (UTC)[reply]
How about too much sleep? and is there an optimal amount of sleep per day for the best potential IQ?165.212.189.187 (talk) 14:50, 29 January 2013 (UTC)[reply]
That's nothing, horniness can lower IQ more than that. Sagittarian Milky Way (talk) 00:41, 29 January 2013 (UTC)[reply]

Seasonal changes in temperature

I cannot reconcile the explanations I have found for why we have such large seasonal temperature differences.

If the seasons are mainly dictated by the tilt of the Earth, then why should such a small difference in distance (towards or away from the sun) of, at most, a few thousand miles cause this when the Earth is 93 million miles from the sun? I understand that the Earth's orbit is elliptical, but when the Earth is furthest from the sun (July) the northern hemisphere has its summer. Surely this would be like having a roaring fire in a big room 20 meters away and expecting to feel a difference if you move 1 mm nearer? 86.138.72.18 (talk) 18:49, 27 January 2013 (UTC)[reply]

From what I understand, it's not the distance between the Earth and the sun but the distance through the atmosphere that the heat radiating from the sun must pass through to reach the surface of the Earth. Thus, the portion of the hemisphere tilting towards the sun receives its solar radiation through one thickness of the atmosphere, while the portion tilting away from the sun receives its heat filtered along a slanted path through the atmosphere, thus 1.2, 1.5, etc. times the thickness of the atmosphere, and that dissipates enough solar radiation to create what we perceive as the drastic temperature changes associated with the seasons. That's what's meant by more direct sunlight as explained in the seasons article. DRosenbach (Talk | Contribs) 19:01, 27 January 2013 (UTC)[reply]
Also this. Shine a flashlight at the center of a ball. Shine it at the edge (same distance) and move your head over that area. It's dimmer. Sagittarian Milky Way (talk) 19:05, 27 January 2013 (UTC)[reply]
To rephrase that: if you hold up a quarter to the Sun, it will get nearly the same amount of light anywhere - North Pole, equator, noon, sunset - subject only to the atmospheric absorption mentioned and some minor differences in distance. But if, in order to be face on to the Sun, that quarter is lying on its edge on an ice floe at the North Pole on the equinox, it casts a shadow all the way toward a distant horizon. All that ice shares the light now blocked by that one lousy little quarter. If it's lying flat on the ground at the Equator, it gets the same light on that one little spot. Wnt (talk) 19:18, 27 January 2013 (UTC)[reply]
Never thought of it that way -- nice! DRosenbach (Talk | Contribs) 21:24, 27 January 2013 (UTC)[reply]
A quarter? HiLo48 (talk) 20:37, 27 January 2013 (UTC)[reply]
A small cupronickel object about an inch :) Or are you seriously asking what it is? Sagittarian Milky Way (talk) 20:54, 27 January 2013 (UTC)[reply]
HiLo48 comes from an enlightened land where Vegemite is considered to be a delicacy and the coins are 5c, 10c, 20c, 50c, $1 and $2. The closest in size to the 24.26 mm (Inches? We don't need no stinking inches!!) US 25c coin ("quarter") is the 25.00 mm $1 coin. And we really shouldn't use US coins as examples; many Wikipedia editors live elsewhere. --Guy Macon (talk) 21:21, 27 January 2013 (UTC)[reply]
A quarter-dollar is a 25 cent piece. Do you call a 20 cent piece a "fifth" ? This could lead to disappointment when your kid tells you he just found a fifth in the driveway. :-) StuRat (talk) 04:46, 28 January 2013 (UTC) [reply]
Hell no, 20c would be a quinter, not a fifth. If we had a base-12 system, we'd have the 1/6 coin, too, the "sexter." - ¡Ouch! (hurt me / more pain) 11:33, 30 January 2013 (UTC)[reply]
I don't get it - why would it be disappointing if your kid finds 20c rather than 25c? Such a small difference is hardly worth a second thought. (Second Thoughts on Special! Only 5c each! While stocks last!) Roger (talk) 08:18, 29 January 2013 (UTC)[reply]
Probably the term is not understood by many South Africans and Australians? Because Americans in all their weirdness think the most important attribute of a standard liquor bottle is that it's a fifth of a unit big enough to kill 6.4 men (gallon, 3.785 l), it's called a fifth despite being the whole bottle. (that is, 3.2 of our cups out of 16, 0.8 quarts, 25.6 fluid ounces, 1.6 pints, or about 0.75 liters out of 3.75) Maybe they're never actually 757.082356 mL anymore though, so travellers don't have to pay import tax on the entire thing just because the law is metric. I'd be relieved if my kid's fifth turned out to be an Australian 20 cent piece, though. Sagittarian Milky Way (talk) 18:48, 29 January 2013 (UTC) [reply]
Some of the above is right, some is wrong. The core reason why the Northern hemisphere gets less heat from the sun in winter is because it intersects less sunlight. The sun always illuminates about half the Earth (there is some minimal wiggle room because the sun is larger than the Earth, and also if you take into account atmospheric refraction). In northern summer, the pole is pointing (a bit) towards the sun, and more than half of the northern hemisphere is in the sun. So it gets warmer than average. To make up the difference, less than half of the southern hemisphere is in the sun, so that part gets colder. --Stephan Schulz (talk) 19:38, 27 January 2013 (UTC)[reply]
Or, to put it another way, unless you are near the equator, during the summer the sun gets higher in the sky (warmer for the same reason that the noonday sun heats the ground more than the sun right before it sets does) and the days are longer and the nights shorter. --Guy Macon (talk) 20:38, 27 January 2013 (UTC)[reply]
A simple experiment I recall from High School science class: in a darkened room shine a flashlight at a given distance directly overhead (90°) on a piece of graph paper. Count or calculate the number of squares that the light covers. Do the same (same distance) but this time have the flashlight at a specific angle (eg: 45°) - The number of squares covered by the light is larger. Think of the flashlight as a "beam" of sunlight. The same amount of light (energy) is spread over a larger area, and thus the energy per "square unit" is less. For me, this helps explain why a change of angle on the Earth's surface (due to axis tilt) creates a seasonal difference in temperature. ~:74.60.29.141 (talk) 23:02, 27 January 2013 (UTC)[reply]
P.s.: this experiment explains how latitude affects climate; but also helps explain seasonal variation - which also includes shorter daylight hours, etc. 74.60.29.141 (talk) 23:14, 27 January 2013 (UTC)[reply]
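The flashlight-on-graph-paper experiment is the cosine law of illumination: the same beam spreads over an area that grows as the light tilts away from vertical, so the energy per unit of ground falls with the sun's elevation angle. A quick idealized sketch (ignoring the atmosphere entirely):

```python
import math

def relative_flux(solar_elevation_deg):
    """Sunlight per unit of horizontal ground, relative to the sun overhead.
    Equals sin(elevation), i.e. cos of the angle from the vertical."""
    return max(0.0, math.sin(math.radians(solar_elevation_deg)))

print(relative_flux(90))            # 1.0: sun directly overhead
print(round(relative_flux(45), 3))  # 0.707: the 45-degree flashlight, ~71% of overhead
print(round(relative_flux(20), 3))  # 0.342: a low winter sun
```

So a low winter sun delivers only about a third of the overhead value per square meter, before atmospheric absorption and shorter days make it worse still.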
You may be interested in the information given here. If you can download series it will tell you all you wanted to know but were afraid to ask about the Earth's orbit and its effect on the seasons. Richerman (talk) 23:55, 27 January 2013 (UTC)[reply]
... and, just to show that distance from the sun makes only a small difference (about 7% in received sunlight between perihelion and aphelion), the northern hemisphere is currently nearer to the sun in winter and further away in summer. (Perihelion was on January 3rd.) Dbfirs 12:26, 28 January 2013 (UTC)[reply]
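The roughly 7% figure follows from the inverse-square law and the Earth's perihelion and aphelion distances (about 147.1 and 152.1 million km):

```python
perihelion_km = 147.1e6  # closest approach, early January
aphelion_km = 152.1e6    # farthest point, early July

# Inverse-square law: received flux scales as 1/distance^2,
# so the perihelion/aphelion flux ratio is (aphelion/perihelion)^2.
flux_ratio = (aphelion_km / perihelion_km) ** 2
print(round((flux_ratio - 1) * 100, 1))  # 6.9: percent more sunlight at perihelion
```

Note the distance itself varies by only about 3.4%; squaring roughly doubles that in terms of flux.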