Wikipedia:Reference desk/Archives/Science/2015 March 11

Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


March 11

basic atomic particles

Of the basic atomic particles, is the one that would be attracted to a negatively charged metallic plate the positively charged proton? 00:53, 11 March 2015 (UTC) — Preceding unsigned comment added by 149.78.30.191 (talk)

Can you explain that a little further? The proton would be attracted, but that's too obvious. So what prompted you to ask this question? Is there something else on your mind that you, as yet, cannot put into words?--Aspro (talk) 01:08, 11 March 2015 (UTC)[reply]
In the beginning it seemed difficult to me; now it's clear, thanks. 149.78.30.191 (talk) 01:14, 11 March 2015 (UTC)[reply]
Note that, in this context, a proton would be called a hydrogen ion. StuRat (talk) 16:00, 11 March 2015 (UTC)[reply]

What is the reason that the neutron is found in the nucleus?

What is the reason that the neutron is found in the nucleus if it's neutral? (all the attraction is between the proton and electron) 149.78.30.191 (talk) 01:10, 11 March 2015 (UTC)[reply]

Electromagnetism is not the only force. Atomic nuclei are held together by the creatively named strong force or nuclear force, which has short range, but is much stronger than the electromagnetic force. --Stephan Schulz (talk) 01:18, 11 March 2015 (UTC)[reply]

Early information feels more trustworthy. Why?

Let's say on Monday someone tells me that cats are attracted to blue light because mice look blue to them. The very next day, someone else says that they are indeed attracted to certain colors, and indeed because of mice, but that the color is red, not blue. My Monday expert has it all wrong, he says. Both people tell the same story, both have exactly the same credibility, etc. The Tuesday expert might even have more credibility because research has advanced a day.

I've tried to construct an example that has nothing to do with politics or religion, avoiding high horses etc. And even in this case, without a doubt I'd ask if he's really sure about "red", because I'm quite sure I learned it was "blue". I might even start looking for arguments to back up the "blue hypothesis", and would certainly not start looking for arguments to back up the red one. If the Monday expert shows up later that evening, I'd still be on his side and ask for more arguments, while the Tuesday expert is frantically Googling for more proof.

So: is "believing earlier sources more that later ones" a real phenomenon (I think it is but can't find it), and if it is, why? Changing views slowly leads to a more stable life? Humans are bad at changing views because the same berries were poisonous during our entire lifetime 50,000 years ago, and nobody at the time told us you could actually eat them when cooked so we never learned that someone with an opposing view might actually help us? Joepnl (talk) 01:27, 11 March 2015 (UTC)[reply]

Sounds like cognitive dissonance. The first thing someone hears and believes in is easier to cling to.--Aspro (talk) 01:46, 11 March 2015 (UTC)[reply]
Possibly related to availability heuristic? When you learn about it the second time, you have already formed an opinion, which is recalled and therefore reinforced? Vespine (talk) 02:48, 11 March 2015 (UTC)[reply]
See also confirmation bias. This is where people tend to accept evidence that supports their current position/understanding more readily, as opposed to evidence that contradicts it. LongHairedFop (talk) 16:34, 11 March 2015 (UTC)[reply]
Possibly anchoring. Definitely not cognitive dissonance. -- BenRG (talk) 02:52, 11 March 2015 (UTC)[reply]
  • Maybe related to the primacy effect? Not exactly the same thing, but certainly in the same family. --Jayron32 03:09, 11 March 2015 (UTC)[reply]
That is exactly what I meant, thanks! I don't think that the Serial position effect (like the recency effect you and the article link to) really has that much to do with it. From pure introspection, it's only logical that you remember the first and last items in a list, but recalling items (or ideas) is different from almost stubbornly believing them. I find it strange that the article is almost a stub, based on just two studies. For instance, one effect could be the obvious correlation between a person's religion and that of his parents. For whole societies, Conventional wisdom seems to be researched more, but I wish more research was done to explain why it exists on a personal level as well. Joepnl (talk) 18:41, 11 March 2015 (UTC)[reply]
Note that the more time passes, the more distorted the version that gets to you might be, through multiple retellings. So, that's a reason to go with the earliest version. StuRat (talk) 15:56, 11 March 2015 (UTC)[reply]
The early psychologist William James discussed "perceptual old-fogeyism." It is the "tyranny of the established." Edison (talk) 20:24, 12 March 2015 (UTC)[reply]
It seems at least a few people mentioned it, but experiments appear to have only been done in 1925 and 1950. I've seen quite a lot of experiments where people did not act rationally when confronted with a math problem ("you can open another box or choose to get ten dollars" kind of experiments), their morality ("you can write down your own score, btw your test will be destroyed immediately after making it"), etc. If rationality is something that needs studying, this phenomenon is certainly worth looking into. It's obviously not rational to give more credibility to person A than to person B because, due to a late flight, you happened to speak to A first, yet I'm quite sure a researcher would indeed find that A would have a statistically significant advantage. If so, and I'm not a psychologist, at first glance it looks as if a thorough understanding would be helpful in many areas. For instance, possibly lobbyists shouldn't aim at working long and hard to make a perfect pitch to a politician, but instead make sure any pitch gets to the politician first (and politicians should take the effect into account as well). At least it would make a fun TED talk where people are shown making a case for A who would have no reason at all to value A over B, and their peers doing the exact opposite. Joepnl (talk) 22:42, 12 March 2015 (UTC)[reply]
If indeed such a trend did exist as an empirically verifiable tendency. The fact that this area has not been pursued much in the last century, despite being in very robustly explored veins of social and cognitive psychology, suggests that there just may not be as much here as you might impressionistically expect. To the extent it does exist, it almost certainly does so in niche areas of cognition, as in the case of the primacy effect. Note that's a very specific cognitive task, and a very different animal from what you are talking about, which is the subject of trust and the valuation of information. That is a highly modular and contextual area in which many different forms of prejudicing influences (derived from both various innate faculties and personal subjective experience) compete to establish the reliability with which a particular piece of information is treated. I suspect you are, to some extent, on a wild goose chase on this one, but in any event, I know of no contemporary research in this particular area. Snow I take all complaints in the form of epic rap battles 12:45, 15 March 2015 (UTC)[reply]

Impedance Related Query

Why is complex notation used for impedance and phasors, and why is it of such great importance in AC circuit analysis? There is no specific answer to this question on the web, save mathematical convenience in solving differential equations. What is the explanation from a physical point of view, and what are the physical significance and ramifications? 111.93.163.126 (talk) 15:01, 11 March 2015 (UTC)[reply]

That mathematical convenience is useful: it allows you to exploit certain symmetries that greatly simplify the problems. It's not just the elementary notation using exponential functions like exp(i omega t); you can also calculate the response using Laplace transforms and then use methods of complex analysis to relate the dominant asymptotic behavior to the pole closest to the imaginary axis in the complex s-plane. Count Iblis (talk) 17:58, 11 March 2015 (UTC)[reply]
See Electrical impedance for our article, which isn't particularly heavy on the maths. To summarize - using complex notation makes it easy to describe the phase and amplitude of a signal independently, and the real component of a signal is a measure of the (physical) work it can do. Tevildo (talk) 22:15, 11 March 2015 (UTC)[reply]
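As a rough illustration of that convenience (a minimal sketch with made-up component values, not anything from the discussion above): once the impedance of a series RLC circuit is written as a single complex number, the steady-state response reduces to one complex division, and the modulus and argument of the result are the amplitude and phase shift that would otherwise come out of solving the circuit's differential equation.

<syntaxhighlight lang="python">
import cmath
import math

# Hypothetical component values, chosen only for illustration.
R = 100.0    # resistance in ohms
L = 0.1      # inductance in henries
C = 10e-6    # capacitance in farads
f = 50.0     # source frequency in hertz
omega = 2 * math.pi * f

# Series RLC impedance: Z = R + j*omega*L + 1/(j*omega*C)
Z = R + 1j * omega * L + 1 / (1j * omega * C)

V = 230.0    # source voltage amplitude, taken as the zero-phase reference
I = V / Z    # one complex division replaces solving the differential equation

print(f"|Z| = {abs(Z):.1f} ohm, phase = {math.degrees(cmath.phase(Z)):.1f} deg")
print(f"|I| = {abs(I):.2f} A, shifted {math.degrees(cmath.phase(I)):.1f} deg relative to the voltage")
print(f"power factor cos(phi) = {math.cos(cmath.phase(Z)):.3f}")
</syntaxhighlight>

The last line hints at the physical side of the same number: only the real part of the impedance corresponds to average power actually dissipated, which connects the notation to Tevildo's point about work.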

What are the physical significance and ramifications of this abstract notation of electrical impedance, and what is its real-life manifestation? How did the scientist who invented it justify this artefact logically? 111.93.163.126 (talk) 17:56, 13 March 2015 (UTC)[reply]

The first practical use of complex notation in electrical engineering was the Telegrapher's equations, derived by Oliver Heaviside (he of the layer). You may also find Power factor a relevant article. Tevildo (talk) 23:46, 13 March 2015 (UTC)[reply]

Are humans included in "animal research"?

Yes, I am aware that humans can participate in research as test subjects, as long as they fill out an informed consent form. I am just wondering if this process is actually included under "animal research", because humans are animals that can be experimented upon. — Preceding unsigned comment added by 140.254.226.189 (talk) 15:18, 11 March 2015 (UTC)[reply]

In this context, I'm pretty sure that "animal research" is understood to mean "non-human animal research". That's something that's hard to prove - but there would be little point in doing a non-human animal study if humans were available as ethically reasonable test subjects. SteveBaker (talk) 15:27, 11 March 2015 (UTC)[reply]
The first sentence of Animal testing supports Steve. ―Mandruss  15:31, 11 March 2015 (UTC)[reply]
which doesn't mean that information obtained from human research cannot be extrapolated to animals, in the same way we extrapolate from rats to humans. Senteni (talk) 16:47, 11 March 2015 (UTC)[reply]
Yes, that's a worthwhile point. Snow I take all complaints in the form of epic rap battles 22:16, 15 March 2015 (UTC)[reply]
Steve is right. "Animal research" means specifically research that uses non-human animals as subjects. Looie496 (talk) 17:08, 11 March 2015 (UTC)[reply]

Reliable information from reliable sources

People assume that information is reliable when it comes from a reliable source. And we assume that a reliable source is reliable because it provides reliable information. That seems to be the case for librarians, Wikipedia, and people in general. How can you escape the circularity without performing all experiments from scratch? That would be difficult in some cases. Do you just have to have faith and go with the herd? 16:44, 11 March 2015 (UTC) — Preceding unsigned comment added by Senteni (talkcontribs)

You develop trust in sources when the information you get from them accords with the information you get from other sources, most importantly your own direct observations. Looie496 (talk) 17:06, 11 March 2015 (UTC)[reply]
I would add "other independent sources", to exclude cases where unreliable sources copy each other extensively. Many sources copy Wikipedia, for example, but that doesn't make the Wikipedia articles they copied any more reliable. StuRat (talk) 06:29, 12 March 2015 (UTC)[reply]
In science we only consider something to be a fact if it has been shown in experiments that have been repeated many times by independent research groups. The results will have been rigorously reviewed. The interpretation of the meaning of the results may depend on theories that have their own experimental foundations and that will have undergone similarly rigorous tests. You can then always trace back some fact you read about in the literature, via the given references, all the way to the original experiments or observations, but this will in practice involve a huge number of different experiments. Count Iblis (talk) 17:37, 11 March 2015 (UTC)[reply]
One approach I've seen people advocate is to look at a source's reliability on topics of which you do have detailed knowledge, and on which you can assess accuracy. The reliability of that information is then taken as a proxy for the reliability of the source on other topics on which you don't have good information. Effectively, you're doing a (non-random) subsample over the set of information, and using the sample reliability as an estimator for the population reliability. Another way to do it is to use consistency. If most of the sources are saying the dress is blue, and there are only a few which are saying it's white, then it's more probable that the dress is blue than that it's white. That can also be applied to consistency over time: sources which started out saying it's blue are likely (but not necessarily) more reliable than those which started out saying white and then changed to blue. If you want to get fancy, you can employ Bayesian statistics to quantify exactly how much more - Bayesian statistics works on probabilities as statements of belief, e.g. belief about how reliable a source is. Bayesian statistics also brings in "prior probabilities", where you can bias sources based on past performance. Generally speaking, you can start off with minimal information about the reliability of sources (an "unbiased prior"), and then use a large number of small pieces of information (like consistency of reporting) to establish a (non-uniform) reliability metric for sources. -- 160.129.138.186 (talk) 18:10, 11 March 2015 (UTC)[reply]
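To make that last paragraph concrete, here is a minimal sketch (my own illustration with invented numbers, not something the poster supplied) of the Beta-Bernoulli update being described: treat a source's unknown reliability as a probability, start from an unbiased uniform prior, and update it every time one of the source's claims can be checked against independent evidence.

<syntaxhighlight lang="python">
def update_reliability(alpha, beta, claim_checked_out):
    """Return updated Beta(alpha, beta) parameters after checking one claim."""
    if claim_checked_out:
        return alpha + 1, beta        # one more claim that turned out to be correct
    return alpha, beta + 1            # one more claim that turned out to be wrong

def expected_reliability(alpha, beta):
    """Posterior mean of the source's probability of being right."""
    return alpha / (alpha + beta)

# Hypothetical example: a source whose checkable claims came out right 8 times and wrong twice.
alpha, beta = 1.0, 1.0                # unbiased prior: reliability uniform on [0, 1]
for ok in [True] * 8 + [False] * 2:
    alpha, beta = update_reliability(alpha, beta, ok)

print(f"Posterior: Beta({alpha:.0f}, {beta:.0f}), "
      f"expected reliability = {expected_reliability(alpha, beta):.2f}")
</syntaxhighlight>

The posterior mean drifts toward the observed hit rate as evidence accumulates, which is exactly the "bias sources based on past performance" step described above.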
You can't escape such circularity. It's called solipsism, which just means that ultimately all knowledge rests on trust and faith, and not evidence. At some point you have to just accept some things, and then make extensions of your knowledge from those unproven axioms. Evidence comes into play in extending knowledge from the unproven stuff. But until you can prove to yourself that you can trust even your senses (which you cannot, see brain in a vat), you just have to work out for yourself what you're willing to accept as "evidence" and "reliable". --Jayron32 20:41, 11 March 2015 (UTC)[reply]
I have found that I could NOT consistently determine a "reliable source" from a "poor source" until I became interested in critical thinking, the scientific method in general, and scientific skepticism specifically! The problem is that life and the internet are FULL of experts: experts on UFO abductions, experts on free energy, experts on alternative medicine, experts on religion, etc. And lots of these experts are NOT just "loons" or charlatans; a lot of them are extremely smart and well educated people, sometimes even very charismatic (Dr Oz springs to mind). To most people who do not know how to tell the difference between a reliable source and an unreliable one, these people can be extremely convincing. Vespine (talk) 22:19, 11 March 2015 (UTC)[reply]
It also helps to have a healthy understanding of scientific consensus: where it does and where it does not exist; how informed people behave when there are legitimate disagreements about facts and theories; and how to disentangle "wrong" information from "uncertain" information. Informed people who disagree about facts and theories tend to present their case very differently from uninformed people who disagree about the very same details. Nimur (talk) 22:47, 11 March 2015 (UTC)[reply]
With all due respect, just about everything Jayron has said is the opposite of reality. What he has described is Idealism (philosophy) of a mystical and skeptical type, along the lines of Kant or Plato, which assumes you cannot know for sure even what simple words refer to in the real world, but that you can be absolutely sure of doubt based on the certain knowledge (where did it come from?) of what a "brain in a vat" is. Children do not begin with faith and axioms; they learn simple concrete things, first nouns and verbs for what is available on the perceptual level. Then they learn adjectives and adverbs modifying those concepts. Read the works of Piaget.
Even before he can speak, a child learns by exploration what a ball is. Only after he learns to name such objects does he go on to differentiate their properties, such as a "red ball". (Color words are always learned after words for simple nouns.) Then he learns how to describe the colors: a "bright" red ball. Maybe around five he'll learn, through the sum of all his prior experience and without any faith, to make the certain assertion that "This is not a very bright red ball."
It is only when he learns that things may not be as we assume from appearances (the moon doesn't follow you, it is far away; the stick in the water isn't bent, that's just the way it appears through the surface), and that adults can deceive and be ignorant, that he ever learns what doubt and fallibility are. But he learns that there are reasons for these phenomena, quirks of physics and human psychology. This is still long before he ever acquires an "axiom", but long after he has a vocabulary of thousands of words he understands and uses correctly and with certainty in reality.
Later the child goes to school and learns to read and do simple math. Eventually, if he doesn't live under a cult, he can read on his own, and find out what people besides his elders tell him. He learns more difficult math, and basic science, all of this building on all the evidence he has acquired and induction he has done over his lifetime. He is introduced to things like the fossil record, which he can verify exist on field trips. He learns in chemistry class with the use of his own hands and senses that certain substances he knows of are the basic elements, and that they can react. He learns that a mouse will suffocate in a jar and a plant die under a cardboard box. Unless he's been abused and beaten about the head with nonsense, at no stage has he ever had to accept something on faith, or to derive anything from an axiom. Everything he has learned so far has been based on perception, and the ability to test the truth of things he has been told by experiment.
Finally he learns certain things are not yet known fully (particle physics) or cannot be determined due to distance in time or space. Only at the high-school level has he become prepared by experience to deal with mathematical and logical axioms and proofs. He learns principles like cui bono and Ockham's razor and that "travel broadens the mind." He is taught the comparative method, the scientific method, statistics, probability, how to use a card catalog, and what sort of people to mistrust when they offer him a free lunch (but not until he buys a time share).
None of this is solipsism, the untenable belief that other people don't have minds like your own. The solipsist doesn't try to convince other minds when he doesn't think they exist to be convinced. The person who doubts his senses doesn't know what he's typing on his keyboard. Ultimately the responsibility for judgment rests on everyone, including you, to think for yourself, as an individual. That's not called solipsism or faith; it's called reasoning. Other people can give you the "answers", but it's up to you to check the math. The only people who demand faith or speak of it are those who demand obedience or want you to accept a lie: a cult leader, an abusive parent, a con-man, or a would-be dictator. Empiricism, Foundationalism, and Coherentism are all true. Knowledge has structure, the truth coheres, nihil in intellectu nisi prius in sensu (nothing is in the intellect that was not first in the senses). μηδείς (talk) 03:24, 12 March 2015 (UTC)[reply]
Solipsism is not untenable, merely unpopular. Looie496 (talk) 14:53, 12 March 2015 (UTC)[reply]
Lol! μηδείς (talk) 16:52, 12 March 2015 (UTC)[reply]

For the purposes of Wikipedia, we are not expected to determine what is true, only what is worth considering. Sources that employ rigorous processes, and whose results tend to agree with other sources that use rigorous processes, are more worthy of consideration. It is the quality of the process, rather than the result, that determines reliability. Rhoark (talk) 19:45, 12 March 2015 (UTC)[reply]

Are diseases/disorders/dysfunctions merely evolutionarily maladaptive phenotypic variations?

Take phenylketonuria, for instance. A phenylketonuric can't break down phenylalanine. In a culture where there are lots of sugary drinks that may have aspartame added, the individual will have an evolutionary disadvantage, unless he adapts to a more restrictive diet and preserves his own life, keeping himself in the gene pool. Will phenylketonuria still be considered a "disorder" if the culture does not consume much of the amino acid in the first place, and so the individual's phenotype will never express itself as a problem? 140.254.226.182 (talk) 23:50, 11 March 2015 (UTC)[reply]

Some are, sure. Lactose intolerance doesn't much matter in places where they don't drink milk, for instance. But many more are not; take the BRCA1 and BRCA2 gene variants that cause breast cancer, for example. StuRat (talk) 23:58, 11 March 2015 (UTC)[reply]
Hmmm, phenylketonuria looks like a complicated example. Reading over [1] briefly I see mention of alleles in Polynesians which predate their split from Caucasians, and other alleles dating back to the 5th millennium BC. Such alleles clearly are not rapidly removed by evolution, and have to be considered as adaptive, if only for the heterozygous advantage of ochratoxin resistance. On the other hand, more severe alleles may be more rapidly removed - though if there was mention of the rate of sporadic mutation of the locus in OMIM, I missed it. I don't think that the most severe alleles can be counted as adaptive but I can't say that with authority. As a rule, the "younger" the mutant alleles for a given condition, the less that can be said in their defense. Wnt (talk) 00:28, 12 March 2015 (UTC)[reply]
As Stu alludes to above, you can't lump all diseases/disorders/dysfunctions into a neat little group to fit your definition. In fact, some can be viewed as the opposite, i.e. "evolutionarily adaptive (or, at least, advantageous) phenotypic variations". See Heterozygote advantage. For example, the same genes that cause the blood disorder Thalassemia also confer resistance to malaria, which then results in that trait being found most commonly in areas where malaria is prevalent. The evolutionary benefits, or "selective survival advantage", of being resistant to malaria outweigh the negative consequences of having (or, more precisely, being a carrier of) Thalassemia.--William Thweatt TalkContribs 00:36, 12 March 2015 (UTC)[reply]
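To put rough numbers on that trade-off, here is a small sketch of the textbook balancing-selection arithmetic (the fitness values are invented for illustration, not real thalassemia data): when heterozygotes are fitter than either homozygote, selection holds the otherwise harmful allele at a stable intermediate frequency instead of removing it from the gene pool.

<syntaxhighlight lang="python">
def next_allele_frequency(q, w_AA, w_Aa, w_aa):
    """One generation of selection on allele 'a', given genotype fitnesses."""
    p = 1 - q
    mean_w = p * p * w_AA + 2 * p * q * w_Aa + q * q * w_aa
    # Frequency of 'a' among the gametes of the survivors.
    return (p * q * w_Aa + q * q * w_aa) / mean_w

# Hypothetical fitnesses: homozygous carriers are strongly disadvantaged,
# but heterozygous carriers do slightly better than non-carriers (e.g. malaria resistance).
w_AA, w_Aa, w_aa = 0.9, 1.0, 0.3

q = 0.01                              # start the allele out rare
for _ in range(200):
    q = next_allele_frequency(q, w_AA, w_Aa, w_aa)

# Analytic equilibrium for heterozygote advantage: q* = s / (s + t),
# with s = 1 - w_AA and t = 1 - w_aa the selection coefficients against the homozygotes.
s, t = 1 - w_AA, 1 - w_aa
print(f"simulated equilibrium q = {q:.3f}, predicted q* = {s / (s + t):.3f}")
</syntaxhighlight>

With these particular made-up numbers the allele settles at a frequency of about 0.125, matching the standard s/(s + t) equilibrium, which is the quantitative version of the "benefits outweigh the consequences" argument above.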