Wikipedia:Reference desk/Archives/Science/2010 January 12

Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


January 12

Thioglycolic acid depilatories in "sensitive" areas

According to the article, thioglycolic acid, which is used as a chemical depilatory, breaks the disulfide bonds in keratin and so breaks down hair. Obviously this isn't the best solution, as there is keratin in skin too, which can become irritated by the acid. Most such creams caution against use in the genital areas, although I have seen at least one advertised for such a purpose, but I'm just curious as to why there would be a need for such a warning. They don't tell you not to put it in your eyes (though it does say that you should wash out with a lot of water if taken internally), so what are they protecting against? 219.102.221.49 (talk) 05:42, 12 January 2010 (UTC)[reply]

Product warnings protect a manufacturer from liability for what a user might be expected to do. If no reasonable person would want to put a substance in their eyes, eat it or feed it to their children then no such warnings are needed. Cuddlyable3 (talk) 13:41, 12 January 2010 (UTC)[reply]

cyclisation of tryptophan (C-4 self-acylation)

Is this reaction possible? Strong acidic conditions + PCl5 (or another acyl chloride generator), then self-acylate. I have a feeling C2-acylation will be the major product, but is there any way to get an intramolecular reaction at C4? (Is there any way to "protect" an aromatic substitution site?) John Riemann Soong (talk) 09:06, 12 January 2010 (UTC)[reply]

speaker wires

Okay so I'm setting up a sound system ...

I'm wondering if speaker wires are worth the expense. What I'm thinking is going for speaker wire at the ends (near the amp and near the speakers) and running regular 12 gauge "primary" wire in between, and splicing the wires accordingly (and maybe sealing the splices with butt connectors or something). I'm not really sure about the idea that using regular wire will result in diminished signal quality -- as long as I use a wire with a large enough cross-sectional area, right? I'm thinking of mainly using speaker wire for flexibility, at the terminals where I need to wrap the wire into the speaker and into the amp. John Riemann Soong (talk) 09:39, 12 January 2010 (UTC)[reply]

There are lots of people with very high-cost hi-fi systems who swear that extremely expensive speaker wire is worth every penny to make the best sound. Personally I'm sceptical. I simply use large gauge wire. Using thin gauge wire where you expect high volume would probably be a mistake, since high volumes can require tens of amps, which would lead to significant voltage drop on thin gauge wire. However, in the final analysis there's only one way to tell. Try it and listen to the result. If you can't tell a difference, then it doesn't matter. --Phil Holmes (talk) 09:59, 12 January 2010 (UTC)[reply]

I use telephone wire, but the solid core is not flexible, so it would be a bad idea for me to move my speakers around. So stranded wire would be better. There is plenty of volume from my system. Amplification is cheap and speakers expensive. Put the money you save into better speakers; these are the weak point in the system. Graeme Bartlett (talk) 10:55, 12 January 2010 (UTC)[reply]
Ordinary stranded copper wire is good enough but make sure it has tight, preferably permanent, connections at each end. The issue that arises if wires are too long or thin is not loss of volume; it is loss of damping resulting in increased mechanical resonances in the speakers. Total resistance in wires and connections should be less than 1/10 of the nominal speaker impedance. Cuddlyable3 (talk) 13:33, 12 January 2010 (UTC)[reply]
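To put rough numbers on the rule of thumb above, here is a quick sketch (the 12 AWG gauge, 10 m run length and 8-ohm nominal impedance are illustrative assumptions, not figures from the thread):

```python
# Rough check of the rule of thumb that total cable resistance should stay
# under 1/10 of the nominal speaker impedance.
# Assumed values: 12 AWG copper, 10 m one-way run, 8-ohm speaker.

RESISTIVITY_CU = 1.68e-8  # ohm*m, copper at 20 C

def wire_resistance(length_m, area_mm2):
    """Resistance of a copper conductor of the given length and cross-section."""
    return RESISTIVITY_CU * length_m / (area_mm2 * 1e-6)

run_m = 10.0          # amp to speaker, one way
area_12awg = 3.31     # mm^2, cross-section of 12 AWG
speaker_ohms = 8.0

loop_r = wire_resistance(2 * run_m, area_12awg)  # signal and return conductor
print(f"loop resistance: {loop_r * 1000:.0f} milliohms")
print(f"limit (Z/10):    {speaker_ohms / 10 * 1000:.0f} milliohms")
print(f"within rule of thumb: {loop_r < speaker_ohms / 10}")
```

On these assumed numbers the 20 m loop comes to about 0.1 ohm, comfortably inside the 0.8-ohm budget, which is why ordinary heavy-gauge wire is usually fine.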
What is the damping when cables of zero impedance are used? Infinity? No! --79.76.182.38 (talk) 00:34, 14 January 2010 (UTC)[reply]
Did you read High-end audio cables? --BozMo talk 13:38, 12 January 2010 (UTC)[reply]
And, because this is the internet, have you seen these Amazon reviews of an expensive speaker cable? It makes roughly the same point as our article (linked by BozMo), but more amusingly. 86.178.229.168 (talk) 16:39, 14 January 2010 (UTC)[reply]

are carboxylates still electron-withdrawing groups?

Can you straight-out hydrate acrylic acid with sodium hydroxide, and get 3-hydroxypropionic acid? I mean, a deprotonated COOH group is probably not as electron-withdrawing as the protonated one, but I'm wondering if deprotonated acrylic acid is still a good Michael acceptor. John Riemann Soong (talk) 10:17, 12 January 2010 (UTC)[reply]

Also, what about the acid-catalysed hydration of Michael acceptors? First-semester orgo says that C2-hydration is favoured, but a secondary carbocation next to a carbonyl carbon may be worse than a primary carbocation, yes? (Plus, there aren't even any hydrogen atoms on that carbonyl carbon to do any hyperconjugation...) Plus it seems that the enol mode would favour H attaching at the alpha carbon, not the beta carbon. John Riemann Soong (talk) 10:21, 12 January 2010 (UTC)[reply]

Ummm, anyone? Can a deprotonated COO- group still withdraw electrons? Or no? John Riemann Soong (talk) 03:50, 15 January 2010 (UTC)[reply]

Design for a wood shelf bracket

I want to make some simple bookshelves somewhat like this design http://www.made-in-china.com/image/2f0j00AMvQtimCrTlyM/Shelf-Support-Bracket.jpg but in wood, not metal. The shelf and the vertical part are no problem to make in wood. The wooden bracket will be a deeper right-angled triangle than that shown. The fundamental problem is to stop the bracket from rotating forward at its corner away from the vertical part, due to the leverage of the weight on the bookshelf. What would be the best and simplest way to firmly secure the wooden bracket to the vertical part? The top part of the bracket touching the vertical part will be under tension, not compression, which is more difficult to fix securely. I have a lot of books, so the weight may be considerable. Thanks 89.242.107.166 (talk) 12:20, 12 January 2010 (UTC)[reply]

Although it's not obvious from the picture, the metal brackets in this design are secured to the uprights by downward-pointing L-shaped flanges - there are 4 flanges on each bracket, and you slide a bracket into two pairs of slots in the hollow upright, then push it down so that the bottom edge of each slot fits into the elbow of the L-shaped flange. I guess you could produce a similar design in wood, but you would have to make the flanges and slots thicker to prevent the flanges snapping off under tension - maybe you could make the flanges as wide as the bracket. Also, you will have to make the flanges a very precise fit to the slots, otherwise you will have a shelf that slopes from back to front. But if you are happy to have a permanent joint, rather than one that can be taken apart easily, then I imagine there are specific woodworking joints that are designed to be strong under this type of tension/rotation load. Gandalf61 (talk) 13:03, 12 January 2010 (UTC)[reply]
Screw. Cuddlyable3 (talk) 13:25, 12 January 2010 (UTC)[reply]
Where would you position the screw(s) please? 89.242.107.166 (talk) 17:42, 12 January 2010 (UTC)[reply]
                |   |
                |   |
                |   |
 _______________|   |
|_______________|   |
    |__         |   |
       \....    |   |
        \  ├+######>|
         \''    |   |
          \     |   |
           \    |   |
            \   |   |
             |__|   |
                |   |
Cuddlyable3 (talk) 22:18, 12 January 2010 (UTC)[reply]
I've usually seen wooden shelves supported with a diagonal truss support. Wood is a totally different material than metal - this is a no-brainer. But it has implications - wood's tensile strength and compressive strength mean that the appropriate joint shape is not a direct scaling from a joint that works in steel. As Gandalf has pointed out, there's a unique tension/rotation force going on in a bookshelf - and books can be heavy. A shelf acts as a lever arm against the joint, and so you may inadvertently be applying hundreds of pound-feet of torque on the joint. This can concentrate to thousands of psi on the load-bearing contact point. Steel may handle this gracefully, but you might want to reconsider and add a support truss to share the load for wood shelves. Here's building a bookcase from Bob Vila. The video demonstrates their preferred wood joints. The load is spread over an entire joint to avoid force concentration anywhere. Nimur (talk) 14:00, 12 January 2010 (UTC)[reply]
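A back-of-envelope statics sketch of the lever-arm effect described above (the load, shelf depth and bracket height are all assumed for illustration, not taken from the thread):

```python
# Cantilevered bracket statics: the shelf load produces a moment about the
# bracket's bottom corner, which the top fixing must resist in tension.
# Assumed numbers: 25 kg of books per bracket, shelf depth 30 cm (load
# centroid at 15 cm from the wall), fixings 20 cm apart vertically.

g = 9.81  # m/s^2

load_kg = 25.0
centroid_m = 0.15        # horizontal distance from wall to load centre
bracket_height_m = 0.20  # vertical distance between top and bottom fixings

moment = load_kg * g * centroid_m               # N*m, torque about the bottom corner
top_fixing_tension = moment / bracket_height_m  # N, pull-out force on the top fixing

print(f"moment about bottom corner: {moment:.1f} N*m")
print(f"tension on top fixing: {top_fixing_tension:.0f} N (~{top_fixing_tension / g:.0f} kgf)")
```

Even this modest load puts roughly 18 kg of pull-out force on the top fixing, which is why the replies favour through-bolts or a wedged dovetail over a plain screw into end grain.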

The problem is that cantilevering a shelf is going to require a tension joint somewhere. With wood, because of its grain, it can quite easily shear. It is difficult to think up cantilevered designs in wood that avoid these problems. 89.242.107.166 (talk) 15:00, 12 January 2010 (UTC)[reply]

If you want to be a purist and use a proper woodworking joint, then the wedged through-dovetail (picture) is probably the best one under tension. But you asked for the 'simplest' way, so this won't do. I would use a nut and bolt with suitable washers to prevent pulling-through. You could countersink the screw head and then hide it with filler. --Heron (talk) 19:53, 12 January 2010 (UTC)[reply]

Tea

What makes all these different types of tea taste different? I know you can scent them with an external flavour like mint, jasmine, ad nauseam; I know that the seven-odd 'colours' of tea must impart something like freshness or toasty flavours. But what makes two black, two green, two yellow teas so different from each other? Is it the location, surrounding vegetation, air and soil quality? Lady BlahDeBlah (talk) 13:18, 12 January 2010 (UTC)[reply]

In addition to Tea blending and additives, we also have Tea processing. The main tea article states: "A tea's type is determined by the processing which it undergoes. Leaves of Camellia sinensis soon begin to wilt and oxidize, if not dried quickly after picking. The leaves turn progressively darker as their chlorophyll breaks down and tannins are released. This process, enzymatic oxidation, is called fermentation in the tea industry, although it is not a true fermentation..." So, I would suggest that this is the first order effect on the tea's flavor. Regional climate and soil differences probably do have an effect, but probably to a lesser degree than the processing steps. I think this question has come up on Science Desk before and we found some academic research on tea flavor. I'll look in the archive. Nimur (talk) 13:52, 12 January 2010 (UTC)[reply]
Ooh, wonderful, I think I missed that first one... *opens new tab*...I do understand that the 'colour' (wilting + oxidising + fermentation) is the initial indicator of the expected flavour of the result, but I would like to know more on the method. Lady BlahDeBlah (talk) 13:57, 12 January 2010 (UTC)[reply]
"The seven-odd 'colours' of tea" - never heard of that before, what are they? Is that a North American marketing gimmick perhaps? In the UK we classify our teas according to location, such as Darjeeling, Ceylon, Assam, China, and so on. And brands also. In addition to the various different production methods for black or green teas as mentioned above, I expect different sub-species are suited to the different climates and soils of the different locations. 89.242.107.166 (talk) 14:54, 12 January 2010 (UTC)[reply]
I'm English, my dear. I dunno, 'tis something I made up based on what I've read in the Tea article. There's black, green, white, yellow, oolong, post-fermented and what was the other one? I forget. They all depend on what happens to the leaf after it's picked. But you've illustrated my question nicely: Darjeeling, Assam, Ceylon, all are black teas from the same species of plant treated in...approximately the same way after picking, right? So why do they taste different? Lady BlahDeBlah (talk) 15:01, 12 January 2010 (UTC)[reply]
They are probably different varieties of tea plant, treated differently, grown in different soils, and probably picked at different times of the year as well. I don't know why that's not a good enough explanation—the same sorts of things account for the major differences you find between different wines and coffees, for example. --Mr.98 (talk) 16:22, 12 January 2010 (UTC)[reply]
The Stash Tea site gives a simple explanation of the various types of teas and how they are processed. They divide teas into four main categories on the basis of the degree and type of processing: black, green, oolong, and white. Take a look here [1]. Incidentally I order all my tea through Stash and have been very pleased with their tea. I like organic loose tea, and they have several different types. (I am not in any way affiliated with Stash!)--Eriastrum (talk) 20:07, 12 January 2010 (UTC)[reply]
Part of the difference is terroir, "a French term in wine, coffee and tea used to denote the special characteristics that geography bestowed upon particular varieties". Don't forget that in the UK we categorise our tea as coming from Yorkshire and maybe even Cornwall too. BrainyBabe (talk) 03:57, 14 January 2010 (UTC)[reply]

Old weather data?

Hi, does anyone know where I can find weather data for, say, May 24 2008 in some town in Mongolia? Question is related to Winter_storms_of_2007–2008#May_26-27. Yaan (talk) 13:44, 12 January 2010 (UTC)[reply]

Try here [2]. Accdude92 (talk to me!) (sign) 14:20, 12 January 2010 (UTC)[reply]
thanks a lot. Yaan (talk) 14:41, 12 January 2010 (UTC)[reply]

Synthetic Feather

Has anyone created a synthetic feather, such as for use in costumes, without buying the real thing? I'm referring to those giant feathers such as the one pictured here. --Reticuli88 (talk) 13:49, 12 January 2010 (UTC)[reply]

Well here's a patent: [3] and here's a manufacturer [4] --TammyMoet (talk) 18:13, 12 January 2010 (UTC)[reply]

are Zinc pills caustic?

I bought some from both GNC and the vitamin shop. When I put them in my mouth they burned badly, like I got lye in my mouth or something. It seems it's water-activated, like quicklime, since they don't burn too much if you touch them with your hand. These are zinc-only pills; I got some with B6 in them that didn't burn, but I didn't like those. What's the pH of these things? —Preceding unsigned comment added by 67.246.254.35 (talk) 14:05, 12 January 2010 (UTC)[reply]

I can't find the actual template, but this sounds a lot like you're asking for medical advice. There are rules against us answering medical advice questions anyway, but I might hesitantly point you towards Zinc toxicity. Gunrun (talk) 14:48, 12 January 2010 (UTC)[reply]

I don't see this as a request for medical advice at all. The OP just wants to find out the pH of some zinc supplements. Regrettably, I could not find the information, but we do have an article on Dietary mineral which also lists some facts about zinc. 10draftsdeep (talk) 16:21, 12 January 2010 (UTC)[reply]
I'm a chemist, but I can't be 100% certain of my answer. Two parts: 1) Goodness knows what's in those pills; the FDA doesn't regulate supplements, so for all you know they could in fact be lye (sodium hydroxide). 2) If the pill does in fact contain zinc, I suppose a particularly foolish manufacturer might simply make a highly concentrated (possibly even toxic at high enough doses...) zinc tablet without regard for the fact that zinc is in fact a Lewis acid and would cause such action. In a dry area like your hand you might not notice that, but if this were the case, water would certainly have an effect as you describe. I would strongly suggest returning the pills to the place you purchased them regardless -- supplements, even shady ones, shouldn't hurt. 128.104.69.93 (talk) 19:25, 12 January 2010 (UTC)[reply]
Zinc chloride can be very irritating, as it can form a strong solution; zinc sulfate will still have an astringent metallic taste, and the amino acid salts will be fairly benign. The idea is usually to swallow the pill, not dissolve it in the mouth. Graeme Bartlett (talk) 20:58, 12 January 2010 (UTC)[reply]
If you have a bad reaction, don't take the pills. That said, lots of "dietary supplements" (which is the modern translation of the term snake oil) are entirely unregulated for either efficacy or quality, so major caveat emptor when dealing with such substances. Generally, if you are eating lots of "whole foods" from a wide variety of sources (meat, fish, dairy, fruits, vegetables, grains) it's entirely pointless to supplement your diet with anything. To the OP's original question, many homeopathic (read: quackery) zinc treatments like the now-pulled-from-the-market Zicam had high concentrations of sodium hydroxide which, it is suspected, caused adverse reactions. Not knowing what zinc preparation you were using makes it impossible to know what else was in the pill; it could be literally anything at all, and since this market is unregulated, even if you knew the brand name of the pill, you may never know what you are really taking. If you are truly concerned, stop taking the supplements, and possibly see a qualified medical professional if you have serious questions about your health. --Jayron32 22:03, 12 January 2010 (UTC)[reply]


What exactly is a Lewis acid? I read the article but it didn't help much. How is zinc a Lewis acid?

I'd ask why the heck you think you need a zinc supplement anyway? If you're a meat-eater, you'll be getting plenty of zinc in your diet. If you are a vegan and eating a particularly poor vegan diet - then maybe you need zinc - but you can get it in a huge range of foods (cooked dried beans, sea vegetables, fortified cereals, soyfoods, nuts, peas, and seeds) - you don't need pills. These pills sound like you should be avoiding them like the plague! If you believe you have a zinc deficiency - then you should definitely see a doctor, because you need to attack the cause of this strange and unusual problem and not just cover it up by taking pills. SteveBaker (talk) 05:15, 13 January 2010 (UTC)[reply]
Zn2+ has a positive charge, and is a good electrophile. It likes to bind to neutral water molecules. But when neutral water molecules do so, they acquire a positive charge, because they are donating electron density (solvating) to the zinc. So it encourages protons to come off the water to form zinc hydroxide plus a proton (which is solvated by other water molecules). In reality, the proton is not very free, and there is an equilibrium involved, but Zn2+ still has corrosive properties.
Not all salts are neutral. NaCl is a neutral salt, but Na+ doesn't bind water as strongly -- sure, water solvates it, but it doesn't form very strong covalent bonds. MgCl2 forms covalent-ish bonds of a highly ionic nature with oxygen, so MgCl2 is slightly acidic. Zinc(II) chloride is a considerably more acidic salt because zinc is a transition metal and a Zn-O covalent bond is more stable. John Riemann Soong (talk) 05:16, 13 January 2010 (UTC)[reply]
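A rough illustration of why a hydrated Zn2+ salt reads as mildly acidic: the pKa of about 9 used below is a textbook ballpark for the zinc aqua ion, and the 0.1 M concentration is chosen arbitrarily for the sketch.

```python
import math

# The hydrated Zn2+ ion acts as a weak Bronsted acid:
#   Zn(H2O)6^2+  <->  Zn(H2O)5(OH)+  +  H+
# Assumed value: pKa ~ 9 for the zinc aqua ion (ballpark textbook figure).

pKa = 9.0
Ka = 10 ** -pKa
conc = 0.1  # mol/L of a zinc salt, chosen for illustration

# Weak-acid approximation: [H+] = sqrt(Ka * C), valid while [HA] >> [H+].
h_plus = math.sqrt(Ka * conc)
pH = -math.log10(h_plus)
print(f"estimated pH of a {conc} M Zn2+ solution: {pH:.1f}")
```

On these assumed numbers the solution comes out around pH 5: noticeably acidic, but nowhere near lye, consistent with the suggestion above that any strong burning points to some other ingredient.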

How come you don't get burned by touching a zinc penny, then?

Bacteria abundance

I am looking for references in the academic literature for:

  1. The abundance of bacteria on human skin? (Both typical and max, if possible)
  2. The abundance of bacteria on typical surfaces encountered in everyday life?

Any help appreciated, thanks. Dragons flight (talk) 14:13, 12 January 2010 (UTC)[reply]

This article doesn't provide the exact information you ask for, but might be a good starting point in your search. - Nunh-huh 00:47, 14 January 2010 (UTC)[reply]
The fomite article includes this link [5], which mentions surface counts for different organisms. As for people, well, I remember my microbiology professor back at university saying how we live in a soup of bacteria, and we have somewhere around 10^14 to 10^30 skin commensals under ordinary conditions. I don't know where he got those numbers from, but it doesn't stop me from telling people this from time to time! Mattopaedia Have a yarn 00:20, 16 January 2010 (UTC)[reply]

first zero-net energy development of single family homes on the planet?

To whom it may concern-

I am the architect for Green Acres, a zero-net energy development of single family homes in New Paltz, NY, about 90 miles north of NYC. Construction began in summer 2008 and at this time 3 houses have been completed, purchased and occupied, and 4 more are under construction. I am trying to find out if Green Acres is the first zero-net energy development of single family homes on the planet. I know of an existing development of zero-net energy townhouses in Germany, but not much else that is already built in this or a similar category. Please contact me if you know of another development with proper claim to this title. You can find more information on Green Acres on my website. Thank you for your time.

-Dave Toder, RA
BOLDER Architecture
(email removed per guidelines) —Preceding unsigned comment added by 76.15.31.83 (talk) 15:31, 12 January 2010 (UTC)[reply]
What about the Earthship community in Taos? 75.41.110.200 (talk) 15:35, 12 January 2010 (UTC)[reply]
Note: The question appears to be about Zero-energy buildings. Would a primitive community of tents or caves with no heat qualify? As for modern buildings, some are listed in the article. How many are required to make a "development?" A 10 unit development in Washington state, zhome, is listed in the article and claims to be the first such development, with completion expected by the end of 2009. Edison (talk) 15:43, 12 January 2010 (UTC)[reply]
(ec) This post runs dangerously close to WP:SPAM. It's against our policies to use Wikipedia for commercial advertising under the guise of legitimate activity. I hope I'm not misinterpreting your post, but the way it is worded sounds like an advertisement. As far as "zero-net-energy" goes, I think that is a dubious claim. How do you define energy input to the house? Technically, even if nothing is happening and there is no human activity in the housing development, the laws of thermodynamics dictate that energy will be transferred either into or out of the development, in the form of radiant heat, unless the housing and the rest of the universe are in complete universe-wide equilibrium. It would be more scientific to say that you import no fossil fuels or utility electric power - if that describes the development - but to import no energy would require constraints on (for example) the diet of the human occupants, and the total quantity of metabolic activity they generate, and preclude them from listening to radio or telecommunications of any kind (which by definition are conveyed by waves of energy). And, to have a zero-net energy balance, all you need to do is extract energy on site - e.g. an oil rig. But this doesn't really create energy so much as harness it. I think in general it is safe to say that "zero net energy" is a marketing-ese buzz-word with little scientific merit. Nimur (talk) 15:46, 12 January 2010 (UTC)[reply]
Well, assuming you mean zero electrical input and no gas, no oil, etc - then I think we understand what you mean...although technically, Nimur and Edison are correct in that the house does have energy inputs (the sun's rays, for example) - but I'm sure everyone actually understands what this is about - so let's try to be helpful and talk about that.
Anyway - you are FAR from the first to claim this. A very quick Google search turns up this for example...a builder in Dallas who is building "zero energy" homes. So I'm very sure you cannot lay claim to being the first of your kind in the US - let alone in the entire world. You're probably going to try to claim that this is the first zero energy home community - but you've only built three and three is not a "community" by any reasonable definition!
Also, there is a difference between "near-zero" and "actually, for real, definitely, zero". If your homes are not connected up to the electrical grid - then I might perhaps believe your claim for the latter...but if they are still 'on the grid' then your claim is only valid if the people who live in the house use it carefully. If they install a bunch of high-energy-consumption gadgets and leave them turned on inappropriately - then I'm 100% sure they'll use more energy than your solar panels (or whatever) generate. If so, what you're building is merely energy-efficient homes, and those are EVERYWHERE to some degree or another. For that reason alone, I very much doubt you can truthfully claim "ZERO" energy inputs from the grid - so the issue of whether you might be the first is moot.
Anyway - being the first is far from everything. Being the best on the other hand - that would be impressive. SteveBaker (talk) 19:57, 12 January 2010 (UTC)[reply]
A claim given on the web[6] is that "Using photovoltaic solar panels and geothermal heating and cooling, combined with super insulation (insulated concrete form walls, triple-pane glass) and heat recovery ventilation, these buildings consume less energy than they produce.." New Paltz is surrounded by...the cultural mecca of Woodstock... I can believe the bit about Woodstock Festival. Cuddlyable3 (talk) 21:45, 12 January 2010 (UTC)[reply]
Yeah - but the problem with that claim is that they actually have no idea how much energy the buildings consume. If you have a large family with kids who habitually leave lights on - leave the doors open in the height of summer or the depths of winter - who leave the freezer door open or leave 4 TV's and 4 video games turned on all day and all night - lots of hot baths - many loads of laundry per week because of the baby - and a dishwasher which has to be run at least once a day...then the house is gonna consume a heck of a lot more than one that's occupied by a single person who works all day, eats out most evenings and has simple needs. You really can't claim literally zero energy consumption...not without a lot of explanations and caveats as to how the house will be lived in. SteveBaker (talk) 01:31, 13 January 2010 (UTC)[reply]
To be fair to the source, the claim wording continues "..than they produce (when occupied by an average family),.. Cuddlyable3 (talk) 02:36, 13 January 2010 (UTC)[reply]

The first home in Green Acres, New Paltz, NY, has now been lived in for a full year as of 26 Mar 2010. Central Hudson Gas and Electric provided an $86 check for the extra 1,477 kWh that the home produced over the course of the year (Source: me, David Shepler, the home owner). The only other energy source was a propane tank for the gas range. The tank, which holds up to 23.6 gallons of propane, was never filled over the course of the year (27 Mar 2009 - 26 Mar 2010). Even if the tank were now empty (which it's not), this would only equate to 643 kWh using the kWh electricity-to-gal-propane conversion factor (27.27 kWh/gal prop) used by the Northeast Sustainable Energy Association ([7]). So, the home has easily achieved "net-zero-energy", which is now a widely used term to reflect the net energy consumption of a home over a full year (the all-important "net"). (For the National Renewable Energy Lab's definition of this, see NREL's discussion of the topic, which reveals the complexities of the definition but clarifies the misperceptions revealed in this Wiki thread. [8]) It must be pointed out that this definition specifically excludes embodied energy and the food energy of the occupants (calories of a carrot?) as referenced in this increasingly absurd thread; rather, it simply examines the amount of energy produced on site (via solar in this case) versus the amount consumed from elsewhere (electricity provided by the utility and propane gas in this case). As for understanding household behavior, the home was built with "average family" consumption behaviors in mind. Sure, some family could blow through any level of on-site energy production through wasteful practices, but as the evidence shows, this family of four (my family) produced excess energy and easily achieved "net-zero-energy". I can testify that we have all the normal appliances, and I refuse to live like a cave dweller.
I dry all my clothes in a clothes dryer, I use a dishwasher, toaster, large fridge, microwave, oven and I love my considerable entertainment center. OK, so I do turn lights off pretty religiously, but shouldn't we all? I have checked all the references to other net-zero-energy developments/communities above but none have panned out. I see links to concepts such as that in Washington, but nothing actually built. And in that case it is not detached single-family homes but rather townhouses. I found individual ZE homes in Dallas but no community. Yes, Green Acres only has three occupied homes at the moment, but already an additional four are in various states of construction. I'm not sure how you define a "community" but is seven homes getting closer?
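For what it's worth, the arithmetic in the post above can be checked against its own cited figures:

```python
# Checking the net-zero arithmetic using the figures quoted in the post above.

surplus_kwh = 1477            # exported to the grid over the year
tank_gal = 23.6               # propane tank capacity, gallons
kwh_per_gal_propane = 27.27   # NESEA conversion factor quoted above

# Worst case: assume the whole tank was burned during the year.
worst_case_propane_kwh = tank_gal * kwh_per_gal_propane
print(f"max possible propane use: {worst_case_propane_kwh:.1f} kWh")
print(f"net balance: {surplus_kwh - worst_case_propane_kwh:+.0f} kWh")
```

Even charging the home for a full tank of propane, the year still ends over 800 kWh in surplus on these figures, so the stated conclusion follows from the numbers given.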

Nobody has mentioned Embodied energy -- how much energy was used in the creation of the component parts of the houses? "The UK Code for Sustainable Homes and USA LEED Leadership in Energy and Environmental Design are standards in which the embodied energy of a product or material is rated, along with other factors, to assess a building's environmental impact. Embodied energy is a new concept for which scientists have not yet agreed absolute universal values because there are many variables to take into account." How far were materials transported? (A lot farther than for That Roundhouse, I'll bet.) Have the houses been designed for the full life cycle -- will they be easy to demolish and recycle safely, or will all that embodied energy go to landfill? Also, where are the facilities that the residents need? Are offices, shops, childcare, schools, eldercare, libraries, doctors' surgeries, parks, swimming pools, and allotments within walking or cycling distance, or does each chore involve a car journey? BrainyBabe (talk) 04:09, 14 January 2010 (UTC)[reply]

physics

In a double-slit interference experiment, what actions cause the fringe spacing to increase? —Preceding unsigned comment added by 206.209.102.240 (talk) 15:56, 12 January 2010 (UTC)[reply]

Welcome to Wikipedia. Your question appears to be a homework question. I apologize if this is a misevaluation, but it is our policy here to not do people's homework for them, but to merely aid them in doing it themselves. Letting someone else do your homework does not help you learn how to solve such problems. Please attempt to solve the problem yourself first. If you need help with a specific part of your homework, feel free to tell us where you are stuck and ask for help. If you need help grasping the concept of a problem, by all means let us know. You may wish to start at Double-slit experiment. -- Coneslayer (talk) 16:20, 12 January 2010 (UTC)[reply]

radio waves and their structure

If a radio wave is passing through a vacuum, why doesn't it lose its energy, and how can it gain more energy? Also, is there any way for radio waves to be held in one place? —Preceding unsigned comment added by 82.38.102.58 (talk) 16:43, 12 January 2010 (UTC)[reply]

It doesn't lose energy in a vacuum because it doesn't -- that's a basic fact, not a derived fact. One way that it could gain energy is by interaction with gravity. And according to the Theory of Relativity there is no way for a radio wave to be held in one place. As a note of historical interest, Einstein said that his first motivation for developing Relativity was that he tried to imagine what an electromagnetic wave would look like if it were held in one place, and decided that such a thing ought not to be possible. Looie496 (talk) 16:54, 12 January 2010 (UTC)[reply]
Where do you get the idea that a radio wave can "gain more energy?" It simply does not happen (without an amplifier of some sort). The opposite in fact occurs. The RF energy is the same but becomes dispersed over a greater area thus effectively weakening. --220.101.28.25 (talk) 17:16, 12 January 2010 (UTC)[reply]
A radio wave spreads out in a vacuum such that the new wave-front becomes a larger and larger sphere - but the total energy remains the same. This makes the radio waves harder and harder to detect as you get further away - but that's because you're intercepting an ever smaller fraction of the total expanding sphere of energy that the radio put out. The laws of thermodynamics apply to radio waves just as they do to any other form of energy - and one of those laws says that energy is neither created nor destroyed - it just changes from one form to another. If the radio waves "lost" energy somehow, that energy would have to turn into something else.
When you shine a light (or a radio wave) through the earth's atmosphere, it loses energy because it is being absorbed by atoms in the air - which means it more or less all ends up as heat - which is just another form of energy. But out in the vacuum of space, there are almost zero atoms to absorb the energy and turn it into something else - so radio waves can travel across the entire width of the visible universe and still be detectable. This is also true of light and other forms of electromagnetic radiation.
Those same laws of thermodynamics also prevent the radio wave from gaining energy - because whatever energy it might hypothetically gain would have to come from somewhere...and in a good, hard vacuum - there is nowhere for the energy to come from.
SteveBaker (talk) 19:38, 12 January 2010 (UTC)[reply]
Regarding gaining energy, I see that I should have given a longer answer -- as our Gravitational redshift article points out, light (or radio, same thing) that passes into a region of stronger gravity shows an increase in energy, and is said to be gravitationally blueshifted. Looie496 (talk) 20:06, 12 January 2010 (UTC)[reply]
Similarly it'll lose energy when it leaves a gravitational well. That works both ways of course. Rckrone (talk) 23:49, 12 January 2010 (UTC)[reply]
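SteveBaker's expanding-sphere picture is easy to check numerically. Here's a small sketch (assuming a hypothetical 50 W transmitter radiating equally in all directions): the flux density falls with the inverse square of distance, but the total power crossing any sphere around the source is unchanged.

```python
import math

P_total = 50.0  # watts, hypothetical isotropic transmitter

def flux_density(r_m):
    # Power per unit area at distance r: the same total power spread
    # over a sphere of surface area 4*pi*r^2 (inverse-square law).
    return P_total / (4 * math.pi * r_m ** 2)

# The received signal weakens with distance...
assert flux_density(2000.0) < flux_density(1000.0)

# ...but the total power crossing each sphere is unchanged: no energy
# is lost to the vacuum itself.
for r in (1e3, 1e6, 1e9):
    sphere_area = 4 * math.pi * r ** 2
    assert abs(flux_density(r) * sphere_area - P_total) < 1e-9
```

So a distant receiver intercepts a tiny fraction of the sphere, not a weakened wave.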

Alien Technology edit

In this video, do you have any examples of this technology being used/created today? --Reticuli88 (talk) 17:20, 12 January 2010 (UTC)[reply]

The video looks like crank-bait to me. The guy makes a lot of assertions with zero evidence. I wouldn't take it too seriously. Whether you want to believe that modern nanotech is the product of a long (and easily documentable) progress of Earth scientists, or whether it magically fell into our hands from the aliens in the 1940s, will determine whether you think there are examples of such technology being used today. --Mr.98 (talk) 17:45, 12 January 2010 (UTC)[reply]
As Mr.98 implies, this seems to be a load of crap synthesis and scientific gobbledygook. A similar 'source' asserts we have velcro because it was found in the UFO that allegedly crashed at Roswell. Actually, can you be more specific about which "examples of this technology being used/created today?" you actually mean, as there are several mentioned in the video. --220.101.28.25 (talk) 18:04, 12 January 2010 (UTC)[reply]

Oh the humanity...Stop this bullshit progress before it is too late! Cuddlyable3 (talk) 21:20, 12 January 2010 (UTC)[reply]

I fear it is way too late.  Go Luddite! --220.101.28.25 (talk) 00:37, 13 January 2010 (UTC)[reply]
The way I know that the video is unscientific is because every chart and graph is displayed for just about 0.2 seconds. I have been through scientific and engineering lectures often enough to know that when a researcher displays a chart or graph in a presentation, they invariably spend about 20 minutes droning on about the axes, the labels, the scatter points that look like incoherent noise, and their elaborate curve-fitting regression algorithm which suggests strongly that their original hypothesis actually holds, even though the data may as well have been collected by Jackson Pollock. This video simply flashes up a series of "science pictures" without droning on in long, boring, incomprehensible fashion. No anomalous data was discussed which might discredit competing research groups' work that was published in 2002. Nobody mentioned that, while this technological innovation is all very well and good, Gauss and Euler both invented all of the fundamental analytic techniques for the meta-materials in the 18th century. These symptoms of actual scientific presentations are remarkably absent - the video belongs solidly in the crank bin - but kudos for creative science-fiction writing. While the vast majority of the explanations are jargon-slathered nonsense, it's mildly more accurate than your average generic sci-fi explanation. Nimur (talk) 15:54, 13 January 2010 (UTC) [reply]

Genius Offspring edit

What are the chances that notable geniuses, like Einstein, will have children with the same natural gifts? --Reticuli88 (talk) 17:54, 12 January 2010 (UTC)[reply]

Just as rare as any other child. Most geniuses do not have children. Those that do will rarely have a genius as a child. Talent is passed from parent to child, but childhood is not. Whatever happened to a child to turn them from a talented baby into a genius adult is very likely to be missing from the genius' child's life. -- kainaw 18:01, 12 January 2010 (UTC)[reply]
See also regression to the mean. The child of a genius is likely to be more intelligent than average, but unlikely to be as intelligent as his or her dad (or mom). Similarly, Bush's kids aren't likely to be as big of douchebags as their father. Buddy431 (talk) 18:41, 12 January 2010 (UTC)[reply]
"Most geniuses do not have children." [citation needed] --Sean 20:28, 12 January 2010 (UTC)[reply]
For the google-impaired... [9] [10]
For anecdotal evidence (in no particular order): Michelangelo, Tesla, Leonardo, Newton, Kant, Beethoven, Galileo, Descartes, Spinoza, Florence Nightingale, Copernicus, Handel, Cavour, Flaubert, and Chateaubriand. -- kainaw 23:50, 12 January 2010 (UTC)[reply]
Well, if you look at Albert Einstein's kids, one became a professor (of some kind of engineering, I think), one died very young, and the other had some kind of mental condition (schizophrenia?). I do not know if the professor was a literal genius, but it is a fair bet that he was above average (as are most professors, I would hope). Googlemeister (talk) 19:44, 13 January 2010 (UTC)[reply]
The question of heritability of I.Q. is a controversial one. See Heritability of IQ. Most studies say that the heritability of I.Q. is about .75—meaning that about three-quarters of the variation in I.Q. is determined by genes. How that plays out in percentages of offspring, I don't know (it's been awhile since I took biology—but I don't think you can make that determination just based on the above information), but it is probably not as random as the above answers have suggested. There is more to it than just genes, of course, but genes do seem to play a non-trivial role (which is unsurprising). Whether a "genius" will be recognized as such is an entirely different question, though. In such a case, having a "genius" parent may or may not be helpful (it is easy to be overshadowed if your father is an Einstein, though on the other hand, you will have potentially great possibilities for training and networking). --Mr.98 (talk) 18:53, 12 January 2010 (UTC)[reply]
I think my comment about regression to the mean still applies. Even with a hereditary aspect, a genius is a clear outlier in terms of intelligence, and it's still quite likely that any offspring will be closer to the average value than their parents. I suppose that a given genius is more likely than a given person of average intelligence to give birth to a genius, but the probability is still in favor of the dad being smarter than the kids.
That being said, there are some notable cases of very intelligent families. The Bohrs immediately come to mind. Both Niels and his son Aage won Nobel prizes in physics. And it's not like Niels's dad Christian and brother Harald had trouble tying their own shoes either. The Bernoulli family also seems to have a number of very smart members, though I'm not sure how many of them could be described as "geniuses" of the caliber of Einstein. Buddy431 (talk) 20:00, 12 January 2010 (UTC)[reply]
Yes, but none of this gets to the heart of the question: how much of this is "being born with smart genes" versus "being born in a smart environment"? Your parents raise you, so one cannot discount the influence of having smart parents in the house in terms of heritability of intelligence (which is a dubious concept anyways!) So, what we need are studies of the intelligence of a) twins, b) separated at birth, c) raised in different environments, d) where their biological parents are known and have also had their intelligence tested, and e) enough of these to have a meaningful sample size. I'm not saying such a study does or does not exist, but until I see it or one like it, I would be skeptical of any attempt to draw meaningful conclusions about genetic vs. environmental influences on intelligence. --Jayron32 21:55, 12 January 2010 (UTC)[reply]
Jayron, such studies do exist. Look at the article I linked to. People have been studying the heredity of I.Q. since the 19th century. Obviously it is still controversial, because splitting nature and nurture in such a complex trait is fairly impossible. Still, most of the studies point towards intelligence being fairly heritable. That doesn't mean that someone with "good genes" will do well despite their environment, or that someone without them will necessarily be dumb. The article is pretty good on explaining these sorts of caveats. The reason this is controversial (and hair color is not) is because once we start getting into questions of heredity and I.Q., people either see this as a way to start making racial-superiority arguments (which the science does not support, in any case, even though I.Q. is determined a lot by genetics), or they recoil at the idea that something as fundamentally "individual"-feeling as their own thinking ability is "locked in" by their genetics (which is not exactly true, though more true than the former). But assuming that all of us are, in good faith, just interested in the basic scientific question (and are not trying to enforce racial policies based on it), I think we can put out there that the most likely case is that intelligence should have a large genetic component, as do all human traits. --Mr.98 (talk) 14:21, 13 January 2010 (UTC)[reply]
I don't think the OP's question requires us to determine why a child of a genius is more likely to be a genius, but rather what the actual probability is. If we want to do that, we first need to pin down what a "notable genius" is, and what qualifies their kid as having "the same natural gifts". We could conduct a bit of OR here, come up with a list of "notable geniuses", then look at their kids and determine who can be said to have these "natural gifts". Or, we could broaden the question, look just at geniuses in general, and determine what the probability is that any one of their kids is also a genius. We should probably also look at the general population, and see what the probability is that a kid born to anyone (with no more information) is a genius. The trouble is, our article doesn't list a clear cutoff point for what a genius is, giving anywhere from the top 1.2% to the top 0.005% in terms of IQ (and this doesn't even include people like Michael Jackson, who are described as "geniuses" in a given field, but clearly don't have super high IQs). Let's peg our "genius" cutoff at the top 0.1% of the population (in terms of IQ), which would imply that over the whole population, the probability of any given child being a "genius" is 1/1000. Now, we need to find a source that tracks the IQs (or a suitable proxy; perhaps we could find enough SAT scores, or something like that) of parents and children, and that has enough people at the very high end, so that meaningful conclusions can be drawn about the probability that a child born to someone in the top 0.1% of the population is also in the top 0.1%. I have no clue where such a source will be found, and I suspect that the OP isn't going to get any sort of numeric answer to their question. Buddy431 (talk) 23:37, 12 January 2010 (UTC)[reply]
Incidentally, one of the first studies of the heredity of intelligence was none other than a study of the heredity of "genius", broadly defined: Francis Galton, Hereditary Genius (1869). Of course, it is not rigorous by modern standards, but it did make the strong argument (for the time) that talented people seem to have talented offspring. --Mr.98 (talk) 14:21, 13 January 2010 (UTC)[reply]
You don't need twins that have been separated at birth. You can do a standard twin study. You get lots of pairs of identical twins and lots of pairs of non-identical twins and give them all an IQ-test (or whatever other test of intelligence you choose). You then see if the identical twins are more likely than the non-identical twins to have similar results. If identical twins get similar results more often, then the characteristic in question is probably hereditary (by looking at the numbers very cleverly you can quantify how much of the characteristic is determined by genes). Since both identical and non-identical twins will have the same up-bringing as their twin, that factor cancels out leaving just genetics. You do need very large sample sizes to get a reliable result, though. --Tango (talk) 02:20, 13 January 2010 (UTC)[reply]
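The regression-to-the-mean arithmetic in this thread can be made concrete with a toy calculation. This is only a sketch under stated assumptions: IQ is taken as normally distributed (mean 100, SD 15), the parent-child IQ correlation is set to an illustrative 0.5 (not a measured value), and "genius" uses the top-0.1% cutoff proposed above.

```python
import math
from statistics import NormalDist

population = NormalDist(100, 15)    # assumed IQ distribution
r = 0.5                             # assumed parent-child IQ correlation
cutoff = population.inv_cdf(0.999)  # top 0.1%, roughly IQ 146

def p_child_genius(parent_iq):
    # Under a bivariate-normal model the child's expected IQ regresses
    # toward the mean: E[child] = 100 + r*(parent - 100), with residual
    # spread 15*sqrt(1 - r^2).
    child = NormalDist(100 + r * (parent_iq - 100),
                       15 * math.sqrt(1 - r ** 2))
    return 1 - child.cdf(cutoff)

p = p_child_genius(cutoff)  # parent exactly at the genius cutoff
# Far better odds than a random baby's 0.1%, but still much more
# likely than not to fall short of the cutoff.
assert 0.001 < p < 0.5
```

Even with this fairly generous heredity assumption, the model agrees with the answers above: the genius's child beats the base rate handily, but usually isn't a genius.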

What's the sampling limit of human hearing? edit

At what sampling rate does the human ear distinguish continuous audio? Is there an upper limit (i.e. can someone tell the difference between 96KHz and 192KHz sampling on the same sound)? --70.167.58.6 (talk) 18:35, 12 January 2010 (UTC)[reply]

This isn't the exact answer to your question but will be close: I think you've confused the kbit/s data rate of a music file with the kHz sampling rate. Our Hearing (sense) article says that humans can generally hear sounds from 20 Hz up to 20,000 Hz; and the Nyquist rate for 20,000 Hz is 40,000, or 40 kHz. That's presumably why compact discs use 16-bit samples at 44.1 kHz — there will be no aliasing of any sounds in the range detectable by humans, even the highest-pitched sounds. As for the difference between a data rate of 96 kbit/s versus 192 kbit/s, this depends on the sample size (16-bit samples? 8-bit samples? 1-bit samples?) and the sample rate, and on the lossy compression algorithm (like mp3 or AAC) that is being used — and so the actual answer to your question will have to come from a study where people are asked to evaluate their subjective perception of the music files. Comet Tuttle (talk) 19:03, 12 January 2010 (UTC)[reply]
I'd add to that great answer by mentioning that the upper limit varies dramatically with age. Younger people can hear significantly higher frequency sounds than older people. SteveBaker (talk) 19:26, 12 January 2010 (UTC)[reply]
SteveBaker said I wrote a "great answer" on the Science Desk! I get an Achievement! Actually I am uncomfortable with my lack of relating the 16-bit sample size to the Nyquist rate and perception. Comet Tuttle (talk) 19:41, 12 January 2010 (UTC)[reply]
Do Quantization error and Audio bit depth#Dynamic_range help? -- Coneslayer (talk) 20:07, 12 January 2010 (UTC)[reply]
They do, thanks. I'll use them next time. Comet Tuttle (talk) 20:17, 12 January 2010 (UTC)[reply]
For intelligible but poor "telephone quality" audio the sampling rate can be as low as 8 kHz. If no compression is involved nor different distortions introduced by analog filters in A-to-D or D-to-A converters, then the answer to the OP's 2nd question is No. Cuddlyable3 (talk) 20:55, 12 January 2010 (UTC)[reply]
(EC) I'm not so sure that the OP was confused, although I suspect he/she didn't understand the meaning of the terms and how they relate to the audio. Since the OP mentioned a sample rate of 96 kHz versus 192 kHz, not a data rate of 96 kb versus 192 kb, I'll take this question at face value. Most ABX listening tests (and other reliable listening tests) I've seen from places like Hydrogenaudio (and published ones) show that with uncompressed samples and a decent bit depth (e.g. 16 bit), few people can tell the difference between 48 kHz and 96 kHz even on very high end equipment. In fact, I can't recall if I've ever seen anyone shown to be able to tell the difference. Not surprisingly, few people bother to test 192 kHz. This obviously agrees with the scientific understanding of audio sampling and the limits of human hearing. If the theory says one thing, and the experiments reach the same conclusion, I think we can safely say both are correct unless some very strong evidence is presented to the contrary.
A bit depth of 24 bits however can be an improvement over 16 bits (i.e. detectable) by some listeners, although I've seen it suggested 20 bits may be enough. A higher sampling rate could be useful for future mixing, for non-human listeners, or for scientific purposes. (A higher bit depth is generally important for future editing.) It's also possible a device capable of outputting 96 kHz may be better than one capable of only 48 kHz.
The OP may wonder why Blu-ray and other such systems offer 192 kHz if even 96 kHz is useless. Well, other than a few audiophile nuts, most would agree it's just fancy marketing. Note however that you can get a benefit from such formats in that in many cases the mastering is different, and in particular may be less processed than the more mass-market material like CDs, which some listeners may prefer. And sometimes the options may be something like 44.1 kHz/16 bit (i.e. CD) vs 96 kHz/24 bit (or higher), and as I've mentioned there is a small chance you can detect the difference between 24 bit and 16 bit, and it can especially be of benefit if you plan on editing.
You can easily perform such tests at home: get a bunch of 192 or 96 kHz/24 bit samples and then, using a very high quality algorithm, convert them to 48 kHz/16 bit or whatever you want to test. Then use one of the various software programs that has the option of ABXing different samples to compare them and see if you can tell the difference. You can probably get some help at Hydrogenaudio if needed since this is a fairly common practice there. While converting the samples has the possibility to produce problems relating to the algorithm, it's the fairest method. In practice, I wouldn't bother, particularly if you don't have very fancy equipment; I doubt you'd even be able to tell the difference between 24 bit and 16 bit.
Nil Einne (talk) 20:59, 12 January 2010 (UTC)[reply]
Actually looking at [11] I may have overestimated the number of people who can tell the difference between 24bit and 16bit Nil Einne (talk) 00:35, 13 January 2010 (UTC)[reply]

Thank you all. My original question is correctly stated. I'm not interested in music audio compression (which has been debated ad nauseam on infinite forum boards). Similar to persistence of vision, is there a persistence of hearing? At what sampling rate can the human ear detect individual audio "frames"? And is there an upward limit where it's impossible to detect the difference between sampling rates (my previously mentioned comparison of 96 kHz and 192 kHz -- which are the upward limit of what current consumer audio technology is available)? --68.103.143.23 (talk) 14:09, 13 January 2010 (UTC)[reply]

In terms of "persistence of vision"-type effects, audio doesn't really work like video. You could imagine "video" at a very low frame rate, like 0.1 fps (a slideshow where the image is changed every 10 seconds). It obviously wouldn't be smooth, but each individual frame would still be a perfectly good image. The framerate doesn't have anything to do with your ability to record and show an individual frame. Sound is different; you need to sample at 20 kHz to be able to record and reproduce a 10 kHz sound. It doesn't matter whether that sound plays for just a few milliseconds or for hours, before changing to a different sound. Sound—even constant sound—is a wave, and you need to record all the peaks and troughs of that wave. So the high sampling frequency is necessary to record the sound in the first place... it's not related to how quickly your ears and brain can detect a change in the sound. Does that help? -- Coneslayer (talk) 16:42, 13 January 2010 (UTC)[reply]
Vision doesn't work in "frames" either - each rod and cone produces a continuous signal - not something that works via a sequence of snapshots like movie film or TV. However, our brains have adapted to cope well with 'interrupted' images. If you're a caveman chasing down a rabbit so you can chuck a large rock at it and eat it for lunch - then you need a visual system that can allow you to target the rabbit - even though it's running between trees or through tall grass. You need to maintain a mental model of where the rabbit is - even when it's out of sight for a tenth of a second. Hence we are able to mentally extrapolate the position of a moving object even though it's briefly invisible to us. As parts of the rabbit's body disappear and reappear behind blades of grass, we still "see" the entire rabbit - we aren't consciously trying to reassemble an image from little vertical strips that are changing all the time. This ability appears to be what produces that 'persistence' effect - and there are some rather subtle experiments you can do with computer graphics to demonstrate that. But the actual rods and cones are not snapping a sequence of still images like a movie camera - that's just not how our eyes work. We don't have 'persistence of hearing' because sounds can go around corners and are therefore not interrupted by the brief interposition of some small object between you and the sound source. We therefore have not evolved a tolerance for brief 'breaks' in an audio stream. SteveBaker (talk) 18:34, 13 January 2010 (UTC)[reply]
Right, I realize real life "analog" vision doesn't have frames. So my question was meant to focus on computer captured, sampled digital audio. My "persistence of vision" analogy was meant for captured images (film/video) which are "sampled" at so many times per second. Fall under that limit and your eye sees a series of still images and not continuous movement. Hearing has nothing similar -- a sound sampled so low that it sounds like chopped up samples? --70.167.58.6 (talk) 23:07, 13 January 2010 (UTC)[reply]
That's what I was trying to address above. If your sampling rate is too low, the sound doesn't become chopped up; instead, your recording can only reproduce lower and lower frequencies (pitches). A 1000 Hz (1 kHz) sampling frequency could only accurately reproduce tones of 500 Hz or lower (around the C above Middle C). This is a fundamental limitation of signal processing, not our physiology. -- Coneslayer (talk) 23:32, 13 January 2010 (UTC)[reply]
Right - that failure to reproduce tones higher than the Nyquist limit (half the sampling rate) is called 'aliasing' - and the visual analog of that isn't slow frame rates - it's poor resolution. If you play a computer game at 320x200 pixel resolution - it has horrible stair-steps in the straight edges of objects. Run the same game at 1600x1200 pixels and the edges look MUCH smoother. That's essentially what's happening with the audio. Those smooth audio sine-waves - plotted as a graph - get more and more jagged looking as you reduce the sampling rate. This is a much better analog than the 'frame rate' and 'persistence of vision' phenomenon. SteveBaker (talk) 00:26, 14 January 2010 (UTC)[reply]
If I understand the question right, I'd suggest "about 20 Hz"--the low end of the human hearing range. Somewhere around 20 Hz is where perception shifts from "rhythm" to "pitch". Of course this depends very much on the underlying waveform. If you're listening to sine waves you aren't going to hear anything at all under 20 Hz or so (though you might feel something if it's loud enough). If the waveform is more of a "pop" you are more likely to hear the shift from pitch to rhythm. Perhaps I misunderstood. Pfly (talk) 06:54, 14 January 2010 (UTC)[reply]
If it's not a sine wave then it has higher harmonics - and those harmonics would be within the audible range. It doesn't really make sense to talk about the frequency of anything that's not a sine wave. Theoretically, a sawtooth or square wave has frequencies going all the way up to infinity...some of which you can hear even if the base frequency is 0.000001 Hz! At those lower frequencies, you go from hearing the sound with your ears to feeling it in your gut. Some profoundly deaf people can appreciate music that way - some even play instruments like drums that produce high amplitude/low frequency sound. SteveBaker (talk) 13:56, 14 January 2010 (UTC)[reply]
Well, as I said in my answer, there's no evidence in either theory or practice that it's possible for any human to tell the difference between 48 kHz and 96 kHz digital audio sampling rates, let alone 96 kHz and 192 kHz, no matter what the equipment or sample AFAIK. You can probably fairly easily test this yourself if you are interested and have equipment capable of 96 kHz and some samples on your computer. Nil Einne (talk) 14:49, 16 January 2010 (UTC)[reply]
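The aliasing described in this thread can be demonstrated in a few lines: a tone above the Nyquist limit produces exactly the same samples as a lower-frequency "alias", so the digitized signal cannot distinguish them. (A hypothetical 1 kHz sampling rate is used here just to keep the numbers small.)

```python
import math

RATE = 1000  # samples per second; the Nyquist limit is RATE/2 = 500 Hz

def sample(freq_hz, n=16):
    # n successive samples of a sine tone at the given frequency
    return [math.sin(2 * math.pi * freq_hz * i / RATE) for i in range(n)]

# A 700 Hz tone is above the Nyquist limit. Its samples coincide,
# point for point, with those of a 300 Hz tone of opposite phase
# (700 = 1000 - 300), so the recording is indistinguishable from
# the lower-frequency alias.
for hi, lo in zip(sample(700), sample(-300)):
    assert abs(hi - lo) < 1e-9
```

This is why the sampling rate sets a hard ceiling on reproducible pitch rather than producing "chopped up" sound.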

Magnetic Permeability of Free Space edit

I've got a decent grasp on E&M, quantum, and related physics, so feel free to give a fully technical answer to this question. My understanding is that magnetic permeability governs the strength of the response of a medium to a magnetic field traveling through that medium (classically, anyway). By that definition, why isn't the magnetic permeability of free space zero? It doesn't seem to me that the vacuum should be responding to the magnetic field, it should simply 'carry' it (I suppose 'allow its passage' would be a better way to phrase that). 128.104.69.93 (talk) 19:19, 12 January 2010 (UTC)[reply]

See the article Permeability (electromagnetism). Magnetic fields pass through a vacuum. The permeability of free space μ0 is an observed physical constant that is related to defined units by μ0 = 4π×10⁻⁷ N·A⁻². Permeabilities of media are measured relative to μ0. Cuddlyable3 (talk) 20:47, 12 January 2010 (UTC)[reply]
Well, yeah. But that doesn't address why the vacuum responds to the magnetic field at all. It seems to me that if there is nothing there (again, classically) then there shouldn't be anything to propagate the magnetic field (since there is nothing to respond to it). A definition is not the same as a physical rationale. If this isn't explainable classically, by all means use relativity/quantum. 128.104.69.93 (talk) 21:03, 12 January 2010 (UTC)[reply]
Let me rephrase my last reply a different way: The definition of μ0 is equivalent to the observation that there is a magnetic permeability of free space. What is the origin of that magnetic permeability? What is responding to the magnetic field if there isn't any matter there? 128.104.69.93 (talk) 21:07, 12 January 2010 (UTC)[reply]
It can't be zero or else the magnetic field strength would be zero. Since we observe magnetic fields are not zero everywhere, μ0 must have some non-zero value. The actual value is arbitrary and merely acts as a way to convert between several convenient unit definitions. A better question might be why is the Fine-structure constant approximately 1/137. When you figure that one out make sure you send me some of the Nobel Prize money. Truthforitsownsake (talk) 21:19, 12 January 2010 (UTC)[reply]
This is, I believe, again equivalent to simply observing that the magnetic field propagates through vacuum and it has the same value everywhere. I don't really care what the particular value is, I just want to know why its not zero. 128.104.69.93 (talk) 21:21, 12 January 2010 (UTC)[reply]
I believe the real answer is that it is simply a historical accident of the way E&M developed. The vacuum doesn't respond to the field (not in a classical sense anyway, which is all we need for this discussion since Maxwell's equations are purely classical). More explicitly, the magnetization and magnetic susceptibility of the vacuum are always zero. Presumably one could recast E&M in terms of these other items in order to make the constancy of the vacuum explicit, but as it happens we historically chose to describe E&M in terms of a permeability instead. Dragons flight (talk) 21:27, 12 January 2010 (UTC)[reply]
μ0 isn't a property of the vacuum or a measured physical constant. The article Vacuum permeability discusses this at length. Puzl bustr (talk) 21:32, 12 January 2010 (UTC)[reply]
Thank you! Question answered. 128.104.69.93 (talk) 18:40, 13 January 2010 (UTC) (OP)[reply]
The OP seems to be confusing permeability with susceptibility. Dauto (talk) 23:30, 12 January 2010 (UTC)[reply]
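The bookkeeping role of μ0 described above shows up in how the constants interlock: fix μ0 (a definition in the pre-2019 SI) and the speed of light (also a definition), and the vacuum permittivity ε0 follows rather than being an independent property of "empty space". A quick numerical check, as a sketch:

```python
import math

mu0 = 4 * math.pi * 1e-7   # N/A^2 -- defined, not measured (pre-2019 SI)
c = 299_792_458.0          # m/s   -- defined
eps0 = 1 / (mu0 * c ** 2)  # follows from c = 1/sqrt(mu0 * eps0)

# eps0 comes out to the familiar ~8.854e-12 F/m
assert abs(eps0 - 8.854e-12) < 1e-14
# and the relation closes: 1/sqrt(mu0 * eps0) reproduces c
assert abs(1 / math.sqrt(mu0 * eps0) - c) < 1e-3
```

Only one of μ0, ε0, and c is free once the other two are fixed, which is why μ0's particular value says nothing physical about the vacuum itself.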

CMYK space models edit

Since the primary colors of computer printing are cyan, magenta, and yellow — notice that the printed cyan looks more azure-ish, and the magenta looks bright pink, not the magenta you get on a computer screen. Some secondaries of CMYK are actually red and green, and the blue is not RGB blue, it is closer to indigo. So are there tertiary colors in CMYK? Turquoise actually looks more tertiary in CMYK computer printing models. --209.129.85.4 (talk) 20:34, 12 January 2010 (UTC)[reply]

I'm sorry, what is your question? --Tango (talk) 21:18, 12 January 2010 (UTC)[reply]
I'm asking about the natural tertiary colors for CMYK, since cyan and magenta look blue and pink when printed on newspaper. I don't know how else to clear this question up. --209.129.85.4 (talk) 21:21, 12 January 2010 (UTC)[reply]
Tertiary color refers to a process of mixing primaries and secondaries (did you look at the article?), so there are always tertiary colors for any color system. I think the point you are getting at is that since the definitions of the primaries vary from one system to another (or one physical implementation to another), the actual results will appear differently depending on the original colors. This is a pain in the ass for graphic artists, but it doesn't change the fact that you mix some colors to get others. Dragons flight (talk) 21:38, 12 January 2010 (UTC)[reply]
Often RGB are taken as the secondary colors in CMY color schemes and vice versa (cyan is the absence of red, etc.). In that sense, the tertiary colors in each scheme are very similar, since each tertiary color is between a primary and a secondary color in the scheme. Rckrone (talk) 23:34, 12 January 2010 (UTC)[reply]
In printing, there are lots of subtleties that prevent tertiary colours having much meaning (though in some processes they can be reproduced). For example, printing, like mixing paints, is essentially a subtractive process, but halftoning can mimic additive mixing (as for light). You might like to read our articles on CcMmYK color model, Hexachrome and color printing. Dbfirs 07:53, 13 January 2010 (UTC)[reply]
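The mixing arithmetic behind the replies above can be sketched with a naive subtractive model (idealized inks, no halftone dot gain or real-ink spectra, so the hues are only approximate): a CMY primary combined with a neighboring secondary lands between them, which is where a tertiary such as turquoise comes from.

```python
# Represent an ink mix as (c, m, y) coverage in [0, 1]; an idealized
# display approximation is RGB = (1-c, 1-m, 1-y).
def cmy_to_rgb(cmy):
    return tuple(round(1 - v, 2) for v in cmy)

def overprint(a, b):
    # coverages saturate when inks are layered on top of each other
    return tuple(min(1.0, x + y) for x, y in zip(a, b))

def halftone(a, b):
    # a 50/50 halftone screen averages the coverages
    return tuple((x + y) / 2 for x, y in zip(a, b))

cyan, yellow = (1, 0, 0), (0, 0, 1)
green = overprint(cyan, yellow)   # secondary: full cyan over full yellow
tertiary = halftone(cyan, green)  # between the primary and the secondary

assert cmy_to_rgb(green) == (0, 1, 0)
assert cmy_to_rgb(tertiary) == (0, 1, 0.5)  # a spring-green/turquoise hue
```

Real inks deviate from these ideal values, which is exactly why printed cyan and magenta look "off" compared to the screen versions.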

Gall stone disease urine dipstick results edit

If a person has gall stones present (and causing biliary colic), other than a raised bilirubin what findings might be present if a urine dipstick was performed?

Many thanks 188.220.144.215 (talk) 21:56, 12 January 2010 (UTC)[reply]

The best person to ask this of is your doctor. If you are concerned about the results of any medical test, you should contact a trusted medical professional. Wikipedia cannot interpret the results of any medical test. See Wikipedia:Medical disclaimer. --Jayron32 22:22, 12 January 2010 (UTC)[reply]
A urine dipstick test is common for checking for kidney stones. A blood test (for raised bilirubin and liver enzymes) is common for gallstones. It is not common to expect the results of a blood test to equate to the results of a urine test. -- kainaw 22:32, 12 January 2010 (UTC)[reply]


There are all kinds of urine dipsticks; some test only glucose, some only glucose & acetone; the more complete ones test for glucose, ketones, blood, protein, nitrite, pH, urobilinogen, bilirubin, leucocytes, and specific gravity. None of these are particularly useful for diagnosis of gallstones, though if complete biliary obstruction were present there might be decreased urinary urobilinogen. - Nunh-huh 04:55, 13 January 2010 (UTC)[reply]
With respect Nunh-huh I think that it is the other way round. If there is complete biliary obstruction then there will be a 'raised' urobilinogen level, together with pale faeces, because the serum bilirubin is raised leading to a raised renal excretion level. But the OP should go see a doctor if he has concerns about his/her health. Caesar's Daddy (talk) 08:25, 13 January 2010 (UTC)[reply]
"When jaundice is due to an obstruction in the flow of bile: (1) The patient's stools are pale. (2) His urine is dark, and contains little or no urobilinogen. (3) His skin itches." [12] - Nunh-huh 09:43, 13 January 2010 (UTC)[reply]
Yep, ISC. I confused it with urobilin. Caesar's Daddy (talk) 12:38, 13 January 2010 (UTC)[reply]

Vacuum energy level in heterojunctions edit

Hello,

I have a question about heterojunctions. As displayed in the figure of the article, the vacuum level can be chosen to be the zero-energy reference level before contact. If I am correct (because it is not displayed anymore) the vacuum level bends along with the other bands after the two materials have been brought in contact. It will thus be higher for the first material than for the second. An electron taken out from the junction to an infinite distance has, in my view, zero potential energy left; but this 'zero' seems to differ for both materials now. Where is my mistake? Any help is highly appreciated --Gnorkel (talk) 22:27, 12 January 2010 (UTC)[reply]

Animal optical illusion edit

 
This?

What is the name of the optical illusion where, looking below the animal (typically a horse or elephant) in a drawing, it looks like the animal has more or fewer legs than it should in the way it's been drawn? The legs appear, then appear as gaps between the other legs. Simply south (talk) 23:33, 12 January 2010 (UTC)[reply]

I distinctly remember a similar illusion (and one that I think predates and in fact initially inspired the elephant-leg modification) in which an object that appears similar to the head of a fork has tines that descend to the base of the fork's head only to become the spaces between the tines where they attach to the head. I always thought it was called a widget, but Wikipedia does not have an article on it (yet). DRosenbach (Talk | Contribs) 00:22, 13 January 2010 (UTC)[reply]
Ah...yes. Here it is. And you may be interested in the related Penrose triangle. DRosenbach (Talk | Contribs) 00:23, 13 January 2010 (UTC)[reply]
Thank you. The article was located under Blivet (and I'll create a link). I'll make a mention there that maybe the elephant could be featured. It looks like it is the same one. Simply south (talk) 00:56, 13 January 2010 (UTC)[reply]

Elements edit

Which elements does a person need to survive? --70.244.235.220 (talk) 23:51, 12 January 2010 (UTC)[reply]

Are we to assume that your hypothetical person already exists as an adult and that on some randomly chosen day, you want to know which elements he or she requires for survival from now on? I would say the basic organic elements (COHNS), as well as P for membranes, nucleic acids and the like. Then there's all the ions in solution that perform necessary functions, such as Na+, K+, Ca2+ and Cl-. I think Mg and Li are necessary in trace amounts, as is I for thyroid function (at least). Then you have the necessary metal co-factors in various ion forms like Fe, Cu and Zn. (And as a dentist, I recommend a very small dose of F- to help prevent tooth decay and Sr for tooth sensitivity.) There may be a few other necessary ones, though. DRosenbach (Talk | Contribs) 00:37, 13 January 2010 (UTC)[reply]
There's apparently a need for some Se for selenocysteine. DRosenbach (Talk | Contribs) 01:00, 13 January 2010 (UTC)[reply]
I meant all elements that the body uses in any quantity. --70.244.235.220 (talk) 01:03, 13 January 2010 (UTC)[reply]
Check out our human nutrition article. The minerals listed are: Calcium, Chlorine, Magnesium, Phosphorus, Potassium, Sodium, and Sulfur (greater than 200 mg/day), and Cobalt, Copper, Chromium, Iodine, Iron, Manganese, Molybdenum, Nickel, Selenium, Vanadium (speculative) and Zinc (less than 200 mg/day). Add in the Carbon, Oxygen, Nitrogen and Hydrogen found in organic molecules, and you should be good to go. Of course, some of these elements need to be in a specific form for us to use them (we need the essential amino acids already put together, phosphorus usually comes as a phosphate and would kill us in elemental form, etc.). Buddy431 (talk) 01:07, 13 January 2010 (UTC)[reply]
What foods contain those trace elements? I've never noticed most of them on nutrition labels. --70.244.235.220 (talk) 02:01, 13 January 2010 (UTC)[reply]
Most living things need the same elements, so most food will contain some of each; the issue is whether it is enough for humans. The ones more famous for being deficient, such as iodine, may not be found in land plants. Cobalt is needed in the form of cobalamin, and the lack can cause vitamin B12 deficiency. Graeme Bartlett (talk) 02:17, 13 January 2010 (UTC)[reply]
Selenium is found in large amounts in brazil nuts (in addition to a lot of other foods apparently: http://dietary-supplements.info.nih.gov/factsheets/selenium.asp#h2)
Composition of the human body would be a useful article to read. --Tango (talk) 02:33, 13 January 2010 (UTC)[reply]
A page that article links to, chemical makeup of the human body, shows that the human body contains more arsenic than some of the trace minerals, such as molybdenum. Why is it that trace amounts of those elements have an effect on the human body, but larger trace amounts of arsenic don't? --75.28.54.203 (talk) 03:06, 13 January 2010 (UTC)[reply]
Reading the Molybdenum article, I see that Molybdenum is necessary for the activation of certain enzymes. Enzymes merely catalyze reactions, and so there don't necessarily need to be a lot of them to be effective. Presumably, arsenic is not needed in any necessary reactions in the body. On the other hand, I'm not sure that we can definitively say that minute levels of arsenic don't have some effect on the body. I didn't read the articles on arsenic toxicity and arsenic poisoning, but they may have some information on how much arsenic is needed to cause ill effects. Buddy431 (talk) 03:34, 13 January 2010 (UTC)[reply]
Arsenic and silicon may actually be essential elements in humans. Arsenic appears as trimethylarsenobetaine. On the other hand it may be acting as a substitute for another element, like fluorine or strontium. Graeme Bartlett (talk) 12:10, 14 January 2010 (UTC)[reply]