Wikipedia:Reference desk/Archives/Science/2015 September 12

Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


September 12

neon lamp driver from 9V battery+step up transformer+R+C+neon lamp

Hello. Is it possible (in principle) to make %topic% with just the above components, using the lamp instead of a transistor for switching (plus perhaps some diodes "at strategic points")? I imagine it must be something in between an inductive boost converter and a Pearson–Anson oscillator, but I somehow totally lack the mental tools to design such a circuit (if it can be done at all). Asmrulz (talk) 02:27, 12 September 2015 (UTC)[reply]

See Pearson–Anson effect, induction coil, and diode logic. It would certainly be possible to use a diode network to switch the battery input to the transformer primary, giving you the HT output to drive the neon. Historically (before semiconductors were widely available), this would have been done with a mechanical interrupter on the primary, but diodes could also be used. However, you're still going to need some sort of non-linear component in the transformer primary circuit, and a transistor would be a more obvious choice than a diode network. Tevildo (talk) 08:42, 12 September 2015 (UTC)[reply]
Can't the lamp itself be such an element (because negative resistance and everything)? The thing is, it is across the secondary... Asmrulz (talk) 22:59, 12 September 2015 (UTC)[reply]
The lamp can provide the switching waveform - you won't need any other active components in the circuit - but you'll need some way of getting the waveform back to the primary. You might be able to do something with a saturable core and a tertiary winding - see magamp - but circuits using a magamp as an oscillator seem to be confined to the makers of perpetual motion machines (such as the Motionless electromagnetic generator), and I wouldn't recommend it. Tevildo (talk) 23:26, 12 September 2015 (UTC)[reply]
The OP is asking about a 9V battery. The problem with answering this simply from the Wikipedia article on the Pearson–Anson effect is that its diagram does not label the source voltage (Vs). To simplify: the source voltage has to be equal to or above the breakdown voltage of the tube, and 9 volts is not enough. In a pure DC circuit there is no configuration of components consisting of just resistors and capacitors that will enable a 9 volt source to light any form of gas-discharge lamp. A switch or transistor (an interrupter) and some inductance are required. (Ah, to other editors: would an explanation of a Joule thief help, or would it just complicate the issue?)--Aspro (talk) 23:47, 12 September 2015 (UTC)[reply]
"The OP" also mentioned a transformer Asmrulz (talk) 15:02, 13 September 2015 (UTC)[reply]
It might be possible to use a DIAC instead of a neon in the Pearson–Anson oscillator, which should be able to drive a step-up transformer. 20:11, 14 September 2015 (UTC) — Preceding unsigned comment added by LongHairedFop (talk • contribs)
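A note on the numbers behind Aspro's point, as a sketch under the usual textbook assumptions (the striking and maintaining voltages quoted are only rough, typical values): in the basic Pearson–Anson relaxation oscillator, a capacitor C across the lamp charges through R toward the supply voltage, fires the lamp when it reaches the striking voltage, and is then discharged down to the maintaining (extinction) voltage:

\[
v_C(t) = V_s + (V_e - V_s)\,e^{-t/RC}, \qquad T \approx RC\,\ln\frac{V_s - V_e}{V_s - V_b} \quad (\text{oscillation requires } V_s > V_b).
\]

For a small neon lamp the striking voltage \(V_b\) is typically on the order of 60–100 V, with the maintaining voltage \(V_e\) somewhat lower. With \(V_s = 9\ \mathrm{V}\) the capacitor voltage only asymptotes to 9 V and the lamp can never fire, so some step-up stage (interrupter plus transformer, boost inductor, or similar) has to raise the voltage before the lamp can take over as the switching element, which is the difficulty this thread is circling around.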

does it make more sense to buy a USB bitcoin miner (seen on ebay / amazon)

Or can bitcoins be mined with any computer, without this USB stick? --Motorolakzrz (talk) 03:45, 12 September 2015 (UTC)[reply]

See Bitcoin#Mining. Given that the profit from using either is unlikely to cover the cost of the electricity used, never mind the cost of purchasing the equipment, neither makes sense. AndyTheGrump (talk) 04:36, 12 September 2015 (UTC)[reply]
The thing is that there was a point in time when you could mine bitcoins with just your CPU (and you still can - if you're extremely patient!), but the bitcoin algorithm was designed to prevent inflation of the bitcoin economy by making it increasingly difficult to mint new coins as time goes on. When it became unreasonably slow to make bitcoin that way, you could still just about mine bitcoins profitably with the GPU in a high-end graphics card, but as the time it took got longer and longer, the electricity to run the thing started costing more than the bitcoins it could mine were worth. Subsequently, it took a fancy FPGA circuit to do the math fast enough - but as the effort gets bigger, even those are now impossibly slow, and full custom chips are the only reasonable way forward. But even with a state-of-the-art miner (which is NOT what is on sale on eBay/Amazon), you need an extremely cheap source of electricity; it's going to be hard to make money no matter what hardware you have.
I think it's unlikely that anything you could buy on eBay/Amazon for a reasonable sum of money will be profitable without a bunch of solar panels or something similar to power it...and even then, it's going to be painfully slow. The people who do this 'for real' do it on an immense scale with huge farms of servers filled with very fancy custom circuitry.
Having said that, the small startup company I worked for about 18 months ago had bitcoin miners sitting in every office, churning away. While they didn't make enough money to pay for the electricity they used - it was reasoned that they were 100% efficient converters of electricity into heat - and therefore the electricity was 'free' in the winter months when the offices needed heating. It's hard to argue against that - but the entire winter, they didn't turn out a single bitcoin (value: around $500) between half a dozen of them...so unless the cost of buying them was VERY low, it wasn't worth the investment.
I don't think there is any way whatever for you to make money mining bitcoins unless you're doing it on an industrial scale.
SteveBaker (talk) 05:11, 12 September 2015 (UTC)[reply]
It's true that mining difficulty has generally increased with time, but that has nothing to do with inflation or with any kind of built-in increase in the difficulty of mining. The introduction of GPU and FPGA and ASIC miners makes the difficulty go up, not the other way around.
The difficulty is adjusted dynamically so that new blocks are mined every 10 minutes on average. That means that if all the miners are paying their own costs and motivated by profit, the network equilibrates at a point where the reward for mining a block is slightly larger than the electricity+hardware cost (to the whole network) of mining it. Actually some people mine for fun, and some people use their employers' electricity and/or hardware but keep the profit for themselves; that makes things somewhat worse for everyone else, but I don't know how much worse in practice.
The built-in block reward halves every few years and will eventually go to zero, so that there's a cap on the total number of BTC, but that has little to do with mining difficulty. The security of the network depends on mining being expensive forever, and the reward for mining a block is the only motivation to spend all that money on electricity, so the per-block reward must remain high forever. In the long term, the fee per transaction and/or the number of transactions per block will have to go way up if Bitcoin is to survive.
If you want to figure out whether you can afford to mine, I suppose you can divide the hash rate of your hardware (in GH/s) by the total network hash rate to figure out what fraction of the network you are, multiply that by the total revenue (in USD/day) to figure out how much you'll make, and compare that to your costs. -- BenRG (talk) 07:48, 13 September 2015 (UTC)[reply]
Thanks for the clarification - I know that the difficulty factor has been rising due to some designed feature of the system - I wasn't aware that it depended on the rate of production rather than the absolute number of bitcoins. The effect is much the same though...once a new technology for mining coins kicks in, it automatically obsoletes all of the slower mechanisms - and anything you can buy on eBay is at least a couple of generations behind the curve. SteveBaker (talk) 16:19, 14 September 2015 (UTC)[reply]
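For what it's worth, here is a minimal Python sketch of the back-of-the-envelope profitability estimate BenRG describes above. Every number in it (hash rates, network revenue, power draw, electricity price) is a placeholder assumption for illustration rather than real market data, and the function name is invented for this example.

```python
# Back-of-the-envelope Bitcoin mining profitability estimate.
# All figures below are illustrative placeholders -- substitute current values.

def daily_mining_profit(my_hashrate_ghs, network_hashrate_ghs,
                        network_revenue_usd_per_day,
                        power_watts, electricity_usd_per_kwh):
    """Net USD/day from mining, ignoring hardware cost and pool fees."""
    my_share = my_hashrate_ghs / network_hashrate_ghs   # your fraction of the network
    revenue = my_share * network_revenue_usd_per_day    # your share of rewards + fees
    electricity = (power_watts / 1000.0) * 24 * electricity_usd_per_kwh
    return revenue - electricity

# Example with made-up numbers for a cheap USB-stick-class miner.
print(daily_mining_profit(
    my_hashrate_ghs=3.0,              # a few GH/s
    network_hashrate_ghs=400e6,       # assumed total network hash rate
    network_revenue_usd_per_day=2e6,  # assumed block rewards + fees, in USD
    power_watts=5.0,
    electricity_usd_per_kwh=0.12))
```

With those made-up inputs the result is a small fraction of a cent per day before hardware cost, which is essentially the conclusion Andy and Steve reach above.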

Space flight.

When an aircraft flies in Earth's atmosphere it maintains attitude (roll, pitch & yaw) utilizing gyroscopic instruments referenced to Earth's surface. When a spacecraft leaves Earth, what is the new attitude reference? Also, for navigation, what replaces Earth's "North" when a spacecraft needs to maintain heading? I've sent these questions to NASA several times but have never received an answer. — Preceding unsigned comment added by R1W2J3 (talk • contribs) 11:20, 12 September 2015 (UTC)[reply]

This is probably not a reliable source, but it's apparently the answer a NASA Flight Controller gave and it seems legit. Here's some related reading you might find interesting. 99.235.223.170 (talk) 13:02, 12 September 2015 (UTC)[reply]
In the Apollo days, the technical term was REFSMMAT - the reference attitude would be updated at various points in the mission to give the most helpful indication on the astronaut's instruments. Tevildo (talk) 13:09, 12 September 2015 (UTC)[reply]
The OP has a misunderstanding of how a gyro works. They are not referenced to the Earth's surface; they will work anywhere. Also, spacecraft don't maintain 'headings' but trajectories.--Aspro (talk) 13:22, 12 September 2015 (UTC)[reply]
We have articles on Attitude control and modern celestial navigation. As an example, the Cassini (spacecraft) had a special camera and computer. In one special mode, it could perform projective-geometry calculations and directly drive the attitude control system for optically-guided attitude control. In other words, the telescope could be used for star sighting, and the computer would keep firing the rockets to maintain the star in the center of the field of view (just like the technology used on an ancient sea vessel, but with rockets in interplanetary space!).
Inertial navigation systems - gyros - are very useful in space (and in aviation) - but they need an initial attitude setting, and typically need to be recalibrated for precession, among other gyro errors. This detail even affects terrestrial gyro equipment.
In other cases, a spacecraft can use radio navigation - RADAR - to know its position; and it can assess its attitude by visual identification of some fixed point of reference (for example, Earth and Sun are usually both pretty bright and easy to identify in a wide range of electromagnetic spectra). If you can get two fixed reference points, and a RADAR-based distance, you have enough information to constrain your position and attitude in all degrees of freedom.
If you're interested in more specific details for any particular spacecraft or type of operation, we can point you to more resources. The techniques commonly used in low Earth orbit are different from the techniques used in, say, famous interplanetary probes like New Horizons or Voyager; and those are different from manned spacecraft like the Space Shuttle Orbiter or the Apollo Lunar Module. Each of those spacecraft had specific instrumentation - RADAR, optics, and so forth, with special parameters designed just for their particular missions. Nimur (talk) 14:28, 12 September 2015 (UTC)[reply]
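To make the two-reference-point idea concrete, here is a minimal Python/NumPy sketch of the classic TRIAD method, one standard way of recovering attitude from two measured directions (say, to the Sun and to the Earth). The vectors are made-up illustrative values, and this is not the specific algorithm flown on any of the spacecraft named above.

```python
import numpy as np

def triad(r1, r2, b1, b2):
    """TRIAD attitude determination.

    r1, r2: two reference unit vectors in the inertial frame (e.g. Sun and
            Earth directions from an ephemeris).
    b1, b2: the same two directions as measured in the spacecraft body frame
            (e.g. from a sun sensor and a star/horizon camera).
    Returns the rotation matrix A such that b = A @ r (inertial-to-body).
    """
    def unit(v):
        return v / np.linalg.norm(v)

    # Build an orthonormal triad from the reference vectors...
    t1r = unit(r1)
    t2r = unit(np.cross(r1, r2))
    t3r = np.cross(t1r, t2r)
    # ...and the corresponding triad from the body-frame measurements.
    t1b = unit(b1)
    t2b = unit(np.cross(b1, b2))
    t3b = np.cross(t1b, t2b)

    R_ref = np.column_stack((t1r, t2r, t3r))
    R_body = np.column_stack((t1b, t2b, t3b))
    return R_body @ R_ref.T

# Made-up example: noiseless measurements for a spacecraft rotated
# 90 degrees about its z-axis relative to the inertial frame.
r1 = np.array([1.0, 0.0, 0.0])
r2 = np.array([0.0, 1.0, 0.0])
A_true = np.array([[0.0, 1.0, 0.0],
                   [-1.0, 0.0, 0.0],
                   [0.0, 0.0, 1.0]])
b1, b2 = A_true @ r1, A_true @ r2
print(np.round(triad(r1, r2, b1, b2), 3))   # recovers A_true
```

Real systems add noise weighting (QUEST-style estimators) and fuse the optical fix with the gyros between sightings, but the underlying geometry is the same.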

Condensed Matter

I just ran across the term "condensed matter" and found they were talking about ordinary solids and liquids. I remember taking an astronomy class some years ago and I could have sworn that "condensed matter" referred to the stuff neutron stars and white dwarfs were made of. Or has my memory slipped a gear? — Preceding unsigned comment added by 50.43.33.62 (talk) 17:25, 12 September 2015 (UTC)[reply]

Two applications of the same term. That's all. "Condensed" means "all packed together tightly"; in the context of chemistry it means solids and liquids, and in the context of stellar physics, it means neutron star matter. It's just like how the word "nucleus" means different things depending on whether you're in biology class or chemistry class. --Jayron32 19:00, 12 September 2015 (UTC)[reply]
Bose–Einstein condensate is rather like the neutron star stuff (except it's at the opposite end of the temperature scale) — very strange matter (... and I'm using strange in the everyday sense here). Dbfirs 20:52, 12 September 2015 (UTC)[reply]
In neutron stars one finds degenerate matter. μηδείς (talk) 00:09, 13 September 2015 (UTC)[reply]
Thanks, that's the link I was looking for. (very strange that I missed it!) Dbfirs 11:50, 13 September 2015 (UTC)[reply]

Airline safety

I have always heard that "flying is safer than traveling by car" on a per-mile basis, and I believe that is true. But what about on a per-trip basis? Say I make about two car trips a day (I drive to work and then I drive home). In 40 years that could add up to 25,000 trips. I've had two accidents that were serious enough to make my car undrivable. I don't know how many airline flights I've taken; not many, maybe 100. My point is that all you can do about these kinds of things is decide whether or not to make the trip, so the odds of an accident on a per-trip basis are more meaningful than on a per-mile basis. — Preceding unsigned comment added by 50.43.33.62 (talk) 17:55, 12 September 2015 (UTC)[reply]

This document has data on airplane accidents per departure, which is (I think) what you're looking for. --Jayron32 19:03, 12 September 2015 (UTC)[reply]
Even if just one trip is considered, cars typically are driven among many other cars, pedestrians, other obstacles, faulty traffic lights, etc. Airliners instead fly along dedicated air corridors, the aircraft have autopilots which often guide them, there are two pilots who cross-check each other, and there are air traffic controllers, unlike lone car drivers. Additionally, on the road you can collide with a drunk driver. Because of all that, even a single air trip is considered safer than one by car. Brandmeistertalk 19:10, 12 September 2015 (UTC)[reply]
All those extra safety steps are necessary because airplanes are inherently more dangerous. For example, if a car catches fire while being driven, you only have to pull over and get out. If an airplane catches fire mid-flight, that's very difficult to survive. Now, if you put in enough extra safety features, it's possible to make an inherently unsafe thing safer than an inherently safe thing (with no safety measures taken), and cars are inherently somewhat dangerous, too, just not as bad as airplanes.
If airplane quality, maintenance and operation requirements were as lax as they are for cars, you'd have far more deaths in airplanes. StuRat (talk) 16:19, 16 September 2015 (UTC)[reply]

Jayron's reference provides accident and departure statistics for worldwide commercial flights from 1959 to 2013 (though USSR and CIS planes were excluded due to lack of data). This group of planes flew 660 million flights over that period and suffered 1859 accidents, including 612 that caused at least one fatality, with about 30,000 fatalities overall. So that's about a 1 in a million chance of stepping onto a plane and having someone on that plane (not necessarily yourself) die, and a risk of any type of accident only three times as high. According to the US government, there are about 10 million car accidents per year, with about 40,000 deaths by car accident per year (though this number counts every individual death, not just the number of accidents that caused a death, as well as pedestrians killed by cars, so it is not totally comparable to the air accident stats). Let's take your numbers as typical and assume the average American makes 700 or so car trips per year. That would be 210 billion car rides per year, with one accident per 21,000 car rides on average, and one death per 5 million or so car rides. Of course, keep in mind this is a very back-of-the-envelope comparison, and it misses the fact that the most modern airplanes are much safer than those of the previous generation. I'm basically comparing today's cars to every plane from the last five decades, which is a little unfair. Someguy1221 (talk) 00:43, 13 September 2015 (UTC)[reply]
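As an aside, the arithmetic above is easy to redo or update. Here is a short Python sketch using the figures quoted in this thread plus an assumed US population of 300 million, so treat the inputs as rough placeholders rather than authoritative data.

```python
# Rough per-trip risk comparison using the figures quoted above.
flights = 660e6             # worldwide commercial departures, 1959-2013
fatal_accidents = 612       # flights with at least one fatality
all_accidents = 1859

car_trips_per_person = 700  # assumed car trips per person per year
us_population = 300e6       # assumed, for scale
car_accidents_per_year = 10e6
car_deaths_per_year = 40e3

car_trips = car_trips_per_person * us_population  # ~210 billion rides/year

print("flights per fatal accident: %d" % (flights / fatal_accidents))           # ~1.1 million
print("flights per any accident:   %d" % (flights / all_accidents))             # ~355,000
print("car trips per accident:     %d" % (car_trips / car_accidents_per_year))  # ~21,000
print("car trips per death:        %d" % (car_trips / car_deaths_per_year))     # ~5.3 million
```

The same caveats apply as above: deaths are counted differently for cars and planes, and older aircraft drag the fleet average down.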

Indeed; ultimately, what we choose as our denominator when we normalize statistics for comparison is purely a heuristic. Should we compare number of trips? Number of miles? Number of passengers multiplied by number of miles? Number of dollars spent? Whichever value we choose represents a heuristic model of our threat: if we count miles, that implies that (for some reason) we believe the risk is uniformly distributed over distance; if we choose number of trips, that suggests that we believe the risk is quantized - e.g., because of the adage that the riskiest part of a flight is the takeoff and landing. There is no universally correct answer.
I feel safer when I fly myself than when I fly on a commercial airline, even though accident statistics very clearly show that airline transport is safer than general aviation. This may be an illusion, but it is a real psychological effect. The process of Aeronautical Decision Making is the "systematic approach to the mental process of evaluating a given set of circumstances and determining the best course of action." It means becoming fully informed about all the pertinent objective facts of a situation. It also means to think clearly through the details, and to be aware of our own mental limitations. Finally, it means to take a reasonable course of action, based on all available data.
It is also worth emphasizing the formal distinction between probability and statistics, because this is relevant to making good decisions about outcomes.
In my short career as a (non-commercial) aviator, I've seen many mechanical failures - on the ground and in the air. Thus far, none of these interesting occurrences have resulted in a fatal accident. But from this perspective, I recognize the widespread fallacy of fixating on Gaussian distributions. The mean and median event rate for a large population has zero impact on when I will experience an event. This is the causation-correlation fallacy. It is tragic that in the basic tiers of formal schooling, we spend so much time studying bell-curves for large populations. We ought to spend more time studying Poisson distributions and their effect on probability. Bell curves are fantastic ways for institutional regulators to study safety on the macro-scale, and do provide actionable information if your decisions can affect large numbers of events. However, I am only one individual. I do not represent 300 million air travelers; I do not actually feel effects of n-accidents-per-hundred-million. What I care about is likelihood of a single event - one single event, not n-events-per-mille - and all I care about is how that single event will affect me (and my aircraft and my passenger). Recognition, and realistic understanding, of these types of probability distributions, is more useful for me to inform my judgement than all the bell-curves in the world. My aircraft will not suffer an accident because the national average for aircraft predicts it. My aircraft will only suffer an accident if a mechanical or systems failure occurs, or if there is a fire, or if I command the aircraft to do something unsafe, or if some outside occurrence creates an unrecoverable situation. None of that is caused by nation-wide average. Quite the opposite: the nation-wide accident rate is caused by the aggregation of all of these individual events.
This perspective completely changes the way I evaluate, and reduce, my risk - whether the risk is related to automobile traffic, operating or riding in aircraft, or participating in any of the other uncertain activities of ordinary life.
Nimur (talk) 15:44, 14 September 2015 (UTC)[reply]
I liked your answer so much I stole it for my blog. — Preceding unsigned comment added by 50.43.33.62 (talk) 15:30, 15 September 2015 (UTC)[reply]
Sure... thanks for letting me know. If you're sending it out to a very large audience, I can copy-edit my writing to bring it up from "conversational" to "print-worthy" quality. In any case, all of my contributions at Wikipedia are released under the GFDL license, which permits you to republish my work, subject to those reasonable terms. Nimur (talk) 15:01, 16 September 2015 (UTC)[reply]
But Poisson would scare them even more, as the probability of an event occurring seems highest right after an event has just occurred. It's not causative, just an outcome of a Poisson-style counting of events (i.e. wait for a bus and, every minute, mark down whether it has come; if it takes 5 minutes, the estimated probability jumps from 0 to 1 out of 5). It's really hard to convince people that random events aren't causative when the estimated probability jumps up after each event. --DHeyward (talk) 23:14, 16 September 2015 (UTC)[reply]
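To attach numbers to the single-event framing Nimur and DHeyward are discussing: for a Poisson process, the chance of seeing at least one event in a given exposure depends only on the rate multiplied by the exposure. The per-hour rate below is deliberately made up, purely to show the arithmetic.

```python
import math

def prob_at_least_one_event(rate_per_hour, hours):
    """P(at least one event) for a Poisson process with a constant rate.

    The process is memoryless, so this depends only on rate * hours,
    not on how long ago the last event happened.
    """
    return 1.0 - math.exp(-rate_per_hour * hours)

# Made-up illustrative rate: one event per 100,000 exposure hours.
rate = 1.0 / 100000
for hours in (1, 100, 1000, 10000):
    print(hours, "hours ->", round(prob_at_least_one_event(rate, hours), 5))
```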

Taser conductivity

So why aren't those folks holding tasered persons (in the video and elsewhere) affected by the electrical current too, even though it's about 50 kV?--93.174.25.12 (talk) 18:58, 12 September 2015 (UTC)[reply]

The firers are insulated, and the voltage is not relative to earth, though if one person is holding the victim then they are likely to get a small shock because they form a parallel circuit. When there is one person each side, as in the video, the circuit is completed only through the victim, and the restrainers get no shock because they are not part of the circuit. Dbfirs 20:39, 12 September 2015 (UTC)[reply]
Electricity mostly follows the path of least resistance; it does not spread randomly. That path is between the two electrodes, which are in contact with the target.--Jubilujj 2015 (talk) 22:27, 14 September 2015 (UTC)[reply]
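To put Dbfirs's parallel-circuit point into a formula (the resistances here are purely symbolic; real body and skin-contact resistances vary widely): if a helper's hand-to-hand path of resistance \(R_h\) bridges the section of the victim's body between the electrodes, of resistance \(R_v\), the helper carries only

\[
I_h = I_\text{total}\,\frac{R_v}{R_v + R_h}
\]

of the pulse current. Since the helper's path runs through two skin contacts and another whole body, \(R_h\) is normally much larger than \(R_v\) and \(I_h\) is small; and with one restrainer on each side there is no bridging path at all, so \(I_h\) is zero, which is the case described above.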