Wikipedia:Reference desk/Archives/Science/2015 August 28

Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


August 28

Looking at Earth from Space

If someone were looking at Earth from a distance of 2,000 light years, would he be able to see what was going on here? Would he be able to see the Romans from Jesus's time? Or would the information be lost, as the light travels through space, in the same way that we cannot see a picture through a cloud? --Jubilujj 2015 (talk) 01:05, 28 August 2015 (UTC)[reply]

Assuming you had the technology to resolve images at that distance (which as of today is impossible), yes, you would see the Earth as it was 2,000 years ago. Whenever you look up at the night sky you are looking into the past. In order for you to see a star (or a planet) the light must reach your eyes first. So if something is 2,000 light years away, the light would have needed 2,000 years to reach your eyes. --Stabila711 (talk) 01:17, 28 August 2015 (UTC)[reply]
There may not even be technology to resolve the light at a certain distance. Given that light is quantized into photons, eventually you don't collect enough of them to make anything more meaningful than a "kinda bluish dot". I'm not sure what the math behind such limits is, but eventually, even if you could collect and perfectly resolve every single photon, you may not have enough to make a meaningful image. --Jayron32 02:23, 28 August 2015 (UTC)[reply]
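To put rough numbers on that photon-starvation worry, here is a back-of-the-envelope sketch. The solar luminosity, Earth's albedo, the half-sphere spreading of the reflected light, and the 10 m aperture are all my own illustrative round numbers, not figures from this thread:

```python
# Back-of-the-envelope photon budget for viewing Earth from 2,000 ly.
# Every figure here is an assumed round number, not from the thread.
import math

L_SUN = 3.8e26        # solar luminosity, W
AU = 1.496e11         # astronomical unit, m
LY = 9.461e15         # light year, m
R_EARTH = 6.371e6     # Earth's radius, m
ALBEDO = 0.3          # fraction of sunlight Earth reflects
E_PHOTON = 3.6e-19    # energy of one ~550 nm photon, J

# Sunlight intercepted by Earth's disc, times the reflected fraction.
flux_at_earth = L_SUN / (4 * math.pi * AU**2)              # ~1360 W/m^2
reflected_power = flux_at_earth * math.pi * R_EARTH**2 * ALBEDO

# Crudely spread the reflected light over a half-sphere of radius 2,000 ly.
d = 2000 * LY
flux_at_observer = reflected_power / (2 * math.pi * d**2)  # W/m^2

aperture_area = math.pi * (10.0 / 2) ** 2                  # a 10 m telescope
photons_per_second = flux_at_observer * aperture_area / E_PHOTON
print(f"{photons_per_second:.1e} photons/s "
      f"(one every {1 / photons_per_second:.0f} s)")
```

Under those assumptions, a generous 10 m telescope catches one reflected photon from Earth every three minutes or so - and that is the entire planet collapsed into a single pixel; splitting the image into many pixels only makes the starvation worse.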
@Jayron32: Valid point. We have been able to directly image a few exoplanets at this point (see: List of directly imaged exoplanets); however, those are all extremely blurry images, and the furthest away was 477 light years. There must be some limit at which the photons physically can't be resolved regardless of how powerful the imaging technology is. Right now we have the technology to resolve crystal-clear images from satellite range (they have been known to pick out clear shots of license plates on cars). That isn't light-year range, but it is still pretty impressive considering the first photograph was taken in 1822 (less than 200 years ago). I don't really doubt that in the future we will be able to directly resolve close exoplanets. As for the further-away ones, only time will tell for sure. --Stabila711 (talk) 02:34, 28 August 2015 (UTC)[reply]
That "reading a car license plate from orbit" is urban legend - based around a myth that popped up during the cold war. Oft-repeated, never actually verified.
Let's do the math:
The angular resolution for an ideal telescope is: R = w/D (R is the angular resolution in radians, w is the wavelength of the light, and D is the diameter of the mirror/lens). Assuming they got the thing up there in a regular rocket, the mirror is unlikely to be bigger than Hubble's - and that's 4 meters. The longer the wavelength, the worse the resolution - so let's be generous and pick 400 nm, which is way up in the ultra-violet. The angular resolution is therefore: 10⁻⁷ radians. If the satellite is up at 100 km then x = tan(10⁻⁷ radians) * 100,000 m - assuming it's looking straight down... but if it's looking at an angle in order to read the license plate, it'll be much more than that... but we'll be generous. So the likely best resolution is around 10 cm... you'd need to be around ten times closer or have a ten times bigger mirror to read a car license plate... so no, they can't do that. In practice, they get around 15 cm - good enough to see that the car *has* a license plate, but not good enough to read it.
SteveBaker (talk) 03:48, 28 August 2015 (UTC)[reply]
@SteveBaker: Yeah, math! But tan x ≈ x for small x, so "tan(10⁻⁷ radians) * 100,000 m" is 1 cm, not 10 cm. Of course you were quite generous with other assumptions, from dropping the 1.22 factor from the Rayleigh criterion to using an altitude of 100 km. Hubble orbits at 550 km, though reconnaissance satellites are often in eccentric orbits with perigees half that of Hubble's. For instance USA-245, the most recently launched KH-11, is in a 266 km × 1008 km orbit according to its most recent TLE. Also, Hubble's mirror has a diameter of 2.4 m, not 4 m. One other nit: the visible spectrum runs about 390 nm - 700 nm, so 400 nm is not "way up in the ultra-violet", but is just plain violet. -- ToE 09:31, 29 August 2015 (UTC)[reply]
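For anyone who wants to replay that arithmetic, here is a small sketch using ToE's corrected figures (a 2.4 m mirror, the 1.22 Rayleigh factor, and a 266 km perigee); the choice of 550 nm green light and a straight-down view are my own assumptions:

```python
# Diffraction-limited ground resolution for a Hubble-class spy satellite,
# using the corrected figures from the exchange above.
import math

wavelength = 550e-9   # m, mid-visible green (my choice)
diameter = 2.4        # m, Hubble-class mirror
altitude = 266e3      # m, USA-245's perigee, looking straight down

theta = 1.22 * wavelength / diameter       # Rayleigh criterion, radians
resolution = math.tan(theta) * altitude    # ~ theta * altitude for small theta

print(f"angular resolution: {theta:.2e} rad")
print(f"ground resolution:  {resolution * 100:.0f} cm per resolved element")
```

That comes out at roughly 7 cm per resolved element - consistent with the thread's conclusion: enough to see that a car *has* a license plate, nowhere near enough to read it.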
My mistake. I struck out that claim in my last message. Thank you for that information. I hate it when I get caught up in a common misconception. --Stabila711 (talk) 04:06, 28 August 2015 (UTC)[reply]
Thanks for the math, Steve. Interferometric imaging techniques can allow for sub-wavelength imaging. The CHARA array, for instance, achieves a resolution of 2.5×10⁻⁹ radians, despite having component telescopes of much lower nominal resolution than the one in Steve's example. Jayron is concerned about running out of light at long distances - the answer to that is to capture more light with a bigger telescope. You can calculate this in both directions: with a telescope of given parameters, how far can it be from Earth while still resolving something of a given size; and at a given distance, how large does your telescope have to be to resolve something of a given size? If you want to see people from two light years away, I have a feeling your telescope will be a real monster. Someguy1221 (talk) 04:40, 28 August 2015 (UTC)[reply]
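Here is the second of those two calculations sketched out, using the original poster's 2,000 light years rather than two; the 550 nm wavelength and the 1 m "person-sized" feature are my own assumptions:

```python
# How large a mirror would resolve a ~1 m human figure at 2,000 light years?
# Rayleigh criterion again; the wavelength and feature size are assumed.
import math

wavelength = 550e-9   # m, visible light
LY = 9.461e15         # m
AU = 1.496e11         # m

distance = 2000 * LY
feature = 1.0         # m, roughly person-sized

theta_needed = feature / distance               # required angular resolution
diameter = 1.22 * wavelength / theta_needed     # required aperture

print(f"required resolution: {theta_needed:.1e} rad")
print(f"required aperture:   {diameter:.2e} m (~{diameter / AU:.0f} AU)")
```

That works out to an aperture on the order of 85 AU - roughly twice the diameter of Neptune's orbit - so "a real monster" indeed, and that's before Jayron's photon shortage even enters the picture.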
Interferometry, and its friends synthetic aperture and superresolution, are not magical ways to get around the angular resolution limits. What these techniques do is trade off spatial sampling against time-domain sampling. That works great if the object you're imaging is perfectly stationary. It doesn't work so well if the object moves!
Let me provide an example to tie into this question - to help develop some intuition - have a look at our article on Pluto. Until very recently - like, last month (July 2015) - our best photographs of Pluto were barely a few pixels. Here's a webpage by Marc Buie on Earth-based photometry of Pluto. See how blurry the composite maps are, even after using all kinds of algorithms and technology to enhance them? That's an object in our solar system. All the biggest ground-based and space-based telescopes had photographed Pluto; all the best computational imaging algorithms had been applied; all the state-of-the-art image combination methods were used ... and the best photos we had were just a few pixels. When we talk about extrasolar planets, where ranges are measured in light years instead of astronomical units, the prospects for imaging the planet are much worse. We're lucky to know that a planet is there - one single pixel! It's far beyond our best current technologies to actually see what the planet looks like, let alone to image features or structures on its surface.
If you wanted to start playing around with theoretical physics - what could we image if we had no constraints on our telescope... suppose we could build a telescope whose collecting area had a diameter measured in A.U.s? Suppose we ignore the engineering challenges, and only concern ourselves with known limits of optical physics. Would it buy us anything to build a telescope mirror the size of the entire solar system? There are other problems besides angular resolution. The optical depth of interstellar space is non-zero: light is attenuated, and refracted, and distorted, by stray dust and atoms floating around in the cosmos. And the planet's atmosphere might not be transparent either! There are issues related to noise - there really might be very few photons arriving in our direction. We can be very quantitative about these limitations; but the reality is, long before those laws of physics dominate the problem-space for imaging extrasolar planets, the practical details of building such a device would manifest. We cannot build an optically-smooth mirror of such a size. We could not aim it or stabilize it. These are real problems of material science, control engineering, and optics - not to mention economics!
So instead of hypotheticals, take a look at what real imaging scientists do to solve more tractable problems: we use stochastic signal processing - statistics and math - to improve image quality. We use adaptive optics and active control to build better machinery. We study advanced materials to build more perfect glass for reflective and refractive optics. We go back to core principles of mathematics to formalize optimal solutions to optical questions about focus, color, illuminants, and the imaging condition. We really have entered a fascinating time - the era of computational photography - because it is now easier to build powerful computers out of silicon than to build powerful optics out of silica. This is how we are going to image in the future - whether we are photographing distant astronomical objects or nearby domestic events. You can bet that if there are any intelligent E.T.s out there on some distant world, they probably learn more about our planet by using a computer than a telescope.
Nimur (talk) 05:01, 28 August 2015 (UTC)[reply]
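As a quick sanity check on that "barely a few pixels" claim, compare Pluto's apparent size with Hubble's diffraction limit; the distance and wavelength below are round numbers I've assumed:

```python
# Sanity check: how many resolution elements does Pluto span for Hubble?
# Round-number distance and wavelength assumed.
wavelength = 550e-9        # m
mirror = 2.4               # m, Hubble's primary
pluto_diameter = 2.377e6   # m
pluto_distance = 5.0e12    # m, roughly 33 AU

hubble_resolution = 1.22 * wavelength / mirror      # radians
pluto_angle = pluto_diameter / pluto_distance       # radians

print(f"Pluto spans ~{pluto_angle / hubble_resolution:.1f} resolution elements")
```

Pluto spans only about two resolution elements for Hubble - which is exactly why even the best pre-New Horizons maps were a handful of blurry pixels.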
So what you're saying is...Hack the planets?   --71.119.131.184 (talk) 07:20, 28 August 2015 (UTC)[reply]
I think what I'm saying is, I once read a book called Basic Earth Imaging, expecting to learn how to photograph Earth from space. I was still coming down off a high where I thought I might be going to the moon, and I was seeking some rad photo tips. It took me at least a few chapters before I even realized that the "imaging" exemplified in that book did not use visible light. (I now recognize this failure-of-comprehension as one of my most stubborn academic accomplishments). It took a few more chapters and a couple of years of more math to realize that arbitrary distinctions between types of waves don't make any difference for imaging.
It's a good thing that humans have already developed the technology to cluster, register, and synthesize composites from hundreds of millions of individual conventional photographs...
Nimur (talk) 14:53, 28 August 2015 (UTC)[reply]

Would it ever be practical to use a telescope to look into Earth's past?

OK, 2000 light years is not going to happen, but how about the very recent past, like a few minutes ago? For example, if we had a huge telescope on Mars, could we have used it to find out which way the MH370 plane turned when it disappeared from radar (if Mars was facing that side of Earth at the time)? I'm still rather skeptical. By the time we could construct such a telescope on Mars, I'd expect us to have enough satellites around Earth, recording every moment, that those would provide much better info. StuRat (talk) 16:13, 28 August 2015 (UTC)[reply]

Recall that every picture is a picture of the past [1] ;) SemanticMantis (talk) 16:16, 28 August 2015 (UTC)[reply]
Yeah... That... --Jayron32 16:48, 28 August 2015 (UTC)[reply]
When you look at your reflection in the bathroom mirror - you're seeing yourself as you looked about three or four nanoseconds in the past. So, yes!
BUT no matter how big or how far away you place the mirror, you can never see anything back here on Earth that happened before the day you launched it. So no matter how fast or how far you flung it, you wouldn't be able to see what happened to MH370, because that happened before the mirror was launched. The reason for this is that the sunlight that reflected off of MH370 on March 8th, 2014 (about a year and a half before this discussion) is already about one and a half light years from Earth - and speeding outwards at the speed of light. Since we can't launch our mirror at greater than lightspeed, it can never catch up with the light from MH370 in order to reflect it back towards us. The only thing we might be able to do would be to find a gigantic natural reflector out there someplace and wait for the MH370 light to bounce off of it and head back to Earth. That's really only a theoretical possibility - it doesn't seem remotely likely in practice.
We could indeed speculatively build a giant telescope on Mars and use it to record pictures of ourselves as we were somewhere between 4 and 24 minutes ago (depending on where Earth and Mars are in their orbits) - and by the time they'd been transmitted back to us, it would be like having a camera looking 8 to 48 minutes into the past.
Of course such an insanely high-resolution telescope would only be able to record a tiny area of the Earth's surface at a time, and for a good part of the year, it would be looking onto the night side of Earth, which might not be of much use. Another practical problem is that if you suddenly realised that something important happened (say) 1 minute ago, it would be too late to command the Mars Telescope to look towards that event because your radio signal to tell the telescope to move would arrive one minute AFTER the light you wanted it to capture. When you consider all of those things, it might just be easier to install a camera here on earth in the area you care about and simply video-tape the images for later viewing!
Interestingly though - you probably could do this kind of thing with sound waves. Imagine a very sensitive, highly directional microphone, maybe hovering in a drone aircraft high above a city. You could use it to listen to conversations that happened in the recent past because your commands to tell it where to point the microphone would travel much faster than the sound waves you want it to listen to. If the thing was hovering a mile above the ground, you could theoretically listen in to conversations that happened about five seconds ago. I'm not sure that really helps! SteveBaker (talk) 18:36, 28 August 2015 (UTC)[reply]
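To put rough numbers on both of those scenarios, here is a quick sketch; the orbital distances are round numbers I've assumed, so they bracket the range slightly differently from the 4-to-24-minute figures above:

```python
# Rough signal delays for the two scenarios above (distances are assumed
# round numbers, not exact figures from the discussion).
C = 299_792_458.0   # speed of light, m/s
SOUND = 343.0       # speed of sound in air, m/s
AU = 1.496e11       # m

# Earth-Mars one-way light time at a close and a far approach.
for label, d in [("Mars at its closest", 0.38 * AU),
                 ("Mars at its farthest", 2.67 * AU)]:
    print(f"{label}: one-way {d / C / 60:.0f} min, "
          f"round trip {2 * d / C / 60:.0f} min")

# The hovering microphone: sound lag up vs. radio command lag down.
mile = 1609.34      # m
print(f"sound from a mile below: {mile / SOUND:.1f} s")
print(f"radio command down:      {mile / C * 1000:.3f} ms")
```

The one-way light time comes out at roughly 3 to 22 minutes, and the microphone case confirms the "about five seconds" figure: the radio command travels down nearly a million times faster than the sound travels up.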
How would it know where to look for something specific? — Preceding unsigned comment added by Baseball Bugs (talkcontribs) 17:04, 28 August 2015 (UTC)[reply]
Right - by the time you told the telescope where to look, the light from the interesting event you were hoping to capture would have already shot past the telescope, since your communications and the light from the event of interest travel at the same speed. So this would only work if the telescope were looking everywhere at all times, and at that point you may as well just have telescopes orbiting Earth, not far away. Someguy1221 (talk) 09:42, 29 August 2015 (UTC)[reply]
And even that doesn't take into account the probability of clouds blocking the view. ←Baseball Bugs What's up, Doc? carrots→ 11:33, 29 August 2015 (UTC)[reply]
When in doubt, say "wormholes". True, it would make more sense to just wormhole right back to the past, but as an author you can plop that plot device down wherever you feel like, and nobody's likely to make off with it. Wnt (talk) 20:04, 29 August 2015 (UTC)[reply]

Kriegslok

Did the Kriegslok have a mechanical stoker, or did they have to shovel on the coal by hand? 2601:646:8E01:9089:F88D:DE34:7772:8E5B (talk) 05:55, 28 August 2015 (UTC)[reply]

From our German article, this photo seems to suggest that the Class 52 did not have a mechanical stoker; and the DRG Class 45 were only outfitted with a Rostbeschicker many years later (after 1950). Nimur (talk) 06:46, 28 August 2015 (UTC)[reply]
Thanks! 2601:646:8E01:9089:F88D:DE34:7772:8E5B (talk) 08:41, 28 August 2015 (UTC)[reply]
Not mentioned in our article is the fact that mechanical stokers were hardly ever used in the UK (or, I suspect, the rest of Europe). Perhaps due to the smaller size of engines used here, or because a skilled fireman could distribute the coal more efficiently? I'm still looking for a source to confirm this. Alansplodge (talk) 19:57, 28 August 2015 (UTC)[reply]

Placing telescopes on the Moon

This would seem to have some advantages:

1) Almost no atmosphere to distort the image.

2) Low gravity would make it easier to do maintenance: tools, etc., don't float away as they do in space, but they are also much lighter than on Earth.

3) Would be easier to use electricity to rotate the telescope than in space, where any movement causes the "equal and opposite reaction" problem, requiring counterweights, etc. The Moon would serve as the counterweight instead.

4) Nuclear power would be less dangerous, as you wouldn't have to worry about an old satellite falling to Earth and spreading radiation in the Earth's atmosphere.

5) If used during the lunar night, that would ensure no nearby dust particles are illuminated by sunlight. (Is this a problem on Hubble? If so, do they prefer to use it during Earth's night? That would mean each observation period would be rather short on Hubble, as its orbital period around Earth is short.)

Disadvantages:

A) Would have to rotate the telescope to account for the Moon's rotation. The Moon rotates much more slowly than Earth (see the rate sketch just after this list), so this would be less of a problem than for Earth-based telescopes. Space-based telescopes might need no rotation at all, though.

B) The big disadvantage is getting to and from the Moon. Much tougher than getting to and from Hubble, for example. Still, not something we haven't done before (45 years ago). Also, might be good practice for sending people to Mars some day. We could set up a station there for astronomers, if we decide that's important in prepping for Mars.

C) Solar panels could only be used during lunar day.

So, is there any such proposal? StuRat (talk) 16:36, 28 August 2015 (UTC)[reply]
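To quantify point (A) above, here is a quick comparison of sidereal tracking rates; the rotation periods are standard published values:

```python
# Sidereal rotation: how fast must a telescope slew to track the sky?
EARTH_SIDEREAL_HOURS = 23.934   # Earth's sidereal day
MOON_SIDEREAL_HOURS = 655.7     # Moon's sidereal day (~27.3 Earth days)

for body, hours in [("Earth", EARTH_SIDEREAL_HOURS),
                    ("Moon", MOON_SIDEREAL_HOURS)]:
    print(f"{body}: {360.0 / hours:.2f} degrees/hour")
```

A lunar telescope mount only has to turn at about 0.55 degrees per hour, roughly 27 times slower than the 15 degrees per hour an Earth-based mount must manage.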

Proposed, implemented, completed in 1972 during Apollo 16. Here's a photo of Charlie Duke setting it up. Here's our article, Far Ultraviolet Camera/Spectrograph. Nimur (talk) 16:44, 28 August 2015 (UTC)[reply]
Nice; unfortunately, and perhaps predictably, the 15-year-old APOD page has some link rot issues. The scope is still there, but I don't think it can phone home... SemanticMantis (talk) 16:48, 28 August 2015 (UTC)[reply]
It could not be remotely controlled; and even if it could be, it was built using digital electrooptical imaging technologies of the 1970s... in other words, even though it had an electronic photocathode, it required photographic film to store images. This telescope was specifically intended to be operated by astronauts and its photographs were captured on film that had to be hand-carried back to Earth. Nimur (talk) 16:55, 28 August 2015 (UTC)[reply]
(EC) Google Scholar is your friend here. Lots of people have thought about putting telescopes on the moon. Most of the hits here [2] are directly relevant; a few are spurious and are instead about lunar imaging, but I think you can sort that out. Here's one concrete "proposal", in that someone proposed something [3] at a NASA workshop. SemanticMantis (talk) 16:48, 28 August 2015 (UTC)[reply]
...Exactly. It's one of those things that has been so thoroughly studied, over and over and over, by experts for decades, right up until today, when we are actually able to build the technology and execute on it. The fact that this is not a top priority for NASA today seems to imply that a lunar observatory is contraindicated by that expert cost-benefit analysis. Nimur (talk) 16:55, 28 August 2015 (UTC)[reply]
Yup, some info on the politics and cost/benefits of putting humans on the moon again at Constellation_program#Budget_and_cancellation. SemanticMantis (talk) 16:57, 28 August 2015 (UTC)[reply]
Earlier this summer, I linked to Charles Bolden's talk at Keck on the roadmap for American space exploration. That video, and the NASA.gov website (which always contains a link to the present Mission Statement, the Strategic Plan, and so on) are great places to start. Our imaginations are wonderful things, but space exploration requires imagination modulated by reality - and that's what lets us really push the boundaries of human knowledge. Nimur (talk) 16:58, 28 August 2015 (UTC)[reply]
It's really hard for me to see a good reason for taking a telescope to the Moon. A telescope in space can be pointed any direction at any time, can be exposed to a constant amount of sunlight for consistent temperature, can be extremely lightweight once unfolded in the absence of any gravity, and could even (with sufficiently good control...) separate into entirely separate pieces with nothing but space in between for long focal lengths or accurate starshades. The only advantage the Moon would seem to have is that it's a source of raw materials, if you have a good robot infrastructure in place, but still, why not shoot the materials up into space for assembly? Wnt (talk) 17:05, 28 August 2015 (UTC)[reply]
Many moons ago Scientific American had a cover story of a lunar synthetic aperture interferometer telescope called LOUISA (Lunar Optical-UV-IR Synthesis Array.) The Moon is nice for setting up an array of small telescopes because the surface is geologically stable. Though I suspect the advent of adaptive optics takes away a lot of the need for going off-Earth (for visible light that is, IR and UV still need to be done outside the atmosphere.) 91.155.193.199 (talk) 18:12, 28 August 2015 (UTC)[reply]
Are you limiting your question to optical telescopes? Far side of the Moon#Potential discusses the placement of a radio telescope there. NASA Sites Lunar Far Side For Low-Frequency Radio Telescope (Forbes, Aug 2013) describes the far side of the moon as "the most radio quiet spot in the inner solar system". -- ToE 18:33, 28 August 2015 (UTC)[reply]
Yeah. The far side will shield it from noise (from say) American truckers going: “Ah, breaker one-nine, this here's the Rubber Duck. You gotta copy on me... yahoo we've got a convoy ”--Aspro (talk) 18:42, 28 August 2015 (UTC)[reply]
No, I didn't mean to limit the discussion to optical telescopes. Another advantage is that old junk can be just left on the Moon, versus having to carefully deorbit it so as to not kill anyone on Earth. StuRat (talk) 21:31, 30 August 2015 (UTC)[reply]

Why is skin more protective than mucosal membranes?

Why is skin more protective than mucosal membranes for some pathogens? What makes them more protective? Is it because they are thicker? Is it because of their composition or structure? 140.254.136.157 (talk) 19:59, 28 August 2015 (UTC)[reply]

The skin has additional layers of cells which provide additional protection. There are several, but the toughest is probably the Stratum corneum, the so-called "horny layer" that consists of dead and desiccated Epidermis cells and has high concentrations of keratin. Mucous membranes are not properly skin, lacking the structure of skin, and consisting instead of Epithelium (the same cells that line each of your internal organs) along with mucus-producing cells called goblet cells. Despite being exposed to open air in some places (like the lining of the nose and the genitals), these mucous membranes aren't skin; the places where mucous membranes meet the skin are called Mucocutaneous zones. --Jayron32 20:09, 28 August 2015 (UTC)[reply]
Skin is thicker and more resilient. Your skin is full of keratin, which makes it elastic and waterproof. Sweat and sebum (skin oils) inhibit microbe growth and physically trap microbes so they can be shed. And, your normal benign skin flora compete for resources with any harmful pathogens, limiting their growth. --71.119.131.184 (talk) 20:15, 28 August 2015 (UTC)[reply]
That makes sense. Sucking one's thumb or biting one's nails is disgusting and unsanitary. And yet, people lick open wounds, suck their thumbs, and bite their nails. Why do people engage in such bad habits? 140.254.136.157 (talk) 20:18, 28 August 2015 (UTC)[reply]
Kind of curious though that these habits like nose-picking and nail-biting always seem to target the parts of the body where a pathogen can be exposed to bodily secretions and dry out a while. It's as if the habit were trying to arrange some manner of inoculation with an inactivated pathogen. But I don't know that... Wnt (talk) 11:06, 29 August 2015 (UTC)[reply]
Licking wounds may provide moisture which will aid in the healing process. ←Baseball Bugs What's up, Doc? carrots→ 11:31, 29 August 2015 (UTC)[reply]
See lysozyme and wound licking for our relevant articles. Tevildo (talk) 22:29, 29 August 2015 (UTC)[reply]

Is 3D printing really "printing"?

I checked the archives and 3D printing#Terminology and methods; the former had an instance of someone referring to "3D 'printing'". My question: is 3D printing really printing in a meaningful sense? Isn't it really more like small-scale, adaptable manufacturing? --BDD (talk) 20:19, 28 August 2015 (UTC)[reply]

Words mean what speakers and listeners understand them to mean. Insofar as the term is nearly universally understood to mean the process so described in our article, that's exactly what it means, and it is 100% the correct term. The mistaken belief that words can only mean what they used to mean, and thus language can never change, is called the etymological fallacy and the fallacy is an important part of that term for a reason. --Jayron32 20:24, 28 August 2015 (UTC)[reply]
3D printing is done using a printing-type process for each layer, so yes, 3D printing is a very apt name for the process. On the other hand, having a robot drill out shapes, turn them on a lathe, punch them out, or mold them has nothing in common with printing. Dmcq (talk) 20:28, 28 August 2015 (UTC)[reply]
I think it's an appropriate word here. The real difference between an inkjet printer and a 3D printer is that there is a third direction of movement...some of the liquid resin printers work by drawing a sequence of 2D images - a lot like a laser printer. But if we had to find a different word then maybe "3D plotter" might make more sense than 3D printer.
But as Jayron says - this is the name that we've adopted - it doesn't have to make sense - and that's that. It's common to re-purpose words - what makes a computer "mouse" anything like Mus musculus? Why do we still call letters "upper case" and "lower case" now that we don't have a nice wooden case of metal type on the upper shelf of a typesetter's desk for capital letters and a separate case of type on a lower shelf for the rest? Where is the disk-shaped object in a solid-state disk drive? Why is the key that nowadays ends a paragraph, or enters data into a form, called "RETURN" - well, "Carriage Return" - as in returning the carriage on a typewriter? Some keyboards label it "ENTER"... which makes more sense - but change is slow to happen.
Language is made up on the fly...and not always in the most sensible way. SteveBaker (talk) 04:06, 29 August 2015 (UTC)[reply]
"3D plotter" might be an appropriate name for the ones which extrude plastic and draw shapes rather than doing 2D images one on top of the other; I guess, though, they tend to work in terms of layers of 2D images at the moment. It would be interesting to see a 3Doodler driven by computer. Dmcq (talk) 14:25, 29 August 2015 (UTC)[reply]
The term 3D "printing" is about as appropriate as "dialing" your telephone. But inappropriateness never stopped people from screwing with the language. ←Baseball Bugs What's up, Doc? carrots→ 11:30, 29 August 2015 (UTC)[reply]
3D printers are not really printing. In the same way, submarines are not really swimming. --Yppieyei (talk) 15:29, 29 August 2015 (UTC)[reply]
It used to be called "3D rapid prototyping", or referred to by the individual technologies, such as stereolithography, but then the name changed overnight. I'm not sure why. StuRat (talk) 21:35, 30 August 2015 (UTC)[reply]
"3D rapid prototyping" is a terrible term. Firstly, it's often exceedingly slow. Some 3D printers might take 6 to 10 hours to make a large object that a skilled person could make with a lathe, drill press and milling machine in 20 minutes...so "rapid" doesn't always apply..."prototyping" is also a misnomer because plenty of people use the technology for manufacturing as well as for making that initial prototype. So for sure that terminology had to die!
A common use-case for me is: "The shelf bracket on my refrigerator broke - so I sketched up a replacement in Blender and 3D-printed a new bracket in ABS plastic in about 15 minutes". In an ideal world, I'd have gone to the SUPPORT department of the refrigerator manufacturer (or maybe WikiCommons) and downloaded the design from there. (This actually happened to me a couple of weeks ago.)
So with that use case in mind, I think the name came about like this: These machines shrank from costing $100,000 and being the size of a refrigerator to costing $500 and being the size of a microwave oven. We collectively realized that eventually, we'd all have a machine on our desktop, where you took a 3D description of an object, and with a single mouse-click, it would be manufactured for you. Arguably, we've already reached that point - but equally arguably, we have a long way to go yet...but either way, we're all very clear that this is the objective of the exercise. That experience is so close to what we do when we print a diagram on an inkjet printer that we truly feel that we're just printing out a diagram, but with the third dimension fully realized. 3D printers should cost about what inkjet printers cost - they should be similar in size (and it would be REALLY nice if they were also similar in speed).
The analogy is so compelling that the use of the term just 'feels right'.
The term "3D printing" carefully leaves out the precise mechanism by which the 3D description gets turned into a physical object - there are at least a dozen totally different technologies that can be described with this one handy phrase. It's nice that I can conveniently tell you that I'll just do a quick 3D print of my design without having to tell you that I'm using plastic deposition or selective sintering or whatever...just as, when I print a document, I don't have to tell you whether I'll be using a laser printer or an inkjet.
So the terminology conveniently abstracts away the detailed mechanism and describes only the start of the process and the end result...just as "printing" does for documents.
SteveBaker (talk) 06:34, 31 August 2015 (UTC)[reply]