Wikipedia:Reference desk/Archives/Science/2015 August 11

Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


August 11

Can stimulating the brain really cause an orgasm?

Amy Farrah Fowler said in the Big Brain Theory: "Does volunteering for a scientific experiment in which orgasm was achieved by electronically stimulating the pleasure centres of the brain count?" Is it possible to stimulate the brain to orgasm? Or is orgasm more complex than that? What about release of oxytocin? Wouldn't oxytocin be required in an orgasm? What would cause the release of oxytocin? 71.79.234.132 (talk) 00:26, 11 August 2015 (UTC)[reply]

Minor nit. It's the Big Bang Theory, not 'Brain'. Dismas|(talk) 00:42, 11 August 2015 (UTC)[reply]
A scientist stimulated various parts of mouse brains with electrodes to see what each part does. When he found the rodent pleasure center he gave the mouse the on switch and it pressed the lever until it starved to death. Sagittarian Milky Way (talk) 00:43, 11 August 2015 (UTC)[reply]
These were experiments by James Olds and Peter Milner in the 1950s, involving rats, not mice. None of the rats starved to death, though they did self-stimulate to the point of exhaustion. They were disconnected as it was thought they would starve to death if left attached. Additionally, females ignored their pups while males ignored potential mates, and both sexes became willing to endure pain in exchange for stimulation. I haven't been able to find any online resource that discusses the experiment in detail. Someguy1221 (talk) 00:54, 11 August 2015 (UTC)[reply]
Then I must've forgotten that they only found out rodents likely would stand on the lever till death but it didn't happen. And also that they were rats. Sagittarian Milky Way (talk) 01:10, 11 August 2015 (UTC)[reply]
You most likely remembered accurately something you read online, possibly even at a normally reputable source. Unfortunately, this experiment has been reported and re-reported so many times with no one checking the original publication (because oh God, you'd need to visit a library), that the version of the experiment remembered in popular culture is not quite what really happened. I also found we have an article on Brain stimulation reward that discusses some of these experiments. Someguy1221 (talk) 01:25, 11 August 2015 (UTC)[reply]
There was indeed an experiment in which rats starved, but it is important to understand the details. The experiment was set up so that for an hour each day a rat had access to two levers, one of which delivered food, the other brain stimulation reward. However, a rat needed to press the food lever so many times for each food pellet that it essentially had to spend all its time on the food lever to get enough food to maintain weight. It was found that in this situation some of the rats nevertheless spent most of their time pressing the brain stimulation lever, and starved. Reference: http://psycnet.apa.org/journals/com/60/2/158/. Looie496 (talk) 13:13, 11 August 2015 (UTC)[reply]
That's mean. I do remember that now; there's no reason there couldn't be one famous experiment and one famous follow-up (besides tightening ethics rules). Sagittarian Milky Way (talk) 13:50, 11 August 2015 (UTC)[reply]
We have decent coverage on Orgasm and Oxytocin#Synthesis.2C_storage.2C_and_release. Here is a nice review article summarizing the science of the human orgasm [1]; it has information on psychology, biochemistry, neurotransmitters, and hormones. Lots of refs therein. Here is a more specific article that discusses brain imaging and sexual response [2], and it discusses all kinds of things about what regions light up (and how, and when) in the sexual response cycle. In a sense, all orgasms come from stimulating the brain. I take it you mean without other tactile stimulation and relying solely on electrical stimulus in a laboratory context? I can't find any record of that, but WP:OR it's not an insane claim. It would probably be available in a mouse model before it got tried on humans. SemanticMantis (talk) 00:50, 11 August 2015 (UTC)[reply]
Pleasure center#Human experiments describes a few accidental experiments of this on humans. These were people that were already getting electrodes implanted into their brains for other medical reasons. In the three cases listed, it seems the doctors just happened to land the electrodes in such a position that stimulation caused pleasure. I should note that the actual scientific literature I've read describes the experience as "orgasm-like" rather than "orgasm". Someguy1221 (talk) 00:57, 11 August 2015 (UTC)[reply]
This idea was explored in a highly believable, futuristic movie in the early 1970s.[3]Baseball Bugs What's up, Doc? carrots→ 01:54, 11 August 2015 (UTC)[reply]
And also in The Terminal Man by Michael Crichton. Spoiler: it doesn't go well. Sjö (talk) 07:49, 14 August 2015 (UTC)[reply]
I haven't seen a reference to this, but I keep thinking there are philosophical problems when stimulating the brain. How do you know that the person really feels pleasure and isn't just displaying positive reinforcement? Couldn't the treatment simply create a compulsion to seek out stimulation (including claims of pleasure) without actually being pleasant? I suppose at some point after the fact you just take someone's word for what he felt, but ... could positive reinforcement deceive even his memory of the sensation? Wnt (talk) 11:41, 11 August 2015 (UTC)[reply]
You can't know that the person feels pleasure as feelings are a private experience and cannot be measured directly. A person might tell you that they are experiencing pleasure, but what does this mean? Don't forget that whereas one person being whipped or caned might report this as being unpleasant, others might report it as being pleasurable. This is where the concept of positive reinforcement is of benefit as this refers to a change in observable behaviour, rather than the feeling of pleasure which (as above) cannot be measured directly.DrChrissy (talk) 13:33, 11 August 2015 (UTC)[reply]
Do it in an fMRI machine. But this is worse than giving humans crack. Not ethical. Sex has a ramp up and ramp down (however short it can be, it's not 200 milliseconds) and temporary satiation for a reason. Sagittarian Milky Way (talk) 13:36, 11 August 2015 (UTC)[reply]
Even with imaging technology, the issues which Wnt and DrChrissy touch upon remain in play; what you gain from these observations is that you can correlate the neural activity to a self-reported state of mind or to stimuli, but the experience itself has no empirical quality, is subjective, and can never be directly compared against what we assume to be a similar experience in another individual. This is a recurrent theme in the cognitive sciences that continues to baffle researchers armed with the most advanced investigative technology to basically the exact same extent it did philosophers amongst the ancients. Epistemologically speaking, we still have no (and I mean "no" in as literal a sense as we can ever know) understanding of what an "experience" (or consciousness broadly) is, why or how it results from physical mechanics, or whether it is a "real" phenomenon at all, cogito ergo sum presumptions or no. ([4], [5], [6], [7], [8]).
As to the ramp up you speak of, there are many forms of experience that actually stimulate various pleasure centers of the brain much more broadly and powerfully than orgasm and can be triggered much more immediately. Orgasm as a unique experience is not defined just by pleasure; it actually has many features which set it apart neurophysiologically and experientially from the general notion of pleasure and other forms of experience which are often classified as pleasurable. Which is not to counter your root argument--that there are deep ethical implications of allowing a person to have direct access to their own neural architecture with regard to pleasure--as these are numerous and, as you imply, significant. Snow let's rap 05:30, 12 August 2015 (UTC)[reply]
This would be reinforcement stronger than anything physically possible in life, and people have let their pre-pubescent boy be routinely anally raped to afford crack. Sagittarian Milky Way (talk) 13:43, 11 August 2015 (UTC)[reply]
  • There are a variety of drugs that sometimes cause women to experience spontaneous orgasms, including Prozac. The drug clomipramine sometimes produces orgasms that are triggered by yawning. Also, orgasm can be triggered directly by brain stimulation, as described in http://www.violence.de/heath/jnmd/1972paper.pdf, one of the most remarkable (and remarkably unethical!) papers I have ever read. Looie496 (talk) 13:28, 11 August 2015 (UTC)[reply]
Jeeeeeeeezus. That has to be seen to be believed. We have a small article on Robert Galbraith Heath that says the work was (surprise!) financed in part by the CIA and US military. If this is remarkable... imagine what research they must have funded that wasn't made public. Wnt (talk) 13:58, 11 August 2015 (UTC)[reply]

The specifics of studies with electrode stimulation with regard to orgasm have already been referenced above, but there is a more general brain-in-a-vat point here that ought to be made for the OP. That is to say, not just orgasm but indeed the vast majority of all physical sensations and physiological responses that the brain is capable of generating could in theory be produced by similar methodology of direct neural stimulation. There are forms of physiological feedback between the brain and the rest of the body which cannot presently be reproduced, or even vaguely approximated, by artificial stimulation, but, in theory, any conscious experience the brain is capable of experiencing through the medium of the rest of a body's physiology could be experienced without a body, given the right (and in many cases, highly hypothetical) circumstances. The number of sensations which have been experimentally generated (compared against the overall possibilities) is actually quite small, but orgasm is actually incredibly easy to achieve (compared against, for example, generating a specific hallucination), since it does not depend on replicating a highly specific sensation; you instead need only generally stimulate certain pleasure centers of the brain in order to cause a cascade of signals which approximate the sensation of sexual climax. Remember too that some people (either naturally or through experimental processes) have been known to have orgasms as a result of stimulation that is decidedly non-sexual for most people. On an off-topic side note, I would definitely need numerous electrodes buried in very specific locations of my own brain to ever stand a chance of being entertained by The Big Bang Theory, so heavily does that show miss the mark with regard to both comedy and science. Snow let's rap 22:44, 11 August 2015 (UTC)[reply]

Invertebrates

I lifted this from Pain in invertebrates. It has been shown that snails will operate a manipulandum to electrically self-stimulate areas of their brain. Balaban and Maksimova[1] surgically implanted fine wire electrodes in two regions of the brains of snails (Helix sp.). To receive electrical stimulation of the brain, the snail was required to displace the end of a rod. When pressing the rod delivered self-stimulation to the mesocerebrum (which is involved in sexual activity) the snails increased the frequency of operating the manipulandum compared to the baseline spontaneous frequency of operation. However, when stimulation was delivered to the parietal ganglion, the snails decreased the frequency of touching the rod compared to the baseline spontaneous frequency. These increases and decreases in pressing are positive and negative reinforcement responses typical of those seen with vertebrates. DrChrissy (talk) 13:54, 11 August 2015 (UTC)[reply]

References

  1. ^ Balaban, P.M. and Maksimova, O.A., (1993). Positive and negative brain zones in the snail. European Journal of Neuroscience, 5: 768-774

How were these videos made?

At several points in this video there are shots of the night sky where... well, I'm not sure what to call it, but I guess it's the Milky Way that is clearly shown with gas clouds and such. Complete with purple and blue hues. I've been out in nature quite a bit in my life but I've never seen anything like that with my naked eyes. The most evidence of the Milky Way (yes, I know we're in it) I've seen is a 'band' of stars across the sky. So, is this achieved with some telescopes and some post-processing? Dismas|(talk) 00:41, 11 August 2015 (UTC)[reply]

This website has some weird formatting issues, like it was copy pasted from somewhere, but it includes detailed instructions for pulling that off. Simply googling "photographing the milky way" pulls up lots of other sites with instructions, including complete details about appropriate camera settings for different camera models. Someguy1221 (talk) 00:45, 11 August 2015 (UTC)[reply]
I don't see any formatting issues. Thanks for the link! Dismas|(talk) 01:16, 11 August 2015 (UTC)[reply]
That's because there are few places in the East half of the US or Europe or industrialized Asia that are dark enough. If it's a video it's unlikely to be much more sensitive than the human eye even with today's tech. It's probably more grayscale in real life, though. But it's been described as veined marble and casting shadows. Sagittarian Milky Way (talk) 00:53, 11 August 2015 (UTC)[reply]
Yes, I'm reminded of the lyrics to Kodachrome: "...brings out the greens of summer, ... makes all the world a sunny day". That is, certain films can enhance the colors dramatically. StuRat (talk) 03:18, 11 August 2015 (UTC)[reply]
If it's a video it's unlikely to be much more sensitive than the human eye even with today's tech. This premise is incorrect. This is not technically just a "video", at least not in the naive sense of being recorded "live". This is a timelapse, clearly, because of the rotation of the sky. I have a friend who takes exactly these kinds of "videos". What you do is take a series of long exposure photos in widefield, probably with a DSLR rather than a video camera (my friend uses a 6D, the same camera I have), over the course of several hours. By long exposure I mean up to 30-40 seconds is possible before stars start to "streak" in widefield. This is FAR more sensitive than the naked eye. Then you just "stitch" those exposures into a "video". I am somewhat of a hobby astronomer and I have been to very remote dark skies and it never looks anything like that; it can be extremely impressive, but not like that video. Vespine (talk) 05:31, 11 August 2015 (UTC)[reply]
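The 30-40 second ceiling mentioned above matches the photographers' "500 rule" of thumb: divide 500 by the effective focal length in millimeters to get roughly the longest exposure before stars visibly streak. A minimal sketch of the rule; note it is only an approximation (the stricter "NPF rule" exists too), and the 16 mm focal length below is an illustrative choice, not something stated in the video:

```python
def max_exposure_seconds(focal_length_mm, crop_factor=1.0):
    """'500 rule' of thumb: the longest exposure, in seconds, before
    stars start to visibly streak, for an effective focal length."""
    return 500.0 / (focal_length_mm * crop_factor)

# A 16 mm wide-angle lens on a full-frame body (crop factor 1.0)
# allows roughly half a minute per frame:
print(round(max_exposure_seconds(16), 2))  # 31.25
```

The individual frames are then assembled into a timelapse at a normal playback frame rate, which is why hours of sky fit into seconds of "video".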
In that case it could be more sensitive. If the sky much over 15 degrees from the MW centerline is not almost black when there's no moon, then it's either more sensitive or light polluted. Otherwise, it's as or less sensitive. Reciprocity failure was of course a problem before digital. The Moon and Sun are very bright though; they can make the sky bright enough to detect with the naked eye when they are up to 7 and 18 degrees below the horizon respectively. Venus and Jupiter are bright enough to affect dark adaptation. And maybe the solar cycle and the hours-long decay of sunlight-induced mesosphere phosphorescence. Where's the darkest sky you've ever seen, though? Even if you went to the California desert or mountains you probably had enough light pollution to affect the naked eye view; only an unusually off-the-beaten-path part of Death Valley and a small corner of the Sierra Nevada don't have enough. 6-9 thousand feet altitude and looking at the galaxy's center the minimum possible distance out of the atmosphere (only possible at 30°S) are even more impressive. Night vision is one of the first things to go with hypoxia, though, so more than 3,000 meters should hurt the view even with acclimatization (with some variation from individual hypoxia-tolerance, of course). Sagittarian Milky Way (talk) 15:42, 11 August 2015 (UTC)[reply]
Perhaps I'm missing something but I don't see the relevance of places outside the US in your earlier comment. Sure you could see better in places with less light pollution than anywhere in the US, and you could produce better videos too, but the description for this video strongly implies that this particular video was shot entirely within the US. It's possible that the parts of the video being referred to were all shot in Death Valley or a small corner of the Sierra Nevada, but I'm not sure about that either (but I know little enough about US geography that I can't reliably say). Nil Einne (talk) 17:49, 11 August 2015 (UTC)[reply]
P.S. When I originally wrote the comment, I mistakenly believed you wrote above "few places in the US", which implied that it would be difficult or impossible to see anything like this in nearly all of the US (or Europe or industrialised Asia), whereas this video was obviously shot in the US so the US is the best comparison. I later realised you actually said "few places in the East half of the US", which changes things a fair amount. I tweaked my comment slightly since no one had replied to it, but it really needs significantly more substantial modification or just deletion. But it's been there for long enough I'm not sure this is fair, so I'll leave it be. I do think there remains an open question, namely are all the scenes being referred to really shot in the places you consider necessary for such observations? If they aren't then that seems to counteract the notion you need such places, even if you may very well need such places for naked eye observations. Nil Einne (talk) 18:01, 11 August 2015 (UTC)[reply]
The OP's user box says he's from Vermont, so unless he's traveling far, something closer to the video than what the OP's seen exists. You shouldn't need utter darkness if you use a camera, but I never said that. It could have more dynamic range than the eye, a huge lens, and a custom filter that blocks the bigger emission lines of streetlights.
[9] combined with Bortle scale and the next link shows that very small amounts of anthropogenic light really do prevent the eye from seeing everything (even telescope vs. same telescope). And that the zone where you can see everything is only about 10% of the land area of the West and 0.016% of the East, if you go by the "real East", maybe 52/48 instead of naive bisection. About the California example: that is not the darkest state. It's only dark by my Eastern standards. It still has much desert and tall mountains, the highest mountain in the non-Alaska US (14.5 thousand feet), and the hottest weather ever measured (Death Valley, 134°F), which are pretty far from populous areas, so I was surprised how little pristine area it had. (Minor nitpick: I thought Mt. Shasta and such are more east than they are, so that must be the Sierras, but Sierra Nevada (U.S.) says I'm wrong, my bad.) Arizona is the most famous state for darkness and it's like 95% spoilt by land area. Montana might be the second most famous state for darkness and it's not much better. Utah, Colorado, almost every state it's the same; none is more than about 1/3rd pristine besides Alaska. So it's slightly hard to vacation in the pristine zone by accident, [10], but certainly easy to do it on purpose (with a surprisingly large choice of scenery). I wasn't in any way saying that it must be Death Valley or the Sierra Nevada if it's the US.
I guess I wasn't clear. The 150,000 square miles of utterly dark sky in the Lower 48 is surprisingly diverse though (high mountains, flat plains for hundreds of km radius, plateaus, low mountains, hills, geysers, lakes (natural and artificial), forests, deserts, endless sand, major rivers, short grass, mixed grass (no 100% tall grass), geothermal stuff like hot springs and sulfurous pools, 1 Florida atoll where 40°F is the all-time cold, peaks that are almost ice caps (or even true ice caps?), and at least one dry salt lake of the kind they try to break land speed records in). Sagittarian Milky Way (talk) 23:26, 11 August 2015 (UTC)[reply]
Well I don't think anyone doubts the OP has only seen whatever they say they've seen. My point was that the video suggests there are many different locations in the US you can image (whether or not you can see with the naked eye) what's seen in the video, so you don't need to go to the specific locations you mentioned in your earlier reply to image what's seen in the video. I'm not BTW suggesting imaging equipment is magic and can somehow counteract all the effects of light pollution, simply that it's foolish to assume you're going to see exactly what you can record (including with post processing) using decent equipment in the same location. Cameras and the eye each have their own advantages so you should expect significant differences, particularly when you consider post processing of the camera output. In other words, while the OP could surely see better night skies if they visit a place with a lot less light pollution, this doesn't actually answer their only question of how to record what was recorded in the video. Nil Einne (talk) 13:35, 12 August 2015 (UTC)[reply]
Even in terms of normal real time video, I doubt it's true that they aren't more sensitive than the naked eye with today's tech. Of course comparing the naked eye to what's produced from a camera can be misleading. If you've ever watched a sport broadcast in the evening, say a cricket match in a stadium without lights, you often wonder why the players find it hard to see when it doesn't look that much darker than before. But there are other ways to measure sensitivity e.g. [11] [12] Nil Einne (talk) 12:43, 11 August 2015 (UTC)[reply]
Then why do Hollywood movies not film in moonlight, instead of using fake lights? I think they still do that, right? Even civil dusk still has orders of magnitude more illumination than moonlight, and people can see in moonlight just fine. And no one will play sports where you have to hit a 100 mph cricket ball from 66 feet away after civil twilight. During civil twilight there can be a lot of contrast between sky and ground; the eye tries to keep the brightest things in view a light color when physiologically possible, so it might make the pitch seem harder to see than it could be (in the day the Sun adds brightness to the ground without being in view itself (cause it'd hurt the eyes), so there's less contrast; the blue sky is still brighter than the ground but close enough for the eye to handle). Besides being huge and professional quality, the camera doesn't care if it'd lose detail in the sky at this setting and instead cares about making the pitch easiest to see. Maybe you could dark adapt at least 30 minutes like a skygazer would and then look at the ground at civil dusk before you could see the sky for even a fraction of a second. Maybe push paper towel tubes into your eye sockets, close your eyes, open your window, and don't open your eyes until the tubes are facing the ground. It should look a lot brighter than any ground you've ever seen at civil dusk before. Sagittarian Milky Way (talk) 15:42, 11 August 2015 (UTC)[reply]
I think you're conflating different things. My point about videos of sports at night is that comparing what you see on video with what you see in real life is not a good way to compare sensitivity, because you probably aren't comparing sensitivity but instead different things. And BTW, there was one famous game where parts of it were controversially played in what's normally described as almost complete darkness (the 2007 Cricket World Cup Final), although with only spinners bowling, and the stadium did have limited lighting. However I'm not sure if it was after civil twilight. Video of that is actually a good example of what I'm thinking about, as it generally doesn't make things seem that much worse than a typical Day-Night game. But again, my main point is that this doesn't really tell us much about the sensitivity of cameras.
BTW, I agree that there's a difference between dark adapted and non dark adapted eyes. That was part of the point I was making about comparing videos with what you see.
Second, we're talking about the absolute sensitivity of cutting edge cameras. These cameras may be expensive and targeted at specific usages. They will often produce fairly noisy, low resolution and low frame rate results. People shooting video professionally for television and movies generally want high resolution, sometimes high frame rate, which looks very good (so has good contrast, highlights etc) and that looks something like what people would expect for the time it's supposed to be.
Good lighting with good cameras gives that. No lighting but with specialised cameras doesn't (and probably never will) but it doesn't mean that there aren't cameras out there which are significantly more sensitive than the human eye. Notably just because we have very sensitive cameras (and as I remarked earlier, it can be confusing sometimes precisely how dark it is), it doesn't mean a moonlight scene can be made to look like daylight and for various reasons professional shoots do sometimes happen at times when it is dark even if it's meant to be light.
The specific cameras used for the precise video we're discussing, which are all listed, may or may not be significantly more sensitive than the human eye when used for real time video. I don't know, but I never commented on the specific cameras, simply on what's out there, since you implied it was impossible for videos to be shot with significantly more sensitivity than the human eye "with today's tech" (rather than with the specific cameras used for this video).
It's probably fair to only include commercial products, so I'm not sure if those earlier links were entirely fair as it may not be a commercial product, but I don't believe it's something that much higher than what exists out there already and there's nothing in the discussion to suggest it is.
Ultimately what matters is reasonably objective measures of sensitivity of "today's tech", not how people feel stuff looks, should look, or may be capable of. My sources weren't the best, but they did seem to imply there are, and I'm fairly sure there are people here who know far more about this than me, so I couldn't be bothered looking for better ones for what was intended to be a quick comment.
Nil Einne (talk) 17:44, 11 August 2015 (UTC)[reply]
Maybe I underestimated the growth of video camera sensitivity. The eye is still logarithmic rather than linear, though, and has up to a 10,000:1 contrast ratio, so you'd need a big advancement in camera tech to make a moderate difference. There were only 5 classifications of star brightness per 2 orders of magnitude before the 1850s. A star could be right next to one that's 9.6% brighter and even an expert variable star observer could tell only with great difficulty.
I said it's been described as veined marble and bright enough to cast shadows (maybe that's only if the galactic nucleus is up and not low, though?) so I'm wondering if he's seen a pristine sky, if "a band of stars" is his description. I've heard that the center of the galaxy looks like steam pouring out of the Teapot (some stars in Sagittarius look exactly like a teapot) and the summer Milky Way looks like a gigantic vulva (30 degrees wide) with cloudy bright labia and a long black rift in the middle. This is what Central American civilizations thought it looked like. It's a lot more than just the band of visible stars (the star clouds (technical name) are made of stars too dim to see individually). Sagittarian Milky Way (talk) 23:26, 11 August 2015 (UTC)[reply]
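The magnitude arithmetic above comes from Pogson's 1856 formalization of the old brightness classes: 5 magnitudes are defined as a factor of exactly 100 in brightness, so one magnitude is a factor of 100^(1/5) ≈ 2.512, and the "9.6% brighter" figure is what a 0.1 magnitude step works out to. A quick check:

```python
def brightness_ratio(delta_magnitude):
    """Pogson's scale: 5 magnitudes = a factor of exactly 100 in
    brightness, so each magnitude is a factor of 100**(1/5)."""
    return 100 ** (delta_magnitude / 5)

print(round(brightness_ratio(1.0), 3))  # 2.512  (one magnitude step)
print(round(brightness_ratio(0.1), 3))  # 1.096  (the ~9.6% mentioned above)
```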
But again, you seem to be conflating different things. Cameras may very well have a poor dynamic range compared to the eye (although I think this is more the brain than the eye), but that tells us little about their best achievable sensitivity.

In fact I'm fairly sure the most sensitive video cameras would generally be used for other night scenes (like footage of nocturnal animals), and probably not used for recording the Milky Way (if they are used for the night sky, only for very dim stars). Incidentally, I suspect one big advantage they have is their ability to record wavelengths the human eye can't see.

The poor dynamic range does actually limit how you can use cameras, including I think in the night sky case, and the more you talk about it, the more it sounds like this is what you're referring to. And as I said, the most sensitive cameras probably aren't even that useful for recording most night sky cases. But your original post seemed to imply that cameras aren't likely to be much more sensitive than the human eye, which, having researched and thought about it even more (albeit without finding any links I would consider great RS for this discussion), I'm fairly sure is wrong. You didn't initially say the eye had a poorer dynamic range or whatever else.

Also, I'm fairly sure you could improve the dynamic range of even real time video (as opposed to time lapse I mean, not as in viewed in real time) either by using multiple very close cameras with different sensitivities and post processing to combine them or by more complicated processing of specialised sensors.

This is already fairly common for daylight with photos and time lapse video, see High-dynamic-range imaging. It's a fair amount more complicated for realtime video. You can use specialised sensors and processing (e.g. [13], although this seems to be trying for real time in both senses), but I'm not sure this will give you the desired dynamic range, particularly for night sky situations where you may need different cameras, since we may be looking at highly sensitive specialised ones. (Although as said earlier, I'm not sure how useful extremely sensitive cameras would be for recording the night sky; I suspect they won't be that useful in the vast majority of circumstances. It may very well be that more normal cameras with appropriate settings would be the best choice, particularly when recording the Milky Way.)
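The multi-exposure merging idea above can be sketched as a naive merge: divide each unclipped pixel by its frame's exposure time to get relative scene radiance, then average, skipping pixels that saturated the sensor. This is a toy illustration of the principle with made-up 8-bit values, not how any production HDR pipeline actually works (real ones also estimate the camera's response curve and align frames):

```python
import numpy as np

def merge_exposures(frames, exposure_times, ceiling=255):
    """Naive HDR merge: each unclipped pixel is divided by its frame's
    exposure time (relative scene radiance), then averaged per pixel."""
    acc = np.zeros(np.shape(frames[0]), dtype=float)
    weight = np.zeros_like(acc)
    for frame, t in zip(frames, exposure_times):
        frame = np.asarray(frame, dtype=float)
        valid = frame < ceiling             # skip saturated pixels
        acc += np.where(valid, frame / t, 0.0)
        weight += valid
    return acc / np.maximum(weight, 1)      # arbitrary radiance units

# A bright pixel clips in the long exposure but survives the short one:
short = [[10, 200]]    # 1 s exposure
long_ = [[100, 255]]   # 10 s exposure; second pixel saturated
merged = merge_exposures([short, long_], [1, 10])  # radiances 10 and 200
```

The same principle underlies exposure bracketing for stills; doing it for realtime video is where the multiple-camera or specialised-sensor complications come in.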

Using several cameras is likely to add even more complexity since they won't be in the exact same location (and if they are different cameras, their output may be different resolutions, sizes etc). [14] is one daylight example which used a beam splitter and the same kind of cameras. Alternatively, if you know the precise spacing of the cameras and ensure they are all angled correctly, you could probably develop your processing to account for this and still produce high quality videos.

AFAIK this is rarely done, probably amongst other reasons since time lapse with long exposures makes much more sense for most night sky situations. (Which is probably another reason why you'd rarely use the most sensitive cameras for recording the night sky.) But this doesn't tell us much about what you can do if you really wanted to. In any case, poor dynamic range is still separate from sensitivity.

Nil Einne (talk) 13:55, 12 August 2015 (UTC)[reply]

Is there any way to isolate Citrulline from Watermelon ribs?

I understand that Citrulline is found in decent amounts in the Watermelon ribs.

  1. Can I isolate it somehow to make a powder, or at least some kind of concentrate?
  2. Can I do this in the house's kitchen?
  3. What will be the best way to preserve it after I've made it?

Thank you, Ben-Yeudith (talk) 20:01, 11 August 2015 (UTC)[reply]

The modern watermelons here don't have "ribs". Do you mean rinds? Or do you mean something like this historic watermelon that did have ribs: [15]? StuRat (talk) 22:20, 11 August 2015 (UTC)[reply]
1. Yes 2. Probably not. 3. No clue. See Process for the production of L-citrulline from watermelon flesh and rind: http://www.google.com/patents/US8173837. Also interesting is Method of producing citrulline by bacterial fermentation: http://www.google.com/patents/US3282794 Justin15w (talk) 22:24, 11 August 2015 (UTC)[reply]
Yes, I meant to the rind...
Thanks, Ben-Yeudith (talk) 03:14, 12 August 2015 (UTC)[reply]