Talk:Petabyte

Initial discussions

Discussion about centralization took place at Talk:Binary prefix.

Stored here temporarily:

As of March 2001, The Internet Archive [1] had snapshots of the entire World Wide Web from 1996-1991, totalling a tenth of a petabyte.

I was thinking the "1991" cited here was probably a typo of "2001," so I went to look it up on the IA website linked, and found nothing; nothing shows up on Google either. I don't doubt the accuracy, but maybe it would be possible even to get a more up-to-date figure somewhere? - Hephaestos 00:01 Apr 25, 2003 (UTC)

penta --> pente ?

I'm Greek living in Athens, never heard of the word 'penta'. Perhaps pente?

Yes, although what was really meant was the (ancient) Greek prefix penta-, rather than the word pente. --Zundark 12:03, 3 Jan 2005 (UTC)

Human vision note

Interestingly enough, in order to accurately remember the visual experiences alone of an approximately 40-year-old human in UHDV, one would require 2.5 exabytes, far greater than the 88.8 petabytes of storage cited.

I call shenanigans on this one. First of all, no source. Second of all, it's a gross misestimation of how the brain stores visual memory. Your brain is not a freaking HDTV recorder. It remembers symbols and letters and objects, not pixels. Think about it: if you read a book you remember its plot, not the shape and positioning of all the letters on every individual page. Whatever extremely high resolution your eyes have, it's only stored for a few seconds in sensory memory while your brain is interpreting it. --158.130.13.4 19:25, 22 May 2006 (UTC)

I agree, however Data, being an android, would have to store every tiny discrepancy. I also think that it would be dumb to use UHDV anyway, as it takes up far too much space; his memory would probably compress that data until he accessed it anyway. Also keep in mind this isn't a human brain we are talking about; it is a highly sophisticated computer. --68.113.214.242 21:41, 25 June 2006 (UTC)

Maybe Data's creator implemented a better video codec than MPEG-4, or only new data is stored. Or maybe the Star Trek producers randomly picked a quite large data amount that sounded cool. Who cares; I think this statement should be deleted, as it does not improve the article. And it is definitely not "Petabytes in use", as Data does not really exist; it is rather "Petabytes in fiction". --172.206.79.43 13:03, 5 July 2006 (UTC)

I agree with the above comments; this makes the article sound decidedly unscientific. This sounds a lot like a common misconception regarding a) the current state of memory research (i.e. not nearly far enough along as to make accurate judgments regarding storage capacity) and b) the increasingly common notion that memories might be "downloaded." I'm not saying that it's what is intentionally being conveyed by the passage, but it perpetuates that unfortunate myth. By the way, only small portions of the retinal projection into the brain actually end up being stored for more than a second (e.g. the common 4±1 standard for working memory). If you are not attending to an object or region, most reasonable estimates hold the "storage time" of the rest of the visual representation (via iconic memory; Sperling, 1960) at around 80-200ms.

Also, there's no specific reason to believe that Data's memories are veridically encoded, right? I mean, couldn't he just have some superadvanced way of storing complex sequences of events, a la MIDI files? 74.136.218.213 02:42, 9 August 2006 (UTC)

An android with DivX-compressed visual memory? Did I mention I have 5.1 surround microphones in both of my ears? -- Zelaron 00:27, 28 August 2006 (UTC)

Question

What is this supposed to be,

RapidShare in 2006 had 1,08 Petabyte of hard-disk storage.

Is it supposed to be 1,008, 108, 1.08, or what? Please clarify. -- Griggs08 07:48, 20 January 2007 (UTC)

1,08 is just a common way of writing 1.08 - see Decimal comma for further information :) Kumiankka 19:29, 20 March 2007 (UTC)
Comma is used as a thousands separator in English-speaking countries. 81.105.172.225 (talk) 11:23, 9 July 2018 (UTC)
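To make the decimal-comma convention concrete: a string such as "1,08" can be normalized before numeric parsing. A minimal Python sketch (the helper name is made up for illustration; Python's locale module offers locale.atof for the same job when proper locale data is available):

```python
def parse_decimal_comma(s: str) -> float:
    """Parse a number written with a decimal comma (e.g. "1,08"),
    treating any dots as thousands separators."""
    return float(s.replace(".", "").replace(",", "."))

print(parse_decimal_comma("1,08"))      # 1.08
print(parse_decimal_comma("1.234,56"))  # 1234.56
```

So the RapidShare figure under this reading is 1.08 PB, not 108 or 1,008.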

I'd like to call attention to Reference #2

It doesn't link to anywhere where the reference may be verified. 202.128.56.110 22:03, 14 March 2007 (UTC)

We don't require web links to all sources, as in particular that would preclude the use of most books as sources. Of course, we encourage providing a link when possible, but it isn't necessary. --Sopoforic 23:23, 14 March 2007 (UTC)

Drexel University Reference

I removed it, since it came up with an internal server error and I couldn't find anything searching for "petabyte" at drexel.edu. The ref is here just in case: Drexel Professor: For a Bigger Computer Hard-drive, Just Add Water. Thunderhippo 21:40, 28 October 2007 (UTC)

Worth mentioning?

Sometimes a petabyte is called a quadrillobyte. I know it's incorrect, but it still happens. Should this be in the article? --76.107.251.178 (talk) 21:26, 28 July 2010 (UTC)

1 PB = 1000 TB

I'm not sure all the problems with this are solved. The main article text is OK, but the template still contains misleading information. It gives the impression that the binary definition of the PB (as 1024 TB) is an accepted one. Frankly I doubt that this is the case, but at the very least it should be backed up with a reference. Trouble is, I don't know how to slap a [citation needed] sticker on the template. Dondervogel 2 (talk) 15:42, 1 November 2010 (UTC)

I worked out how to request the citation in the template. Dondervogel 2 (talk) 17:08, 2 November 2010 (UTC)
I found a reference to 1 petabyte = 1024 TB of RAM, clearly showing Intel uses the binary power of 1024 with PB and TB. Glider87 (talk) 20:09, 2 November 2010 (UTC)
There is also a [2] for exabyte using 2^60 bytes, so it can be used for the exabyte page. Glider87 (talk) 20:18, 2 November 2010 (UTC)
The same book also includes binary use for terabyte, megabyte, gigabyte etc. Glider87 (talk) 20:21, 2 November 2010 (UTC)
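For anyone weighing the two definitions discussed above, the gap between them is easy to quantify with plain Python (no assumptions beyond the two definitions themselves):

```python
PB_DECIMAL = 1000 ** 5  # 10**15 bytes, the SI petabyte
PB_BINARY = 1024 ** 5   # 2**50 bytes, called a pebibyte (PiB) by the IEC

assert PB_DECIMAL == 10 ** 15
assert PB_BINARY == 2 ** 50

# The binary unit is about 12.6% larger than the decimal one.
print(round((PB_BINARY - PB_DECIMAL) / PB_DECIMAL * 100, 1))  # 12.6
```

The roughly 12.6% discrepancy is why templates that silently equate 1 PB with 1024 TB can mislead.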

Citation Errors

Pirate Bay

The Pirate Bay citation doesn't have any information on the cited page that backs up what's mentioned in the article. Just thinking about how a .torrent works, it wouldn't make any sense for that much data to be on the PB servers. Each torrent takes up at most 100 KB or something. — Preceding unsigned comment added by 35.13.107.92 (talk) 13:07, 24 October 2011 (UTC)

Google

The citation for "Internet: Google processes about 24 petabytes of data per day." doesn't mention how much Google processes a day; it doesn't even mention petabytes in the article.

  1. Official Article Cited [[3]]
  2. Where I accessed it [[4]]

122.58.184.203 (talk) 09:19, 27 April 2012 (UTC)

Possible Vandalism?

As I have no exact clue I'm not sure, but this just doesn't sound accurate:

" is a unit of information equal to one quadrillion (short scale) bytes, or 1 billiard (long scale) bytes. "

"billiard" links to the quadrillion page. I'm pretty sure that'd be billion or something, but again, I don't really know. Stating that it's equal to a quadrillion of one item and then a quadrillion of an item just seems somewhat redundant if nothing else.

38.126.15.130 (talk) 20:35, 6 October 2012 (UTC)Reply

There is neither ambiguity nor redundancy. The sentence means that the number of bytes in a petabyte is one thousand to the fifth power. The NAME of this number (1000^5) is "one quadrillion" in one way of counting (a thousand, a million, a billion, a trillion, a quadrillion), but this SAME number (no contradiction) is "one thousand billion(s)" in ANOTHER way of counting (a thousand, a million, a thousand million(s), a billion, a thousand billion(s)). In that system you can apparently shorten "thousand million(s)" to "milliard" and "thousand billion(s)" to "billiard". 76.8.67.2 (talk) Christopher L. Simpson
And a bezilliard is a thousand bezillion, a paviliard is a thousand pavilions, and a thousand strips of julienned meat and cheese is a Juilliard. 76.8.67.2 (talk) Some guy at the same IP address as Christopher L. Simpson
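To make the scale naming concrete: both scales name the same quantity, 1000^5 bytes. A quick numerical check:

```python
petabyte = 1000 ** 5
assert petabyte == 10 ** 15         # "one quadrillion" on the short scale
assert petabyte == 1000 * 10 ** 12  # a thousand billion = "one billiard" on the long scale

print(f"{petabyte:,}")  # 1,000,000,000,000,000
```

So "quadrillion" and "billiard" in the article's lead are two names for the same count of bytes, not two different quantities.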

Omission

Measures of bytes each equal to (1000 to the ninth power) bytes (i.e. 1000 yottabytes) are termed "wholelyottabytes". 76.8.67.2 (talk) 04:54, 25 May 2014 (UTC) Christopher L. Simpson

"One petabyte can hold four times the population of the US's DNA"

Upon reading the Petabyte article, I found the claim under Usage Examples that "One petabyte is enough to store the DNA of the entire population of the USA - with cloning it twice." I was amazed when reading this and decided to do some calculations to figure out the cost of sequencing and storing such data. During my calculations I figured that you can only hold (2/3) * 10^6 diploid human genomes, which is nowhere near the 304 million people the 2010 census says live in the US. The source itself, an article on the Computer Weekly website, had no math whatsoever and so I was obviously unable to check my own against it.

My Own Math
According to Google, the Human Genome Project and Wikipedia, a human has about 3 billion base pairs in their sex cells. This is doubled in somatic cells, which are what I assume one would want to sequence. There are four possible base pairs in human DNA: A-T, T-A, C-G and G-C. With four possibilities, you can represent each base pair in a minimum of two bits (00, 01, 10 and 11) and thus 4 base pairs in a byte. A diploid human genome has 1.5 * 10^9 B worth of base pairs (1.5 GB). One petabyte is 1 * 10^15 B. 1 * 10^15 divided by 1.5 * 10^9 is (2/3) * 10^6, giving us roughly 700,000 complete human genomes in a petabyte. Again, this is nowhere near the nearly 304+ million people in the United States.
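The arithmetic above can be replayed in a few lines, assuming (as above) a 6-billion-base-pair diploid genome stored at 2 bits per base pair:

```python
base_pairs = 6_000_000_000             # ~3e9 per haploid set, doubled for a diploid genome
bytes_per_genome = base_pairs * 2 // 8 # 2 bits per base pair -> 4 base pairs per byte
petabyte = 10 ** 15

genomes_per_pb = petabyte / bytes_per_genome
print(bytes_per_genome)       # 1500000000 (1.5 GB per genome)
print(round(genomes_per_pb))  # 666667, i.e. roughly (2/3) * 10^6
```

Roughly 667,000 genomes per petabyte, far short of the ~304 million implied by the article's claim.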

If my own math doesn't check out, please correct me. Also, please correct me if I misedited the Petabyte or Talk page in any way (this is only my second Wikipedia edit.)

Lizardthunder (talk) 17:26, 22 August 2014 (UTC)

DNA can usually be compressed quite significantly; even a simple Huffman encoding will reduce the size by a lot. Perhaps the claim is for compressed DNA. Smk65536 (talk) 15:29, 20 October 2014 (UTC)
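Even without Huffman coding, the fixed 4-symbol alphabet already allows a 4x reduction over one-byte-per-base text. A minimal fixed-width 2-bit packing sketch (not Huffman, which would only beat this when base frequencies are skewed; the function name is illustrative):

```python
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def pack(seq: str) -> bytes:
    """Pack a DNA string at 2 bits per base, 4 bases per byte.
    A trailing partial group ends up right-aligned in its byte."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        b = 0
        for base in seq[i:i + 4]:
            b = (b << 2) | CODE[base]
        out.append(b)
    return bytes(out)

print(pack("ACGT").hex())     # '1b', i.e. 0b00011011
print(len(pack("A" * 4000)))  # 1000 bytes for 4000 bases
```

Note this 2-bit rate is exactly what the calculation in the previous comment already assumes, so compression beyond it would require exploiting statistical structure in the sequence.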

Decimal vs binary meanings of 'terabyte'

There is a discussion of the decimal and binary meanings of 'terabyte' at Talk:Terabyte#Disputed_references. The discussion has possible implications for this page. If you wish to comment, please do so on the terabyte talk page. Dondervogel 2 (talk) 10:54, 29 July 2015 (UTC)