Talk:Advanced Technology Attachment/Off Topic Threads

I think too many people in the industry are ignoring this: they seem to evaluate things only when they go wrong, and when new ideas that nobody thought of pop up and questions come up about whether they are responsible for certain things, they always refute them or go to lawsuits.

The fact is, a lot of the industry keeps repeating the same mistakes and is not willing to admit it. Intel is one of the worst companies in the industry and NEVER admits anything. Apparently they keep having the same problem over and over again; for example, they invested only a small part in Instruction Level Parallelism (Itanium).

From Wikipedia:

The architecture is based on explicit instruction-level parallelism, with the compiler making the decisions about which instructions to execute in parallel. This approach allows the processor to execute up to six instructions per clock cycle. By contrast with other superscalar architectures, Itanium does not have elaborate hardware to keep track of instruction dependencies during parallel execution - the compiler must keep track of these at build time instead.
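To make the quoted idea concrete, here is a toy C sketch (my own illustration, not from the article or this discussion): the four additions below are independent of one another, so an IA-64 (EPIC) compiler can mark them as one explicit issue group and the core can execute them together, whereas a conventional superscalar CPU has to rediscover that independence in hardware at run time.

  #include <stdio.h>

  /* Toy example: the four additions have no dependencies on each other,
   * so an IA-64 (EPIC) compiler can place them in one explicitly marked
   * issue group; an out-of-order superscalar CPU must rediscover the same
   * independence in hardware at run time.  'restrict' tells the compiler
   * the arrays do not overlap, which is what lets it prove the statements
   * independent. */
  static void sum4(const int * restrict a, const int * restrict b,
                   int * restrict out)
  {
      out[0] = a[0] + b[0];
      out[1] = a[1] + b[1];
      out[2] = a[2] + b[2];
      out[3] = a[3] + b[3];
  }

  int main(void)
  {
      int a[4] = {1, 2, 3, 4}, b[4] = {10, 20, 30, 40}, out[4];
      sum4(a, b, out);
      printf("%d %d %d %d\n", out[0], out[1], out[2], out[3]);
      return 0;
  }

The parallelism only becomes explicit in the generated IA-64 code, not in the C source; the sketch just shows the kind of independence the compiler is expected to find and encode at build time.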


When it doesn't work well, they say it sucks and doesn't work. But Sun Microsystems had already proved them wrong with MAJC (aka UltraJava). When Atom failed, they didn't re-evaluate; instead they tried to find new partners to help them. When the CPU business goes downhill, they don't try to make it better; instead they look for new roads in graphics, software, and the Intel CISC-RISC 386, which they are not experts in.

All of the above are just speculation and musing on your part. Article talk pages are for discussing changes to the article; your text here really has nothing to do with ATA. Perhaps you should post such speculation in the talk pages of articles about Itanium, Intel Atom, etc. Or perhaps in a personal blog, not in Wikipedia at all. Jeh (talk) 12:36, 7 July 2008 (UTC)


Really? Well then, prove to me why there are people claiming that PerfectDisk 2008's use of constructive cluster sequencing doesn't increase performance when it does.
Apparently much of the community doesn't even understand how parallelism works at the physical level and the parallel level. The physical level is the (Microsoft) defragmentation utility defragmenting data onto separate areas, but only loosely, so when data is requested the load falls on several different read/write heads, not one read/write head, and apparently some enterprise software didn't show an understanding of this. I am referring to PerfectDisk 2008, and do I need to remind you that you guys graduated from university? --Ramu50 (talk) 21:17, 12 July 2008 (UTC)
This is a completely different topic. It has nothing to do with SSDs or your contention that SSDs are not supported by ATA, in no way proves your contention, and whether or not I can "prove" anything in this area is irrelevant to your contention. Jeh (talk) 22:03, 12 July 2008 (UTC)
But I will mention here that nothing in a modern hard drive subsystem outside the hard drive itself is generally aware of which heads are used to access a given piece of data. The OS does not even need to know, or care, the number of heads in a hard drive. (If you ask you generally get a bizarre number like "63".) That is what LBA is all about: the drive presents the storage as a linear array of blocks, indexed by LBA from 0 to n-1. The drive maps the LBAs to cylinder, head, and sector numbers in a manner completely unknown and completely transparent to the host controller, let alone to anything like a defrag utility. In other words no defrag utility, not PerfectDisk's, not Microsoft's, etc., can move things to different heads, or to the same heads, or anything of the sort. The OS is simply not accessing the drive via CHS addressing and has no idea which LBAs happen to map to a given head! It is possible to run modern hard drives in CHS mode but only if you don't allow the OS drivers to access the drive at the same time. So the only thing a defragger can do is to make a file contiguous as far as LBA numbers are concerned. Even that is not necessarily a good idea in the face of Windows File Placement Optimization, but any decent defragger is FPO-aware these days... but that's another topic. Jeh (talk) 00:23, 13 July 2008 (UTC)
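To illustrate the point about LBA being the only thing the host sees, here is a minimal C sketch (mine, not from any driver) of the classic CHS-to-LBA formula that old BIOSes used. The geometry constants are made-up example values; modern drives report a fictitious geometry and do the real LBA-to-physical mapping internally, invisibly to the host.

  #include <stdio.h>
  #include <stdint.h>

  /* Hypothetical geometry values, chosen purely for the example. */
  #define HEADS_PER_CYLINDER 16u
  #define SECTORS_PER_TRACK  63u

  /* Classic mapping: LBA = (C * heads_per_cylinder + H) * sectors_per_track + (S - 1).
   * Sector numbers are 1-based in CHS addressing. */
  static uint64_t chs_to_lba(uint32_t c, uint32_t h, uint32_t s)
  {
      return ((uint64_t)c * HEADS_PER_CYLINDER + h) * SECTORS_PER_TRACK + (s - 1);
  }

  int main(void)
  {
      /* The OS and any defrag utility only ever deal in LBAs like this one;
       * which physical head ends up servicing it is the drive's business. */
      printf("CHS (100, 5, 1) -> LBA %llu\n",
             (unsigned long long)chs_to_lba(100, 5, 1));
      return 0;
  }

On a modern drive this formula is only a fiction that the drive presents for compatibility; the drive's own firmware decides where LBAs really live, which is why no host-side software can target a particular head.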

The same thing is happening with the consumer storage industry; it's just that most people don't understand it that well. Also, on the virtual part of the scheme, we don't understand much about the management of relational, distributed, parallel, and connectivity relationships, so information is often hard to back up.

Actually many of us understand all of these things just fine. And many of us are not coming into this information anew in the last month or so. The problem is that you don't understand the material you've just read, yet you want to make changes to the article based on your misunderstandings. Your text constantly reads as if you have been filling your brain with acronyms and buzzwords and names for concepts that you don't understand; you are interpreting them in incorrect ways and then insisting that everyone else is wrong when they disagree with you. Jeh (talk) 12:36, 7 July 2008 (UTC)

I think only the following companies are quite responsible: Sun Microsystems, VIA, and maybe IBM. As for the storage companies such as Western Digital, Hitachi, Fujitsu, and Seagate, it's hard to say; Samsung & Toshiba I will add just for the credit.

Your personal beliefs about which companies are "responsible" have no bearing on a Wikipedia article. Jeh (talk) 12:36, 7 July 2008 (UTC)

Yeah, I can say that for the most part I am wrong, except about the Host Controller Interface and SSDs.

No, you are wrong about HCI. I repeat: SATA spec'd a standard HCI so that various SATA controllers could all use the same driver. You claimed that the same could be done with code in the BIOS, but that does not help, because modern OSs do not and cannot rely on directly executing BIOS code after the very first phase of booting. BIOS code is not designed and is not intended to execute within the requirements of a modern OS. Note that no modern OS uses the INT 13 interface for disk IO once the OS is up and running. It was fine for DOS, and it's fine for booting, but that's the extent of its usefulness. Jeh (talk) 12:36, 7 July 2008 (UTC)
Not true; in Windows Vista there is already a technology around for that. I will provide the details later, because that information came from a magazine, which I might have lost. I might have to search the internet for the minor details. --Ramu50 (talk) 21:17, 12 July 2008 (UTC)
Uh, yes, it is true. I write drivers, including storage device drivers, for Windows; I know what I'm talking about. Heck, I have access to the source for Windows' standard disk controller drivers (as do many other MVPs). They do not use the INT 13 or any other BIOS-supplied mechanism for accessing hard drives, ever, after the first stage of booting. They cannot, for the reasons I described. BIOS code just doesn't expect to run in a preemptive, multitasking, virtual memory OS and is not designed to do so. This was true in NT 3.1 and it is equally true today. If you want independent verification of this you might ask the question in the newsgroup comp.os.ms-windows.programmer.nt.kernel-mode, or join the discussion forums at www.osr.com. Now there is something called the ACPI BIOS, in which the BIOS exports "methods" expressed in AML, ACPI Machine Language. This is similar in concept to Java byte code: it is interpreted by the OS in a carefully protected sandbox. This allows the BIOS to export methods for enumeration and control of motherboard-specific resources such as power controls. But these ACPI methods are not useful in high-performance code paths such as storage controller drivers; they're too slow. In any case this issue has nothing to do with SSDs or your contention that SSDs are not supported by ATA. Jeh (talk) 22:03, 12 July 2008 (UTC)
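As an illustration of the INT 13 point above, here is a rough C sketch (mine, not from the discussion) of the 16-byte "disk address packet" used by the INT 13h extensions (function AH=42h, extended read). The field names are my own; the layout is the standard packet. The point is that the transfer buffer is a 16-bit segment:offset pointer and the call is a real-mode software interrupt, which is exactly why a preemptive, protected-mode OS cannot use it for routine disk I/O.

  #include <stdio.h>
  #include <stdint.h>

  #pragma pack(push, 1)
  typedef struct {
      uint8_t  size;         /* packet size: 0x10 */
      uint8_t  reserved;     /* must be 0 */
      uint16_t num_blocks;   /* sectors to transfer */
      uint16_t buf_offset;   /* real-mode transfer buffer: offset part */
      uint16_t buf_segment;  /* real-mode transfer buffer: segment part */
      uint64_t start_lba;    /* starting LBA on the disk */
  } disk_address_packet;
  #pragma pack(pop)

  int main(void)
  {
      /* Nothing here actually calls the BIOS; we only show the layout. */
      printf("disk address packet is %zu bytes (expected 16)\n",
             sizeof(disk_address_packet));
      return 0;
  }

A boot loader fills in a packet like this and issues INT 13h from real mode; once the OS's own protected-mode storage drivers take over, that interface is never touched again, as described above.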