Talk:Word (computer architecture)

Article title

This article has many possible titles, including word, computer word, memory word, data word, instruction word, word size, word length, etc. I believe the general form should be used since none of the specific forms is well established. Of course "word" has to be qualified to disambiguate from non-computing uses. I'm not particularly happy with "(computer science)", though. Perhaps "Word (computing)" would be the best choice overall. -R. S. Shaw 05:00, 6 June 2006 (UTC)

Word is the best name, but I agree that (computing) is the better choice. "computer word", "memory word", and so on can redirect here. --Apantomimehorse 15:55, 30 August 2006 (UTC)
I've moved this from Word (computer science) to Word (computing); I'll fix up the double redirects now. -R. S. Shaw 01:07, 19 January 2007 (UTC)
I noticed that the 64-bit architecture page has a side panel which allows you to click on the 16-bit data size link, which links directly to this page. I think that's anathema to the idea that a word-size is defined by the architecture. There should be a page that redirects from 16-bit data size to Word (computer science). This way, the link from 64-bit architecture does not give the web surfer the wrong impression that 16-bit data size is always equal to word-size -- anonymous jan 18 2007 2:58 pm est —The preceding unsigned comment was added by 24.186.76.45 (talk) 19:56, 18 January 2007 (UTC).
Preceding comment copied to Template talk:N-bit since that template is the source of the table of links in question. -R. S. Shaw 05:29, 19 January 2007 (UTC)

Merge Articles

I can't see why not to do this. It would be silly to have a different article for every word length. Merge them all into the same article. I don't mind under what category you qualify word, seems like a bit of an arbitrary choice anyway. --FearedInLasVegas 22:39, 28 July 2006 (UTC)

Agreed for merge. Redirect "dword" et al. here. --Apantomimehorse 15:56, 30 August 2006 (UTC)

OK, I did the merge, but it's fairly unintegrated. Anyone feel free to smooth it out... Lisamh 15:46, 21 September 2006 (UTC)

Causality backwords?

It's my understanding that the concept of "word" size is a nebulous one and not very well defined on some systems. Should the article make this more clear? The way the article makes it sound, computer designers pick a word size and then base other choices around it. While there are good reasons to use the same number of bits for registers as you do for bus-widths and so on, I just don't think this is the reality any more. The x86/x64 architectures, for instance, are a mess of size choices. So what I'm proposing is a reversal of emphasis by saying: 'computers have certain bus-widths and register-sizes, etc., and the size of a "word" is the number of bits most common to them; this is not always a straightforward determination'. --Apantomimehorse 16:04, 30 August 2006 (UTC)

The article shouldn't imply (if it does) that a word size is chosen out of the blue. It isn't. It's chosen exactly because it and its multiples are judged a good compromise for all the detailed design choices - memory bus width, numerical characteristics, addressing capability of instructions, etc., etc.
A word size is rarely undefined, but may be a somewhat arbitrary choice out of the several sizes that belong to an architecture's "size family". The article does discuss the concept of a family of sizes. A modern, complicated computer uses many different sizes, but most of them are members of the small family of tightly-related sizes. In the x86 case, it's clear that the word size was 16 bits originally, and that the extension to primarily-32-bit implementations extended the size family to include 32 as well as 16 and 8. As the article mentions, the choice of one of the members of the family as the single "word" size is somewhat arbitrary, and in the x86 case has been decided by historical precedent. The word size for x86 is straightforward: it is defined as 16 bits, even on an Opteron. On a new design, the definition would be chosen to be closer to what is thought to be the most important size, e.g. 64 bits (but the new design would also have much less use for other members of its size family). -R. S. Shaw 02:57, 31 August 2006 (UTC)
I'll take your word [<--pun!] for it that a word is still a meaningful concept and that selection thereof is important in design (I think learning x86 as your first processor rots your brains), but I still feel the article gives a strange impression about which defines which: does the word size define the size of the things which gel with the word size or vice versa? I see you changed 'influences' to 'reflected in', and that's a good start. I'm not happy, though, with the phrase 'natural size' as it's a bit mysterious to anyone who might wonder what in the world is natural about computers. By 'natural' is it meant 'best for performance'? Or 'convenient for programmers'? I understand this is addressed down in the article, but I think the opener needs something about the importance of word size other than just saying it's important. I'd address these issues myself, but I'm not qualified to do so.
(Heh, just realized I misspelled 'backwards' in the subject heading. Let's just pretend that was a pun.)--Apantomimehorse 22:33, 31 August 2006 (UTC)

"Word" and "Machine word" edit

"Word" seems quite ambiguous in modern usage, since people very often mean it as 16-bits rather than the actual word size of the processor (likely now to be 32 or 64 bits). There's Intel and Microsoft who, for reasons of backwards compatability, use "word" to mean a fixed size of 16-bits, just as "byte" is used to mean a fixed size of 8-bits. Although in formal contexts, "word" still means the basic native size of integers and pointers in the processor, a huge number of people believe that "word" just means "16-bits". Would it be useful to mention the phrase "machine word" as being a less ambiguous way to refer to that native size, and try to avoid the danger of the fixed 16-bit meaning? JohnBSmall 14:10, 8 October 2006 (UTC)Reply

Word has always been ambiguous, as, if it is not to be vague, it requires knowledge of the context: the particular machine. A huge number of people only have knowledge of x86 machines, so until their horizons extend they naturally think of a word as 16 bits. The "native size" of an undefined machine is intrinsically vague and/or ambiguous, and that is fine for many uses where a precise size isn't needed for the discussion to make sense. Modern computers use many different sizes, and designating one of those sizes the "word" size is rather arbitrary. Using "machine word" doesn't really clarify anything, since it is no more or less ambiguous than just "word"; you need to know the particular context to pin down the meaning. Additionally, if the context is the x86 series of machines, "machine word" is more ambiguous than "word", since the latter is known to be 16 bits by definition (a good convention, IMO), but the former leaves open questions as to what is meant. Even if you know the context is, say, an AMD64 machine, you would mislead many people if you used "machine word", since different people would take you to mean 16b, or 64b, or even 32b since most operands in use on such a machine are probably 32b. -R. S. Shaw 03:00, 9 October 2006 (UTC)
The meaning of "Word" is ambiguous, but that is not reflected in this article. The current article implies that the term "word" always refers to the register size of the processor. As JohnBSmall states it is very common for "word" to instead mean 16bits. You can see this usage in both the AMD and Intel application programmer manuals. — Preceding unsigned comment added by ‎ 130.123.104.22 (talkcontribs) 14:49, 19 November 2014

I have to strongly disagree. To anyone with a mainframe background of any kind this is simply not true. A word might be 32, 36, or 60 bits. 16 bits is a halfword, to call it a word is a recentism. A word on 8086 systems was 16 bits, and Intel didn't want to bother to update their terminology when the 386 was introduced. Peter Flass (talk) 22:17, 19 November 2014 (UTC)

To call 16 bits a word would be a recentism on a 32-bit mainframe. To call it a word wouldn't be a recentism in the world of the IBM 1130, which was announced one year after the System/360 (see the 1130 System Summary, which speaks of 16-bit words), nor in the world of the mid-1960s HP 2100, or the slightly later Data General Nova and PDP-11. The PDP-11's terminology was continued with the 32-bit VAX, so that was an earlier example than the 80386 of a 32-bit architecture calling 16-bit quantities "words" and 32-bit quantities "doublewords".
So the choice of a particular architecture's developers to use the term "word" for a particular operand size is arbitrary; the mainframe perspective and the minicomputer perspective (in which a word might be 12, 16, or 18 bits, for example) are equal here.
So there's:
  • "word" as the minimum directly-addressible unit of memory on many, mostly older, architectures;
  • "word" as the maximum integer/address operand size on most architectures;
  • "word" as a term for a particular operand size, often chosen for historical reasons (as is the case for VAX, IA-32, and x86-64, for example).
Of those three, the last is of least interest, as it's arbitrary and, in many cases, historical. We should perhaps mention that some architectures have quantities called "words" that have nothing to do with the inherent characteristics of the architecture, and then talk about the other two types of "word", with the second one being the one of current interest for most architectures, as most of them are 8-bit-byte-addressable and have support for at least 16-bit operands and addresses. If the article ends up extending the horizons of x86 programmers as a result, that would be a Good Thing. Guy Harris (talk) 23:09, 19 November 2014 (UTC)
See also the "Size families" section of the article, which talks about the third use of "word". Guy Harris (talk) 10:06, 20 November 2014 (UTC)Reply

Minicomputers

There were a whole bunch of minicomputers before the VAX. I used Motorola, HP, and DEC. There were a few others in the building for "real time" computing and some pre-PC desktops.

I remember the CDC 6600 as having a 64-bit word and there were 128 characters available. The 128 characters do not match a 6-bit word, but rather a 7-bit word. You could run APL on it and it used a whole bunch of special characters.

I only read that part of the manual once, and there were a few "hall talk" discussions about the size. So, there may be a wet memory error there.

Ralph —The preceding unsigned comment was added by 67.173.69.180 (talk) 15:33, 6 January 2007 (UTC).

The CDC 6600 and 7600 use 60-bit words. Characters are defined in terms of 6-bit fields, of which a character might take one or two. Gah4 (talk) 05:10, 27 March 2019 (UTC)

tword

Is "tword" worth mentioning? It is used, at least, in nasm as a ten-byte field. I found a few other references to it (for example, GoAsm). I was looking because the nasm documentation didn't really say how big it is. --Ishi Gustaedr 17:56, 20 July 2007 (UTC)Reply

I don't think it belongs here, as it's not a generally recognized term. It seems pretty much to be a convention internal to nasm and a few other tools. -R. S. Shaw 03:39, 21 July 2007 (UTC)

Variable word architectures requires a fix and the article needs to be improved

the Model II reduced this to 6 cycles, but reduced the fetch times to 4 cycles if one or 1 cycle if both address fields were not needed by the instruction

Suggestions:

1. The article needs to differentiate between the general meaning of word and Intel's definition (more clearly). Generally, word should mean the maximum amount of data which can be stored in a register, transferred through the memory bus in one cycle, etc. (the width of them). Now Intel (or x86 programmers) have coined a new word definition, which is 16 bits. This is only one of the possible word sizes, just as byte can mean not only 8 bits. The IA-32 (since 80386) natural word size is 32 bits, because that amount of data is the maximum register capacity and is the size with which the system operates. The 16-bit part is only part of the register. As well, the 16-bit word can also be divided into high and low parts, and all these together make one 32-bit register. The same goes for x86-64, but not the latest IA-64. Furthermore, the C language standards ANSI C and C99 define that the int size is equal to the size of the word on a particular system. On IA-32 it is 32 bits.

2.

and is still said to be 16 bits, despite the fact that they may in actuality (and especially when the default operand size is 32-bit) operate more like a machine with a 32 bit word size. Similarly in the newer x86-64 architecture, a "word" is still 16 bits, although 64-bit ("quadruple word") operands may be more common.

Seems too much uncertainty (may, more like) —Preceding unsigned comment added by 193.219.93.218 (talk) 13:49, 11 June 2008 (UTC)

1. As I see it, Intel didn't really coin any new definition of "word". They defined a word as 16 bits when they came out with the 8086. They continued with that convention with the upward-compatible 80186 and 80286 (even though the latter had many 32-bit operations). The '386 was also upward-compatible, and initially most programs were using 16-bit operands (indeed, could run on an 8086 or 8088), so of course they continued to have the term 'word' mean 16 bits. To do otherwise would have only created completely unnecessary confusion. This is not unusual; the term 'word' is vague except when applied to a particular machine, where it usually has a very particular value. This usually has to be done by definition for the machine, because any modern machine uses different sizes in different contexts; it is not a simple maximum register size. The x86-64, for instance, has some registers bigger than 64 bits, e.g. 128 bits. The "Size families" section of the article talks about this.
2. The prose seemed to reflect the situation, which is not really uncertain but complicated and conditional - a "32-bit" 386 can be running 8086 code and not operating like a native "32-bit word machine" in most respects. But I have made some revisions to it which may have improved things. -R. S. Shaw (talk) 19:40, 11 June 2008 (UTC)
Thanks. I usually try to contact the maintainer first. The main reason I question this word issue is that the definition of word crosses with C's definition of int. Int is defined as a word on a particular system. Because of this I came here to verify my assumption that int would be 64-bit on x86-64. But according to this article int should be 16-bit. And we know that int size on 32-bit systems is 32-bit. Btw, memory bus size is 32/64 bits and even if we operate with only ax, bx, cx, dx, si and di, data is sent from the eax, ebx, ... with leading 0 appended. Could this word term be defined in some IEEE released terminology standard? 193.219.93.218 (talk) 03:47, 12 June 2008 (UTC)
I haven't looked at the C definition recently, but I thought it was more like natural size, and not word size. For 80x86, it stayed at 16 bits for a long time, but then finally went to 32 bits. (I still sometimes use the Watcom compilers, where there are two compilers to generate 16 or 32 bit code.) In the case of Fortran, the size of INTEGER and REAL are required to be the same, such that EQUIVALENCE of arrays works. (But it isn't required that all the bits be used.) The IBM 704 stores 16 bit integers in 36 bit words. Gah4 (talk) 19:06, 20 April 2018 (UTC)
To quote ISO/IEC 9899:2011, "Information technology — Programming languages — C":

A ‘‘plain’’ int object has the natural size suggested by the architecture of the execution environment (large enough to contain any value in the range INT_MIN to INT_MAX as defined in the header <limits.h>).

C90 and C99 say the same thing. So you are correct - C's definition of int does not involve the term "word" at all.
Prior to the C standard, there existed C compilers for the Motorola 68000 family that had 16-bit ints and C compilers for the 68000 family that had 32-bit ints, so some compiler writers chose "the width of the CPU data paths" and some compiler writers chose "the size of the data and address registers and of the operands of the widest integer arithmetic instructions". Guy Harris (talk) 20:05, 20 April 2018 (UTC)
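For anyone checking this on a current system, here is a minimal C sketch (my illustration, not from the standard) showing that the widths of the standard integer types are implementation-defined rather than defined in terms of "word"; a typical LP64 x86-64 compiler prints 2, 4, 8, 8 bytes and a typical ILP32 compiler prints 2, 4, 4, 4, neither of which is tied to the 16-bit "word" of the x86 manuals:

    #include <stdio.h>

    int main(void)
    {
        /* Each of these widths is chosen by the implementation ("the natural
           size suggested by the architecture"), subject only to minimum ranges. */
        printf("short:   %zu bytes\n", sizeof(short));
        printf("int:     %zu bytes\n", sizeof(int));
        printf("long:    %zu bytes\n", sizeof(long));
        printf("pointer: %zu bytes\n", sizeof(void *));
        return 0;
    }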

Table of word sizes

What are the criteria for including an architecture in this table? I don't see the PDP-6/10, which was the dominant architecture on the ARPAnet and the major machine used in AI research in the 60's and 70's; I don't see the SDS/XDS 940, a pioneering early timesharing machine; I don't see the GE/Honeywell 6xx/6xxx series, on which Multics was implemented; I don't see the Manchester Atlas, which pioneered virtual memory. Of course, I could just add them, but perhaps there is actually some logic about the current choice I am missing? For that matter, I'm not sure that having an extensive list is all that valuable; maybe it could be condensed in some useful way? --Macrakis (talk) 20:16, 27 July 2010 (UTC)

When I put the original table in [1], I gave it a dozen entries I thought were representative of the historical range of sizes and featured major machines. I envisioned a small table of this kind as an easy overview of the evolution of word sizes. It was not long before others started adding entries, most notably User:RTC who added perhaps two thirds of the current entries I would guess, although none since March 2008. What criteria there were for new entries I couldn't say. I prefer the small table, but haven't actively resisted the expansion. -R. S. Shaw (talk) 05:48, 30 July 2010 (UTC)
x86-64 should be there. 24.87.51.64 (talk) 14:18, 29 June 2011 (UTC)
The table also has some errors. There have been many machines with 6-bit characters and addressability to the character; the table shows them as addressable to the digit. With the exception of the IBM 1620, they should be shown as 6-bit character (+wordmark for 1400/7010) and addressability to the character. The Fast Universal Digital Computer M-2 had an address size of 10, but the unit of address resolution was the word.
As a secondary issue, should the table show sign convention, e.g. ones complement, two's complement, sign magnitude? --Shmuel (Seymour J.) Metz Username:Chatul (talk) 15:19, 25 February 2022 (UTC)
That would have been more obvious if IBM didn't call their character code, and the resultant characters, BCD. You can consider a 6 bit character a digit in base 64, though as I remember, not all 64 codes can be used. When these machines do arithmetic, those are decimal digits, even though they take 6 bits (not counting word mark bit). And they do figure out a way to address memory using those digits in the least obvious way. (That is, a combination of decimal and binary arithmetic.) Given all that, calling them digits, (though maybe not decimal digits) doesn't seem so far off. Gah4 (talk) 22:46, 25 February 2022 (UTC)
@Chatul: Sign might be valid, but you would have separate conventions for binary and decimal on the same machine (e.g. S/360), and might have a different one for floats as well. My question is, what about machines that have only software floats (SDS 9xx, IBM 1130)? Should these be indicated? On the other hand, the table width is just about at the limit of usability now, so maybe this stuff is not relevant to discussion of words. Peter Flass (talk) 23:38, 25 February 2022 (UTC)
True, there are complexities with signs, including machines whose signs take more than two values. Perhaps the best way to deal with them is to change Uses of words from a definition list to subsections with {{main}} as appropriate. --Shmuel (Seymour J.) Metz Username:Chatul (talk) 23:52, 26 February 2022 (UTC)
On machines addressable to six-bit characters, only 30 of the possible characters are digits; 0-9, 0-9 with a plus zone and 0-9 with a minus zone. Given that, I see no justification for referring to arbitrary six-bit characters as digits, especially when all six bits of each character in an instruction have significance. --Shmuel (Seymour J.) Metz Username:Chatul (talk) 23:52, 26 February 2022 (UTC)
I think that having a comprehensive table of machines is useful; it might not be in line with the original purpose of the article, but is a useful resource, and I personally was pleased to see some of the old machines I worked on represented. One advantage of having a comprehensive list is that one can more easily see the trend towards 64 bit words, but not extending further (which I think is relevant to the article). If people do want a shorter table in this article I would recommend splitting the article: a short table in this one with a link to another article, "List of computer word lengths". Whichever option we go with, it would be nice if Colossus is added, even if it needs some explanation. — Preceding unsigned comment added by FreeFlow99 (talkcontribs) 11:21, 5 January 2024 (UTC)

Rethink

Although I appreciate the idea that this article describes "Word" as a dynamically sized type depending on architecture...

When we're in the world of x86 there's simply a backward compatibility in effect that has fixed the size of the type "WORD" to 16 bits. The problem with this concept, however, is compilers, which actually utilize fixed types but use base types like "integer" and "long" that are sized depending on the underlying CPU architecture.

When in C the type WORD is defined via "#define WORD int", it doesn't mean that the WORD is always the length of an integer; it means that WORD is defined to a type that, at the time it was written, was 16 bits (an int). When you compile the same definition on a compiler that supports 64 bits, it will be compiled to a 32-bit type, which will probably break your software - especially when coupled to the Windows API. Nowadays WORD will probably be defined as a short int, which is 16 bits in most C compilers.
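A minimal C sketch of the point being made (hypothetical names, not the actual Windows headers, which nowadays use a fixed 16-bit type for WORD): a definition written in terms of int tracks whatever width the compiler gives int, while a fixed-width typedef stays 16 bits regardless of the target:

    #include <stdint.h>

    #define WORD_VIA_INT int      /* 16 bits on the compilers this idiom dates from,
                                     32 (or more) on later ones - silently changes binary layouts */
    typedef uint16_t WORD_FIXED;  /* always 16 bits, matching the x86 "word" convention */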

As a result, or reason, who knows what happened first, both Intel and AMD have a clear definition of Words in their opcode manuals.

The following quote is from the "Intel® 64 and IA-32 Architectures, Software Developer’s Manual, Volume 2A: Instruction Set Reference, A-M":

cb, cw, cd, cp, co, ct — A 1-byte (cb), 2-byte (cw), 4-byte (cd), 6-byte (cp), 8-byte (co) or 10-byte (ct) value following the opcode. This value is used to specify a code offset and possibly a new value for the code segment register.
ib, iw, id, io — A 1-byte (ib), 2-byte (iw), 4-byte (id) or 8-byte (io) immediate operand to the instruction that follows the opcode, ModR/M bytes or scale-indexing bytes. The opcode determines if the operand is a signed value. All words, doublewords and quadwords are given with the low-order byte first.

The "legacy" opcodes LODSB, LODSW, LODSD for example are made to specifically load 1-byte, 2-byte, 4-byte into registers. Independent of whether the architecture is 32 or 64 bits.

So I personally think that the size of a WORD has been fixed into history to 16 bits and DWORD into 32 bits, simply because they're currently heavily used and really need to be binary compatible types. — Preceding unsigned comment added by Partouf (talkcontribs) 20:54, 15 January 2011 (UTC)

68000

On the 68000, a word usually refers to 16 bits. The 68000 had a 16-bit data bus, and although it had 32-bit registers, performing 32-bit instructions was slower than performing 16-bit ones. -OOPSIE- (talk) 08:27, 2 February 2011 (UTC)

The 68000 was Motorola's answer to the 8086, plus a half step or so. It has less connection to the 6800 than the 8086 does to the 8080. Gah4 (talk) 18:57, 20 April 2018 (UTC)
The Intel 8086 page claims that Intel started designing the 8086 in 1976, and the Motorola 68000 article claims that the 68000 grew out of a project "begun in 1976 to develop an entirely new architecture without backward compatibility"; I don't know to what extent it was an "answer" in the sense of a response to the 8086, as opposed to being a chip promoted by Motorola as a better alternative to the 8086.
But, yes, it's not particularly connected to the 6800.
And the 68000 article also claims that it was designed with the future in mind, so, whilst the implementation had at least some 16-bit data paths internally and a 16-bit bus, the instruction set was intended to be 32-bit, and, as of the 68020, the implementation was 32-bit as well. Guy Harris (talk) 20:13, 20 April 2018 (UTC)

address width

A critical limit on any architecture is the width of an address. I admit that there are multiple versions of this: the width of the address bus and the width of the address as far as ordinary programs are concerned (and sometimes the width of the address for programs that are willing to work hard).

Many computer architectures ran out of address bits, grew warts to extend the physical address, and then died when these were not sufficient. Think: PDP-8 with 12-bit addresses, then bank switching to allow more memory (horrible for a program to deal with, OK for an OS). Or the i386 with 32 bit addresses and then PAE.

Reliable sources say that the main motivation for 64-bit architectures (SPARC, MIPS, AMD64, ARM8) has been to make wider addresses convenient.

I'd like a column added to the table to give the address width. DHR (talk) 20:07, 28 October 2013 (UTC)

Address width is indeed an important aspect of computer architecture, both at the Instruction Set Architecture and microarchitecture levels, and is also an important characteristic of particular processor models within a microarchitecture. The subject deserves exposition in Wikipedia. It isn't, however, very closely related to word size, and varies much more widely than word size does. It varies among chip models of an architecture, and often with microarchitectures. It is often responsible for forcing an extension to an ISA. Yet typically the word size remains constant across all those variations.
Because of the above, I'd be against adding discussion of it to the Word article and think it should not be in the word size table. (I'll add that the word table is overly complex now, and would suffer from any additional columns.) --R. S. Shaw (talk) 20:16, 2 November 2013 (UTC)
Virtual address width rarely varies among implementations of an architecture, and that's the address width that matters most to programmers. All IA-32 processors have 32-bit linear addresses, and all x86-64 processors have 64-bit addresses, for example, regardless of how wide the physical addresses are on particular processors. And when the ISA extends for larger addresses, larger general registers and load/store/arithmetic instructions handling larger values generally go along with it, widening the word size.
Historically, virtual address width (or physical address width if you only had physical addresses) may not have been closely related to word size, but in most modern architectures they're pretty closely related. Guy Harris (talk) 20:07, 24 August 2014 (UTC)
I agree that the width (in bits) of the various kinds of addresses, and the various warts that historically were used when the original address width for some machine turned out to be insufficient, are encyclopedic enough for Wikipedia to cover *somewhere*. However, I think memory address -- or perhaps pointer (computer programming) -- is a much better place to discuss address width. --DavidCary (talk) 15:02, 6 November 2014 (UTC)

Char and multibyte

Thanks for improving my language regarding "char" and Unicode (and Harvard, but while true shouldn't split-cache be mentioned?). While the history lesson is correct (I can always count on you), my point wasn't to list encodings. UTF-8 is up to 82.4% while Shift JIS is down to 1.2% on the web.[2]

My point was that "char" is still 8-bit (or not e.g. in Java but with all those newer languages we have, there is still an 8-bit type) while not "usable" when 8-bit for characters.. The multibyte encodings didn't influence computer architecture and word size as far as I know.. Still I just thought somehow it should be mentioned that "char" can be bigger (e.g. 32-bit in Julia (programming language), my current favorite) while not having anything to do with the least addressable unit. Then maybe it's ok to shorten the language, to be on topic, while only mentioning Unicode/UTF-8 (as the only important encoding going forward)? comp.arch (talk) 14:38, 19 January 2015 (UTC)Reply

So what is the relevance of a particular C datatype in an article about word length? And if "the multibyte encodings didn't influence ... word size as far as I know", why bother mentioning any of them, including UTF-8 or UTF-16? The character size influenced the word size back in the days when variable-length encodings were rare (and when the C programming language didn't exist). Guy Harris (talk) 17:59, 19 January 2015 (UTC)
Hmm, you may be right, and then it seems you wouldn't object if I removed the text you added. The first thing I noticed and changed was this sentence: "Character size is one of the influences on a choice of word size." -> "Character size ("char") was one of the influences on a choice of word size." I felt "is one of" might be confusing to Unicode users and I needed to add something about Unicode since I changed is->was. Thinking about this more, actually the "char" datatype isn't variable size (it may be 16-bit from pre-Han unification as in Java or just 32-bit in other languages), only the encoding is variable (and of course the word size doesn't change, as it is still a multiple of 16 or 32). Then at a low level you "need" a "byte" type for the encoding - unless... If UCS-2 (note not UTF-16) had prevailed, and with indexed color dead, I wonder if the Alpha would have gotten away with skipping byte addressing, having the "char"/int 16-bit... It was considered a mistake and they added it later (I was also wondering whether this article should mention that). What other major uses of bytes can you think of? Networking is strictly speaking in bits, but network packet lengths are in bytes (not, say, multiples of 4 usually?). Not that it needs byte addressing (not in the Alpha, and I think not one of the reasons that added bytes).
I know chronological order is preferred in history sections in WP, but the Word section is about the concepts and design choices; maybe start with what is relevant now and end with the historical? Not sure if you oppose reversing the order, as the section has history in it, while not strictly being a historical section. comp.arch (talk) 08:54, 20 January 2015 (UTC)
I wouldn't object if we removed everything that spoke of "char". It would be unjustified to assume that I wouldn't object to other removals.
The right way to handle this would be to make it clear that, in the context of the computer history being discussed, "characters" come from a small set, possibly not even including lower-case letters, and therefore that anybody who thinks of "characters" as Unicode entities needs to significantly recalibrate their thought processes before reading that section - it'd be like somebody thinking a "phone" is a small portable hand-held device with a radio when reading an article about hand-cranked phones. :-)
And, no, the C/C++/Objective-C "char" datatype isn't variable-sized; it only needs to be "large enough to store any member of the basic execution character set", which means, in practice, that it's one 8-bit byte on 99 44/100% of the computer systems supporting C. "wchar_t" is what you want for larger character encodings, although I think that's 16 bits in Microsoft's C implementations, which isn't big enough for a Unicode character (Basic Multilingual Plane, yes, but not all of Unicode). C's "char" is best thought of as meaning "one-byte integer of unspecified signedness", with "signed char" and "unsigned char" meaning "one-byte {signed, unsigned} integer". Characters are encoded as sequences of 1 or more "char"s.
Given existing C code, the chances that a word-addressable Alpha would have succeeded were somewhere between slim and none. It'd have to be able to read and write files and network packets with a lot of byte-oriented stuff in it, etc.; that's why Alpha was byte-addressable, even if the first versions of the instruction set required multiple instructions to load or store 8-bit or 16-bit quantities. (No, the article shouldn't mention that; Alpha was not word-oriented in the sense that, say, the IBM 709x or the Univac 1100/2200 machines or the GE 600 machines and their successors or the PDP-10 was; all pointers were byte pointers, you just had to do multiple instructions to do 1-byte or 2-byte loads or stores. And the size of a word, in the sense of this article, was 64 bits, but Alpha could do 32-bit loads and stores with one instruction.)
The section talking about the influence of character size on word size is very much a historical section - the System/360 started the 8-bit-byte/word as power-of-2-multiple-of-bytes/byte-addressability trend that became dominant in mainframes with the S/360, dominant in minicomputers with the PDP-11, and was pretty much always dominant in microprocessors. I see no good reasons to change the order of items in that section from what we have now. Guy Harris (talk) 10:50, 20 January 2015 (UTC)
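A small C illustration (mine, assuming a compiler whose execution character set is UTF-8) of the distinction drawn above between the one-byte char type and the variable-length encoding of characters:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *a    = "A";       /* U+0041: 1 byte in UTF-8  */
        const char *euro = "\u20ac";  /* U+20AC: 3 bytes in UTF-8 */

        printf("sizeof(char)     = %zu\n", sizeof(char));  /* always 1 */
        printf("bytes in \"A\"     = %zu\n", strlen(a));     /* 1 */
        printf("bytes in U+20AC  = %zu\n", strlen(euro));  /* 3 with a UTF-8 execution charset */
        return 0;
    }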
The stuff about multi-byte encodings was originally introduced in this edit; you might want to ask the person who made that edit whether they would object if it were removed. :-) Guy Harris (talk) 10:53, 20 January 2015 (UTC)

Natural-size "word" vs. historical-terminology "word", yet again edit

Recent edits changed the entries for IA-32 and x86-64 to use "word" in the historical-terminology sense rather than the natural-size sense. This has been much discussed during the history of this page.

If, for architectures that evolved from earlier architectures with shorter natural-size words, we're going to consistently use "word" in the historical-terminology sense, then 1) we need to fix the entry for VAX as well (it was a 32-bit architecture, so "the natural unit of data used by a particular processor design", to quote the lede of the article, was 32 bits) and 2) we should update the lede, as it doesn't say "word is a term the developer of an instruction set uses for some size of data that might, or might not, be the natural unit of data used by a particular processor design".

Otherwise, we should stick to using "word" to mean what we say it means, even if that confuses those used to calling a quantity that's half the size, or a quarter of the size, of the registers in an instruction set a "word".

(We should also speak of instruction sets rather than processor designs in the lede, as there have been a number of designs for processors that implement an N-bit instruction set but have M-bit data paths internally, for M < N, but that's another matter.) Guy Harris (talk) 23:38, 20 February 2016 (UTC)

I changed it because several pages directed to this one for definitions of X86 word and double-word, and then this article used a technical definition that contradicted normal use. The table in general seems to be of doubtful use, and full of errors. For instance the use of d is inconsistent and random. Perhaps we should ask what this table is supposed to do? I just want this page somewhere to have a definition of the most common uses of Word. That it is 16-bit for integers on X86, 32-bit for floats on the same, and 32-bit on most other 32-bit (and 64-bit) architectures. Carewolf (talk) 11:36, 21 February 2016 (UTC)
"this article used a technical definition that contradicted normal use" This article is about the technical notion of "word", not about the x86 terminology. Perhaps the pages that go here when talking about the term Intel uses in their documentation should go somewhere else, e.g. a "Word (x86)" page. Guy Harris (talk) 17:13, 21 February 2016 (UTC)Reply
This page is about "Word" in computer architecture. We can define that, but then note nomenclature in different architectures is different based on history. Think also about word-size compared to floating point - Is it really 16bit on a 16-bit x86 with x87, when the smallest size floating point it can operate on is 32bit and the internal registers were 80bits? The term is inherently ambigious, so this article needs to deal with that. 17:27, 21 February 2016 (UTC)
In the technical sense, "word" generally corresponds to maximum integer and (linear) address sizes (even with wider-than-word-size integer arithmetic done with multiple instructions), even if that might not correspond to floating-point value sizes. Neither the integer nor linear address size are 16 bits on an IA-32 or x86-64 processor.
And, yes, the last paragraph of the introduction, and the "Size families" section, should be rewritten to indicate that, for some architectures, the term "word" is used to refer to values narrower than the natural word size of the processor - i.e.:
  • the natural word size of IA-32 processors, and the natural word size of x86-64 processors, are not the same as the natural word size of 16-bit x86 processors, but the Intel documentation uses "word" to mean "16-bit quantity" for all flavors of x86
  • the natural word size for VAX processors, and the natural word size of Alpha processors, are not the same as the natural word size of PDP-11 processors, but the DEC documentation uses "word" to mean "16-bit quantity" for all three of those architectures;
  • the natural word size for z/Architecture processors is not the same as the natural word size of System/3x0 processors, but the IBM documentation uses "word" to mean "32-bit quantity" for both of those architectures. Guy Harris (talk) 18:07, 21 February 2016 (UTC)
I've changed it yet again, for consistency. There's no point in singling out 32-bit and 64-bit x86 when Microsoft's toolchains also exist for a number of other platforms, and they use "WORD" for 16 bits, "DWORD" for 32 bits. Thus, we'd need to either change all of those to use the Microsoft terminology, or use natural for all. And even worse, what to do if a Microsoft toolchain gets made for another architecture? Per this logic, you'd need to change it to 16-bit "words". -KiloByte (talk) 22:40, 27 April 2017 (UTC)
Both 32-bit IA-32/x86 and 64-bit x86-64, along with VAX and Itanium, use, in the hardware documentation, 16 bits for "word". In earlier years, word size was important in advertising, such that one wouldn't want to show a smaller-than-natural size. That seems to have been lost. Personally, I believe it was a mistake for DEC and Intel to do that, but they did, and we can't change it. For older word-addressed machines, there wasn't much reason to use any other size, but for byte-addressable machines, it seems less obvious. Someone asked about word size in Talk:Endianness, so I looked here to see what it says. Gah4 (talk) 06:32, 20 April 2018 (UTC)
Yup, x86 started out as 16-bit, and VAX-11 was a Virtual Address Extension to the 16-bit PDP-11 (even though the ISA was, although a bit similar, not the same), so, for better or worse, as the ISAs became wider or had wider follow-on ISAs, they chose to continue calling 16-bit quantities "words", as they did in the original 16-bit ISAs. S/360 started out as 32-bit, so "word" started out as 32 bits, at least in the sense that a 16-bit quantity was called a "halfword", and the 64-bit version of S/360, z/Architecture, still calls 16-bit quantities "halfwords". I'm not sure to what extent S/360 and its successor spoke of "words" except as the thing of which a halfword is half - the 32-bit add instruction was "add" (A), not "add word", and even the 64-bit add instruction is called "add", although the mnemonic is AG.
So maybe the word "word" shouldn't be used any more with byte-addressable machines.... Guy Harris (talk) 07:54, 20 April 2018 (UTC)
I wonder that, too. Before VAX, word size was a selling point. Though, even though S/360 has, by definition, a 32 bit word, the implementations often used a smaller data path and memory access width. The 360/30 and 360/40 have 8 bit ALUs, though the 360/40 has 16 bit wide (plus parity) memory. So, one definition of word size, the natural width for the processor, doesn't apply. High end S/360 and S/370 models have 64 bit wide memory. Also, S/360 and successors have a 64 bit PSW, Program Status Word (not doubleword). I am not sure what the natural width for various VAX processors is. Gah4 (talk) 18:45, 20 April 2018 (UTC)
"Program Status Word" seems to be one of the few places where the S/360 architecture speaks directly of "word"s; the Principles of Operation mentions 16-bit "halfword" instructions, such as "add halfword", but the 32-bit add instruction is just "add".
I have the impression that all VAXes had 32-bit internal data paths; even the low-end VAX-11/730 had a 32-bit ALU (made out of 2901's). Guy Harris (talk) 20:18, 20 April 2018 (UTC)
As far as I know, that is true for VAX. When it was time to go to a 64-bit data path, they went to Alpha instead. I am not so sure about the high-end VAX, though. Gah4 (talk) 05:21, 27 March 2019 (UTC)

History.......

Well I had to scroll about 11 or 12 bits down this page of ever-so cleverness, but finally some sanity, thank you Partouf. Firstly, whoever was responsible for putting together this page, I love it. It's full of really interesting stuff, and I also get the impression you may have a similar coding ethos to myself. I like it. I like you. So I really would like to see this page be re-titled and linked to or included in a page on the way 'word' is used in computing, perhaps under the heading "origins of data size terminology" or something like that. It is all true. You are technically correct, however, anybody looking for real world information can only be confused here. Perhaps a junior school student who is starting computer science and can't remember which one of those funny new terms is which. There are actually comments here from people not wanting readers to get the wrong impression and think a word is 16 bits... It is. Sorry. The real world built a bridge, got over it and moved on with their lives. That poor school kid just wants to know it's 2 bytes, she has enough study to do already.

Perhaps it should be noted somewhere early in the page not to confuse this with the "word size" that some sources still use to describe processor architecture, and directing the 1 in every 65536 visitors (including myself) who find that interesting, to a section further down the page. The fact that the word has been re-purposed really should rate about 1 sentence in the opening paragraph before it is clearly described that a word is 2 bytes, 16 bits and can hold values of zero thru 2^16-1. Maybe something like "In computing 'word' was once used to describe the size of the largest individual binary value that could be manipulated by a given system, but with the evolution of processor design and rapid growth of computer use worldwide, it now almost always refers to..." a WORD. This is what languages do, they evolve. If we want to wax philosophical about it, I could postulate that it is probably because computer terminology and indeed documentation began to solidify around the time that 16 bit architecture was the mainstream norm. A group of cetaceanologists may very correctly use the term 'cow' to describe 100 tons of migratory aquatic mammal, but WP quite rightly directs that inquiry straight to this page.

The obvious example that immediately sprang to mind, I see has already been canvassed here by Partouf, being the long ago standardisation of the term in assembly language programming and its ubiquitous and categorical definition as being 2 bytes. From volume 1 of the current Intel® 64 and IA-32 Architectures Software Developer Manuals:

4.1 FUNDAMENTAL DATA TYPES The fundamental data types are bytes, words, doublewords, quadwords, and double quadwords (see Figure 4-1). A byte is eight bits, a word is 2 bytes (16 bits), a doubleword is 4 bytes (32 bits), a quadword is 8 bytes (64 bits), and a double quadword is 16 bytes (128 bits).

Please believe me, this is a page full of really good, interesting stuff. Stuff that I mostly already know, but I still find the chronicled detail of a subject so close to my heart to be a thing of great beauty. But, I suspect like most people who feel that way, I don't need 'word' explained to me; this page should be for those who do. I got here when I searched for 'qword'. The term for the next size up (dqword) momentarily slipped my mind, and I expected a page dealing with data sizes where I could get the information. What I found was more akin to a group of very proper gents all sipping tea and reassuring each other that the word 'gay' really does mean happy, regardless of what those other 5 billion people use it for. In fact, having just looked up the gay page, it's a really good example:

Gay This article is about gay as an English-language term. For the sexual orientation, see Homosexuality. For other uses, see Gay (disambiguation).

Gay is a term that primarily refers to a homosexual person or the trait of being homosexual. The term was originally used to mean "carefree", "happy", or "bright and showy".

A bit further on there's a 'History' title and some of this, “The word gay arrived in English during the 12th century from Old French gai, most likely deriving…..”

Should anybody find this example to be in some way inappropriate they will have proved themselves well described by my simile. It is the perfect example.

The only sources I am aware of in which 'word' is still used to describe the native data size of a processor are in some technical manuals from microcontroller manufacturers originating from regions where English is not the predominant language. Perhaps this page could be accurately titled “Word (microcontroller architecture)”, but not “Word (computing)”. As “Word (computer science)”, it is just plain wrong. I know it was right in 1970, but that's a moot point. I have spent some time programming non-PC processors, of various makes and (dare I say it) word-sizes, so I understand why 'word' is so appropriate in its traditional context; it's almost poetic when you imagine the quantum with which a processor speaks.

However, in the real world mostly they don't. They 'talk' to almost anything else using 2-wire buses like SMBus or I²C. SPI with its lavish 4 wires is being used less in favor of the more versatile and cheaper to implement I²C. With these I find myself not thinking in 8, 16, 32 or 64 bits as I do with PCs, but in serial pulses. You can write the code in C these days; it's mostly interfacing with peripherals that's the challenge, and it's mostly 2 or 4 wire, even modern smartphones. Typically the only high bandwidth 'bus' is for graphics, and having the GPU and CPU fused together, they are all but the same device... it's no surprise our eyes are still attached directly to the front of our brains.

After I wrote the start of this, I thought I had made a mistake, and that this was a page specifically for processor architecture, that there was a word (unit of data) page also and it was a case of a bad link that needed to be redirected. I looked. I was wrong. I couldn't believe it. I'm sorry.

“Recentism”…. Peter, that has to be trolling right? I'm not sure, but you say, “A word might be 32, 36, or 60 bits” immediately followed by “16 bits is a halfword, to call it a word is a recentism.” You then go on to explain that the brains trust at Intel were just being lazy when they decided that they wouldn't re-define the length of a cubit to match the new king's forearm. So I'm just going to give you a big thumbs-up for sense of humor, but I should touch on the subject that you have so rightly identified as being at the root of the situation. The only linked reference on this page without the word 'history' in the title was first printed in 1970. Reference number 4 is unlinked, but it shows a glimmer of promise with its title specifying “Concepts and Evolution”.

Guy gave a couple of great examples going back to the mid 60's; however, the real question is what does the word mean? I know what it means to you, and I expect also to our host, but what does it mean to the vast majority of people who use it? Back in the mid 60's when mainframes and COBOL were all the rage, a tiny fraction of the world's population had access to any sort of computer. PCs arrived a few years later, but were simplistic and prohibitively expensive for most people. It really wasn't until the 80's that even a notable minority of the world had access to one. The ZX81 sold 1.5 million units. By the late 80's the entire range had sold about 7 million. A quick look at my phone and original Angry Birds has over 100 million downloads just on Android.

A frivolous example, but I think it's fair to say that the number of people who have daily access to a computer today is comparable to the entire population of the world in 1965, without knowing the exact figures. I could be wrong, but it wouldn't be a ridiculous argument. I think guessing at there being 10 million PCs in 1985 when Intel sanely decided to stick with the cubit they had been using for years would be generous. Even though it may only be a minority of users who think about data sizes beyond the bare minimum they are taught at school, I would suggest that in 2016 there must be tens, if not hundreds of millions of people who all know that a word is 2 bytes.

Possibly there are as many young people who know that gay can also mean 'happy' or 'bright', possibly not, but regardless of our personal feelings, are any of us old gents sipping our tea really going to tell that 6'6” Hell's Angel at the bar that he looks really gay tonight? Of course not, because like it or not we know what the word means. A word, any word, does not mean what the first person to speak it was thinking at the time; it means what people understand you to be communicating when you use it. The most absolutely stingy comparative estimate I could possibly imagine of people who use 'word' in a computing sense who take it to mean “2 Bytes” vs. those who take it to mean “the size of the native unit for a given processing system” would be 1000:1, albeit that I've knocked off a couple of zeros I'm quite sure are there, the point remains valid.

This page describes the historical origins of the use of the term 'word' in computing, not the actual meaning attributed to its usage by almost all the people who use it. The section of the population who understand the word in the way you are presenting it is a very small minority, but more importantly the section of the population who actually use the word in that context is so tiny that it is bordering on statistically irrelevant. You can, of course, point to historical text that will suggest you are technically correct, but at the end of the day what will you have achieved? WP is a truly awesome entity that has already etched itself into the global consciousness. The term 'word' is one of the central concepts in computing, as I suspect you might possibly want to point out, were you not so committed to the second pillar.

One of the things that fellows like me who are not as young as many have to come to terms with is that regardless of how certainly I know that my view on something is the best way to look at it, if an entire generation has decided otherwise, I am not going to sway them. I could stubbornly hold my ground and continue to back my barrow up; the best I could hope for there is that the world just ignores me until I become venerable and expire. Or I could participate in society as it is presented to me, and endeavor to appropriately impart whatever meager wisdom I possess from within something that is far larger than myself.

In my ever-so-humble opinion, the first thing a page should do is give the reader the information they are looking for, and then if there is an even richer context to place around it, so much the better. I believe that if you create a page that is relevant and informative, but then beckons the reader as far into the larger context as suits their individual interests and capacity, you will have made something that is truly greater than the sum of its parts and that may well outlive the both of us. Perhaps by then people will have decided that gay means 'windy'. Please try to imagine how this page presents to somebody who searches 'qword' or links from here.

… and really, check this out for formatting, it's quite well constructed.

       “I say old chap, its rather gay outdoors this afternoon”


60.231.47.233 (talk) 22:54, 12 June 2016 (UTC) sanity

Typography, italics and spaces

Guy Harris, thanks for editing the table. I suggested non-italics (intentionally changed only some, not all, in case I was wrong; left it inconsistent), and thanks for not changing (w) and {w} back, and for clarifying why the italics in comments. I can go with the table as is; or just change and show (again) how it might look. I believe it might be lost on people why some are in italics. Would upper case be better? I had a dilemma with "b", as it should stay lower case, but if you want the others to stand out, upper case would do. Also per MOS:FRAC, there should be a space I think (maybe an exception; I hope to conserve space?). comp.arch (talk) 15:03, 19 April 2017 (UTC)

Architecture additions

While the list of computer architectures is quite thorough, it could use Data General Nova, National Semiconductor NS320xx (NS32016/NS32032), SPARC (32- and 64-bit versions) and PA-RISC entries. 64-bit MIPS would also be nice. I'd add them myself but I don't know all the technical details. Bumm13 (talk) 10:31, 27 September 2017 (UTC)

I'm also surprised Burroughs isn't mentioned, they pioneered many innovations in architecture and had long word lengths on their small machines. Also the Symbolics Ivory's 40-bit word probably deserves a mention for uniqueness if nothing else. --72.224.97.26 (talk) 23:58, 10 May 2020 (UTC)

8

There are recent edits regarding removal, or unremoval, of 8 in the list of word sizes. Note that there are seven items in the Uses of Words section, all of which could be definitions of word size. It seems, though, that the number of bits used to describe a processor might not be the word size. VAX is considered a 32 bit processor (32 bit registers, 32 bit addresses, etc.) but has a 16 bit word size. The 8080, widely called an 8 bit processor, doesn't necessarily have an 8 bit word size. It has 16 bit addresses and (some) 16 bit registers. IBM S/360, a 32 bit system, has (in different models) data bus width from 8 to 64, 24 bit addresses, ALU size from 8 to 64 bits, not necessarily the same as data bus width. I think I agree with the removal of 8 from the list, even considering that there are 8 bit processors. (Unless said processors have an 8 bit address space.) Gah4 (talk) 23:43, 10 February 2019 (UTC)

IMHO, there should be an 8 bit word size as an entry, just as there should be 16 bit for machines with a larger address space (ex: 8086 is a '16' bit word with an arguably 32 bit address). Let's put it this way. What is the size of the data item when there is a push instruction? Does it take up 8, 16, 32, whatever bits? That is the architectural size of the machine. If a 360/30 does its work 8 bits at a time, it is still a 32 bit machine (like the 360/75) because it operates as a 32 bit machine even though it does it 8 bits at a time. [hjs] — Preceding unsigned comment added by 137.69.117.204 (talk) 20:21, 10 October 2019 (UTC)
(Not all processors have push instructions - S/3x0 doesn't have one - so that's not a general example. Perhaps "load or push" works better.)
In the case of the 8008, a memory address is given by the contents of two 8-bit registers, the H register and the L register, so it has 8-bit registers, 8-bit arithmetic, and instructions that load and store 8-bit quantities, but doesn't have an 8-bit address space. Guy Harris (talk) 20:43, 10 October 2019 (UTC)Reply
There are about six things that can be used to describe the bitness of a computer. Virtual address register size (log of virtual address space), real address size (log of real address space), data register size, ALU size (sometimes different from data register size), address bus size, and data bus size. Just about every combination of differences (that is, dissimilarities) has been seen. Personally, I would use the geometric mean of them all, but most don't like that one. Most 8 bit microprocessors have an 8 bit ALU, but 16 bit address. (The 8008 has a 14 bit address space, though.) The early ones didn't have enough pins, so physical data and/or address bus were smaller. Then there are the low-end S/360 models with 32 bit registers, 24 bit addressing, though less for physical address space, and an 8 bit ALU. Also, just about all the tricks to allow these combinations have been tried. As far as I know, none of the current 64 bit processors have a 64 bit address bus, as it is just a waste of pins. Tricks are used to reduce both physical and virtual address space, while keeping 64 bit addresses. And yet we try to give one number to each of these. Gah4 (talk) 22:39, 25 February 2022 (UTC)

In general, new processors must use the same data word lengths and virtual address widths as an older processor to have binary compatibility with that older processor. edit

The article says In general, new processors must use the same data word lengths and virtual address widths as an older processor to have binary compatibility with that older processor. This makes a lot of sense, but many examples show it is wrong. The VAX is the 32-bit successor architecture to the PDP-11 (still with a 16-bit word); early models have a compatibility mode to execute PDP-11 code. IBM extended the 24-bit addressing of S/360 to 31 bits in XA/370 and ESA/390, and finally to 64-bit addressing in z/Architecture, binary compatible all the way through. You can run compiled load modules from the early S/360 years under z/OS on modern z/Architecture machines: complete backward binary compatibility. I suspect more people are familiar with the 32-bit and 64-bit extensions to the 16-bit 8086, which again kept the 16-bit word. Gah4 (talk) 05:33, 27 March 2019 (UTC)Reply

In the case of the VAX, the compatibility is "modal", in that the VAX has a different instruction set from the PDP-11; PDP-11 compatibility is implemented in a separate mode. VAX isn't the 32-bit version of the PDP-11; it's a new instruction set, inspired by the PDP-11 but significantly different from it (different addressing modes, more registers, different instructions).
In the case of S/3x0, the compatibility is also "modal", but there's no new instruction set; the modes just affect whether the upper 8 bits of an address are ignored or only the uppermost bit - and the word length is 32 bits in S/360, S/370, S/370-XA, ESA/370, and ESA/390. For z/Architecture, it's also modal, but the word length goes to 64 bits.
In the case of most other instruction sets that went from 32 bits to 64 bits, there's a mode bit but, as with S/3x0 and z/Architecture, no new instruction set. In the case of x86, both the 16-bit to 32-bit and 32-bit to 64-bit transitions introduced some significant new instruction set features, and the number of registers doubled in the 32-bit to 64-bit transition, although the instruction set didn't change too significantly. In the case of ARM, the instruction set changed more significantly, with predication being dropped, for example.
So the statement needs clarification, in that changing the word length or address length needs a mode bit to preserve binary compatibility (or you have to require some extra coding discipline, as in the case of the 68000 family - don't stuff extra information into the upper 8 bits of an address, because your code will break on a 68020 or later). Guy Harris (talk) 07:26, 27 March 2019 (UTC)Reply
Oh yes, I remember when Macintosh programs were known to be 32-bit clean or not. S/360 programs, especially OS/360, used the upper byte with 24-bit addresses so much that it was not possible to go completely 32-bit clean. Backward compatibility is so important that they find ways to do it. Gah4 (talk) 12:03, 27 March 2019 (UTC)Reply

Where are machines like the CDC-STAR-100 and TI-ASC? edit

These supercomputers should be included in the table of machines due to their unique architectures and/or place in history. For example, the STAR had as its basis concepts from the programming language APL. https://en.wikipedia.org/wiki/CDC_STAR-100 For example, the ASC had as its basis optimized vector/matrix operations; its instruction set was designed to handle triply nested for loops as a first-class operation. https://en.wikipedia.org/wiki/TI_Advanced_Scientific_Computer — Preceding unsigned comment added by 137.69.117.204 (talk) 20:13, 10 October 2019 (UTC)Reply

"These supercomputers should be included" Add them if you want. Guy Harris (talk) 20:18, 10 October 2019 (UTC)Reply

"...must have the same data word lengths and virtual address widths..." for binary compatibility? edit

The article says

In general, new processors must use the same data word lengths and virtual address widths as an older processor to have binary compatibility with that older processor.

There are ways of expanding the word lengths or virtual address widths without breaking binary compatibility. The most common way this is accomplished is by adding separate modes, so that the newer processor starts up in a mode without support for the wider word lengths or virtual address widths. An OS that supports the newer version of the instruction set can enable the widening and, typically, can even run binary programs expecting the narrower widths by changing the mode on a per-process basis. This is how System/370-XA extended addresses from 24 bits (within a 32-bit word, the upper 8 bits being ignored) to 31 bits (again, within a 32-bit word, with the uppermost bit ignored), and how most 32-bit instruction sets were expanded to 64 bits.
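
(A rough C sketch of the masking behaviour described above; this is an illustration of the idea only, not actual S/370 code, and the mode names are invented for the example.)

    #include <stdint.h>
    #include <stdio.h>

    enum addr_mode { AMODE24, AMODE31 };

    /* Illustrative only: which part of a 32-bit address word is significant
       under 24-bit vs 31-bit addressing, with the high bits ignored. */
    static uint32_t effective_address(uint32_t word, enum addr_mode mode)
    {
        return (mode == AMODE24) ? (word & 0x00FFFFFFu)  /* upper 8 bits ignored */
                                 : (word & 0x7FFFFFFFu); /* only the top bit ignored */
    }

    int main(void)
    {
        uint32_t w = 0x81234567u;
        printf("24-bit mode: 0x%08X, 31-bit mode: 0x%08X\n",
               effective_address(w, AMODE24), effective_address(w, AMODE31));
        return 0;
    }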

In addition, some 64-bit instruction sets, such as x86-64, don't require that implementations support a full 64-bit virtual address space, but do require that addresses in which the unsupported bits are significant (non-zero or, at least in the case of x86-64, not simply a sign extension of the supported bits) cause faults, so that programs can't use those bits to store other data (this was the issue that required the mode bit for System/370-XA). Later implementations can increase (and have increased) the number of significant bits.
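
(A C sketch of the "sign extension" rule described above, assuming the common case of 48 implemented virtual-address bits; the function name and the 48-bit figure are assumptions for illustration, not a statement about any particular implementation.)

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative only: an x86-64-style "canonical address" test.  With 48
       implemented bits, bits 48..63 must all equal bit 47, i.e. the top 17
       bits are either all zeros or all ones. */
    static bool is_canonical(uint64_t va, unsigned implemented_bits)
    {
        uint64_t top = va >> (implemented_bits - 1);
        return top == 0 || top == (UINT64_MAX >> (implemented_bits - 1));
    }

    int main(void)
    {
        printf("%d %d\n",
               is_canonical(0x00007FFFDEADBEEFull, 48),   /* canonical     */
               is_canonical(0x00017FFFDEADBEEFull, 48));  /* non-canonical */
        return 0;
    }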

This was an issue, however, for some code for the Motorola 68000 series. The 68000 and 68010 had 32-bit address registers, but only put the lower 24 bits on the address bus. This would allow applications to pack 8 bits of information in the upper 8 bits of an address. That code would run into trouble on the 68012, which put the lower 31 bits on the address bus, and on the 68020 and later processors, which either put all 32 bits on the address bus, or, for the 68030 and later, provided them to the on-chip MMU to translate to a physical address. This was an issue for Macintosh software. Guy Harris (talk) 18:06, 13 April 2021 (UTC)Reply
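
(A C sketch, rather than 68000 assembly, of the upper-byte trick described above and why it breaks once all 32 address bits become significant; the tag value and address are made up for the example.)

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative only: a 32-bit pointer with an 8-bit tag packed into its
       upper byte.  A 68000/68010 drives only 24 address lines, so the tag is
       invisible to the hardware; a 68020 or later uses all 32 bits, so the
       tagged value no longer addresses the intended location. */
    static uint32_t pack_tag(uint32_t addr, uint8_t tag)
    {
        return (addr & 0x00FFFFFFu) | ((uint32_t)tag << 24);
    }

    int main(void)
    {
        uint32_t dirty = pack_tag(0x00123456u, 0xA5);
        printf("as seen by a 68000: 0x%08X\n", dirty & 0x00FFFFFFu);
        printf("as seen by a 68020: 0x%08X\n", dirty);
        return 0;
    }

The point is only that the same 32-bit pointer value names different locations depending on how many address bits the hardware honours.
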

Yes. We tend not to notice when a new processor has the same (various) widths as the previous one; that is, it has the same architecture, we name it after the family, and we consider it the same. So it is when a new architecture comes along, with different (various) widths, yet needs to be compatible, that it gets interesting. Even more, many of the IBM S/360 models had an emulation mode for previous generations of processors, that is, special microcode that would run code from machines like the 7090: 36-bit code running on a 32-bit processor! There are so many examples, most of which we know well because we had to work with them. One of the earlier machines to run out of address space was the PDP-11, which they partly fixed with various bank-addressing schemes, and finally with the VAX. Early VAX processors have a PDP-11 compatibility mode and, with OS support, will run PDP-11 software; the early VAX compilers ran in this mode, as it was faster to port them. In the change to XA/370, IBM had much to get right to support older 24-bit software, partly because of the way IBM used the upper bits in its own OS. One of the most important parts of OS/360, the DCB (data control block), is in user address space. Even with current 64-bit z/OS, some things still need to be below the 24-bit line; not only the DCB itself, but some fields in the DCB hold 24-bit addresses. There are stories, partly jokes but partly true, that trace current Intel processors back to the 4004. The 8086 was designed to be assembly-source compatible, but not binary compatible, with the 8080. Current 64-bit Intel processors still support the 16-bit 8086 modes and the instructions for 8080 compatibility, though 64-bit versions of Windows won't run 16-bit code. Also, there should be a distinction between user code and system code. Do note, though, that current IBM z/OS will still run load modules compiled and linked for OS/360: true binary compatibility. Gah4 (talk) 20:38, 14 April 2021 (UTC)Reply

IA-64 edit

The instruction length cell for IA-64 is rather terse. Does anybody know of an online manual that I could cite in a footnote with |quote= for a fuller explanation? Note that the 128-bit bundle contains more than just the 41-bit instructions (three 41-bit instruction slots plus a 5-bit template field: 3 × 41 + 5 = 128). --Shmuel (Seymour J.) Metz Username:Chatul (talk) 03:01, 24 April 2022 (UTC)Reply

They seem to be harder to find. You might look at this one, though I do wonder about vol-1 and vol-2. Gah4 (talk) 21:46, 25 April 2022 (UTC)Reply

"Bitness" listed at Redirects for discussion edit

  The redirect Bitness has been listed at redirects for discussion to determine whether its use and function meets the redirect guidelines. Readers of this page are welcome to comment on this redirect at Wikipedia:Redirects for discussion/Log/2024 March 21 § Bitness until a consensus is reached. Utopes (talk / cont) 03:22, 21 March 2024 (UTC)Reply