Sources

The article seems to be lacking sources. A reference to the original Intel 8086 documentation would be helpful; does anybody have a link? anton


Incorrect information

I corrected the '1989' introduction year to 1978. It seems that somebody had written incorrect information.


THERE IS MUCH MORE INCORRECT INFO HERE!

The 8086 CPU was NEVER used in the IBM XT. https://en.wikipedia.org/wiki/Intel_8088 The CP/M machines used the 8086 CPU; the IBM XT in fact always had the 8088 CPU. The 8086 CPU is MUCH faster, but IBM won over CP/M due to having standard expansion slots (bus). CP/M machines all had their own unique disk formats; you had conversion programs, but software working on one brand often did not support other brands. The IBM and all the clones had compatible DISKS and COMPATIBLE expansion cards. While there was no perfect compatibility in the early IBM/clone days, this soon changed, and all hardware and software became fully interchangeable.

An 8086 CPU can never work in an IBM XT machine, since the XT had an 8-bit bus and the 8086 has a 16-bit bus interface. Those EARLY IBM clones that had compatibility issues with IBM sometimes used the 8086 CPU, often making them outperform the IBM XT by being 2x faster.

But 2x is a little... plus compatibility issues... the compatibility of hardware and software was more beneficial to users, so the 8088 CPU won the market for years, until the first 286 CPUs, which had full compatibility with the 8088 CPU, hit the market!

IBM should have adopted the 8086 CPU, gaining 2x performance and making their machines compatible with the 8086 clones, but IBM didn't care about the speed. They had the market and the software. Still, if they had done that... the 8086 is almost as fast as the 286, and we would have had that performance 4 years earlier. But IBM NEVER bothered with the 8086. — Preceding unsigned comment added by 217.103.36.185 (talk) 11:11, 25 August 2019 (UTC)


Also by AMD

AMD created this processor type too. It has the number P8086-1, has an AMD logo, and is also marked (C) Intel 1978. I can take a picture of the chip and add it to the article.

AMD did not create this processor, but they did second-source it, i.e. manufacture it under licence from Intel, because Intel could not manufacture enough itself. Intel are the sole designers of the original 8086 processor. --ToaneeM (talk) 09:15, 15 November 2016 (UTC)

Please clarify

The 8086 is a 16-bit microprocessor chip designed by Intel in 1978, which gave rise to the x86 architecture. Shortly after, the Intel 8088 was introduced with an external 8-bit bus, allowing the use of cheap chipsets. It was based on the design of the 8080 and 8085 (it was assembly language source-compatible with the 8080) with a similar register set

Does the final sentence refer to the 8088 or the 8086? At first glance, it continues the info on the 8088, but upon consideration, it seems more likely to refer to the 8086. Is this correct? It's not too clear.

Fourohfour 15:15, 9 October 2005 (UTC)

It refers to both; the 8086 and 8088 are almost the same processor, except that the 8086 has a 16-bit external data bus and the 8088 has an 8-bit external data bus, plus some very, very small differences. Both were based on the 8080's design. --200.11.242.33 17:20, 18 October 2005 (UTC)

In "Busses and Operation", it states "Can access 220 memory locations i.e 1 MiB of memory." - While the facts are correct, the "i.e." suggests that they are directly related. The 16 bit data bus would imply an addressable memory of 2MiB (16 x 220), but the architecture was designed around 8 bit memory and thus treats the 16bit bus as two separate busses during memory transfers. 89.85.83.107 11:08, 20 March 2007 (UTC)Reply

The 8086 was not "designed around 8-bit memory" (however, the 8088 was), but each byte has its own address (as in 8-bit processors). Therefore 2^20 bytes and/or 16-bit words can be directly addressed by both chips; the latter are partially overlapping however, as they can be "unaligned", i.e. stored at any byte address. /HenkeB 20:11, 21 May 2007 (UTC)

Even more confusing, the 8088 article says nearly the exact same thing: 'The original IBM PC was based on the 8088'. Family Guy Guy (talk) 01:09, 30 May 2017 (UTC)

But what's the confusing part of this? Andy Dingley (talk) 01:26, 30 May 2017 (UTC)
The 8086 was designed for byte-addressable 16-bit wide memory, that is, with the ability to separately write-enable the upper and lower bytes of 16-bit words. Gah4 (talk) 23:15, 25 March 2020 (UTC)

first pc?

Is this the first "PC" microprocessor? I think so, in which case it should be noted; I expected to find a mention of the first, whether it's this one or another. The preceding unsigned comment was added by 81.182.77.84 (talk • contribs).

The first IBM PC used an Intel 8088, which is (as far as I know) effectively a slightly cut-down 8086 with an 8-bit external data bus (although internally still 16-bit). So that fact should probably be noted in the article. Fourohfour 18:27, 10 March 2006 (UTC)

Also, for comparison, I'm interested in the sizes and speeds of PC hard drives at the time, i.e. when it would have been used by a user. I had thought around 10 MB, but if it can only address 1 MB (article text), this seems unlikely. Did it even have a hard drive? A floppy? Anything? The preceding unsigned comment was added by 81.182.77.84 (talk • contribs).

You're confusing things here. When it talks about "memory" it means RAM. Hard drives are different; they are I/O devices, and the processor communicates with them via programmed input/output (PIO) or DMA. The 20-bit / 1 Mbyte limitation is only for RAM. Mystic Pixel 07:36, 8 June 2006 (UTC)

Disagree

I don't think the two articles should be merged. After all, one talks about the 8086 itself while the other talks about the general architecture. my_generation 20:06 UTC, 30 September 2006

I think merging the articles is a good idea. The Intel 8086 set the standard for microprocessor architecture. Look at the Intel 8086 user manual (if you can find it on eBay or Amazon) and you'll see the information that's included in both of these articles. It would be easier to have all that information in just one. In answer to the above disagreement, you can't describe the 8086 without describing its architecture.

There's nothing wrong with duplicating information that's already in the manual, but Microprocessor 8086 is poorly written and misnamed. An encyclopedia article isn't going to explain how to program a µP, and theoretical details are out of place here. Is there anything in particular you think should be moved to Intel 8086? Or perhaps remove the duplicate information and give Microprocessor 8086 a name which is clearer (or at least grammatical)? I'd like to remove that page and just have it redirect. Potatoswatter 06:46, 21 November 2006 (UTC)
I second what Potatoswatter said. (I thought the 8086 manual was available, but it doesn't seem to be. That's really weird; the 80186 manual is all over the place. Hmm.) Mystic Pixel 05:10, 22 November 2006 (UTC)

Not a bug?

This comment was in the "bugs" section: Processors remaining with original 1979 markings are quite rare; some people consider them collector's items.

This didn't seem to be related to a bug, so I moved it here. - furrykef (Talk at me) 06:38, 25 November 2006 (UTC)

It was referring to the previous statement that such processors were sold with a bug. I'm putting it back. Potatoswatter 15:04, 25 November 2006 (UTC)
What exactly was the bug that was in the processor? More information could be given than just "severe interrupt bug". --SteelersFan UK06 03:10, 8 February 2007 (UTC)
SteelersFan UK06: I think there are/were three(?) interrupt-related bugs: (1) updating `ss:sp` needs to be done in two consecutive instructions [with interrupts disabled], otherwise an interrupt could use the stack at an unpredictable location; therefore writing to `ss` should disable interrupts for the following instruction, except it didn't. (2) an interrupt during a restartable string instruction would cause a second prefix to be forgotten: e.g. `ES REP STOSB` would continue as `REP STOSB`, or `REP ES STOSB` would continue as `ES STOSB`. (3) an interrupt storm with `REP` plus a string instruction would continuously restart from the first micro-op, and therefore never make any progress (an apparent hang). —Sladen (talk) 06:14, 28 March 2021 (UTC)
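
For bug (1), a minimal sketch of the classic workaround (MASM-style syntax assumed): on later x86 parts a `mov ss` automatically shadows interrupts for one instruction, but on the buggy early 8086 steppings the explicit `cli`/`sti` pair is what actually protects the two loads.

```asm
        cli                 ; workaround: explicitly disable interrupts
        mov  ss, ax         ; load new stack segment...
        mov  sp, bx         ; ...and new stack pointer as one unit
        sti                 ; re-enable interrupts
```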

Processor Prices

Does anyone know what the price of an 8086 processor was when launched, per unit or per thousand units? I think this is important information that is missing. --201.58.146.145 22:21, 13 August 2007 (UTC)

I found the pricing from the Intel Preview Special Issue: 16-Bit Solutions, May/June 1980 magazine. It is now posted at List of Intel 8086. Rjluna2 (talk) 01:29, 10 October 2015 (UTC)

I don't recall a onesy price for the CPU alone. I do recall that we sold a box which had the CPU, a few support chips, and some documentation for $360. That was an intentional reference to the 8080, for which that was the original onesy price. I think the 8080 price, in turn, was an obscure reference to the famous IBM computer line, but that may be a spurious memory. For that matter, I'm not dead sure on the $360 price for the boxed kit, nor do I recall the name by which we called it at the time. I have one around here somewhere, as an anonymous kind soul left one on my chair just before I left Intel for the second time. —Preceding unsigned comment added by 68.35.64.49 (talk) 20:48, 26 February 2008 (UTC)

As far as I remember, the Altair 8800 was announced at $399, with $360 mentioned for the processor alone. Gah4 (talk) 03:14, 24 November 2021 (UTC)

iAPX 86/88

The Intel manuals (published 1986/87) I have use the name iAPX 86 for the 8086, iAPX 186 for the 80186, etc. Why was this? Amazon lists an example here: http://www.amazon.com/iAPX-186-188-users-manual/dp/083593036X John a s (talk) 23:04, 6 February 2008 (UTC)


Around the time the product known internally as the 8800 was introduced as the iAPX432 (a comprehensive disaster, by the way), Intel marketing had the bright idea of renaming the 8086 family products.

A simple 8086 system was to be an iAPX86-10. A system including the 8087 floating point chip was to be an iAPX86-20.

I (one of the four original 8086 design engineers) despised these names, and they never caught on much. I hardly ever see them any more. But, since Marketing, of course, controlled the published materials, a whole generation of user manuals and other published material used these names. Avoid them if you are looking for early-published material. If you are looking for accuracy, avoid the very first 8086 manual; the second version had a fair number of corrections, but nearly all of those were in place long before the iAPX naming.

Peter A. Stoll —Preceding unsigned comment added by 68.35.64.49 (talk) 20:43, 26 February 2008 (UTC)

Embedded processors with 256-byte paragraphs

The article says:

According to Morse et al., the designers of the 8086 considered using a shift of eight bits instead of four, which would have given the processor a 16-megabyte address space.

It seems some manufacturers of 80186-like processors for embedded systems have later done exactly this (see the address sketch after the list below). That could perhaps be mentioned in the "Subsequent expansion" section if reliable secondary sources can be found. So far, I've found only manufacturer-controlled or unreliable material. Paradigm Systems sells a C++ compiler for "24-bit extended mode address space (16MB)"[1] and lists supported processors from several manufacturers:

  • Genesis Microchip: I can't find any model number for a processor of theirs supporting this mode. Later bought by STMicroelectronics.
  • Lantronix: DSTni processors, spun off to Grid Connect. DSTni-LX Data Book[2] says the processor samples pin PIO31 on reset to select 20-bit or 24-bit addresses. DSTni-EX User Guide[3] calls the 4-bit shift "compatible mode" and the 8-bit shift "enhanced mode".
  • Pixelworks: ImageProcessor system-on-a-chip, perhaps including the 80186-compatible microprocessor in PW164[4]. Pixelworks licensed[5] an 80C186 core from VAutomation but only 20-bit addressing is mentioned there.
  • RDC Semiconductor: R2010, always 8-bit shift.[6]
  • ARC International: acquired[7] VAutomation, whose Turbo186 core supported a 256-byte paragraph size.[8][9]
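
A minimal sketch of the address arithmetic at issue, with illustrative values (not taken from any specific datasheet):

```asm
; physical-address formation, 4-bit vs. 8-bit segment shift:
;   standard 8086:    physical = (segment << 4) + offset
;                     (1234h << 4) + 0056h = 012396h  -> 20 bits, 1 MiB
;   "enhanced mode":  physical = (segment << 8) + offset
;                     (1234h << 8) + 0056h = 123456h  -> 24 bits, 16 MiB
```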

85.23.32.64 (talk) 23:52, 11 July 2009 (UTC)

Interesting; please feel free to add some of this to the article (at least for my part, I've written around 60-70 percent of it as it stands). /HenkeB —Preceding unsigned comment added by 83.255.36.148 (talk) 19:40, 3 November 2009 (UTC)
The RDC R2010 is RISC, not x86-compatible. Лъчезар共产主义万岁 17:51, 14 August 2011 (UTC)

Instruction timing

Can anyone provide a reference to an official source from which these timings are taken? The datasheets given at the end of the article present only external bus timings (memory read/write is 5 clocks) but don't list any other information about the internal logic and ALUs. Thank you. bungalo (talk) 10:26, 26 April 2010 (UTC)

One of these 5 cycles in the timing diagrams is a fully optional wait state, so the basic memory access cycle is 4 clock cycles in the 8086. 83.255.38.70 (talk) 09:42, 27 April 2010 (UTC)

What a valid citation looks like

Moved from user talk page... article discussions belong on article talk pages. May I ask you again: what is wrong with a data sheet or a MASM manual as a ref? Can you get a better source? I don't get it! What are you trying to say? "Only the web exists" or something? 83.255.38.96 (talk) 06:08, 3 November 2010 (UTC)

The problem is that "You could look it up somewhere" isn't a valid reference style and tells the reader nothing about where to find a specific relevant description of the instruction timing cycles. Find a book with a publisher and an ISBN, and a page number for a table of instruction cycle times. I'd cite my Intel data book if I could, but right now it's stored away and I won't have it unpacked for weeks. Telling the reader to "look it up in some manual you've never heard of" is lazy and very nearly insulting to the reader, who probably is familiar with the idea of finding relevant literature anyway and doesn't need some patronizing Wikieditor to tell him the bleeding obvious. This is NOT a difficult fact to cite; there must have been at least 3 books printed in the history of the world that actually list instruction cycle timing for the 8086, and all we need is ONE ISBN/author/publisher/date/page number citation to validate this table. --Wtshymanski (talk) 13:07, 3 November 2010 (UTC)
"Taken from the MASM 5.0 reference manual; numbers were also included in early 8086 and 8088 datasheets." That's a pretty poor citation. Does this golden manual have a publisher, a date, you know, stuff like that? Perhaps even an ISBN? Any specific datasheets in mind? Publisher, date, etc.? --Wtshymanski (talk) 16:10, 20 December 2010 (UTC)
Still not a citation. Go to the front page of whatever manual you're looking at, and copy down here the name of the editor or author, the full name of the manual, its edition number if any, the copyright date, the publisher, the ISBN if it has one, and the pages on which these numbers allegedly appear. Can you do that for us? Hmm? --Wtshymanski (talk) 13:01, 21 December 2010 (UTC)
There. Was that so hard? Now when a vandal comes along and randomizes all those cycle timing numbers (if that hasn't happened already), someone can compare with *his* copy of the Microsoft MASM Manual 5th Edition and make the numbers consistent with the reference again. Page numbers are still needed. It's important to say *which* manufacturer's data sheets you're talking about, too. Does an AMD part or Hitachi part have the same cycle count as the Intel part? As long as we say *which* manufacturer we're talking about, we're OK. --Wtshymanski (talk) 15:29, 21 December 2010 (UTC)

Of course it is about the original Intel part if nothing else is said (see the name of the article). I gave a perfectly fine reference in April this year, although I typed in those numbers many years ago. I have never claimed it to be a citation; citations are for controversial statements, not for plain numerical data from a datasheet. The MASM 5.0 reference manual was certainly uniquely identifiable as it stood, and it would really surprise me if more than a few per mille of all the material on WP had equally good (i.e. serious and reliable) references. Consequently, by your logic, you had better put a tag on just about every single statement on WP...!? 83.255.43.80 (talk) 13:05, 22 December 2010 (UTC)

If it was important enough for you to add it to the article, it was important enough to have a proper citation. Never say "of course" in an encyclopedia. How do we know the MASM manual is talking about the same part, stepping level, etc.? Call up your local public library and ask them for "the MASM reference manual, you know, like, 5.0?" and see how far you get with just that. Real encyclopedias don't need so many references because real encyclopedias have an editorial board and paid fact checkers. Wikipedia relies on citations because we don't trust our editors to get anything right, and so we rely on multiple persons to review any contributions. Wikipedia is sadly lacking in citations. Any numerical data should have a citation so that vandalism can be detected and reverted; you may have noticed once or twice an anon IP will change a single digit in a value and scurry off back under a rock, leaving a permanent error in the encyclopedia because no one can find the original reference whence the data came. It's too bad you were inconvenienced with backing up a statement of fact... let's take all the references out of Wikipedia and rely on our anonymous editors to keep the tables right. --Wtshymanski (talk) 14:38, 22 December 2010 (UTC)
Any distance or perspective? Wikipedia is not supposed to be some kind of legal document; it's a free encyclopedia. Also, I find it peculiar how extremely concerned you seem to be with "vandals" when it was you yourself who actually messed up the table, putting JMP and Jcc timings in N.A. columns. I corrected that (20 Dec 11.01) and put that (damn) reference back. However, you kept on deleting that information, over and over. That style makes me sick and tired just thinking of contributing any further to Wikipedia (and don't you call me "any slacker"). 83.255.43.80 (talk) 20:59, 22 December 2010 (UTC)
You don't get it, do you? I would never confuse this stunning display of erudite scholarship with that of a slacker. --Wtshymanski (talk) 22:49, 22 December 2010 (UTC)
Looking at the Intel Component Data Catalog, 1980 edition, bearing Intel publication number AFN-01300A-1, which includes the 8086/8086-2/8086-4 16-Bit HMOS Microprocessor data sheet, there's no listing of instruction cycle times for each opcode. Until a data sheet listing instruction cycle times can be found, I'm deleting the vague description, since Intel itself doesn't seem to have published instruction cycles in its own data sheets. --Wtshymanski (talk) 06:07, 2 January 2011 (UTC)
The document we really want to cite is the Intel 8086 Family User's Manual, October 1979, publication number 9800722-03. The bootleg copy on the Web gives instruction timing cycles in Table 2-21, pages 2-51 through 2-68 (starting in the non-OCR .pdf at page 66). The summary in this article leaves out a lot of details. --Wtshymanski (talk) 06:39, 2 January 2011 (UTC)

Random vs ad-hoc

Industry jargon (such as the cited reference, first one on the Google Books hits) seems to prefer "random logic" as the description for the internal control circuits of a microprocessor, as contrasted with "microcode". "Ad-hoc" has the disadvantage of being Latin, and is probably as pejorative as "random" if you're sensitive about such things. --Wtshymanski (talk) 14:51, 29 June 2011 (UTC)

I'm sure that within the industry it's described as "random logic". It's also described as "plumbing", "useless overhead", and probably "that squiggly stuff". The problem is that "random" has a dictionary meaning, and it's one that's unhelpful out of this narrow context. To the lay reader, this is confusing and thus unhelpful.
I'm not defending ad hoc. It's entirely apposite and quite a good description of the real situation. It's also a pretty common and well-understood Latin term (by the standards of Latin loan phrases). However, it is in Latin, and there's a fair argument that we should avoid such terms as a general principle of accessibility.
Random, though, is just bad, because it is misleading; you yourself felt compelled to add an edit summary with an apology for using it. Call it what you like, but don't call it random. Andy Dingley (talk) 00:14, 30 June 2011 (UTC)
It doesn't matter what I call it, Andy. That's the great thing about Wikipedia. What does the literature call it? --Wtshymanski (talk) 02:18, 30 June 2011 (UTC)
Recording one instance of its use is no compulsion to adopt that unhelpful usage and make the article worse. There are any number of ways to word this; if you don't like ad hoc, then lose it. However, adding 'random' gives quite the wrong message. Andy Dingley (talk) 08:18, 30 June 2011 (UTC)
Google Books gives 4130 hits for 'microprocessor "random logic"' and 7 hits for 'microprocessor "ad-hoc logic"'. It's what people writing about microprocessors call non-microprogrammed logic. Anyone studying microprocessors is going to run across the dread phrase "random logic" very quickly, so why not use it in context here? --Wtshymanski (talk) 13:57, 30 June 2011 (UTC)
WP:V still isn't WP:COMPULSION. Most readers here (and anywhere on WP) are very naive and new to the subject, not seriously studying it. There's no expectation that they're "going to run across the dread phrase". Andy Dingley (talk) 14:11, 30 June 2011 (UTC)
FWIW, I strongly agree that it should be "random logic"; that is a standard term. It is not comparable to descriptions like "plumbing," "useless overhead," and "that squiggly stuff," which are *not* standard technical terms. It's no more pejorative than the word "random" in "random access memory"; does anyone want to change RAM to AHAM? "Random" here means "capable of performing any arbitrary function," not "selected from a probability distribution"; no designer objects to using it in cases where it's appropriate, and no designer sees anything pejorative about describing random logic using the term "random logic." If anything, "ad-hoc" is pejorative because it suggests the design was done in an unsystematic way. I hesitate to mention this, but another standard term might be "combinational logic." That would not be as good ("random logic" is more specific because it excludes things like PLAs that could be included in "combinational logic"), but it at least would make the point that it's not sequential logic like microcode, while avoiding the dirty R-word. I wish I could change "a mix" to "a mixture," but the page seems to be protected. 184.94.124.236 (talk) 14:39, 9 July 2011 (UTC)

Small detail

In the "Microcomputers using the 8086" section, the Compaq Deskpro clock speed listed doesn't match the one listed on the wiki page for that product, and it's not listed on the "Crystal oscillator frequencies" page either. I have no idea where this comes from, so I added a (?).... — Preceding unsigned comment added by 85.169.40.106 (talk) 08:06, 1 March 2012 (UTC)

I don't know if it helps, but an 808x driven by the 8284 clock generator runs at 1/3 the crystal frequency, to generate a clock with 33% duty cycle. Gah4 (talk) 23:05, 7 December 2020 (UTC)

Absurd predecessor/successor in misinfo box

It's not like kings or popes or presidents... Intel was still selling lots of 8080s after the 8086 came out, and the 8086 and 8088 were virtually of the same era; the 80286 came out before the 80186, for that matter. --Wtshymanski (talk) 14:20, 21 August 2012 (UTC)

It doesn't imply that one entirely replaced the other and that only one could be on sale at a time.
The point is that the 8086 built upon the work on the 8080, and its instruction set and assembly language were broadly similar (compared at least to the 6800 / 68000 or the radically different TI 99xx series). The 8088 was a successor to it as a "value engineered" example with a skinny external bus (a thread of hardware simplification). The 80186, 80286, even the 80386, 486 et al. could be considered successors (a thread of increasingly sophisticated memory management). As we have to choose one example from each thread, lest it become too confusing, the 8088 and 80186 seem reasonable. The 8088 and 80286 might well be better though. Andy Dingley (talk) 14:31, 21 August 2012 (UTC)
Well, if you think this vague (to me) notion of predecessor and successor is useful in the box, alright, though I think it gives the reader the wrong idea. Hopefully anyone seriously interested in the development of Intel processors will read more than the info box, and anyone who's satisfied with just the "info" box probably wouldn't care anyway. --Wtshymanski (talk) 15:19, 21 August 2012 (UTC)
There might be some reasonable arguments for giving the 8080/8085 as the predecessor (intended as a more powerful spiritual successor, similar ISAs, same support chips, NEC V20 could run both ISAs), but the 8088 was more like the Celeron of its time: not a successor, but a slower, cheaper version working with cheaper support chips. —Ruud 20:47, 22 August 2012 (UTC)
I'm not spiritual enough to understand this. There are fairly well-defined, agreed-upon successors or predecessors of Millard Fillmore, Pius X or King Moshoeshoe, but many of these chips have much more tangled family trees, overlapping partly or entirely in their brief time in the sun. I don't think the notion of successor/predecessor is tightly defined when it comes to Intel chips. --Wtshymanski (talk) 21:43, 22 August 2012 (UTC)
I agree with the last sentence. The 8086 not being binary compatible with the 8080 makes it not a clear-cut, no-footnotes-necessary successor. Some other aspects, including the fact that it was source compatible, do make good arguments for calling it a successor. —Ruud 21:53, 22 August 2012 (UTC)
The 8086 is the first member of the Intel x86 family. It is considered to be a successor to the 8080 because 8080 source code could be rebuilt for the 8086. The 8085 is definitely NOT a variant of the 8086; it is a variant of the 8080 and fully binary compatible with it. The 8088 is not a successor to the 8086; it is an 8086 with an 8-bit data bus, much like the 80386 SX is an 80386 DX with a 16-bit data bus. As far as binary compatibility goes, the Zilog Z80 is a true successor to the 8080. The 80186 is the successor to the 8086, and it certainly came before the 80286 (hence the numbering). The 80186 is where instructions like PUSHA and POPA originated. The 80186 (and the 8-bit-data-bus 80188) weren't used in PCs because of the integrated hardware (e.g. interrupt controller, DMA controller, timer) which was incompatible with the PC architecture. The 80286 supports all of the opcodes of the 80186 and of course adds protected mode support. Asmpgmr (talk) 19:53, 24 August 2012 (UTC)
Well, the 8080 instruction set was a subset of the 8085's; SID and SOD don't work on an 8080. My CP/M-fu has long since vanished, but wasn't there some stunt with the flags register that you could do to determine if the program was running on an 8085 or an 8080? --Wtshymanski (talk) 20:00, 24 August 2012 (UTC)
Pins have nothing to do with the instruction set. The 8085 is 100% binary compatible with the 8080, though this is getting off topic. I've removed the 8085 as a variant of the 8086 in the infobox since that was completely wrong. The infobox as it is now is correct. Asmpgmr (talk) 20:22, 24 August 2012 (UTC)
Not correct. The 8085 instruction set was a superset of the 8080 instruction set. Since the 8080 came first, it could not have been a subset of the then non-existent 8085. 86.156.154.237 (talk) 17:37, 25 August 2012 (UTC)
The 8086 was code compatible with the 8080 at the source code level. This means that although an 8080 ROM would not work with the 8086, nevertheless, every 8080 instruction had an equivalent instruction (or combination of instructions) in the 8086. 86.156.154.237 (talk) 17:37, 25 August 2012 (UTC)

It is clear enough that the 8085 was the processor that immediately preceded the 8086. Since the 8080 and the 8085 were so architecturally similar, it would seem reasonable to show the predecessor as the 8080/8085. 86.156.154.237 (talk) 17:41, 25 August 2012 (UTC)

Slowest clock speed

The lowest rated clock speed was 5 MHz for both the 8086 here and the 8088 (which is what most PCs used, the IBMs exclusively so). Yes, the original IBM PC ran at 4.77 MHz, but that was a design choice: from memory, it mapped quite well onto the video timing signals, although I admit I forget the details. The chip itself was a 5 MHz chip underclocked to the slower speed. Datasheets for the two chips are available, 8086 and 8088: there are no chips slower than 5 MHz described. Crispmuncher (talk) 04:05, 29 August 2012 (UTC).

The IBM PC and many compatibles used a 14.31818 MHz clock source divided by 3, which is 4.77272 MHz. Even if 5 MHz was the slowest speed rated by Intel, many early PCs definitely ran at 4.77 MHz, so listing that is correct. Asmpgmr (talk) 04:37, 29 August 2012 (UTC)
The first PCs used 8088s, not 8086s, so that isn't relevant here in any case. However, the template documentation at Template:Infobox CPU points out the "slowest" parameter refers to the lowest maximum clock; it does not address underclocking. There probably is a minimum clock speed: I recall Intel making an explicit point about a fully static design for embedded 80186s, but only much later, and that is probably on the order of 1 MHz or less. Crispmuncher (talk) 04:55, 29 August 2012 (UTC).
IBM PCs used the 8088. Many other companies used the 8086, and some ran their systems at 4.77 MHz just like companies did with 8088-based systems, though many did not (8 MHz 8086 systems were common, like the AT&T 6300 / Olivetti M24). Anyway, I see no reason not to list 4.77 MHz as the minimum clock speed for both the 8086 and the 8088, since this usage was common because of PCs. Asmpgmr (talk) 15:48, 29 August 2012 (UTC)
The minimum clock rate for the HMOS 8086 is not quite as low as 1 MHz. From the Intel 8086/8086-2/8086-1 datasheet, the maximum clock cycle time for all three speed grades (5 to 10 MHz max.) is 500 ns, which equals a minimum clock frequency of 2 MHz. The fact that this is constant for all speed grades implies that it results from a stored-charge decay time which is independent of the factors that limit logic signal propagation time and thus maximum frequency. As a CPU can often be overclocked a significant amount above its rated maximum frequency before it begins to actually malfunction, it is probably also possible to "underclock" a CPU below the minimum frequency specified by the manufacturer before malfunctions actually occur, perhaps by a much larger percentage than it can be overclocked. As far as I know, few people have tried this (outside the CPU design and engineering labs, where I'm sure it has been tried extensively). 108.16.203.38 (talk) 11:06, 9 October 2013 (UTC)
The point of this is not to list the minimum speed at which the chip would still work (some development boards ran even slower, to make use of cheaper memory) but to cite the design speed of the first generation of the processor: 5 MHz. This isn't the PC article, it's the 8086 article. Andy Dingley (talk) 15:59, 29 August 2012 (UTC)
Yes, good point, and the 8086 was used in lots of things besides PC and compatible desktop computers, including arcade game machines, space probes, and NASA STS (Space Shuttle) ground support equipment. 108.16.203.38 (talk) 11:06, 9 October 2013 (UTC)

Something you should know or insert into article about memory segmentation

How to convert hexadecimal into decimal.
Intel 8086 Memory Segmentation, page 8.
Intel 8086 Datasheet.
Intel 8086/8088 User Manual.
1) 64 KB of the 2^20 = 1048576 RAM addresses is used for ES (Extra Segment), from address 70000(h) to address 7FFFF(h) (in hexadecimal). This is from 7 × 16^4 + 0 × 16^3 + 0 × 16^2 + 0 × 16^1 + 0 × 16^0 = 458752 to 7 × 16^4 + 15 × 16^3 + 15 × 16^2 + 15 × 16^1 + 15 × 16^0 = 458752 + 61440 + 3840 + 240 + 15 = 524287.
2) 64 KB of the 1 MB RAM addresses is used for DS (Data Segment), from address 20000(h) to address 2FFFF(h). This is from 2 × 16^4 = 131072 to 2 × 16^4 + 15 × 16^3 + 15 × 16^2 + 15 × 16^1 + 15 × 16^0 = 131072 + 61440 + 3840 + 240 + 15 = 196607.
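
A minimal sketch of how these ranges arise (illustrative values; the CPU forms the 20-bit physical address as segment × 16 + offset):

```asm
        mov  ax, 7000h
        mov  es, ax         ; ES = 7000h -> base 70000h (458752)
        mov  di, 0FFFFh     ; ES:DI -> 7FFFFh (524287), top of the 64 KB
        mov  ax, 2000h
        mov  ds, ax         ; DS = 2000h -> base 20000h (131072)
        mov  si, 0FFFFh     ; DS:SI -> 2FFFFh (196607)
```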

— Preceding unsigned comment added by Paraboloid01 (talk • contribs) 13:10, 1 November 2012 (UTC)

Derivatives and clones

Enhanced clones

Can someone provide information or a reference to support this claim: "Compatible—and, in many cases, enhanced—versions were manufactured by Fujitsu, Harris/Intersil, OKI, Siemens AG, Texas Instruments, NEC, Mitsubishi, and AMD"?

What is an enhanced version? To me it is a version that has some additional software features. To the best of my knowledge, NEC was the only company that produced such an enhanced 8086 version: its V30 processor. Harris and OKI (and later Intel) made a CMOS version, the 80C86; it doesn't have any software enhancements. It appears to be a simple conversion from NMOS to CMOS logic technology.

Also, I don't think Texas Instruments ever made an 8086 clone (they were making the 8080, though).

I don't think "enhanced" means only software features must be improved. That is thinking like a programmer, and a lot of people who work with CPUs aren't programmers, or aren't mainly programmers—including most of the engineers at Intel that design the CPUs! Hardware enhancements could include lower power dissipation, more hardware interface flexibility (e.g. inbuilt 8288 bus controller logic), demultiplexed buses, higher output drive capability, and, as in the CMOS versions, more flexible clock rate (down to DC for CMOS). If these aren't enhancements, what are they? They aren't modifications, because they don't alter the basic capability, only extend it.
Another hardware enhancement would be the addition of internal peripherals like an interrupt controller or DMA controller. The interrupt controller might just be simple logic that connects a few separate interrupt pins each to specific interrupt levels, skipping the INTA cycle, doing a dummy one, or doing a real one that uses a different interrupt vector table from the Intel-standard one used by the main INTR interrupt pin. Some added peripherals, such as a DMA controller or a programmable interrupt controller, would be visible to software, whereas the simple non-programmable interrupt controller described above would not be.
Also, there could be hardware enhancements to the base CPU that could be visible to programmers. Faster instruction execution would be one that would be visible to software, if it was timing-aware. (If you program in a high-level language, then no, you can't see this, but you can if you program in assembly language.) Caches and buffers, e.g. for data and instructions from memory, would also potentially have a software-visible effect on timing.

108.16.203.38 (talk) 10:22, 9 October 2013 (UTC)

Soviet clones

I don't agree with the following claim: "The resulting chip, K1810BM86, was binary and pin-compatible with the 8086, but was not mechanically compatible because it used metric measurements." In fact, the difference in lead pitch is minimal, only 0.04 mm between two adjacent pins, which results in a 0.4 mm maximal difference for a 40-pin DIP package. So Soviet ICs can be installed in 0.1" pitch sockets and vice versa. It was not unusual to see Western chips installed in later Soviet computers using metric sockets. And interestingly enough, I have also seen a Taiwanese network card using some Soviet logic IC.

This picture shows an ES1845 board with a NEC D8237AC-5 DMA controller IC installed in a metric socket (top left corner). — Preceding unsigned comment added by Skiselev (talk • contribs) 21:52, 5 June 2013 (UTC)

0.4 mm relative to a 2.5 mm pitch is about 16%. Whether one will fit depends on how big the socket (or PC board, if not socketed) holes are. I suspect that when the decision was made, it was thought to be small, but in the end, many won't fit. Maybe the Russians made some big-hole sockets to go with them. Gah4 (talk) 23:12, 7 December 2020 (UTC)

Variable MN/MX

There is no reason a computer designer could not equip an 8086/8088-based computer with a register port that would change the MN/MX pin, taking effect at the next reset. (Changing the level of the pin while the CPU is running might have unpredictable and undocumented results.) However, since other hardware needs to be coordinated with the MN/MX pin level, this would require other hardware to switch at the same time. This would not normally be practical, and probably the only reasonable use for it would be hardware emulation of two or more different computer models based on the same CPU (such as the IBM PCjr, which uses the 8088 in minimum mode, and the IBM PC XT, which uses it in maximum mode). It is even possible that the 8086/8088 does not respond to a change in the level of MN/MX after a hardware reset but only at power-up. Even then, it is certainly possible to design hardware that will power down the CPU, switch MN/MX, wait an adequate amount of time, and power it back up. 108.16.203.38 (talk) 10:01, 9 October 2013 (UTC)

There is a very good reason why it cannot be done, at least not easily. Eight of the pins of the processor have different functions between MAX and MIN mode. Since these are related to the bus control signals, it would be necessary to provide logic such that the 8288 bus controller was in circuit for MAX mode and taken out of circuit for MIN mode. Since the purpose of MIN mode is to save on that 8288 controller chip, there would be nothing to gain by having a switchable-mode system, as the 8288 would have to be present for MAX mode operation. I B Wright (talk) 16:36, 7 March 2014 (UTC)
It's true that a system could change the MN/MX pin, but it would require a lot of circuitry and effort. Not that I haunt the design offices of the world, but I've never heard of a system that wanted to do this. Were you thinking of including an observation about dynamically changing MN/MX? If so, I wouldn't. You could just as well build a system that changed the Vcc supply rail from 5 V to 5.5 V under software control, or changed the clock from 4 MHz to 5 MHz, but it's not a typical application or educational/insightful, so it's not for the 8086 wiki page. --ToaneeM (talk) 09:39, 15 November 2016 (UTC)

Memory organisation

Maybe I missed it in the article, but there seems to be nothing about how the memory is organised and interfaced to the 8086. Unlike most 16-bit processors, where the memory is 16 bits wide and is accessed 16 bits at a time, memory for the 8086 is arranged as two 8-bit wide chunks, with one chunk connected to D0-D7 (low byte) and the other to D8-D15 (high byte). This arrangement comes about because the 8086 supports both 8-bit and 16-bit opcodes. The 8-bit opcodes can occupy both the low byte and the high byte of the memory. Thus the first (8-bit) opcode will be present on the low byte of the data bus, but the next will be present on the high byte. Further, if a single 8-bit opcode is on the low byte, then any immediately following 16-bit opcode will be stored with its lowest byte on the corresponding high byte of the memory and its highest byte on the next addressed low byte. In both cases there is an execution time penalty. In the first scenario, the processor has to switch the second opcode back to its low-byte position (the time penalty is minimal in this case). In the second scenario, the processor has to perform two memory accesses, as the 16-bit opcode occupies two addresses; further, the processor then has to swap the low and high bytes once read (the swap is a minimal time penalty, but the two memory accesses are a significant penalty, as two cycles are required). The processor then has to read the second memory location again to recover the low byte of the next 16-bit opcode or the 8-bit opcode as required. This means that a series of 16-bit opcodes aligned on odd boundaries forces the processor to use two memory access cycles for each opcode.

Code can be assembled one of two ways. The code can be assembled such that the opcodes occupy whatever memory position is next available. This gives more compact code, but with an execution penalty. Alternatively, the code can be assembled such that a single 8-bit opcode always occupies the low byte of memory (a NOP opcode is placed in the high byte), and a 16-bit opcode is always placed at a single address in the correct low/high byte order (again, NOP opcodes are used as required). This gives a faster execution speed, as the above time penalties are avoided, but the price is larger code, as the valid program codes are padded with NOP opcodes to ensure that all opcodes start on an even byte boundary. Assemblers will usually allow a mixture of modes. Compilers usually do not offer such control and will invariably compile code for minimum execution time.

Somewhere, I have a book that details all of this, but I am blowed if I can find it at present. I B Wright (talk) 17:10, 7 March 2014 (UTC)

No, see below. Gah4 (talk) 23:22, 25 March 2020 (UTC)
The 8086 has a three-word instruction buffer. Only whole (aligned) words are read into it. In the case of unaligned instructions, it doesn't separately read the last byte of one and the first byte of the next, but extracts them from the buffer in the appropriate order. Note also that in the case of self-modifying code, it doesn't refetch modified bytes. This is used to test for 8088 vs. 8086, as the 8088 has a four-byte instruction buffer. I believe that the buffer is flushed on a branching instruction. Gah4 (talk) 17:35, 25 August 2020 (UTC)

Sample code

The sample code is very sloppy. It saves/sets/restores the ES register even though it's never referenced by the code. The discussion says this is needed, but it's wrong ("mov [di],al" uses the default DS: segment just like "mov al,[si]" does). It would be needed by the string instructions, like REP MOVSB, but the discussion proposes replacing the loop with REP MOVSW (a word instruction), which would actually copy twice as many bytes as passed in the count register. REP MOVSB would work, and would obviate the need for the JCXZ check (REP MOVSB is a no-op when CX=0). — Preceding unsigned comment added by 24.34.177.232 (talk) 20:14, 19 December 2015 (UTC)

Have another look. It pushes DS onto the stack and then pops it into ES. 81.157.153.155 (talk) 14:21, 25 August 2020 (UTC)
Yes, it does push/pop, but for what purpose? ES is never used!
REP MOVSB would work, yes, but REP MOVS is microcoded. For many typical small moves (below ~200 bytes), REP MOVS is slower than a word-sized copy using GPRs. (Intel periodically says "this new processor has optimized short REP MOVS", but I have yet to see it actually working. Skylake is still slow.)
Also:
"Return registers AX = Zero". (1) Why bother? (2) The real C-language memcpy() returns the destination address, not 0.
The labels have no colons after them; in Intel assembler syntax, they need them.
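
For reference, a minimal sketch of the 3-byte inline sequence discussed in this thread (assuming DS:SI points at the source, ES:DI at the destination, and CX holds the byte count):

```asm
        cld                 ; direction flag clear: SI/DI increment
        rep movsb           ; copy CX bytes from DS:SI to ES:DI;
                            ; does nothing when CX = 0
```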

I wonder if the sample code should be replaced completely. Since the 8086 has CLD and REP MOVSB, why would any compiler/coder fashion a procedure to do a string move rather than inlining? The inline is only 3 bytes! How about replacing the sample with another trivial example, like a string case change similar to the one in the 68000 article? That could be written to include the call frame setup and demonstrate the use of LODSB and STOSB. This would even create a purpose for setting up ES! I will happily write the procedure if there is some consensus. RastaKins (talk) 15:52, 28 December 2021 (UTC)
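
As a starting point for that discussion, a minimal, untested sketch of such a routine (hypothetical labels; assumes DS:SI points at a NUL-terminated source string and ES:DI at the destination buffer):

```asm
ucase:  cld                 ; make LODSB/STOSB increment SI/DI
next:   lodsb               ; AL = [DS:SI], SI = SI + 1
        cmp  al, 'a'
        jb   store          ; below 'a': copy unchanged
        cmp  al, 'z'
        ja   store          ; above 'z': copy unchanged
        sub  al, 20h        ; fold lowercase letter to uppercase
store:  stosb               ; [ES:DI] = AL, DI = DI + 1
        or   al, al         ; was that the NUL terminator?
        jnz  next           ; no: keep going
        ret
```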

Truly useful

I removed "often described as 'the first truly useful microprocessor'{{citation needed|date=December 2014}},". Every hit on Google using the search term "the first truly useful microprocessor" is a site that uses this article, or is in fact this article. I could not find one independent hit stating that. In addition the "citation needed" tag is now over three years old, and there is still no citation. Finally, the phrase contains the weasel word "often" which is a canonical example. Nick Beeson (talk) 15:04, 22 March 2018 (UTC)Reply

As for microprocessors that could replace real minicomputers, it is about right. The 8080 could do much of what people needed to do at the time, though. That is, even though the 8080 was designed for microcontroller use, it was a reasonable general-purpose computer. Gah4 (talk) 03:28, 11 July 2018 (UTC)
'Truly useful' is a relative term. A microcontroller or microprocessor is only truly useful if it will do what you actually want it to do.
Intel produced several processors before the 8086, starting with the 4004. Even the 4004 had to be truly useful for something, for Intel to produce it for over ten years. The first processor to be used as the CPU in what could be realistically described as a computer (the term micro-computer came later) was the 8080, though Intel neither conceived it nor was interested in marketing it for such use, until the CP/M disk operating system appeared from Inter-Galactic Digital Research (which rapidly became just Digital Research). 81.157.153.155 (talk) 14:17, 25 August 2020 (UTC)

7.16 MHz

There is a recent edit regarding 7.16 MHz. The 8086 and 8088 have asymmetric clock requirements: at full speed, the clock should have a 33% duty cycle. The 8284 generates this with a divide-by-three counter. One could run a 10 MHz 8086 at 7.16 MHz with a 50% clock. Appropriate wait states would get the bus cycle down, if needed. Gah4 (talk) 03:26, 11 July 2018 (UTC)
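
For context, the arithmetic behind these NTSC-derived figures (a sketch; the crystal values are the common ones, not taken from the edit in question):

```asm
; the usual NTSC-derived clock numbers:
;   14.31818 MHz / 3 = 4.77273 MHz  (8284 divide-by-three, 33% duty)
;   14.31818 MHz / 2 = 7.15909 MHz  (simple divide-by-two, 50% duty,
;                                    the case discussed above)
```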

BX as index register

To my knowledge, while you can't directly reference memory locations through AX, CX or DX, BX can also be used as an index register. Should we have four leading zeros in front of it in the little register guide thing? Jethro 82 (talk) 00:35, 15 March 2019 (UTC)

@Jethro 82: I was looking at BX today and implemented the four leading zeros. THEN I saw your wise comment. Obviously, I agree. AX, CX, and DX cannot be used as an index, but BX can be used just like SI and DI. RastaKins (talk) 18:11, 25 December 2021 (UTC)
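
A quick illustration of the encoding constraint being discussed (MASM-style; a sketch, not taken from the article):

```asm
        mov  al, [bx]       ; valid: BX is a base register
        mov  al, [bx+si+4]  ; valid: base + index + displacement
        mov  al, [si]       ; valid: SI is an index register
      ; mov  al, [cx]       ; invalid: the 8086 ModRM byte has no
                            ; encoding for AX-, CX- or DX-based operands
```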

If the 8086 is to retain 8-bit object codes and hence the efficient memory use of the 8080, then it cannot guarantee that (16-bit) opcodes and data will lie on an even-odd byte address

The article says: "If the 8086 is to retain 8-bit object codes and hence the efficient memory use of the 8080, then it cannot guarantee that (16-bit) opcodes and data will lie on an even-odd byte address", which I believe is wrong. The 8086 has an instruction buffer (cache) that is loaded 16 bits at a time. Instruction fetch is always done for the whole word, even when only the odd byte is needed. Data access, on the other hand, can be done in 8-bit or 16-bit cycles. Mostly this is only important for memory writes, where write enable is asserted only for the byte that is being written. It is the efficient addressing of data that requires the 8-bit cycles, not instructions. Gah4 (talk) 23:28, 25 March 2020 (UTC)

@Gah4: I agree with you; the article is incorrect. The 8086 always fetches the instruction stream as 16 bits. It has a prefetch buffer to make this practical. "The 8086 architecture has a six-byte prefetch instruction pipeline, while the 8088 has a four-byte prefetch. As the Execution Unit is executing the current instruction, the bus interface unit reads up to six (or four) bytes of opcodes in advance from the memory. The queue lengths were chosen based on simulation studies."[1] The suggestion below that inserting NOPs between every odd-length instruction would improve efficiency is pure fantasy. The NOP instruction is an XCHG AX,AX. It takes three clocks. EVENing up the instruction stream only helps when the address is the destination of a JMP or CALL. I have replaced the incorrect information from Osborne and substituted a primary source. RastaKins (talk) 21:20, 23 November 2021 (UTC)
@Gah4: The article is correct. 16-bit opcodes do not have to occupy a single 16-bit address. The presence of 8-bit opcodes will disturb this otherwise natural flow. A single 8-bit opcode will cause the normally low byte of the following 16-bit opcode to occupy the high byte of the address holding the 8-bit opcode. The high byte will then occupy the low byte of the immediately following location. If this were followed by another 8-bit opcode, it would then occupy the left-over high byte. The 8086 is idealised for this method of storing code precisely because the memory is not organised as a normal 16-bit memory would be, but as two banks of 8-bit memory occupying the same address space. Similarly, the 8086 does not have a single 16-bit instruction buffer that receives the memory content, but two 8-bit buffers. There then follows logic to place the high-byte buffer into the low byte of the following 16-bit (output) buffer and the low byte of the following read into the high byte of the output buffer, which makes the instruction complete and the 'right way around'. This same logic will shift an 8-bit instruction from the high byte to the low byte of the output buffer.
Things get a little more confused because the 8086 features an apparent 19 address lines (AD0 to AD15 plus A16 to A19). AD0 to AD7 are the (multiplexed) address and data lines for the low-byte bank of memory and are connected to the D0 to D7 lines of that memory bank. AD8 to AD15 are connected to the D0 to D7 lines of the upper-byte memory bank. AD0 is also the memory select line for the lower-byte bank, as this bank is only selected on even addresses. Another signal, BHE, is the memory select line for the upper-byte bank. AD1 to AD15 (multiplexed with data lines) and A16 to A19 (not multiplexed) are also the address lines for both banks, connected to A0 to A18 of both memory banks. Still with me?
On reading an 8-bit opcode located in the lower byte of an address, or reading a 16-bit opcode occupying a single address, the 8086 transfers the entire 16 bits to the two memory buffers and then to the output buffer. However, if there is a 16-bit opcode that straddles two addresses, the 8086 has to perform two reads and a swap. It does not assert BHE for the second read, and consequently does not load the high byte into its memory buffer. The 8086 then reads the next location, but this time does not assert the AD0 line, as we are accessing the high bank only, thus not loading the low byte into its memory buffer; it then reads the next address as above if it is reading another 16-bit opcode.
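
For reference, the select combinations implied by this description (the standard 8086 scheme; BHE is active low, and the upper bank holds the odd addresses):

```asm
; BHE  A0   bus transfer
;  0   0    whole 16-bit word (both banks)
;  0   1    upper bank only: one byte at an odd address
;  1   0    lower bank only: one byte at an even address
;  1   1    no transfer
```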
Intel could have just opted for 16-bit opcodes only, simplifying the memory logic and saving a memory select signal. However, Intel opted for the arrangement that they did in order to use the limited address space more efficiently. The price paid is execution speed. Speed can be recovered by always arranging 8-bit opcodes to be in the lower byte only and 16-bit opcodes to occupy a single address, by following an 8-bit opcode with a NOP instruction (two adjacent 8-bit opcodes can occupy the low and high bytes of a single address without penalty, as that is what the NOP instruction will do). The penalty is larger object code.
x86 assemblers assemble memory-efficient code by default. For the original 8086 (and the subsequent 80286) assemblers, the directive is 'EVEN', which forces the following opcode to occupy a single address (or just the lower byte if 8-bit). Frustratingly, EVEN must precede every opcode that has to occupy a single address. Some compilers also offer options to force opcodes to occupy single addresses, but it is either all or nothing.
It is often not realised that the 80386 and later processors have the same issue, as the memory by now is organised as a single 32-bit block (treated as adjacent 16-bit blocks in real or V86 modes). By default an assembler will place four 8-bit or two 16-bit opcodes in the same 32-bit location, giving compact object code, but time penalties still occur because the processor is having to shuffle the codes around, though the logic is much more efficient. For assemblers that support the 80386 and later, the directive is 'ALIGN 4'. In these newer assemblers, 'ALIGN 2' is a valid directive for real and V86 modes and has the same function as 'EVEN'. 81.157.153.155 (talk) 13:40, 25 August 2020 (UTC)
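
A hypothetical MASM-style fragment showing the directive described above (label names are illustrative):

```asm
        mov  al, 1          ; short instruction; may end on an odd byte
        even                ; assembler pads with NOP (90h) so that the
                            ; next instruction starts on an even address
loop1:  mov  ax, bx         ; now word-aligned, faster to fetch
```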
Having not actually thought about this much for about 30 years, and even though I only wrote the above some months ago... I believe the 8086 always does word cycles when reading instructions. I believe that is what I was trying to say, even though, reading it now, that isn't so obvious. Instructions are read into the three-word instruction buffer, and then extracted from that to execute. Reading unaligned data operands, it might read bytes or whole words. The time you actually have to get it right is on writes, where BHE and A0 are needed for single-byte (or unaligned word) writes. For reads, it can always read the whole word. Gah4 (talk) 17:24, 25 August 2020 (UTC)
I can only refer you to the reference work that I use for such occasions: the Osborne 16-Bit Microprocessor Handbook by Adam Osborne and Gerry Kane, pages 5-24 to 5-26 (ISBN 0-931988-43-8). My recollection gets a bit hazy after all this time as well. The important thing to note is that although the memory is organised as two 512 kByte blocks, a single memory address only accesses an 8-bit wide byte of data, unlike what you might expect. Even addresses are in the 'low byte' block (addresses start from 0x00000) and odd addresses are in the 'high byte' block. There are other 16-bit processors which do exactly the same thing; the Z8000 is an example, even though only four of its instructions are 8-bit opcodes (technically three, as DI and EI have the same opcode [7C], the difference being determined by one bit in the operand), so there is little scope for memory efficiency. This arrangement made the realisation of the 8088 relatively simple, as that processor simply has a single 8-bit wide 1024 kByte block of memory.
I did manage to get things slightly wrong above (it is that confusing) and I was a data line short for the high memory block. For that, I apologise. I have corrected it, which I feel I can do since you did not directly respond to the details. 81.157.153.155 (talk) 13:49, 26 August 2020 (UTC)
I have that book, probably easier to find than the Intel book, which I also have. I forget now if you can force the 8086 to do only 8-bit cycles. I believe you can for the 680x0 processors. Well, I used to know some things about VME, which is conveniently close in some ways to the 680x0 in addressing spaces and modes. Among others, VME has separate address spaces for different address and data widths, though devices on the bus can respond to all of them. There are, at least, 24- and 32-bit address widths, and 8-, 16-, and 32-bit data widths, for six different and possibly separate address spaces. That reminds me of an 8086 (and 8088) feature that I almost forgot: the memory system can tell which are stack and which are non-stack accesses, and so could supply separate address spaces for them. (I don't know of any system that does it.) The instruction cache was a new feature on the 8086, not so long before more general external cache systems. (And funny in its not detecting instruction modification.) For comparison, the IBM 360/91 has instruction prefetch (pretty new at the time), but detects writes into already-fetched addresses and refetches. The S/360 architecture allows self-modifying code, and it wasn't so unusual at the time. Gah4 (talk) 18:58, 26 August 2020 (UTC)
Otherwise, the important thing is memory writes. There is no harm in reading a whole word (slightly complicated for I/O), but you can't write the whole word when only one byte changes. Not so much later, there is cache, where the whole word goes into the cache, bytes are changed, and the whole word is written out. For the DEC Alpha, there are only 32- and 64-bit memory access instructions. To change a byte, you fetch the whole word (low bits are ignored), change the byte (high bits are ignored), and write it back. There are efficient instruction sequences to do that. Gah4 (talk) 19:02, 26 August 2020 (UTC)

References

  1. ^ McKevitt, James; Bayliss, John (March 1979). "New options from big chips". IEEE Spectrum: 28–34.

compact encoding inspired by 8-bit processors

The article says "compact encoding inspired by 8-bit processors" regarding one-address and two-address operations, I presume as a reference to earlier 8-bit microprocessors. One-address (accumulator) and two-address (usually register-based) processors go back a long way, many years before the earlier microprocessors, almost to the beginning of electronic computation. The 8-bit byte is commonly attributed to the IBM S/360, a line of 32-bit (architecture) machines with byte-addressable memory. In any case, optimal instruction coding has a long history that didn't start with 8-bit microprocessors. Gah4 (talk) 23:22, 7 December 2020 (UTC)

Maybe it is not "compact encoding" but "byte encoding" inspired by 8-bit processors. Most 16-bit processors use 16-bit instruction streams. I cannot think of another 16-bit processor that uses a byte instruction stream like the 8086 does. RastaKins (talk) 15:46, 10 June 2024 (UTC)

Instructions for nested functions

Instructions were added to assist source code compilation of nested functions in the ALGOL-family of languages, including Pascal and PL/M. -- (in Intel_8086#The_first_x86_design)

This sentence doesn't say which instructions it is talking about. From my understanding, this could be about the ENTER and LEAVE instructions (specifically the second argument of ENTER, which is a direct way of referencing the lexical nesting level of the called function and helps with escaping variables), but these instructions were not in the first 8086 instruction set; they were introduced later, according to X86_instruction_listings#Added_with_80186/80188. Is this a mistake? It is very interesting information IMO, but should it be moved to another section and rephrased to something like "Later x86 processors would add the enter/leave instructions to assist source code compilation of […]"? -- Dettorer (talk) 08:04, 15 May 2023 (UTC)
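
For readers following this thread, a minimal sketch of the 80186+ instructions in question (not available on the original 8086; operand values and label are illustrative):

```asm
inner:  enter 8, 2          ; 80186+: set up a stack frame with 8 bytes
                            ; of locals at lexical nesting level 2,
                            ; copying the enclosing frame pointers
                            ; (the "display") for the nested procedure
        ; ... procedure body ...
        leave               ; 80186+: equivalent to mov sp,bp / pop bp
        ret
```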

CVV

The CVV of my debit card is 086. I love it! It reminds me of 8086 and my youth. Bretleduc (talk) 17:04, 24 November 2023 (UTC)

Refimprove

The first, third, fifth, seventh and eighth paragraphs of the history section, and the first three and last level-2 sections of the details section, are entirely devoid of references. The other parts are similarly scant, with multiple unsourced paragraphs. Don't remove the refimprove template until the referencing issues have been addressed. Cambial foliar❧ 08:15, 4 June 2024 (UTC)

Thank you for indicating what needs to be done. We usually see refimprove on articles that have sparse references. This article has 28 references and 37 links to those refs. Superficially, that seems adequate. With your posting above, we now know precisely where to improve the article. In the future, when placing a refimprove on an article with many refs, please add a comment describing what is lacking in the article, so editors know what is needed and the criteria for removing the tag. RastaKins (talk) 14:03, 4 June 2024 (UTC)
I agree that we usually see refimprove on articles that have sparse references, much like this one. In the future, don't remove maintenance categories until the issue is resolved, thanks. Cambial foliar❧ 14:09, 4 June 2024 (UTC)