This article is rated C-class on Wikipedia's
content assessment scale. It is of interest to the following WikiProjects:
Text and/or other creative content from this version of Maximum mode was copied or moved into Intel 8086 with this edit on 02:56, 30 August 2012. The former page's history now serves to provide attribution for that content in the latter page, and it must not be deleted as long as the latter page exists.
The article seems to be lacking sources. A reference to the original Intel 8086 doc would be cool; maybe somebody has a link? anton
I corrected the '1989' introduction year to 1978. It seems that somebody added incorrect information.
THERE IS MUCH MORE INCORRECT INFO HERE!
The 8086 CPU was NEVER used in the IBM XT. /info/en/?search=Intel_8088 The CP/M machines used the 8086 CPU. The IBM XT in fact always had the 8088 CPU. The 8086 CPU is MUCH faster, but IBM won out over CP/M due to having standard expansion slots (bus). CP/M machines all had their own unique disk formats; you had conversion programs, but software working on one brand often did not support other brands. The IBM and all the clones had compatible DISKS and COMPATIBLE expansion cards. While compatibility was not perfect in the early IBM/clone days, this soon changed and all hardware and software became fully interchangeable.
The 8086 CPU can never work in an IBM XT machine, since the XT had an 8-bit bus and the 8086 has a 16-bit bus interface. Those EARLY IBM clones that had compatibility issues with IBM sometimes used the 8086 CPU, often making them outperform the IBM XT by being 2x faster.
But 2x is a little... plus compatibility issues... The compatibility between hardware and software was more beneficial to users, so the 8088 CPU won the market for years, until the first 286 CPUs, which had full compatibility with the 8088, hit the market!
IBM should have adopted the 8086 CPU, gaining 2x performance and making their machines compatible with the 8086 clones. But IBM didn't care about the speed; they had the market and the software. Still, if they had done that... the 8086 is almost as fast as the 286, and we would have had it 4 years earlier. But IBM NEVER bothered with the 8086. — Preceding unsigned comment added by 217.103.36.185 ( talk) 11:11, 25 August 2019 (UTC)
by sending out the appropriate control signals, opening the proper direction of the data buffers, and sending out the address of the memory or i/o device where the desired data resides
AMD created this processor type too. Mine has the number P8086-1, an AMD logo, and also "(C) Intel 1978". I can take a picture of the chip and add it to the article.
AMD did not create this processor but they did second-source it i.e. manufacture it under licence from Intel because Intel could not manufacture enough itself. Intel are the sole designers of the original 8086 processor. -- ToaneeM ( talk) 09:15, 15 November 2016 (UTC)
Does the final sentence refer to the 8088 or the 8086? At first glance, it continues the info on the 8088, but upon consideration, it seems more likely to refer to the 8086. Is this correct? It's not too clear.
Fourohfour 15:15, 9 October 2005 (UTC)
In "Busses and Operation", it states "Can access 2^20 memory locations i.e. 1 MiB of memory." - While the facts are correct, the "i.e." suggests that they are directly related. The 16-bit data bus would imply an addressable memory of 2 MiB (16 bits × 2^20 locations), but the architecture was designed around 8-bit memory and thus treats the 16-bit bus as two separate busses during memory transfers. 89.85.83.107 11:08, 20 March 2007 (UTC)
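As an aside on where the 2^20 figure comes from: the 8086 forms a 20-bit physical address from a 16-bit segment and a 16-bit offset. A minimal sketch (the helper name is mine; the shift-and-add scheme and the wrap at 2^20 are the documented 8086 behaviour):

```python
# 8086 physical address generation: a 16-bit segment and a 16-bit offset
# combine into a 20-bit address, so 2**20 bytes = 1 MiB is reachable.
def physical_address(segment: int, offset: int) -> int:
    """Shift the segment left 4 bits, add the offset, wrap at 2**20."""
    return ((segment << 4) + offset) & 0xFFFFF

print(hex(physical_address(0xFFFF, 0x000F)))  # top of the 1 MiB space
print(2 ** 20)  # 1048576 addressable bytes = 1 MiB
```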
Even more confusing, the 8088 article says nearly the exact same thing: 'The original IBM PC was based on the 8088' Family Guy Guy ( talk) 01:09, 30 May 2017 (UTC)
Is this the first "PC" microprocessor? I think so, in which case it should be noted; I expected to find a mention of the first, whether it's this one or another. —The preceding unsigned comment was added by 81.182.77.84 ( talk • contribs) .
Also, for comparison, I'm interested in the sizes and speeds of PC hard drives at the time, i.e. when it would have been used by a user -- I had thought around 10 MB, but if it can only address 1 MB (article text) this seems unlikely. Did it even have a hard drive? A floppy? Anything? —The preceding unsigned comment was added by 81.182.77.84 ( talk • contribs) .
I don't think the two articles should be merged. After all, one talks about the 8086 itself, while the other is about the general architecture. my_generation 20:06 UTC, 30 September 2006
I think merging the articles is a good idea. The Intel 8086 set the standard for microprocessor architecture. Look at the Intel 8086 user manual (if you can find it on eBay or Amazon) and you'll see the information that's included in both of these articles. It would be easier to have all that information in just one. In answer to the above disagreement, you can't describe the 8086 without describing its architecture.
This comment was in the "bugs" section: Processors remaining with original 1979 markings are quite rare; some people consider them collector's items.
This didn't seem to be related to a bug, so I moved it here. - furrykef ( Talk at me) 06:38, 25 November 2006 (UTC)
Does anyone know what the price of an 8086 processor was at launch, per unit or per thousand units? I think it is important information that is missing.-- 201.58.146.145 22:21, 13 August 2007 (UTC)
I don't recall a onesy price for the CPU alone. I do recall that we sold a box which had the CPU, a few support chips, and some documentation for $360. That was an intentional reference to the 8080, for which that was the original onesy price. I think the 8080 price, in turn, was an obscure reference to the famous IBM computer line, but that may be a spurious memory. For that matter, I'm not dead sure on the $360 price for the boxed kit, nor do I recall the name by which we called it at the time. I have one around here somewhere, as an anonymous kind soul left one on my chair just before I left Intel for the second time. —Preceding unsigned comment added by 68.35.64.49 ( talk) 20:48, 26 February 2008 (UTC)
The Intel manuals (published 1986/87) I have use the name iAPX 86 for the 8086, iAPX 186 for the 80186 etc. Why was this? Amazon lists an example here http://www.amazon.com/iAPX-186-188-users-manual/dp/083593036X John a s ( talk) 23:04, 6 February 2008 (UTC)
Around the time the product known internally as the 8800 was introduced as the iAPX432 (a comprehensive disaster, by the way), Intel marketing had the bright idea of renaming the 8086 family products.
A simple 8086 system was to be an iAPX86-10. A system including the 8087 floating point chip was to be an iAPX86-20.
I (one of the four original 8086 design engineers) despised these names, and they never caught on much. I hardly ever see them any more. But, since Marketing, of course, controlled the published materials, a whole generation of user manuals and other published material used these names. Avoid them if you are looking for early-published material. If you are looking for accuracy, avoid the very first 8086 manual; the second version had a fair number of corrections over it--but nearly all of those were in long before the iAPX naming.
Peter A. Stoll —Preceding unsigned comment added by 68.35.64.49 ( talk) 20:43, 26 February 2008 (UTC)
The article says:
It seems some manufacturers of 80186-like processors for embedded systems have later done exactly this. That could perhaps be mentioned in the "Subsequent expansion" section if reliable secondary sources can be found. So far, I've found only manufacturer-controlled or unreliable material. Paradigm Systems sells a C++ compiler for "24-bit extended mode address space (16MB)" [1] and lists supported processors from several manufacturers:
85.23.32.64 ( talk) 23:52, 11 July 2009 (UTC)
Can anyone provide a reference to an official source from which these timings are taken? The datasheets given at the end of the article present only external bus timings (memory read/write is 5 clocks) but don't list any other information about the internal logic and ALUs. Thank you. bungalo ( talk) 10:26, 26 April 2010 (UTC)
Moved from user talk page...article discussions belong on article talk pages May I ask you again What is wrong with a data sheet or a masm manual as a ref? Can you get a better source? I don't get it! What are you trying to say? "Only the web exist" or something? 83.255.38.96 ( talk) 06:08, 3 November 2010 (UTC)
Of course it is about the original Intel part if nothing else is said (see the name of the article). I gave a perfectly fine reference in April this year, although I typed in those numbers many years ago. I have never claimed it to be a citation; citations are for controversial statements, not for plain numerical data from a datasheet. The MASM 5.0 reference manual was certainly uniquely identifiable as it stood, and it would really surprise me if more than a few promille of all the material on WP has equally good (i.e. serious and reliable) references. Consequently, with your logic, you better put a tag on just about every single statement on WP...!? 83.255.43.80 ( talk) 13:05, 22 December 2010 (UTC)
Industry jargon (such as the cited reference, first one on the Google Books hits) seems to prefer "random logic" as the description for the internal control circuits of a microprocessor, as contrasted with "microcode". "Ad-hoc" has the disadvantage of being Latin, and is probably as pejorative as "random" if you're sensitive about such things. -- Wtshymanski ( talk) 14:51, 29 June 2011 (UTC)
In the "Microcomputers using the 8086" section, the Compaq Deskpro clock speed listed doesn't match the one listed on the wiki page for that product, and it's not listed on the "Crystal_oscillator_frequencies" page either. I have no idea where this comes from, so I added a (?).... — Preceding unsigned comment added by 85.169.40.106 ( talk) 08:06, 1 March 2012 (UTC)
It's not like kings or popes or presidents...Intel was still selling lots of 8080s after the 8086 came out, and the 8086 and 8088 were virtually the same era - the 80286 came out before the 80186, for that matter. -- Wtshymanski ( talk) 14:20, 21 August 2012 (UTC)
It is clear enough that the 8085 was the processor that immediately preceded the 8086. Since the 8080 and the 8085 were so architecturally similar, it would seem reasonable to show the predecessor as the 8080/8085. 86.156.154.237 ( talk) 17:41, 25 August 2012 (UTC)
The lowest rated clock speed was 5 MHz for both the 8086 here and the 8088 (which is what most PCs used, the IBMs exclusively so). Yes, the original IBM PC ran at 4.77 MHz, but that was a design choice: from memory it mapped quite well onto the video timing signals, although I admit I forget the details. The chip itself was a 5 MHz chip underclocked to the slower speed. Datasheets for the two chips are available: 8086 and 8088; no chips slower than 5 MHz are described. Crispmuncher ( talk) 04:05, 29 August 2012 (UTC).
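For reference, the 4.77 MHz figure falls straight out of the PC's single 14.31818 MHz crystal: the 8284 clock generator divides it by three for the CPU, while the video side derives the NTSC colour subcarrier from the same source. A quick numeric check (the divider choices are the documented IBM PC design):

```python
# IBM PC clocking: one 14.31818 MHz crystal serves both CPU and video.
NTSC_CRYSTAL_MHZ = 14.31818
cpu_clock = NTSC_CRYSTAL_MHZ / 3   # 8284 divide-by-three -> CPU clock
colorburst = NTSC_CRYSTAL_MHZ / 4  # NTSC colour subcarrier frequency
print(round(cpu_clock, 2))    # 4.77
print(round(colorburst, 6))   # 3.579545
```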
— Preceding unsigned comment added by Paraboloid01 ( talk • contribs) 13:10, 1 November 2012 (UTC)
Can someone provide information or a reference to support this claim: Compatible—and, in many cases, enhanced—versions were manufactured by Fujitsu, Harris/Intersil, OKI, Siemens AG, Texas Instruments, NEC, Mitsubishi, and AMD
What is an enhanced version? To me it is a version that has some additional software features. To the best of my knowledge, NEC was the only company that produced such an enhanced 8086 version - its V30 processor. Harris and OKI (and later Intel) made a CMOS version - the 80C86; it doesn't have any software enhancements. It appears to be a simple conversion from NMOS to CMOS logic technology.
Also, I don't think Texas Instruments ever made an 8086 clone (they were making the 8080, though).
108.16.203.38 ( talk) 10:22, 9 October 2013 (UTC)
I don't agree with the following claim: The resulting chip, K1810BM86, was binary and pin-compatible with the 8086, but was not mechanically compatible because it used metric measurements. In fact the difference in lead pitch is minimal - only 0.04 mm between two adjacent pins, which results in a 0.4 mm maximal difference for a 40-pin DIP package. So Soviet ICs can be installed in 0.1" pitch sockets and vice versa. It was not unusual to see Western chips installed in later Soviet computers using metric sockets. Interestingly enough, I have also seen a Taiwanese network card using a Soviet logic IC.
This picture shows an ES1845 board with an NEC D8237AC-5 DMA controller IC installed in a metric socket (top left corner). — Preceding unsigned comment added by Skiselev ( talk • contribs) 21:52, 5 June 2013 (UTC)
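The 0.4 mm figure is consistent with a 2.5 mm metric pitch versus the 0.1-inch (2.54 mm) pitch if the packages are aligned at their centres; a quick back-of-the-envelope check (the 2.5 mm Soviet pitch is my assumption here):

```python
# Cumulative pin misalignment: 0.1-inch (2.54 mm) vs 2.5 mm metric pitch
# on a 40-pin DIP (20 pins per side), packages aligned at their centres.
IMPERIAL_PITCH_MM = 2.54
METRIC_PITCH_MM = 2.50
pins_per_side = 20
# With centre alignment, the outermost pin sits 9.5 pitches from centre.
worst_offset = (pins_per_side - 1) / 2 * (IMPERIAL_PITCH_MM - METRIC_PITCH_MM)
print(round(worst_offset, 2))  # 0.38 mm -- about the 0.4 mm quoted
```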
There is no reason a computer designer could not equip an 8086/8088-based computer with a register port that would change the MN/MX pin, taking effect at the next reset. (Changing the level of the pin while the CPU is running might have unpredictable and undocumented results.) However, since other hardware needs to be coordinated to the MN/MX pin level, this would require other hardware to switch at the same time. This would not normally be practical, and probably the only reasonable use for it would be hardware emulation of two or more different computer models based on the same CPU (such as the IBM PCjr, which uses the 8088 in minimum mode, and the IBM PC XT, which uses it in maximum mode). It is even possible that the 8086/8088 does not respond to a change in the level of MN/MX after a hardware reset but only at power-up. Even then, it is certainly possible to design hardware that will power down the CPU, switch MN/MX, wait an adequate amount of time, and power it back up. 108.16.203.38 ( talk) 10:01, 9 October 2013 (UTC)
Maybe I missed it in the article, but there seems to be nothing about how the memory is organised and interfaced to the 8086. Unlike most 16-bit processors, where the memory is 16 bits wide and is accessed 16 bits at a time, memory for the 8086 is arranged as two 8-bit wide chunks, with one chunk connected to D0-D7 (low byte) and the other to D8-D15 (high byte). This arrangement comes about because the 8086 supports both 8-bit and 16-bit opcodes. The 8-bit opcodes can occupy both the low byte and the high byte of the memory: the first (8-bit) opcode will be present on the low byte of the data bus, but the next will be present on the high byte. Further, if a single 8-bit opcode is on the low byte, then any immediately following 16-bit opcode will be stored with its lowest byte in the corresponding high byte of memory and its highest byte in the next addressed low byte. In both cases there is an execution-time penalty. In the first scenario, the processor has to switch the second opcode back to its low-byte position (the time penalty is minimal in this case). In the second scenario, the processor has to perform two memory accesses, as the 16-bit opcode occupies two addresses, and then has to swap the low and high bytes once read (the swap is a minimal time penalty, but the two memory accesses are a significant penalty, as two cycles are required). The processor then has to read the second memory location again to recover the low byte of the next 16-bit opcode, or the 8-bit opcode, as required. This means that a series of 16-bit opcodes aligned on an odd boundary forces the processor to use two memory access cycles for each opcode.
Code can be assembled in one of two ways. The code can be assembled such that the opcodes occupy whatever memory position is next available. This gives more compact code but with an execution penalty. Alternatively, the code can be assembled such that a single 8-bit opcode always occupies the low byte of memory (a NOP code is placed in the high byte), and a 16-bit opcode is always placed at a single address in the correct low/high byte order (again, NOP codes are used as required). This gives faster execution, as the above time penalties are avoided, but the price is larger code, as the valid program codes are padded with NOP opcodes to ensure that all opcodes start on an even byte boundary. Assemblers will usually allow a mixture of modes. Compilers usually do not offer such control and will invariably compile code for minimum execution time.
Somewhere, I have a book that details all of this, but I am blowed if I can find it at present. I B Wright ( talk) 17:10, 7 March 2014 (UTC)
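The access-cycle penalty described above can be sketched as a tiny cost model (my own sketch of the behaviour the datasheets describe: an even-aligned word takes one bus cycle, a word starting on an odd address takes two, and a single byte always takes one):

```python
# 8086 bus-cycle cost for a memory access of `size` bytes (1 or 2)
# starting at `addr`: an aligned word needs one 16-bit bus cycle, a
# word that starts on an odd address needs two, a byte needs one.
def bus_cycles(addr: int, size: int) -> int:
    if size == 1:
        return 1
    return 1 if addr % 2 == 0 else 2

print(bus_cycles(0x100, 2))  # aligned word: 1 cycle
print(bus_cycles(0x101, 2))  # odd-address word: 2 cycles
print(bus_cycles(0x101, 1))  # lone byte: 1 cycle either way
```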
The sample code is very sloppy. It saves/sets/restores the ES register even though it's never referenced by the code. The discussion says this is needed but it's wrong ("mov [di],al" uses the default DS: segment just like "mov al,[si]" does). It would be needed by the string instructions, like REP MOVSB, but the discussion proposes replacing the loop with REP MOVSW (a word instruction), which would actually copy twice as many bytes as passed in the count register. REP MOVSB would work, and would obviate the need for the JCXZ check (REP MOVSB is a no-op when CX=0). — Preceding unsigned comment added by 24.34.177.232 ( talk) 20:14, 19 December 2015 (UTC)
I wonder if the sample code should be replaced completely. Since the 8086 has CLD, REP MOVSB, why would any compiler/coder fashion a procedure to do a string move rather than inlining? The inline is only 3 bytes! How about replacing the sample with another trivial example like a string case change like the one in the 68000 article? That could be written to include the call frame setup and demonstrate the use of LODSB and STOSB. This would even create a purpose for setting up ES! I will happily write the procedure if there is some consensus. RastaKins ( talk) 15:52, 28 December 2021 (UTC)
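The point above about REP MOVSW copying twice as many bytes (and REP MOVSB doing nothing when CX=0) is easy to sanity-check with a toy model; this is Python standing in for the 8086 semantics, not real sample code from the article:

```python
# Toy model of REP MOVS: CX counts *elements*, each `size` bytes wide,
# so REP MOVSW (size=2) with a byte count left in CX moves twice the
# intended data. REP with CX=0 copies nothing, like the real prefix.
def rep_movs(src: bytes, cx: int, size: int) -> bytes:
    out = bytearray()
    for _ in range(cx):                       # REP: repeat CX times
        out += src[len(out):len(out) + size]  # MOVSB/MOVSW: one element
    return bytes(out)

data = b"0123456789abcdef" * 2
print(len(rep_movs(data, 10, 1)))  # MOVSB, CX=10 -> 10 bytes, as intended
print(len(rep_movs(data, 10, 2)))  # MOVSW, same CX -> 20 bytes, an overrun
print(len(rep_movs(data, 0, 1)))   # CX=0 -> 0 bytes, so JCXZ is redundant
```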
I removed "often described as 'the first truly useful microprocessor'{{citation needed|date=December 2014}},". Every hit on Google using the search term "the first truly useful microprocessor" is a site that uses this article, or is in fact this article. I could not find one independent hit stating that. In addition the "citation needed" tag is now over three years old, and there is still no citation. Finally, the phrase contains the weasel word "often" which is a canonical example. Nick Beeson ( talk) 15:04, 22 March 2018 (UTC)
There is a recent edit regarding 7.16 MHz. The 8086 and 8088 have asymmetric clock requirements: at full speed, the clock should have a 33% duty cycle. The 8284 generates this with a divide-by-three counter. One could run a 10 MHz 8086 at 7.16 MHz with a 50% clock. Appropriate wait states would get the bus cycle down, if needed. Gah4 ( talk) 03:26, 11 July 2018 (UTC)
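A quick check of the numbers behind that edit (assuming the 7.16 MHz figure is the 14.31818 MHz crystal divided by two, which is my reading of where it comes from):

```python
# 7.16 MHz is half the PC's 14.31818 MHz crystal; the 8284's
# divide-by-three output is high for one input period out of three,
# giving the roughly 33% duty cycle the 8086 wants at full speed.
CRYSTAL_MHZ = 14.31818
print(round(CRYSTAL_MHZ / 2, 2))  # 7.16 MHz, as in the recent edit
high, low = 1, 2                  # input periods high vs low per output cycle
print(round(high / (high + low) * 100))  # 33 (% duty cycle)
```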
To my knowledge, while you can't directly reference the memory locations at AX, CX or DX, BX is also used as an index register. Should we have four leading zeros in front of it in the little register guide thing? Jethro 82 ( talk) 00:35, 15 March 2019 (UTC)
The article says: If the 8086 is to retain 8-bit object codes and hence the efficient memory use of the 8080, then it cannot guarantee that (16-bit) opcodes and data will lie on an even-odd byte address, which I believe is wrong. The 8086 has an instruction buffer (cache) that is loaded 16 bits at a time. Instruction fetch is always done for the whole word, even when the odd byte is needed. Data access, on the other hand, can be done on 8 bit or 16 bit cycles. Mostly this is only important for memory write, where write enable is only for the byte that is being written. It is the efficient addressing of data that requires the 8 bit cycles, not instructions. Gah4 ( talk) 23:28, 25 March 2020 (UTC)
References
The article says compact encoding inspired by 8-bit processors regarding one address and two address operations, I presume a reference to earlier 8 bit microprocessors. One address (accumulator), and two address (usually register based) processors go a long way back, many years before the earlier microprocessors, almost to the beginning of electronic computation. The 8-bit byte is commonly attributed to IBM S/360, a line of 32 bit (architecture) machines with byte addressable memory. In any case, optimal instruction coding has a long history that didn't start with 8 bit microprocessors. Gah4 ( talk) 23:22, 7 December 2020 (UTC)
This sentence doesn't say which instructions it is talking about. From my understanding, this could be about the enter and leave instructions (specifically the second argument of enter, which is a direct way of referencing the lexical nesting level of the called function and helps with escaping variables), but these instructions were not in the first 8086 instruction set; they were introduced later according to X86_instruction_listings#Added_with_80186/80188. Is this a mistake? It is very interesting information IMO, but should it be moved to another section and rephrased to something like "Later x86 processors would add the enter/leave instructions to assist source code compilation of […]"? -- Dettorer ( talk) 08:04, 15 May 2023 (UTC)
The CVV of my debit card is 086. I love it! It reminds me of 8086 and my youth. Bretleduc ( talk) 17:04, 24 November 2023 (UTC)
This article is rated C-class on Wikipedia's
content assessment scale. It is of interest to the following WikiProjects: | ||||||||||||||
|
Text and/or other creative content from this version of Maximum mode was copied or moved into Intel 8086 with this edit on 02:56, 30 August 2012. The former page's history now serves to provide attribution for that content in the latter page, and it must not be deleted as long as the latter page exists. |
The article seems to be lacking sources. A reference to the original 8086 intel doc would be cool, maybe somebody has a link? anton
I corrected the '1989' introduction year to 1978. It seems that somebody wrote wrong information.
THERE IS MUCH MORE INFO INCORRECT HERE!
The 8086 CPU was NEVER used in the IBM XT. /info/en/?search=Intel_8088 the CP/M machines used the 8086 CPU. The IBM XT in fact always had the 8088 CPU, the 8086 CPU is MUCH faster, but IBM won from CP/M, due to having standard expansion slots (BUS), CP/M machines all had their own unique disk format, you had conversion programs but software working on one brand, often did not support other brands. The IBM and all the clones had compatible DISKS and COMPATIBLE expansion cards. While no perfect compatibility in the early IBM/CLONE days, this soon changed and all hard and software became fully interchangeable.
8086 CPU can never work on a IBM XT machine since it had an 8 bit BUS and the 8086 has a 16 bit BUS interface. Those EARLY IBM clones that had compatibility issue's with IBM, sometimes used the 8086 CPU, often making them outperform IBM XT being 2x faster.
But 2x a little...+ compatibility issue's..... the compatibility between hard and software was more beneficial to users so the 8088 CPU won the market for years until the first 286 CPU's that had full compatibility with the 8088 CPU hit the market!
IBM should have adopted the 8086 CPU, gaining 2x performance and making them compatible with the 8086 clones, But IBM didn't care about the speed. They had the marked and the software, still if they would have done that.... the 8086 is almost as fast as the 286. And we would had have it 4 years earlier, but IBM did NEVER bother with the 8086. — Preceding unsigned comment added by 217.103.36.185 ( talk) 11:11, 25 August 2019 (UTC)
by sending out the appropriate control signals, opening the proper direction of the data buffers, and sending out the address of the memory or i/o device where the desired data resides
AMD created this processor type too. It has number P8086-1 has a AMD logo and also (C) Intel 1978. I can make a picture of the chip and add it to the article.
AMD did not create this processor but they did second-source it i.e. manufacture it under licence from Intel because Intel could not manufacture enough itself. Intel are the sole designers of the original 8086 processor. -- ToaneeM ( talk) 09:15, 15 November 2016 (UTC)
Does the final sentence refer to the 8088 or the 8086? At first glance, it continues the info on the 8088, but upon consideration, it seems more likely to refer to the 8086. Is this correct? It's not too clear.
Fourohfour 15:15, 9 October 2005 (UTC)
In "Busses and Operation", it states "Can access 220 memory locations i.e 1 MiB of memory." - While the facts are correct, the "i.e." suggests that they are directly related. The 16 bit data bus would imply an addressable memory of 2MiB (16 x 220), but the architecture was designed around 8 bit memory and thus treats the 16bit bus as two separate busses during memory transfers. 89.85.83.107 11:08, 20 March 2007 (UTC)
Even more confusing, the 8088 article says nearly the exact same thing: 'The original IBM PC was based on the 8088' Family Guy Guy ( talk) 01:09, 30 May 2017 (UTC)
is this the first "pc" microprocessor? I think so, in which case it should be noted, although I expected to find a mention to the first, whether it's this or a second one. —The preceding unsigned comment was added by 81.182.77.84 ( talk • contribs) .
also, for comparison, I'm interested in the size and speeds of pc hard-drives at the time, ie when it would have been used by a user -- I had thought around 10 MB but in fact if it can only address 1 MB (article text) this seems unlikely. Did it even have a hard-drive? a floppy? anything? —The preceding unsigned comment was added by 81.182.77.84 ( talk • contribs) .
I don't think the two articles should be merged. After all, the one talks about the 8086 itself while the other about the general architecture my_generation 20:06 UTC, 30 September 2006
I think merging the articles is a good idea. The Intel 8086 set the standard for microprocessor architecture. Look at the Intel 8086 user manual (if you can find it on eBay or Amazon) and you'll see the information that's included in both of these articles. It would be easier to have all that information in just one. In answer to the above disagreement, you can't describe the 8086 without describing its architecture.
This comment was in the "bugs" section: Processors remaining with original 1979 markings are quite rare; some people consider them collector's items.
This didn't seem to be related to a bug, so I moved it here. - furrykef ( Talk at me) 06:38, 25 November 2006 (UTC)
Does anyone knows how much was the price of an 8086 processor when launched as a per unit or per thousand units? I think it is an important information that is missing.-- 201.58.146.145 22:21, 13 August 2007 (UTC)
I don't recall a onesy price for the CPU alone. I do recall that we sold a box which had the CPU, a few support chips, and some documentation for $360. That was an intentional reference to the 8080, for which that was the original onesy price. I think the 8080 price, in turn, was an obscure reference to the famous IBM computer line, but that may be a spurious memory. For that matter, I'm not dead sure on the $360 price for the boxed kit, nor do I recall the name by which we called it at the time. I have one around here somewhere, as an anonymous kind soul left one on my chair just before I left Intel for the second time. —Preceding unsigned comment added by 68.35.64.49 ( talk) 20:48, 26 February 2008 (UTC)
The Intel manuals (published 1986/87) I have use the name iAPX 86 for the 8086, iAPX 186 for the 80186 etc. Why was this? Amazon lists an example here http://www.amazon.com/iAPX-186-188-users-manual/dp/083593036X John a s ( talk) 23:04, 6 February 2008 (UTC)
Around the time the product known internally as the 8800 was introduced as the iAPX432 (a comprehensive disaster, by the way), Intel marketing had the bright idea of renaming the 8086 family products.
A simple 8086 system was to be an iAPX86-10. A system including the 8087 floating point chip was to be an iAPX86-20.
I (one of the four original 8086 design engineers) despised these names, and they never caught on much. I hardly ever see them any more. But, since Marketing, of course, controlled the published materials, a whole generation of user manuals and other published material used these names. Avoid them if you are looking for early-published material. If you are looking for accuracy, avoid the very first 8086 manual, than which the second version had a fair number of corrections--but nearly all those were in long before the iAPX naming.
Peter A. Stoll —Preceding unsigned comment added by 68.35.64.49 ( talk) 20:43, 26 February 2008 (UTC)
The article says:
It seems some manufacturers of 80186-like processors for embedded systems have later done exactly this. That could perhaps be mentioned in the "Subsequent expansion" section if reliable secondary sources can be found. So far, I've found only manufacturer-controlled or unreliable material. Paradigm Systems sells a C++ compiler for "24-bit extended mode address space (16MB)" [1] and lists supported processors from several manufacturers:
85.23.32.64 ( talk) 23:52, 11 July 2009 (UTC)
Can anyone provide a reference to an official source from where these timings are taken? The datasheets given at the end of the article present only external bus timings (memory read/write is 5 clocks) but doesn't list any other information about the internal logic and ALUs. Thank you. bungalo ( talk) 10:26, 26 April 2010 (UTC)
Moved from user talk page...article discussions belong on article talk pages May I ask you again What is wrong with a data sheet or a masm manual as a ref? Can you get a better source? I don't get it! What are you trying to say? "Only the web exist" or something? 83.255.38.96 ( talk) 06:08, 3 November 2010 (UTC)
Of course it is about the original Intel part if nothing else is said (see the name of the article). I gave a perfectly fine reference in April this year, although I typed in those numbers many years ago. I have never claimed it to be a citation; citations are for controversial statements, not for plain numerical data from a datasheet. The MASM 5.0 reference manual was certainly uniquely identifiable as it stood, and it would really surprise me if more than a few promille of all the material on WP has equally good (i.e. serious and reliable) references. Consequently, with your logic, you better put a tag on just about every single statement on WP...!? 83.255.43.80 ( talk) 13:05, 22 December 2010 (UTC)
Industry jargon (such as the cited reference, first one on the Google Books hits) seems to prefer "random logic" as the description for the internal control circuits of a microprocessor, as contrasted with "microcode". "Ad-hoc" has the disadvantage of being Latin, and is probably as pejorative as "random" if you're sensitive about such things. -- Wtshymanski ( talk) 14:51, 29 June 2011 (UTC)
in the Microcomputers using the 8086 section, the compaq deskpro clock speed listed doesn't match the one listed on the wiki page for this product, plus it's not listed in the "Crystal_oscillator_frequencies" page neither, I have no idea where this comes from so I added a (?).... — Preceding unsigned comment added by 85.169.40.106 ( talk) 08:06, 1 March 2012 (UTC)
It's not like kings or popes or presidents...Intel was still selling lots of 8080s after the 8086 came out, and the 8086 and 8088 were virtually the same era - the 80286 came out before the 80186, for that matter. -- Wtshymanski ( talk) 14:20, 21 August 2012 (UTC)
It is clear enough that the 8085 was the processor that immediately preceeded the 8086. Since the 8080 and the 8085 were so architecturally similar, it would seem reasonable to show the predecessor as the 8080/8085. 86.156.154.237 ( talk) 17:41, 25 August 2012 (UTC)
The lowest rated clock speed was 5MHz for both the 8086 here and the 8088 (which is what most PCs used, the IBMs exclusively so). Yes, the original IBM PC ran at 4.77MHz but that was a design choice: from memory it mapped quite well into the video timing signals although I admit I forget the details. The chip itself was a 5MHz chip underclocked to the slower speed. Datasheets for the two chips are available: 8086 and 8088: there are no chips slower than 5MHz described. Crispmuncher ( talk) 04:05, 29 August 2012 (UTC).
— Preceding unsigned comment added by Paraboloid01 ( talk • contribs) 13:10, 1 November 2012 (UTC)
Can someone provide information or a reference to support this claim: Compatible—and, in many cases, enhanced—versions were manufactured by Fujitsu, Harris/Intersil, OKI, Siemens AG, Texas Instruments, NEC, Mitsubishi, and AMD?
What is an enhanced version? To me it is a version that has some additional software features. To the best of my knowledge, NEC was the only company that produced such an enhanced 8086 version - its V30 processor. Harris and OKI (and later Intel) made a CMOS version, the 80C86; it doesn't have any software enhancements. It appears to be a straightforward conversion from NMOS to CMOS logic technology.
Also, I don't think Texas Instruments ever made an 8086 clone (they did make the 8080, though).
108.16.203.38 ( talk) 10:22, 9 October 2013 (UTC)
I don't agree with the following claim: The resulting chip, K1810BM86, was binary and pin-compatible with the 8086, but was not mechanically compatible because it used metric measurements. In fact the difference in lead pitch is minimal - only 0.04 mm between two adjacent pins, which results in a maximum difference of about 0.4 mm for a 40-pin DIP package. So Soviet ICs can be installed in 0.1" pitch sockets and vice versa. It was not unusual to see Western chips installed in later Soviet computers using metric sockets. And interestingly enough, I have also seen a Taiwanese network card using a Soviet logic IC.
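The comment's pitch arithmetic can be checked directly: the imperial pitch is 0.1" = 2.54 mm against a 2.5 mm metric pitch, and a 40-pin DIP has 20 pins per row (19 gaps). A small sketch (the "aligned mid-row" figure is what makes the comment's ~0.4 mm maximum come out):

```python
# Cumulative misalignment between imperial (2.54 mm) and metric (2.50 mm)
# pin pitch on a 40-pin DIP: 20 pins per row means 19 pin-to-pin gaps.
imperial_mm, metric_mm = 2.54, 2.50
gaps = 19

end_to_end = gaps * (imperial_mm - metric_mm)  # worst case, aligned at pin 1
centered = end_to_end / 2                      # worst case, aligned mid-row

print(f"aligned at one end: {end_to_end:.2f} mm")
print(f"aligned mid-row:    {centered:.2f} mm")
```

With the package centred in the socket, the worst pin is off by only about 0.38 mm, which is why the parts interchange in practice.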
This picture shows an ES1845 board with an NEC D8237AC-5 DMA controller IC installed in a metric socket (top left corner). — Preceding unsigned comment added by Skiselev ( talk • contribs) 21:52, 5 June 2013 (UTC)
There is no reason a computer designer could not equip an 8086/8088-based computer with a register port that would change the MN/MX pin, taking effect at the next reset. (Changing the level of the pin while the CPU is running might have unpredictable and undocumented results.) However, since other hardware needs to be coordinated to the MN/MX pin level, this would require other hardware to switch at the same time. This would not normally be practical, and probably the only reasonable use for it would be hardware emulation of two or more different computer models based on the same CPU (such as the IBM PCjr, which uses the 8088 in minimum mode, and the IBM PC XT, which uses it in maximum mode). It is even possible that the 8086/8088 does not respond to a change in the level of MN/MX after a hardware reset but only at power-up. Even then, it is certainly possible to design hardware that will power down the CPU, switch MN/MX, wait an adequate amount of time, and power it back up. 108.16.203.38 ( talk) 10:01, 9 October 2013 (UTC)
Maybe I missed it in the article, but there seems to be nothing about how the memory is organised and interfaced to the 8086. Unlike most 16-bit processors, where the memory is 16 bits wide and accessed 16 bits at a time, memory for the 8086 is arranged as two 8-bit wide chunks, with one chunk connected to D0-D7 (low byte) and the other to D8-D15 (high byte). This arrangement comes about because the 8086 supports both 8-bit and 16-bit opcodes. The 8-bit opcodes can occupy both the low byte and the high byte of the memory. Thus the first (8-bit) opcode will be present on the low byte of the data bus, but the next will be present on the high byte. Further, if a single 8-bit opcode is on the low byte, then any immediately following 16-bit opcode will be stored with its lowest byte on the corresponding high byte of the memory and its highest byte on the next addressed low byte. In both cases there is an execution time penalty. In the first scenario, the processor has to switch the second opcode back to its low-byte position (the time penalty is minimal in this case). In the second scenario, the processor has to perform two memory accesses, as the 16-bit opcode occupies two addresses; the processor then has to swap the low and high bytes once read (the swap is a minimal time penalty, but the two memory accesses are a significant penalty, as two cycles are required). The processor then has to read the second memory location again to recover the low byte of the next 16-bit opcode, or the 8-bit opcode, as required. This means that a series of 16-bit opcodes aligned on an odd boundary forces the processor to use two memory access cycles for each opcode. I B Wright
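The even/odd banking the comment describes can be put into a tiny model: even addresses live in the low (D0-D7) bank, odd addresses in the high (D8-D15) bank, so a word that straddles the banks costs an extra bus cycle. This is an illustrative sketch of that rule, not measured timing data:

```python
def bus_cycles(addr: int, word: bool) -> int:
    """Model of 8086 memory banking: a byte access, or a word access at
    an even address, completes in one bus cycle (both banks read in
    parallel); a word at an odd address straddles the two 8-bit banks
    and needs two bus cycles."""
    if not word:
        return 1                      # single byte: either bank, one cycle
    return 1 if addr % 2 == 0 else 2  # even-aligned word: 1; odd: 2

assert bus_cycles(0x0100, word=False) == 1  # byte access
assert bus_cycles(0x0100, word=True) == 1   # even-aligned word
assert bus_cycles(0x0101, word=True) == 2   # odd-aligned word: penalty
```

This is the mechanism behind the alignment penalty discussed above: data alignment matters, even though the prefetch queue hides much of it for instruction fetch.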
Code can be assembled in one of two ways. The code can be assembled such that the opcodes occupy whatever memory position is next available. This gives more compact code, but with an execution penalty. Alternatively, the code can be assembled such that a single 8-bit opcode always occupies the low byte of memory (a NOP opcode is placed in the high byte), and a 16-bit opcode is always placed at a single address in the correct low/high byte order (again, NOP opcodes are used as required). This gives a faster execution speed, as the above time penalties are avoided, but the price is larger code, as the valid program codes are padded with NOP opcodes to ensure that all opcodes start on an even byte boundary. Assemblers will usually allow a mixture of modes. Compilers usually do not offer such control and will invariably compile code for minimum execution time.
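As a rough illustration of the size-for-speed trade described above, here is a toy model of the padding strategy (a sketch, not how any particular assembler actually works): whenever the next instruction would start on an odd address, insert one NOP byte first.

```python
def align_even(addr: int) -> tuple[int, int]:
    """Pad to the next even address with a one-byte NOP (0x90) when
    needed. Returns (aligned address, NOP bytes inserted: 0 or 1)."""
    pad = addr % 2
    return addr + pad, pad

# Lay out a short run of instructions starting at an odd address;
# the lengths here are arbitrary example values in bytes.
addr, nops = 0x0101, 0
for size in (2, 2, 1, 2):
    addr, pad = align_even(addr)   # force even start, maybe costing a NOP
    nops += pad
    addr += size

print(f"{nops} NOP byte(s) inserted")
```

Every padded instruction then avoids the odd-alignment bus penalty, at the cost of the extra NOP bytes.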
Somewhere, I have a book that details all of this, but I am blowed if I can find it at present. I B Wright ( talk) 17:10, 7 March 2014 (UTC)
The sample code is very sloppy. It saves/sets/restores the ES register even though it's never referenced by the code. The discussion says this is needed, but it's wrong ("mov [di],al" uses the default DS: segment just like "mov al,[si]" does). ES would be needed by the string instructions, like REP MOVSB, but the discussion proposes replacing the loop with REP MOVSW (a word instruction), which would actually copy twice as many bytes as were passed in the count register. REP MOVSB would work, and would obviate the need for the JCXZ check (REP MOVSB is a no-op when CX=0). — Preceding unsigned comment added by 24.34.177.232 ( talk) 20:14, 19 December 2015 (UTC)
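The two bugs this comment points out can be shown with a minimal model of the REP MOVS semantics (direction flag clear; this sketch only models the byte counts, not segments or registers):

```python
def rep_movs(src: bytes, cx: int, word: bool = False) -> bytes:
    """Toy model of REP MOVSB / REP MOVSW with DF clear: moves cx
    elements of 1 byte (MOVSB) or 2 bytes (MOVSW), and is a no-op
    when cx == 0 - which is why a JCXZ guard before it is redundant."""
    size = 2 if word else 1
    return src[:cx * size]

data = bytes(range(16))
assert rep_movs(data, 4) == data[:4]             # MOVSB: 4 bytes moved
assert rep_movs(data, 4, word=True) == data[:8]  # MOVSW: 8 bytes moved
assert rep_movs(data, 0) == b""                  # CX = 0: nothing moved
```

So substituting MOVSW for MOVSB without halving CX doubles the number of bytes copied, exactly as the comment says.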
I wonder if the sample code should be replaced completely. Since the 8086 has CLD and REP MOVSB, why would any compiler or coder fashion a procedure to do a string move rather than inlining? The inline is only 3 bytes! How about replacing the sample with another trivial example, like the string case change in the 68000 article? That could be written to include the call frame setup and demonstrate the use of LODSB and STOSB. This would even create a purpose for setting up ES! I will happily write the procedure if there is some consensus. RastaKins ( talk) 15:52, 28 December 2021 (UTC)
I removed "often described as 'the first truly useful microprocessor'{{citation needed|date=December 2014}},". Every hit on Google using the search term "the first truly useful microprocessor" is a site that uses this article, or is in fact this article. I could not find one independent hit stating that. In addition the "citation needed" tag is now over three years old, and there is still no citation. Finally, the phrase contains the weasel word "often" which is a canonical example. Nick Beeson ( talk) 15:04, 22 March 2018 (UTC)
There is a recent edit regarding 7.16 MHz. The 8086 and 8088 have asymmetric clock requirements: at full speed, the clock should have a 33% duty cycle, which the 8284 generates with a divide-by-three counter. One could run a 10 MHz 8086 at 7.16 MHz with a 50% clock. Appropriate wait states would get the bus cycle down, if needed. Gah4 ( talk) 03:26, 11 July 2018 (UTC)
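The 8284 arrangement the comment refers to is a divide-by-three: out of every three crystal periods, CLK is high for one, giving the 33% duty cycle the NMOS 8086 requires. A quick sketch of the numbers (the 21.477 MHz crystal value here is an assumption for illustration, chosen as 3 x 7.16 MHz):

```python
# 8284-style divide-by-three: CLK is high for 1 of every 3 crystal
# periods, so the output is crystal/3 at a 33 % duty cycle.
crystal_mhz = 21.47727          # assumed: 3 x 7.16 MHz (1.5 x NTSC)
clk_mhz = crystal_mhz / 3

high_periods, low_periods = 1, 2
duty = high_periods / (high_periods + low_periods)

print(f"CLK = {clk_mhz:.2f} MHz at {duty:.0%} duty")
```

A plain 50% square wave at the same frequency would violate the minimum clock-low time in the datasheet for slower speed grades, which is the comment's point about needing a 10 MHz part.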
To my knowledge, while you can't directly reference memory locations via AX, CX or DX, BX is also used as an index register. Should we have four leading zeros in front of it in the little register guide thing? Jethro 82 ( talk) 00:35, 15 March 2019 (UTC)
The article says: If the 8086 is to retain 8-bit object codes and hence the efficient memory use of the 8080, then it cannot guarantee that (16-bit) opcodes and data will lie on an even-odd byte address, which I believe is wrong. The 8086 has an instruction buffer (prefetch queue) that is loaded 16 bits at a time. Instruction fetch is always done for the whole word, even when only the odd byte is needed. Data access, on the other hand, can be done in 8-bit or 16-bit cycles. Mostly this is only important for memory writes, where write enable is asserted only for the byte being written. It is the efficient addressing of data that requires the 8-bit cycles, not instructions. Gah4 ( talk) 23:28, 25 March 2020 (UTC)
The article says compact encoding inspired by 8-bit processors regarding one-address and two-address operations, which I presume is a reference to earlier 8-bit microprocessors. One-address (accumulator) and two-address (usually register-based) processors go a long way back, many years before the earliest microprocessors, almost to the beginning of electronic computation. The 8-bit byte is commonly attributed to the IBM S/360, a line of 32-bit (architecture) machines with byte-addressable memory. In any case, optimal instruction coding has a long history that didn't start with 8-bit microprocessors. Gah4 ( talk) 23:22, 7 December 2020 (UTC)
This sentence doesn't say which instructions it is talking about. From my understanding, this could be about the "enter" and "leave" instructions (specifically the second argument of "enter", which is a direct way of referencing the lexical nesting level of the called function and helps with escaping variables), but these instructions were not in the first 8086 instruction set; they were introduced later according to X86_instruction_listings#Added_with_80186/80188. Is this a mistake? It is very interesting information IMO, but should it be moved to another section and rephrased to something like "Later x86 processors would add the enter/leave instructions to assist source code compilation of […]"? -- Dettorer ( talk) 08:04, 15 May 2023 (UTC)
The CVV of my debit card is 086. I love it! It reminds me of 8086 and my youth. Bretleduc ( talk) 17:04, 24 November 2023 (UTC)