AMD entered the CPU market with reverse-engineered Intel 8080 clone 50 years ago

tomshardware.com

188 points by ksec a day ago · 92 comments

WarOnPrivacy a day ago

> AMD entered the CPU market with reverse-engineered Intel 8080 clone 50 years ago

Moral: Awesome productivity happens when IP doesn't get in the way.

  • WaitWaitWha a day ago

    If I recall correctly, they were also licensed to produce some clones.

    I remember when the Am386-40MHz came out in the early 90s. Everyone was freaking out about how we were now breaking the sound barrier. There was a company, Twinhead(?), that came out with 386-40MHz motherboards with buses so overclocked that most video cards would fry. Only the mono Hercules cards could survive. We thought our servers were the shizzle.

    • adrian_b 9 hours ago

      Intel did indeed later license AMD to produce some clones, but not out of the goodness of its heart: those were cross-licensing deals, with AMD producing clones of some Intel chips and Intel producing clones of some AMD chips, which could be used as peripherals for the Intel CPUs.

      Then there was the big licensing deal for the Intel 8088 and its successors, which IBM forced upon Intel in order to have a second source for the critical components of the IBM PC.

    • fweimer a day ago

      Weren't legal protections for semiconductor masks rather lax in the 70s, at least in the United States? You might need certain patent licenses for the manufacturing process, but the chip itself was largely unprotected.

      • mrandish 20 hours ago

        > "In the summer of 1973, during their last day working at Xerox, Ashawna Hailey, Kim Hailey, and Jay Kumar took detailed photos of an Intel 8080 pre-production sample"

        I was curious about this and followed the links to the original interview (https://web.archive.org/web/20131111155525/http://silicongen...), which was interesting:

        > "Xerox being more of a theoretical company than a practical one let us spend a whole year taking apart all of the different microprocessors on the market at that time and reverse engineering them back to schematic. And the final thing that I did as a project was to, we had gotten a pre-production sample of the Intel 8080 and this was just as Kim and I were leaving the company. On the last day I took the part in and shot ten rolls of color film on the Leica that was attached to the lights microscope and then they gave us the exit interview and we went on our way. And so that summer we got a big piece of cardboard from the, a refrigerator came in and made this mosaic of the 8080. It was about 300 or 400 pictures altogether and we pieced it together, traced out all the logic and the transistors and everything and then decided to go to, go up North to Silicon Valley and see if there was anybody up there that wanted to know about that kind of technology. And I went to AMI and they said oh, we're interested, you come on as a consultant, but nobody seemed to be able to take the project seriously. And then I went over to a little company called Advanced Micro Devices and they wanted to, they thought they'd like to get into it because they had just developed an N-channel process and this was '73. And I asked them if they wanted to get into the microprocessor business because I had schematics and logic diagrams to the Intel 8080 and they said yes."

        From today's perspective, just shopping a design lifted directly from Intel CPU die shots around to Valley semiconductor companies sounds quite remarkable, but it was a very different time then.

      • rasz a day ago

        Yep, thus Intel going the microcode and patent route.

    • dspillett 21 hours ago

      That wasn't the first time they had similar products out-speeding Intel. I have the CPU from the first PC I owned tacked to the front of my current main PC with a Ryzen. That was clocked at 20MHz IIRC (I'm at my parents' home ATM so can't confirm), where the Intel units topped out at 12MHz (unless overclocked, of course).

      • Delk an hour ago

        I'm guessing that was a 286. I think Intel parts topped out at 12.5 MHz but AMD and Harris eventually reached 20 or even 25 MHz. I still have my original PC with a 12.5 MHz one.

        The difference with the 386, I think, is that the second-sourced 8086 and 286 CPUs from non-Intel manufacturers still made use of licensed Intel designs. The 386 (and later) had to be reverse engineered again and AMD designed their own implementation. That also meant AMD was a bit late to the game (the Am386 came out in 1991 while the 80386 had already been released in 1985) but, on the other hand, they were able to achieve better performance.

  • elif a day ago

    100% agree. It's clearest to see in China. IP has been transformed from a mechanism to maintain competition into a mechanism to maintain market control.

    • gyomu a day ago

      Given that market control is one of the few ultimate gating factors that makes you thrive or die as a company, it’s no surprise that anything that could be used as a mechanism to maintain market control would be.

    • expedition32 20 hours ago

      European countries "acquired" quite a few Chinese trade secrets in the past. And from each other, to be fair.

      IP is one of those things you invent once you made it to the top.

izacus a day ago

I wonder if in 2025 a company would even be allowed to get started before being curb-stomped by Intel's IP lawyers. After all, AMD started by making clones, something China gets accused of a lot.

  • BlueToth a day ago

    Intel customers (i.e. IBM) required a second-source supplier, so AMD provided that for Intel in the beginning. Then later on AMD created the 64-bit x86 instructions, which Intel adopted from AMD, so now both share the same ISA.

    • nwallin 13 hours ago

      This article is not about that. This article is about the AMD Am9080, which was an unlicensed clone of the Intel 8080.

      The licensing deals that legitimized AMD's unlicensed clones came later.

    • izacus a day ago

      Can you explain what you were trying to say with that?

      Customer needs don't really matter in cases where a monopolist (ab)uses the law to kill competition. That's the MAIN reason why monopolies are problematic.

      • nineteen999 18 hours ago

        That wasn't the case. Their customers were the military. The second sourcing was required if they wanted DoD contracts.

      • debugnik 20 hours ago

        The "required" in that sentence should be read strictly: some customers, mainly governmental, wouldn't have bought Intel chips in the first place without access to alternative suppliers (AMD and previously VIA). Intel had to give in.

      • zenethian 20 hours ago

        Neither company was what it is now back then. Intel needed a second supplier for its chips because nobody trusted manufacturing from a single-source provider.

      • sokoloff 20 hours ago

        I read GP to mean that Intel had strong incentive to cooperate in order to make the initial sale. That’s where the customer need was relevant.

  • jezek2 a day ago

    You can do it with HW-accelerated emulation like Apple did with the M1 CPUs. They implemented x86-compatible behavior in HW so the emulation has very good performance.

    Another approach was Transmeta's, where the target ISA was microcoded and therefore handled in "software".

    • BlueToth a day ago

      They said that they implemented x86 ISA memory-handling instructions, which substantially sped up the emulation. I don't remember exactly which now, but they explained it all in a WWDC video about the emulation.

      • fweimer a day ago

        There's a Linux patch that exposes it via prctl: https://lore.kernel.org/all/20240410211652.16640-1-zayd_qums...

        There's also the CFINV instruction (architectural, part of FEAT_FLAGM), which helps with emulating the x86-64 CMP instruction.

      • als0 a day ago

        Not instructions per se. Rosetta is a software based binary translator, and one of the most intensive parts about translating x86 to ARM is having to make sure all load/store instructions are strictly well ordered. To alleviate this pressure, Apple implemented the Total Store Ordering (TSO) feature in hardware, which makes sure that all ARM load and store instructions (transparently) follow the same memory ordering rules as x86.
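
        To illustrate (a minimal C11 sketch, not Rosetta's actual mechanism): on x86, ordinary loads and stores already behave like the acquire/release pair below, so translated code gets that ordering for free; on a weakly ordered ARM core, a translator must either emit those barriers around roughly every memory access or flip on the hardware TSO mode.

            /* Writer/reader pair: under x86 TSO the flag store cannot pass the
               data store, so a reader that sees flag == 1 also sees data == 42.
               The explicit release/acquire below is what a translator must add
               on ARM (or avoid by enabling the hardware TSO bit). */
            #include <pthread.h>
            #include <stdatomic.h>
            #include <stdio.h>

            atomic_int data, flag;

            void *writer(void *arg) {
                (void)arg;
                atomic_store_explicit(&data, 42, memory_order_relaxed);
                atomic_store_explicit(&flag, 1, memory_order_release);  /* stlr on ARM */
                return NULL;
            }

            void *reader(void *arg) {
                (void)arg;
                while (!atomic_load_explicit(&flag, memory_order_acquire))  /* ldar on ARM */
                    ;
                printf("data = %d\n", atomic_load_explicit(&data, memory_order_relaxed));
                return NULL;
            }

            int main(void) {
                pthread_t a, b;
                pthread_create(&a, NULL, writer, NULL);
                pthread_create(&b, NULL, reader, NULL);
                pthread_join(a, NULL);
                pthread_join(b, NULL);
                return 0;
            }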

        • nineteen999 a day ago

          It is funny to hear sometimes though:

          "Apple created a chip which is not an x86! It's awesome! And the best thing about it is ... it does TSO like an x86! Isn't that great?"

          • dontlaugh a day ago

            Only some of the time.

            I think the last time I ran amd64 on my Mac was months ago, for a game.

    • izacus a day ago

      Apple didn't implement the x86 ISA in hardware; they added a few instructions that change memory behaviour to make emulation faster.

  • tiffanyh a day ago

    If the company was based in the EU, local regulation might encourage reverse-engineering.

    See tangentially related topic from yesterday: https://news.ycombinator.com/item?id=46362927

  • phendrenad2 a day ago

    Maybe if you find a company as small as Intel was at the time.

tracker1 a day ago

I'm still a heavy advocate for requiring second/dual-sourcing in govt contracts... literally for anything that can be considered essential infrastructure, communications technology, or medicine. A role of govt in a capitalist society is to ensure competition and domestic availability/production as much as possible.

While my PoV is US-centered, I feel that other nations should largely optimize for the same as much as possible. Many of today's issues stem from too much centralization of commercial/corporatist power as opposed to fostering competition. This shouldn't come without a baseline of reasonable regulation; it's just optimizing towards what is best for the most people.

  • gosub100 a day ago

    Suppose we got nuked or some calamity caused the interruption of all the fancy x-nanometer processes. What would we actually miss out on? I don't know what the latest process nodes we have stateside are, but let's say we could produce 2005-era CPUs here. What would we actually miss out on? I don't think it would affect anything important. You could do everything we do today, just slower. I think the real advancement is in software, programming languages, and libraries.

    • icedchai a day ago

      Software is much, much more bloated today than it was in 2005. 64-bit CPUs were available, but not quite mainstream yet. A "high end" consumer system had a couple gigabytes of RAM and chipset limitations generally capped you out at 4 or 8 gigs. You were lucky to have two CPU cores.

      If you took today's software and tried running it on a memory constrained, slow, 2005 era system, you'd be in for some pain.

      • MarsIronPI a day ago

        I used to daily-drive a Thinkpad X200 from 2008. As soon as you touch the modern (i.e. bloated) web, you feel the slowness. Other than that and gaming, it ran fine.

    • tracker1 a day ago

      I'm talking about way more than just CPUs... And as for your question, we'd pretty much miss out on modern mobile phones entirely. 90nm -> 18A/1.8nm is a LOT of reduction in size and energy... not counting the evolution in battery and display technology over the same period.

      Now apply that to weapons systems in conflict against an enemy that DOES have modern production that you (no longer) have... it's a recipe for disaster/enslavement/death.

      China, though largely hamstrung, is already well ahead of your hypothetical 2005 tech breakpoint.

      Beyond all this, it's not even a matter of just being slower, it's a matter of being practical at all... You couldn't viably create a lot of websites that actually exist on 2005-era technology. The performance and memory headroom just weren't there yet. Not that a lot of things weren't possible... I remember Windows 2000 pretty fondly, and you could do a LOT if you had 4-8x what most people were buying in RAM.

      • fooker a day ago

        China can make CPUs around as good as 2016-18 Intel now.

      • 15155 20 hours ago

        > Now apply that to weapons systems in conflict against an enemy that DOES have modern production that you (no longer) have... it's a recipe for disaster/enslavement/death.

        How do you maintain this production with a sudden influx of ballistic missiles at the production facility - or a complete naval blockade of all food calories to your country?

    • fweimer a day ago

      I think there have been many improvements since 2005 that are not dependent at all on the process node.

    • numpad0 a day ago

      I have an unpopular pet theory: the exponentially growing software bloat actually exists to slow computers back down to bearable levels for common folks, and that's why the most bloated frameworks have consistently replaced obsolete, less bloated ones, throughout the last decade.

      Why else does everything now seem to be wrappers for wrappers? What if the bloat was, subconsciously or whatever, the point?

      • immibis 11 hours ago

        The purpose of a system is what it does, after all. You might be onto something.

    • unethical_ban a day ago

      One superpower being stuck in 2005 for CPUs while another is in 2030, during a cold/hot war, would be decisive.

      If society as a whole reverted to 2005, we would be fine.

      • gosub100 19 hours ago

        Can you sketch an example of how a war would be lost due to fewer IOPS or threads?

        In 2004 Iraq, we had guided missiles, night vision, explosives, satellites. What advantages would 3nm transistors give the enemy in combat?

        • unethical_ban 18 hours ago

          Like the person below said, I assume the drone/AI warfare of the present and near future, along with IoT-integrated warfare and sensors and communications, functions better and cheaper and faster with modern silicon.

        • cmrdporcupine 18 hours ago

          Depends on the kind of battle, but

          see Ukraine drone warfare ... there's a lot going on there which is more than just miniaturized motors, etc. A lot of it is efficient power use in the semiconductors in those drones, the image processors attached to the cameras, etc., which I suspect relies on newer processes.

ksecOP a day ago

If Intel decides to focus on Foundry, I just wish AMD and Intel could work together and make a cleaned-up subset of the x86 ISA open source, or at least available for licensing. I don't want it to end up like MIPS or the POWER ISA, where everything was too little, too late.

  • holowoodman a day ago

    A subset of an ISA will be incompatible with the full ISA and therefore be a new ISA. No existing software will run on it. So this won't really help anyone.

    And x86 isn't that nice to begin with; if you do something incompatible, you might as well start from scratch and create a new, homogeneous, well-designed, modern ISA.

    • ksecOP a day ago

      There is software compiled today without using MMX support. I was thinking the thing to make open or available for licensing would be a forward-compatible subset of the x86 ISA. Customers that require strict backward compatibility could still source chips from AMD and Intel.

      i.e. software compiled for the subset should work on full x86. The value of backward compatibility stays with Intel and AMD. If the market wants something in between, it now has an option.

      I know this isn't a sexy idea because HN and most tech people like something shiny and new. But I have always liked the idea of extracting value from "old and tried" solutions.

      • Scoundreller a day ago

        Sadly, over the past year Spotify builds have required AVX extensions. Had an issue updating my 2008 Dell semi-upgraded bench PC that has a Q9300 in it (no AVX on it).

        But thankfully I could install an old bin and lock it out from updating.

        Intel's Software Development Emulator might run the newest bin, but it's anyone's guess how slow it might be.

        In other circumstances, the AVX extensions aren't required, but the app is compiled to fail if they're not present: https://www.reddit.com/r/pcgaming/comments/pix02j/hotfix_for...
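
        For what it's worth, a sketch of how an app could probe for AVX at startup and fall back, instead of being built to hard-fail (GCC/Clang on x86; the mix functions here are made-up stand-ins for a hot path):

            #include <stdio.h>

            /* Baseline path: compiled without AVX, runs on anything x86-64. */
            static void mix_scalar(float *dst, const float *src, int n) {
                for (int i = 0; i < n; i++)
                    dst[i] += src[i];
            }

            /* Only this function is allowed to use AVX encodings. */
            __attribute__((target("avx")))
            static void mix_avx(float *dst, const float *src, int n) {
                for (int i = 0; i < n; i++)   /* compiler may vectorize with VEX here */
                    dst[i] += src[i];
            }

            int main(void) {
                float a[8] = {0}, b[8] = {1, 2, 3, 4, 5, 6, 7, 8};
                /* Pick the implementation once, based on what the CPU reports. */
                void (*mix)(float *, const float *, int) =
                    __builtin_cpu_supports("avx") ? mix_avx : mix_scalar;
                mix(a, b, 8);
                printf("a[7] = %f\n", a[7]);
                return 0;
            }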

    • fooker a day ago

      Software or microcode emulation works pretty well.

      So it would be faster and more efficient when sticking to the new subset, and Nx slower when using the emulation path.

      • kimixa a day ago

        You could argue that microcode emulation is what they do now.

        • fooker 17 hours ago

          True, the only part that cannot be solved by microcode emulation is faster decoding.

          Most architectures other than x86 have fixed-size machine instructions now, making decoding fast and predictable.

    • inkyoto 13 hours ago

      > A subset of an ISA will be incompatible with the full ISA and therefore be a new ISA. No existing software will run on it. So this won't really help anyone.

      This isn't an issue in any way. Vendors have been routinely removing rarely used instructions from hardware and emulating them in software for decades, as part of ongoing ISA revisions.

      Unimplemented instruction opcodes cause a CPU trap, and the missing instruction(s) are then emulated in the kernel's emulation layer.

      In fact, this is what was frequently done for «budget» 80[34]86 systems that lacked the FPU – it was emulated. It was slow as a dog but worked.
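
      A toy user-space sketch of that trap-and-emulate idea (Linux/x86-64 with glibc; the real thing lives in the kernel's illegal-instruction handler): execute an opcode the "CPU" doesn't implement, catch the trap, fake the result, and resume past the instruction.

          /* Needs _GNU_SOURCE for REG_RIP; build with gcc on Linux/x86-64. */
          #define _GNU_SOURCE
          #include <signal.h>
          #include <stdio.h>
          #include <ucontext.h>

          static volatile sig_atomic_t emulated_result;

          static void on_sigill(int sig, siginfo_t *info, void *ctx) {
              (void)sig; (void)info;
              ucontext_t *uc = ctx;
              emulated_result = 42;                 /* "emulate" the missing instruction */
              uc->uc_mcontext.gregs[REG_RIP] += 2;  /* skip the 2-byte ud2 and resume */
          }

          int main(void) {
              struct sigaction sa = {0};
              sa.sa_sigaction = on_sigill;
              sa.sa_flags = SA_SIGINFO;
              sigaction(SIGILL, &sa, NULL);

              __asm__ volatile("ud2");              /* stand-in for an unimplemented opcode */
              printf("emulated result: %d\n", (int)emulated_result);
              return 0;
          }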

  • tester756 a day ago

    >I just wish AMD and Intel could work together and make a cleaned-up subset of the x86 ISA

    AMD and Intel Celebrate First Anniversary of x86 Ecosystem Advisory Group Driving the Future of x86 Computing

    Standardizing x86 features

    Key technical milestones include:

        FRED (Flexible Return and Event Delivery): Finalized as a standard feature, FRED introduces a modernized interrupt model designed to reduce latency and improve system software reliability.
        AVX10: Established as the next-generation vector and general-purpose instruction set extension, AVX10 boosts throughput while ensuring portability across client, workstation, and server CPUs.
        ChkTag: x86 Memory Tagging: To combat longstanding memory safety vulnerabilities such as buffer overflows and use-after-free errors, the EAG introduced ChkTag, a unified memory tagging specification. ChkTag adds hardware instructions to detect violations, helping secure applications, operating systems, hypervisors, and firmware. With compiler and tooling support, developers gain fine-grained control without compromising performance. Notably, ChkTag-enabled software remains compatible with processors lacking hardware support, simplifying deployment and complementing existing security features like shadow stack and confidential computing. The full ChkTag specification is expected later this year – and for further feature details, please visit the ChkTag Blog.
        ACE (Advanced Matrix Extensions for Matrix Multiplication): Accepted and implemented across the stack, ACE standardizes matrix multiplication capabilities, enabling seamless developer experiences across devices ranging from laptops to data center servers.

  • fulafel a day ago

    90s x86, from an ISA PoV, is already free to use, no? The original patents must have expired and there's no copyright protection for ISAs. The thing keeping the symbiotic cross-licensed duopoly going is mutating the ISA all the time so they can mix in more recently patented stuff.

    • tracker1 a day ago

      AFAIK, most of even the x86_64 patents are largely expired, or will be within the next 6 years. That said, efforts for a more open platform are probably more likely to be centered around RISC-V or another ARM alternative than x86... though I could see a standardization of x86-compatible shortcuts for use with emulation platforms on ARM/RISC-V processors. Transmeta was an idea too far ahead of its time.

      • fulafel a day ago

        Remembering the Mac ARM transition pain wrt Docker and Node/Python/Lambda cross builds targeting servers, there's a lot to be said for binary compatibility.

        • tracker1 a day ago

          You're doing builds for Docker on your desktop for direct deployment instead of through a CI/CD service?

          My biggest issue was the number of broken apps in Docker on Arm based Macs, and even then was mostly able to work around it without much trouble.

          • fulafel a day ago

            You want to be able to replicate the build in your local dev env. And you're not always working on a mature project; you first get it working locally. CI/CD tends to be slow and hard to debug.

            • fweimer a day ago

              Sure, but why does the developer environment have to be the same architecture as in production? Think of it as ahead-of-time binary translation if you want to.

              These days, even fairly low-level system software is surprisingly portable. Entire GNU/Linux distributions are developed this way, for the majority of architectures they support.

        • cmrdporcupine a day ago

          90% of those problems affect people like you and me, developers and power users, not "regular" users of machines, who are mostly mobile device and occasional laptop/desktop application users.

          I suspect we'll see somebody -- a phone manufacturer or similar device maker -- make a major transition to RISC-V from ARM etc. in the next 10 years that we won't even notice.

          • fulafel a day ago

            I agree, some will, but it may not be a more open platform from developer POV.

    • fweimer a day ago

      I don't think it works that way in practice.

      Some distributions like Debian or Fedora will make newer features (such as AVX/VEX) mandatory only after the patents expire, if ever. So a new entrant could implement the original x86-64 ISA (maybe with some obvious extensions like 128-bit atomics) in that time frame and preempt the patent-based lockout due to ISA evolution. If there was a viable AMD/Intel alternative that only implements the baseline ISA, those distributions would never switch away from it.

      It's just not easy to build high-performance CPUs, regardless of ISA.

  • lloydatkinson a day ago

    They recently killed off their latest attempt at that: X86S.

    • userbinator 21 hours ago

      The "s" stands for "stupid".

      But it's fortunate that they realised the main attraction of x86 is backwards compatibility, so attempting to do away with that would lead to even less market share.

    • tester756 a day ago

      Wut?

      >AMD and Intel Celebrate First Anniversary of x86 Ecosystem Advisory Group Driving the Future of x86 Computing

      Oct 13, 2025

      Standardizing x86 features

      Key technical milestones include:

          FRED (Flexible Return and Event Delivery): Finalized as a standard feature, FRED introduces a modernized interrupt model designed to reduce latency and improve system software reliability.
      
          AVX10: Established as the next-generation vector and general-purpose instruction set extension, AVX10 boosts throughput while ensuring portability across client, workstation, and server CPUs.
      
          ChkTag: x86 Memory Tagging: To combat longstanding memory safety vulnerabilities such as buffer overflows and use-after-free errors, the EAG introduced ChkTag, a unified memory tagging specification. ChkTag adds hardware instructions to detect violations, helping secure applications, operating systems, hypervisors, and firmware. With compiler and tooling support, developers gain fine-grained control without compromising performance. Notably, ChkTag-enabled software remains compatible with processors lacking hardware support, simplifying deployment and complementing existing security features like shadow stack and confidential computing. The full ChkTag specification is expected later this year – and for further feature details, please visit the ChkTag Blog.
      
          ACE (Advanced Matrix Extensions for Matrix Multiplication): Accepted and implemented across the stack, ACE standardizes matrix multiplication capabilities, enabling seamless developer experiences across devices ranging from laptops to data center servers.

  • IshKebab a day ago

    Far too late for that. Does anyone seriously think ARM isn't going to obliterate x86 in the next 10-20 years?

    • Keyframe a day ago

      In which space? Desktop and high performance servers? Why would it?

      The mature body of software that would have to be ported from TSO to a weak memory model is a soft moat. So is AVX/SIMD's mature dominance vs NEON/SVE. x86-64 is a duopoly and a stable target vs the fragmented landscape of ARM. ARM's whole spiel is performance per watt, a scale-out type of thing vs scale-up. In that sense the market has kind of already moved. With ARM, if you start pushing for a sustained high-throughput, high-performance, 5GHz+ envelope, all the advantages are gone in favor of x86 so far.

      What might be interesting is if, say, AMD added an ARM frontend decoder to Zen. In one of Jim Keller's interviews that was shared here, he said it wouldn't be that big of a deal to make such a CPU decode ARM. That'd be interesting to see.

      • philistine a day ago

        > In which space? Desktop and high performance servers? Why would it?

        Laptops. Apple already owned the high margin laptop market before they switched to ARM. With phones, tablets, laptops above 1k, and all the other doodads all running ARM, it's not that x86 will simply disappear. Of course not. But the investments simply aren't comparable anymore with ARM being an order of magnitude more common. x86 is very slowly losing steam, with their chips generally behind in terms of performance per watt. And it's not because of any specific problem or mistake. It's just that it no longer makes economic sense.

    • tracker1 a day ago

      Well, given some of the political/legal gamesmanship over the company itself the past few years, it could very well self destruct in favor of RISC-V or something else entirely in the next decade, who knows.

    • fulafel a day ago

      Look how long SPARC, z/Architecture, PowerPC etc. have kept going even after they lost their strong positions in the market (a development which is nowhere in sight for x86), and they had a tiny fraction of the inertia of the x86 software base.

      Obliterating x86 in that time would take quite a lot more than what the ARM trajectory is now. It's had 40 years to try by now and the technical advantage window (the power efficiency advantage) has closed.

      • IshKebab 20 hours ago

        To be clear, if x86 is ever as unpopular as SPARC or PowerPC, I would consider that to be obliterated.

        I was thinking more like it falling to 10% of desktop/laptop/server market share, which is still waaaaaay more than the nearly-dead architectures you listed.

    • fweimer a day ago

      It seems to me that interest in AArch64 for on-premise general-purpose compute workloads has largely waned. Are Dell/HPE/Lenovo currently selling AArch64 servers? Maybe there is a rack-mounted Nvidia DGX variant, but that's more focused on GPU compute for sure.

    • tester756 a day ago

      >Does anyone seriously think ARM isn't going to obliterate x86 in the next 10-20 years?

      Lunar Lake shows that x86 is capable of getting that energy efficiency

      Panther Lake, which will be released in around 30 days, is expected to show a significant improvement over Lunar Lake.

      So... why switch to ARM if you will get similar perf/energy eff?

    • izacus a day ago

      20 years is half of x86's lifetime and less than half of the lifetime of home computing as we know it.

      So this is kind of a useless question, because in such a timespan anything can happen. 20 years ago computers had somewhere around 512MB of RAM and a single core and had a CRT on desk.

    • zzzoom a day ago

      Why would the market jump from one proprietary ISA to another proprietary ISA?

      • IshKebab a day ago

        Ask Apple.

        • zzzoom a day ago

          Apple's A4 was launched 4 years before RISC-V.

          • IshKebab 19 hours ago

            So what? Are you suggesting that Apple would have switched to RISC-V?

            I like RISC-V (it's my job and I'm very involved in the community) but even now it isn't ready for laptops/desktop class applications. RVA23 is really the first profile that comes close and that was only ratified very recently. But beyond that there are a load of other things that are very much work in progress around the periphery that you need on a laptop. ACPI, UEFI, etc. If you know RISC-V, what does mconfigptr point to? Nothing yet!

            Anyway the question was why would anyone switch from one proprietary ISA to another, as if nobody would - despite the very obvious proof that yes they absolutely would.

bhasi a day ago

“Behind every successful fortune there is a crime.”

- Mario Puzo, The Godfather

startupsfail a day ago

Seems like an interesting story. Ashawna was about 25 at the time and, as per Wikipedia, had already worked on military projects - the Sprint Missile System - and was at Xerox.

> The processor was reverse-engineered by Ashawna Hailey, Kim Hailey and Jay Kumar. The Haileys photographed a pre-production sample Intel 8080 on their last day in Xerox, and developed a schematic and logic diagrams from the ~400 images.

mixxit 19 hours ago

Gordon should have stuck to the symphonic

B1FF_PSUVM a day ago

AMD was already in the CPU market with bit-slice LSI chips, the Am2900 set of chips: https://en.wikipedia.org/wiki/AMD_Am2900

Those worked in 4-bit slices, and you could use them as LEGO blocks to build your own design (e.g. 8, 12 or 16 bits) with far fewer parts than using standard TTL gates (or ECL NANDs, if you were Seymour Cray).
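
A rough sketch of the bit-slice idea in C (concept only; a real Am2901 slice also had its own register file, shifter and a microcode-selected set of ALU functions): one 4-bit slice with carry-in/carry-out, cascaded four times to make a 16-bit adder.

    #include <stdint.h>
    #include <stdio.h>

    /* One 4-bit slice: add two nibbles plus carry-in, return the 4-bit sum,
       pass carry-out to the next slice up the chain. */
    static uint8_t slice_add4(uint8_t a, uint8_t b, int cin, int *cout) {
        unsigned s = (a & 0xF) + (b & 0xF) + (cin ? 1 : 0);
        *cout = (s >> 4) & 1;
        return (uint8_t)(s & 0xF);
    }

    /* A 16-bit adder built from four cascaded slices, carry rippling upward. */
    static uint16_t add16(uint16_t a, uint16_t b) {
        uint16_t sum = 0;
        int carry = 0;
        for (int i = 0; i < 4; i++) {
            uint8_t nib = slice_add4((a >> (4 * i)) & 0xF, (b >> (4 * i)) & 0xF,
                                     carry, &carry);
            sum |= (uint16_t)nib << (4 * i);
        }
        return sum;
    }

    int main(void) {
        printf("0x1234 + 0x0FCD = 0x%04X\n", add16(0x1234, 0x0FCD));  /* 0x2201 */
        return 0;
    }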

The 1980 Mick & Brick book Bit-slice Microprocessor Design later gathered together some "application notes" - the cookbooks/crib sheets that semiconductor companies wrote and provided to get buyers/engineers started after the spec sheets.

  • adrian_b 2 hours ago

    Intel launched in 1974 both the NMOS Intel 8080 and a bipolar bit-slice processor family (Intel 3000).

    AMD introduced in 1975 both its NMOS 8080 clone and the bipolar bit-slice 2900 family.

    I do not know which of these 2 AMD products was launched earlier, but in any case there was at most only a few months' difference between them, so it cannot be said that AMD "was already in the CPU market". The launch of both products was prepared at a time when AMD was not yet in the CPU market, and Intel had been earlier than AMD both in the NMOS CPU market and in the market for sets of bipolar bit-slice components.

    While the Intel 8080 was copied by AMD, the AMD 2900 family was much better than the Intel 3000 family, so it was used in a lot of PDP-11 clones and competitors.

    For example, the registers+ALU component of Intel 3000 implemented only a 2-bit slice and few ALU operations, while the registers+ALU component of AMD 2900 implemented a 4-bit slice and also many more ALU operations.

htrp a day ago

> Am9080 variants ran at up to 4.0 MHz

Definitely read that wrong the first time I skimmed the article

ck2 a day ago

Ah, I remember those amazing days.

Instant 20% speed boost replacing the IBM PC's 8088 with the V20 chip.

Bought a sleeve of them cheap and went around to all the PCs and swapped them in.

Only problem was software that relied on CPU speed for timing ran too fast.

FridayoLeary a day ago

>In 1975, AMD could make these processors for 50 cents and sell them for $700.

Apparently by ripping off their military customers.

>says Wikipedia.

Why is that a primary source?
