FPGAs Need a New Future


FPGAs have been around for decades, but have never truly gotten their due. They’ve long been seen as powerful yet obscure; full of potential but not worth the hassle.

And here’s the irony: the people who misunderstand FPGAs the most are often the ones who make them.

A Legacy Problem

Altera and Lattice Semiconductor were both founded in 1983; Xilinx followed a year later. These three companies have defined the FPGA landscape ever since. But while they make highly capable chips, they also make and control the software required to program them. And that software still runs on a 1970s-era mindset. (That’s not a good thing.)

Think of the old IBM model: you bought the chip, licensed the programming software, purchased proprietary hardware to load the code, and maybe even sent engineers to a weeklong training seminar to learn how to use it. That business model died everywhere else in tech, but somehow, it never quite died in the FPGA world.

What FPGAs Actually Are

To understand how we got here, we need to revisit what an FPGA actually is.

Both processors and FPGAs have clocks. In a typical processor, we write code that executes instructions sequentially: one operation per clock tick.

An FPGA, by contrast, defines data pathways specifying how signals change on each clock tick based on internal states and external inputs. In essence, we describe the circuit's global behavior on every clock cycle rather than a single data operation per step.

The fundamental nature of both devices is simple, and while processor internals can get extremely complicated, FPGA internals, for the most part, do not. An FPGA is essentially a homogeneous fabric of logic blocks.
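To make the contrast concrete, here is a minimal sketch in plain Python (not real HDL, and `tick` is a name invented for illustration) of the FPGA mental model: on every clock tick, every register updates simultaneously from the previous state and the inputs, rather than one instruction executing at a time.

```python
# Toy model of clocked FPGA behavior: each call to tick() is one clock edge,
# and ALL registers compute their next value from the *current* state at once.

def tick(state, inputs):
    """Return the next state of every register after one clock tick."""
    return {
        "count": (state["count"] + 1) % 16,                       # free-running 4-bit counter
        "shift": ((state["shift"] << 1) | inputs["din"]) & 0xFF,  # 8-bit shift register
    }

state = {"count": 0, "shift": 0}
for din in [1, 0, 1, 1]:              # four clock ticks, serial input 1,0,1,1
    state = tick(state, {"din": din})

print(state)  # {'count': 4, 'shift': 11}  (0b1011 shifted in MSB-first)
```

A processor would perform these updates one after another; the fabric performs them all in parallel on every edge, which is exactly what an HDL description expresses.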

A simplified FPGA architecture. Image used courtesy of Intel

Since these devices operate in very different ways, we cannot use traditional languages to program FPGAs (even though many have tried). Instead, we use hardware description languages, typically VHDL or Verilog, and this work is rarely handled by the software team.

The Great Misconception(s)

Here’s the first big misconception: HDL is hardware. It isn’t. HDL is software and should be managed like software.

Unfortunately, VHDL and Verilog are relics. Both originated in the 1980s and were modeled after programming languages from the 1960s and ’70s. Both are awful, but engineers keep using them because the toolchains are closed, the vendors are unmotivated, and no one has enough leverage to force a change.

VHDL and Verilog have been in use for about four decades. Image used courtesy of Adobe Stock

Why hasn’t anyone created a better language? Locked-down toolchains. No internal support from the semiconductor companies. No external pressure forcing change. Further, companies found it easier to make FPGA development the hardware team’s problem, because software engineers would not tolerate bad languages compiled in mediocre, proprietary IDEs, with workflows completely out of touch with modern software development.

The result: engineers are stuck using outdated languages inside proprietary IDEs that feel like time capsules from another century. Development becomes painful, and companies turn to FPGAs only when they absolutely must, typically for very complicated tasks. So FPGA companies develop products assuming very complicated tasks, which in turn reinforces the perception that FPGA work is always slow, complicated, and expensive. The cycle feeds itself.

It Doesn’t Need to Be This Way

That perception is wrong. HDL can be painful, but for simple designs, it’s often faster than traditional coding. Writing a SPI interface in HDL, for instance, is simpler than coding against a SPI peripheral on a microcontroller.
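The SPI claim is easy to see once you notice how little per-clock logic is involved. Below is a toy Python model (not HDL and not vendor code; `spi_exchange` is a name invented for illustration) of the behavior an HDL SPI core describes: on each clock, shift one bit out on MOSI and sample one bit in from MISO.

```python
# Behavioral model of a full-duplex 8-bit SPI exchange, MSB first.
# Each loop iteration corresponds to one SPI clock.

def spi_exchange(master_tx, slave_tx):
    """Exchange one byte between master and slave, one bit per clock."""
    master_rx = slave_rx = 0
    for i in range(8):                       # eight clock ticks
        mosi = (master_tx >> (7 - i)) & 1    # master drives its next MSB
        miso = (slave_tx >> (7 - i)) & 1     # slave drives its next MSB
        slave_rx = (slave_rx << 1) | mosi    # slave samples MOSI
        master_rx = (master_rx << 1) | miso  # master samples MISO
    return master_rx, slave_rx

print(spi_exchange(0xA5, 0x3C))  # (60, 165): each side received the other's byte
```

In HDL, that entire loop body is just two shift registers and a bit counter; on a microcontroller, the same job means configuring a peripheral's registers, clock dividers, and interrupts before a single bit moves.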

I once hoped things would improve when Xilinx launched the Zynq line, combining a processor with FPGA fabric. Instead, the accompanying tools were so unusable that they made things worse, pushing developers even further away.

In real applications, FPGAs are almost always paired with a processor of some type: sometimes integrated, as in the Zynq, and other times a separate small MCU. FPGAs are an excellent way to bridge the gap between the processor and the physical world. Yet rather than treating the software and HDL as the single application they really form, we split the work between different groups of people. This is extremely inefficient.

How We Got Stuck

In 2015, Intel acquired Altera; a few years later, AMD acquired Xilinx. The strategic bet was on data centers: FPGAs paired with CPUs to accelerate specific workloads like encryption. It made sense on paper, but never gained traction in practice.

Now, a decade later, FPGAs in data centers are irrelevant. AI has made GPUs the hardware of choice, and both Intel and AMD have shifted focus. Prices for FPGA hardware have climbed while innovation has stagnated. Intel is divesting its Altera business, and AMD is steering Xilinx toward AI-related chips.

Which leaves one real independent player: Lattice Semiconductor.

The Opportunity for Lattice

Lattice has a rare opportunity to lead, not by outengineering its larger competitors, but by modernizing its ecosystem.

There are already open-source toolchains, such as Project IceStorm and the Yosys/nextpnr flow, built by enthusiasts who have reverse-engineered Lattice devices, and those tools are often easier to use than the official ones. Even if the vendor offered nothing beyond full documentation of its parts, these projects could evolve into first-class development environments.

The Lattice Semiconductor Avant-X Versa FPGA prototyping board. Image used courtesy of Lattice Semiconductor

If Lattice embraces open source, it could unlock a new era for FPGAs. Imagine modern languages, smart IDE integrations, and accessible workflows that make FPGA development a natural part of embedded systems engineering rather than a niche discipline.

The FPGA Path Forward

Proprietary toolchains have been holding FPGAs back for decades. Opening them up would allow the community to innovate, build better languages, and create the kind of user experience that software engineers expect in 2025 and beyond.

Chasing AI and data centers did not work for the technology. The path forward is realizing the original promise of FPGAs: flexible, efficient hardware that anyone can program. The technology is ready. The only thing missing is an ecosystem that finally sets it free.

The views expressed in this article are those of the author and do not necessarily reflect the views of All About Circuits, EETech, or its affiliates.