Fairylog: A Racket language aiming to be like Verilog
(pinksquirrellabs.com)

Both Verilog and VHDL are a poor fit for chip design, which is why there are so many projects for higher-level hardware description languages. Chisel[1] looks to me like the best one. What I particularly like is that they created an intermediate language, FIRRTL[2], which might become the LLVM of hardware design. While the exact syntax might be opinionated and we need different options, the fewer wheels reinvented, the better. So at some point it is important to create a low-level framework on top of which many different languages can be built. You can read a discussion about this particular idea here[3].
[1] https://chisel.eecs.berkeley.edu/
I prefer vhdlisp[0] tbh, but I prefer vhdl over verilog anyway.
Isn't there a Python-based dialect for that already? I think it was named cirrus, but I can't find any link to it.
Anyway, it seems to me it would be better to make a lib for this than a whole language.
> Isn't there a Python based dialect for that already?
There's Migen (which is like Fairylog - a preprocessor/macro system for RTL) and MyHDL (which lets you write Verilog using a subset of 'synthesizable' Python).
> Anyway, it seems to me it would be better to make a lib for this than a whole language.
This is a Racket library. The whole point of Racket is that you can make a language as a library.
I am a strong opponent of most DSLs. They usually make life harder, not easier, and a regular lib is a better choice most of the time.
> Isn't there a Python-based dialect for that already? ... Anyway, it seems to me it would be better to make a lib for this than a whole language.
FPGA programmers are quite a different breed from Python developers. Things that are simple in programming languages (e.g., multiplication) can create real issues in an FPGA. I'm normally all for simplifying and unifying semantics, but with hardware programming, a split is justified.
Libs can track what a Verilog integration system can check, and do that at a higher level. Qualcomm doesn't write all its Verilog or SystemC by hand; they have software generating it.
What are they using as the source then? Please tell us more.
I know that some Qualcomm tools use IP described in a Python dialect and then generate the Verilog out of it. They have a whole synthesis system on top of that. I assume they are not alone.
I'm not sure I see the point: it turns a language that looks a lot like Verilog, but with lots more parentheses, into Verilog. Why not just write Verilog in the first place?
The author's stated motivations sounded plausible to me:
> Fairylog is a Racket language (of course) which aims to be quite like Verilog, with less redundant syntax, Racket macros, and several additional compile-time features that Verilog seems to be lacking.
> Fairylog extends Racket rather than replaces it, which means you can use all of Racket wherever you like in Fairylog code to help you generate stuff.
I think there might be an additional benefit of this for implementers of languages atop Racket: they can transform/generate syntax objects (essentially, ASTs with source location info) and use this language as a backend pretty easily. I suppose this might be good for rapid experimentation when developing/compiling for CPU+FPGA targets.
It's Verilog with Racket macros! That sounds much better than Verilog with a custom macro system cobbled together with Perl, which is what is used in practice.
Perl? A real comment I've seen in demo code from a vendor:
always @(X) begin // add code from "PRBS_Calculator.xls" here...
It seems that it has been done because it was possible at the scope that exists so far. However, there are a lot of obstacles on the way forward...
I have a lot of experience in writing Verilog for ASICs and I also have significant experience in software design. On top of this I have played with similar ideas in the past and implemented all kinds of stuff related to this.
The bottom line is that it is very difficult to get this to scale. When you develop RTL, you have to be able to simulate it. There is no simulator (yet).
Using Racket as a code generator is OK, but then there should be proper error checking for RTL semantics. This would be easier to achieve by just parsing some LISP-style RTL (invent your own syntax) and generating Verilog out of that. That kind of parsing and checking can be done in Racket or C (or whatever language you prefer). Verilog-95 is quite redundant in syntax, but the later Verilog versions and SystemVerilog are very sensible, so there is actually very little you can gain by adding one layer of complexity on top of Verilog. If we assume that you could somehow raise the abstraction level significantly above Verilog, then the problem becomes: how do you annotate Verilog errors back to the "high-level" Racket code? The semantic gap is now too large, and you lose controllability.
It is possible to do the simulator with Racket as well, but the performance of such a simulator will be poor (yes, I have done simulators as well). Since the late 1990s RTL simulation has been done with "native compilation". The original approach was interpretation, which is good when a fast simulator start is preferred. However, when you run long simulations, you really need to break the RTL into pieces and map everything straight to the CPU instruction set. If you wanted fast RTL simulation, you would have to be able to explicitly control certain aspects. First of all, you should be able to manipulate low-level data effectively. That is reasonable in C with bit shifting and masking. Not so much in Racket, although it uses a JIT, so it's probably not hopeless.

HW is fixed in nature: all resources in your RTL model are going to exist for the duration of the whole simulation. In C you can allocate everything in a nice order and you will get good CPU cache behavior in your simulation. In Racket you have GC (garbage collection) and random placement of data in memory. The cache utilization will be poor, and the simulation would be 100x slower. Even if you keep references to all the data in Racket all the time, the GC still has to check whether that data should be collected or not. Most of the data is HW and it is static, so this is all redundant work. There would also be much more data in a Racket implementation than in a C one.
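To make the "bit shifting and masking" point concrete, here is a minimal C sketch of the kind of data layout being described: all flip-flop state packed into one statically allocated array of 64-bit words, accessed with shifts and masks. The names and sizes are invented purely for illustration, not taken from any real simulator.

    /* Sketch: flip-flop state packed into static 64-bit words.      */
    #include <stdint.h>

    #define NUM_FLOPS  (1u << 20)               /* hypothetical design size */
    #define NUM_WORDS  ((NUM_FLOPS + 63) / 64)

    /* One flat, statically allocated array: everything lives at a
     * fixed address for the whole simulation, which is friendly to
     * the CPU cache and needs no GC.                                 */
    static uint64_t flop_state[NUM_WORDS];

    static inline unsigned get_flop(uint32_t idx)
    {
        return (unsigned)((flop_state[idx >> 6] >> (idx & 63)) & 1u);
    }

    static inline void set_flop(uint32_t idx, unsigned value)
    {
        uint64_t mask = 1ull << (idx & 63);
        if (value)
            flop_state[idx >> 6] |= mask;
        else
            flop_state[idx >> 6] &= ~mask;
    }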
Then there is the issue of concurrency. HW is parallel and SW is serial, so SW must model the parallel nature in simulation. There are maybe 10,000M transistors in a modern CPU, which translates (very, very roughly) to 100M flip-flops. In SW terms you have 100M threads (minimum, in practice more). That is not possible with OS threads, and you can use green threads to cover that number, but the performance is useless. In the end you have to do it "manually" by using functions and modeling the concurrency at the data level: for each flip-flop you maintain a new-input value and update the current-output value when the HW clock ticks. This is the basic method for RTL-level simulation. Netlist simulation is much more detailed and hence much slower. RTL simulation has some more complications, but it's not that far from what is said above.
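A minimal C sketch of that two-phase update, assuming a made-up two-bit counter as the "design": combinational logic computes each flip-flop's next value from the current outputs, and the clock tick copies next to current for every flip-flop at once.

    /* Sketch: two-phase flip-flop update for a 2-bit counter.        */
    #include <stdint.h>
    #include <stdio.h>

    static uint8_t q_cur[2];   /* current output value of each flop   */
    static uint8_t q_next[2];  /* new input value, latched on the tick */

    static void eval_comb(void)      /* combinational logic: count up  */
    {
        q_next[0] = q_cur[0] ^ 1u;
        q_next[1] = q_cur[1] ^ q_cur[0];
    }

    static void clock_tick(void)     /* all flops update "at once"     */
    {
        q_cur[0] = q_next[0];
        q_cur[1] = q_next[1];
    }

    int main(void)
    {
        for (int cycle = 0; cycle < 8; cycle++) {
            eval_comb();
            clock_tick();
            printf("cycle %d: %u%u\n", cycle, q_cur[1], q_cur[0]);
        }
        return 0;
    }

A real RTL simulator does the same thing for millions of flip-flops per clock edge, which is why the data layout and cache behavior above matter so much.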
OK, hobby projects do not have 10,000M transistors, but they still have to be debugged. You can use "printf" debugging with HW as well, but at some point you need waves. Waves display bit-level values on a timeline; time is essential since HW is parallel. You can get to waves with Racket, but you have to instrument each data point that you want visible with value-dumping features. Again, it is possible with metaprogramming, but it really makes your head hurt and makes the simulation slooooooowww.
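For what it's worth, the value dumping itself is conceptually simple. A rough C sketch, with invented signal names and a plain text format standing in for a real VCD file: after every tick, compare each watched signal with its last dumped value and emit a "time signal value" line only on change.

    /* Sketch: dump-on-change value tracing for a few watched signals. */
    #include <stdint.h>
    #include <stdio.h>

    #define NUM_SIGNALS 4                    /* invented for the example */

    static uint32_t cur_val[NUM_SIGNALS];    /* live signal values       */
    static uint32_t last_dump[NUM_SIGNALS];  /* last value written out   */
    static const char *names[NUM_SIGNALS] = { "clk", "rst", "count", "done" };

    static void dump_changes(FILE *wave, uint64_t sim_time)
    {
        for (int i = 0; i < NUM_SIGNALS; i++) {
            if (cur_val[i] != last_dump[i]) {
                fprintf(wave, "%llu %s %u\n",
                        (unsigned long long)sim_time, names[i], cur_val[i]);
                last_dump[i] = cur_val[i];
            }
        }
    }

    int main(void)
    {
        /* Pretend two clock cycles happened and some signals changed. */
        dump_changes(stdout, 0);
        cur_val[2] = 1;                      /* count: 0 -> 1 */
        dump_changes(stdout, 10);
        cur_val[0] = 1;                      /* clk rises     */
        cur_val[2] = 2;                      /* count: 1 -> 2 */
        dump_changes(stdout, 20);
        return 0;
    }

The hard part is not the dumping itself but instrumenting every signal you care about without killing simulation speed, which is exactly the cost described above.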
With all that said, best of luck going forward. :)