To kick off my time at the Recurse Center, I decided to build a modular softsynth in Rust to expand my knowledge of Rust and DSP, as well as to build something I and others could use to make music. I’m inspired by an old modular softsynth I used to use called SpiralSynthModular. Unfortunately, that software is dated, no longer maintained, and doesn’t compile easily in a modern environment.
SpiralSynthModular, an old modular softsynth that left an impression on me
The Synth Engine
To begin implementing a modular synthesizer, I started by implementing the concept of modules.
In the physical world, a module is an electronic circuit with voltage inputs and outputs. Each module does one thing and one thing well. For example, one module could generate a tone whose pitch is determined by an input voltage. Another module could play a sequence of voltages into this “oscillator” to create different tones over time. Yet another module could take the output of the oscillator and filter it to produce sound of different timbre. And another module could prepare the signal for connection with the correct voltage levels to an amplifier and speaker.
A real, physical, Doepfer A-100 modular synthesizer. (Image credit)
In software, I decided to make each type of synth module its own struct, but with a SynthModule trait common across all kinds of modules, providing a common interface for connecting modules together, executing the signal processing defined inside them, and fetching output buffers from them.
use std::sync::{Arc, RwLock};

pub trait SynthModule {
    // Required methods
    fn get_name(&self) -> String;
    fn get_id(&self) -> String;

    // Run this module's signal processing for the current cycle.
    fn calc(&mut self);

    fn get_num_inputs(&self) -> u8;
    fn get_num_outputs(&self) -> u8;

    // Which module (and which of its output ports) feeds the given input?
    fn get_input(
        &self,
        input_idx: u8,
    ) -> Result<Option<(Arc<RwLock<dyn SynthModule + Send + Sync>>, u8)>, ()>;

    fn get_inputs(
        &self,
    ) -> Vec<Option<(Arc<RwLock<dyn SynthModule + Send + Sync>>, u8)>>;

    // The output buffer most recently produced by calc() for the given port.
    fn get_output(&self, output_idx: u8) -> Result<&[f32], ()>;

    // Connect src_module's src_port output to this module's input_idx input.
    fn set_input(
        &mut self,
        input_idx: u8,
        src_module: Arc<RwLock<dyn SynthModule + Send + Sync>>,
        src_port: u8,
    ) -> Result<(), ()>;
}
Arc<RwLock<dyn SynthModule + Send + Sync>> is a shared, lock-protected handle to a SynthModule that lets the workspace reference all of the modules and lets connected modules reference each other, even across threads.
Once the SynthModules are connected to each other, with each input referencing the shared SynthModule it is connected to, we need to build an execution order from the graph. I took a naive stab at this just to get audio flowing. At the very least, it ensures that every module reachable from the output executes exactly once per cycle, initiated by a request for a buffer from the audio library. This allows cycles in the graph, with the tradeoff that a delay is introduced wherever a cycle is formed. At least that’s the theory, anyway. I’m not sure my current implementation of the execution planner is optimal, and I’ll probably have to revisit it later.
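To make that concrete, here is a rough sketch of the kind of depth-first planner I mean – not my exact code, and the names plan_execution and visit are placeholders for this post. It walks the graph backwards from the output module, visiting each module id once, so a feedback loop simply stops the recursion and the module at the feedback point reads its upstream buffer from the previous cycle.

use std::collections::HashSet;
use std::sync::{Arc, RwLock};

// Collect an execution order by walking upstream from the output module.
// Returns the modules with upstream modules first and the output module last.
fn plan_execution(
    output: Arc<RwLock<dyn SynthModule + Send + Sync>>,
) -> Vec<Arc<RwLock<dyn SynthModule + Send + Sync>>> {
    let mut visited = HashSet::new();
    let mut order = Vec::new();
    visit(output, &mut visited, &mut order);
    order
}

fn visit(
    module: Arc<RwLock<dyn SynthModule + Send + Sync>>,
    visited: &mut HashSet<String>,
    order: &mut Vec<Arc<RwLock<dyn SynthModule + Send + Sync>>>,
) {
    // Stop at modules we have already planned; this is where a cycle breaks,
    // introducing one buffer of delay at the feedback point.
    let id = module.read().unwrap().get_id();
    if !visited.insert(id) {
        return;
    }
    // Snapshot this module's inputs, releasing its lock before recursing,
    // and plan every connected input before the module itself.
    let inputs = module.read().unwrap().get_inputs();
    for (src_module, _src_port) in inputs.into_iter().flatten() {
        visit(src_module, visited, order);
    }
    order.push(module);
}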
A few days in, I had an implementation of an Oscillator module – limited to sine wave output – and an Output module which fed the audio library. I was able to hard-code a graph where a low-frequency oscillator drives another oscillator, which drives the output.
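Setting up that hard-coded graph looks roughly like the sketch below. Only set_input comes from the trait above; the constructors, arguments, and port numbers are made up for illustration.

use std::sync::{Arc, RwLock};

// Hypothetical wiring of the hard-coded test graph: a slow oscillator
// modulating an audible oscillator, which feeds the output module.
fn build_test_graph() -> Arc<RwLock<dyn SynthModule + Send + Sync>> {
    let lfo: Arc<RwLock<dyn SynthModule + Send + Sync>> =
        Arc::new(RwLock::new(Oscillator::new(2.0))); // slow oscillator acting as an LFO
    let osc: Arc<RwLock<dyn SynthModule + Send + Sync>> =
        Arc::new(RwLock::new(Oscillator::new(440.0))); // audible oscillator
    let output: Arc<RwLock<dyn SynthModule + Send + Sync>> =
        Arc::new(RwLock::new(Output::new()));

    // LFO output 0 -> oscillator input 0 (pitch); oscillator output 0 -> output input 0.
    osc.write().unwrap().set_input(0, Arc::clone(&lfo), 0).unwrap();
    output.write().unwrap().set_input(0, Arc::clone(&osc), 0).unwrap();
    output
}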
The GUI
I spent a couple of days researching which GUI library would be the best fit and trying to integrate candidates into the app. Most UI libraries are designed around arranging widgets within a layout. I’d like to do something quite different: have blocks of UI that can be dragged around a workspace and connected to each other to form a network.
The best fit I found for this requirement is egui. egui is an “immediate-mode” GUI library. This means that on every display update, the whole UI is redrawn, and the code that reacts to user input lives alongside the code that draws the UI. This greatly simplifies the problem of having a unique user interface for each type of module. We just need to define a .ui method on the SynthModule trait that accepts a mutable reference to egui’s Ui instance once we have set up the area in which to display the module’s interface. Because egui at its core accepts collections of textured triangles as primitives, drawing custom UI elements like connections between modules shouldn’t be that hard. And luckily for my use case, there is already a demo of pan and scroll with draggable widgets I can borrow code from.
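As a taste of the immediate-mode style, here is roughly what the body of an Oscillator’s ui method might contain; the octave field and its range are assumptions for this sketch, not the module’s actual state.

use egui::{Slider, Ui};

// Roughly what an Oscillator's `ui` method body could look like.
fn oscillator_ui(octave: &mut i32, ui: &mut Ui) {
    ui.label("Oscillator");
    // Immediate mode: drawing the slider and applying the user's change to
    // the module's state happen in the same call.
    ui.add(Slider::new(octave, -4..=4).text("Octave"));
}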

The audio thread executes the audio pipeline whenever the audio library requests a new buffer, locking each module in turn to run its calculation code. Concurrently, the UI thread locks each module while it is drawn on screen, applying any changes the user requests to the module’s internal state.
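From the audio side, that might look like the sketch below, reusing the planner sketch from earlier; fill_buffer is an invented name, and the real callback signature depends on the audio library.

use std::sync::{Arc, RwLock};

// Hypothetical audio-callback body, assuming `plan` is the Vec produced by
// plan_execution above (upstream modules first, Output module last). Each
// module is write-locked only while its own calc() runs, so the UI thread
// can interleave its own short locks between modules.
fn fill_buffer(plan: &[Arc<RwLock<dyn SynthModule + Send + Sync>>], out: &mut [f32]) {
    for module in plan {
        module.write().unwrap().calc();
    }
    // Copy the Output module's port-0 buffer into the slice the audio
    // library handed us.
    let output_module = plan.last().expect("plan contains the output module");
    let guard = output_module.read().unwrap();
    let buf = guard.get_output(0).expect("output port 0 exists");
    let n = buf.len().min(out.len());
    out[..n].copy_from_slice(&buf[..n]);
}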
At the end of the week I have my hard-coded module graph appearing on screen, with UI to adjust the octave of the oscillators, and connections shown between the modules.
Future Challenges
I’m looking forward to two upcoming challenges.
Widgets for input and output handles
Right now the connections between modules have no interactivity – there is no way to connect modules together or disconnect them – and my connection drawing code has magic numbers all over the place to get the positioning just right. It’s a proof of concept.
Next, I need to define proper Widgets for the input and output ports alongside each module, with drag-and-drop behavior for making connections between modules.
Preventing aliasing
The Nyquist frequency – half of the sample rate – is the highest frequency that can be represented in digital audio; at a 44.1 kHz sample rate, that’s 22.05 kHz.
A naive implementation of a sawtooth wave or a square wave inevitably samples an ideal waveform that is not band-limited – that is, the waveform contains harmonics above the Nyquist frequency, and sampling it at discrete times causes those harmonics to be folded back below the Nyquist frequency when the signal is reproduced. This is known as aliasing. I’m anticipating quite noticeable aliasing artifacts in my first implementation of the sawtooth and square waves.
Some initial research has led me to “Band-limited Impulse Trains” (BLIT) or “Band-limited Step Functions” (BLEP) as solutions for generating artifact-free waveforms. PolyBLEP looks especially promising and easy to implement.
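As a note for future me, here is the polyBLEP correction as it commonly appears in the literature – not yet wired into the synth, and the function names are my own.

// Standard polyBLEP correction: `t` is the oscillator phase in [0, 1) and
// `dt` is the per-sample phase increment. The result is added near each
// discontinuity to round off the step and suppress aliasing.
fn poly_blep(t: f32, dt: f32) -> f32 {
    if t < dt {
        // Just after the discontinuity.
        let t = t / dt;
        2.0 * t - t * t - 1.0
    } else if t > 1.0 - dt {
        // Just before the discontinuity.
        let t = (t - 1.0) / dt;
        t * t + 2.0 * t + 1.0
    } else {
        0.0
    }
}

// A naive sawtooth in [-1, 1] with the correction applied at its reset point.
fn saw_sample(phase: f32, dt: f32) -> f32 {
    (2.0 * phase - 1.0) - poly_blep(phase, dt)
}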