
· 7 min read
Chris Tsang

This post serves as "Part 3" of the article "The rainbow bridge between sync and async Rust". Today we will discuss:

  1. The fundamentals of multithreaded programming
  2. Synchronization with zero-sized channels
  3. Visualizing multithreaded programs with FireDBG

I've been thinking of the simplest program that can demonstrate the use of zero-sized channels in a multithreaded context, and no programming exercise can be simpler than FizzBuzz!

Problem

Here is the original description of the problem:

Write a program that prints the numbers from 1 to 100.
But for multiples of three print "Fizz" instead of the number and for the multiples of five print "Buzz".
For numbers which are multiples of both three and five print "FizzBuzz".

With an additional requirement:

Fizz and Buzz must be printed from different threads.

To keep things simple, the printing of numbers and Fizz can happen on the same thread. So we only need two threads for this problem.

Let's look at the basic solution to the original FizzBuzz problem:

fn main() {
    for i in 1..=100 {
        if i % 15 == 0 {
            println!("FizzBuzz");
        } else if i % 3 == 0 {
            println!("Fizz");
        } else if i % 5 == 0 {
            println!("Buzz");
        } else {
            println!("{i}");
        }
    }
}

Playground

It prints something like:

1
2
Fizz
4
Buzz
Fizz
7
8
Fizz
Buzz
11
Fizz
13
14
FizzBuzz
16
17
Fizz
19
Buzz
Fizz
22
23
Fizz
Buzz
26
Fizz
28
29
FizzBuzz
...

Multi-thread

We want to move Buzz to a different thread. Let's take a closer look at the core logic:

if i % 15 == 0 {
    print!("Fizz");
    println!("Buzz"); // <- move this to a different thread
} else if i % 3 == 0 {
    println!("Fizz");
} else if i % 5 == 0 {
    println!("Buzz"); // <- also move this
} else {
    println!("{i}");
}

Here is where channels come into play. We want to notify the other thread - "hey it's time for you to buzz!".

We want to use a "zero-sized" channel, because we want our Fizz thread to wait until the Buzz thread is ready to print.

The SyncSender::send() method is blocking, i.e.

This function will block until space in the internal buffer becomes available or a receiver is available to hand off the message to.

If the buffer size is 0, however, the channel becomes a rendezvous channel and it guarantees that the receiver has indeed received the data if this function returns success.

Because there is only one thing we need to notify, we can send () - the zero-sized type.
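
To make the rendezvous behaviour concrete, here is a tiny standalone sketch (not part of the original example; the sleep is only there to make the blocking visible):

use std::sync::mpsc::sync_channel;
use std::thread;
use std::time::Duration;

fn main() {
    let (tx, rx) = sync_channel::<()>(0); // rendezvous channel
    let handle = thread::spawn(move || {
        tx.send(()).unwrap(); // blocks here until the receiver calls recv()
        println!("sender: handed off");
    });
    thread::sleep(Duration::from_millis(100)); // keep the sender blocked briefly
    rx.recv().unwrap();
    println!("receiver: got it");
    handle.join().unwrap();
}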

Based on the examples in the previous post, we can try to construct the following:

use std::sync::mpsc::{sync_channel, Receiver, SyncSender};

fn fizzer(sender: SyncSender<()>) {
    for i in 1..=100 {
        if i % 15 == 0 {
            print!("Fizz");
            sender.send(()).unwrap();
        } else if i % 3 == 0 {
            println!("Fizz");
        } else if i % 5 == 0 {
            sender.send(()).unwrap();
        } else {
            println!("{i}");
        }
    }
}

fn buzzer(receiver: Receiver<()>) {
    while let Ok(()) = receiver.recv() {
        println!("Buzz");
    }
}

fn main() {
    let (sender, receiver) = sync_channel(0); // zero sized
    std::thread::spawn(move || buzzer(receiver));

    fizzer(sender);
}

Race condition

Neat, this should work right? Let's give it a try. Since this is a multi-threaded program, let's run it a few times:

1st run:

1
2
Fizz
4
Buzz
Fizz
7
8
Fizz
11
Fizz
Buzz
13
14
Fizz16
17
Fizz
19
Buzz
Buzz

2nd run:

1
2
Fizz
4
Buzz
Fizz
7
8
Fizz
11
Fizz
13
14
FizzBuzz
Buzz
16
17
Fizz
19
Fizz

3rd run:

1
2
Fizz
4
Buzz
Fizz
7
8
Fizz
11
Fizz
13
Buzz
14
Fizz16
17
Fizz
19
Buzz
Buzz

Sadly, it doesn't look deterministic. The first Buzz is in the right place, but subsequent Buzzes are in the wrong places. There are ... race conditions!

But why?

Here is what should have happened:

1: Fizz thread sends a signal
2: Buzz thread receives a signal
3: Buzz prints
4: Fizz thread continues running
5: Print other numbers

Here is what could have happened:

1: Fizz thread sends a signal
2: Buzz thread receives a signal
4: Fizz thread continues running
5: Print other numbers
3: Buzz prints

Turns out, the channel and the signal only guarantee the relative order of 1 and 2, but everything afterwards can happen in arbitrary order!

This is the rule of thumb of multithreaded / parallel programming: if we don't make an explicit attempt to synchronize different actors, we cannot assume things will happen in any particular order.

Solution

The solution to the above problem is simple: the Fizz thread has to wait until the Buzz thread finishes printing. There can be multiple ways to implement this, e.g. with a request-response design, or with a Future that only completes after printing.

The simplest solution here, though, is to send signals in successive pairs, with the first signal of each pair indicating an open ( and the second indicating a close ).

use std::sync::mpsc::{sync_channel, Receiver, SyncSender};

fn fizzer(sender: SyncSender<()>) {
    for i in 1..=100 {
        if i % 15 == 0 {
            print!("Fizz");
            sender.send(()).unwrap();
            sender.send(()).unwrap(); // <-
        } else if i % 3 == 0 {
            println!("Fizz");
        } else if i % 5 == 0 {
            sender.send(()).unwrap();
            sender.send(()).unwrap(); // <-
        } else {
            println!("{i}");
        }
    }
}

fn buzzer(receiver: Receiver<()>) {
    while let Ok(()) = receiver.recv() {
        println!("Buzz");
        receiver.recv().unwrap(); // <-
    }
}

fn main() {
    let (sender, receiver) = sync_channel(0);
    std::thread::spawn(move || buzzer(receiver));

    fizzer(sender);
}

There is no material transfer here, but the first rendezvous can be thought of as "I hand over the lock to print now", and the second is thus "now return the print lock to me". Here () represents an imaginary lock.

If you think this is clever, subtle and intricate, you're right. Yes, parallel programming can be intimidating and error-prone.
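
For comparison, the request-response design mentioned earlier could look something like the following sketch (not the article's solution; the channel names are illustrative). A second rendezvous channel carries an explicit acknowledgement back to the Fizz thread:

use std::sync::mpsc::{sync_channel, Receiver, SyncSender};

fn fizzer(request: SyncSender<()>, done: Receiver<()>) {
    for i in 1..=100 {
        if i % 15 == 0 {
            print!("Fizz");
            request.send(()).unwrap();
            done.recv().unwrap(); // wait until "Buzz" has been printed
        } else if i % 3 == 0 {
            println!("Fizz");
        } else if i % 5 == 0 {
            request.send(()).unwrap();
            done.recv().unwrap();
        } else {
            println!("{i}");
        }
    }
}

fn buzzer(request: Receiver<()>, done: SyncSender<()>) {
    while let Ok(()) = request.recv() {
        println!("Buzz");
        done.send(()).unwrap(); // acknowledge: printing has finished
    }
}

fn main() {
    let (req_tx, req_rx) = sync_channel(0); // both are rendezvous channels
    let (done_tx, done_rx) = sync_channel(0);
    std::thread::spawn(move || buzzer(req_rx, done_tx));

    fizzer(req_tx, done_rx);
}

The double-send trick in the article achieves the same ordering with a single channel.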

FireDBG

With FireDBG, we can actually visualize the events that happened in parallel threads!

Testbench with full source code

In the above screenshot, we can see a snapshot at the precise moment of i = 30:

  1. In the "Timeline" panel, there are two threads: the one with the double line == is the main thread, aka the Fizz thread. The thread below with -- is the Buzz thread.
  2. On the timeline, ○ denotes a function call, □ denotes a function return.
  3. The first ○-fizz-□ from the left is for the number 27. The first ○-buzz-□ is for the number 25.
  4. The subsequent two ○-I-□ are for 28 and 29 respectively.
  5. The second ○-fizz-□ is highlighted (frame id 28), and the state i = 30 is shown in the top "Variables" panel.
  6. We can see that the ○-buzz-□ on the other thread returns before the main thread continues.

Conclusion

In this article we discussed:

  1. The basics of multithreaded programming
  2. The analysis and resolution of race conditions
  3. Synchronization with zero-sized channels and a double-handshake protocol
  4. Visualizing multithreaded programs with FireDBG

FireDBG can be used to study parallel algorithms in simulation, before scaling up to real-world distributed systems.

Perhaps an alternative title for this article could be "Rendezvous channel - the Timedoor between parallel threads".

Rustacean Sticker Pack 🦀

The Rustacean Sticker Pack is the perfect way to express your passion for Rust. Our stickers are made with a premium water-resistant vinyl with a unique matte finish. Stick them on your laptop, notebook, or any gadget to show off your love for Rust!

Moreover, all proceeds contribute directly to the ongoing development of SeaQL projects.

Sticker Pack Contents:

  • Logo of SeaQL projects: SeaQL, SeaORM, SeaQuery, Seaography, FireDBG
  • Mascot of SeaQL: Terres the Hermit Crab
  • Mascot of Rust: Ferris the Crab
  • The Rustacean word

Support SeaQL and get a Sticker Pack!

Rustacean Sticker Pack by SeaQL

· 6 min read
Chris Tsang

If you haven't already, go watch the video Mastering Dynamic Programming by Tech With Nikola! This tutorial is based on that video.

We will look into three examples today, and gain some new insights by visualizing the call trees.

  1. Fibonacci
  2. Minimum Coins
  3. Longest common subsequence

Fibonacci

This one is tried and true! But not bad as a starter:

// `fire` comes from the firedbg-lib crate: `use firedbg_lib::fire;`
fn fib(n: i32) -> i32 {
    if n <= 2 {
        return 1;
    }
    fire::dbg!("return", fib(n - 1) + fib(n - 2))
}

What does the call tree of fib(5) look like?

Call tree of `fib(5)`

To make it more efficient, we can use a "memo" to cache the function calls. There are multiple ways to do this, let's try using a static HashMap:

use std::collections::HashMap;
use std::sync::Mutex;

lazy_static::lazy_static! {
    static ref MEMO: Mutex<HashMap<i32, i32>> = Mutex::new(HashMap::new());
}

fn fibm(n: i32) -> i32 {
    if n <= 2 {
        return 1;
    }
    {
        let memo = MEMO.lock().unwrap();
        if let Some(o) = memo.get(&n) {
            return *o;
        }
        // we have to release the lock here
    }
    let res = fibm(n - 1) + fibm(n - 2);
    let mut memo = MEMO.lock().unwrap();
    memo.insert(n, res);
    res
}

Now, what will the call tree look like?

Call tree of `fibm(5)`

Nice! We can see that the fibm(n = 3) (9th node) returned early.

Here is one more comparison for fib(10).

Call tree of `fib(10)`
Call tree of `fibm(10)`

We can see that the memo helped reduce the number of nodes down from 111 to just 19!

Minimum Coins

Let's get to a slightly more interesting problem. According to Wikipedia, the change-making problem addresses the question of finding the minimum number of coins that add up to a given amount of money.

Example 1:

Given the coins { 1, 2 } (with an unlimited number of each), how many coins do we need to make up 5?

The answer is 3, with coins 1 + 2 + 2. Now, let's look at the standard recursive implementation:

fn min_coins(m: i32, coins: &[i32]) -> Option<usize> {
    if m == 0 {
        return Some(0);
    }
    let mut count: Option<usize> = None;
    for coin in coins {
        let next = m - coin;
        if next < 0 {
            continue;
        }
        count = match (count, min_coins(next, coins)) {
            (Some(a), Some(b)) => Some(a.min(b + 1)),
            (Some(a), None) => Some(a),
            (None, Some(b)) => Some(b + 1),
            (None, None) => None,
        }
    }
    fire::dbg!("return", count)
}

The call tree for min_coins(13, &[1, 4, 5]):

Call tree of `min_coins(13, ..)`

With memoization, we'd use a context-specific HashMap:

fn min_coins_m(m: i32, coins: &[i32], memo: &mut HashMap<i32, Option<usize>>) -> Option<usize> {
    if m == 0 {
        return Some(0);
    }
    if let Some(c) = memo.get(&m) {
        return *c;
    }
    let mut count: Option<usize> = None;
    for coin in coins {
        let next = m - coin;
        if next < 0 {
            continue;
        }
        count = match (count, min_coins_m(next, coins, memo)) {
            (Some(a), Some(b)) => Some(a.min(b + 1)),
            (Some(a), None) => Some(a),
            (None, Some(b)) => Some(b + 1),
            (None, None) => None,
        }
    }
    memo.insert(m, count);
    count
}

Now, the call tree is reduced down to just 28 nodes:

Call tree of `min_coins_m(13, ..)`

Not bad, right?

Longest common subsequence

This is the most interesting and practical problem. The video uses it to implement a git diff algorithm which is pretty neat.

Here let's go straight into memoizing, because otherwise it would never return even for trivial test cases:

struct Memo {
    memo: HashMap<(usize, usize), usize>,
}

fn lcs_impl<T: Copy + Eq>(left: &[T], right: &[T], state: &mut Memo) -> usize {
    if left.is_empty() || right.is_empty() {
        return 0;
    }
    fire::dbg!("(i, j)", (left.len() - 1, right.len() - 1));
    if let Some(n) = state.memo.get(&(left.len() - 1, right.len() - 1)) {
        return *n;
    }
    let n = if left[left.len() - 1] == right[right.len() - 1] {
        1 + lcs_impl(&left[..left.len() - 1], &right[..right.len() - 1], state)
    } else {
        let leftn = lcs_impl(&left[..left.len() - 1], right, state);
        let rightn = lcs_impl(left, &right[..right.len() - 1], state);
        leftn.max(rightn)
    };
    state.memo.insert((left.len() - 1, right.len() - 1), n);
    n
}

The .len() - 1 bits are pretty verbose, but if you look past them, the code should be pretty understandable. The clever bit here is to use a HashMap with a tuple key instead of a 2D vector to save space!

But that's only half of the story, because we still need to walk the path again to reconstruct the common subsequence:

pub fn lcs<T: Copy + Eq>(left: &[T], right: &[T]) -> Vec<T> {
    let mut state = Memo {
        memo: Default::default(),
    };
    let length = lcs_impl(left, right, &mut state);

    // reconstruct the common subsequence
    let mut seq = Vec::new();
    let (mut i, mut j) = (left.len() - 1, right.len() - 1);
    while seq.len() < length {
        if left[i] == right[j] {
            seq.push(left[i]);
            if seq.len() == length {
                break;
            }
            i -= 1;
            j -= 1;
        } else {
            if i > 0 && state.memo.get(&(i, j)).unwrap() == state.memo.get(&(i - 1, j)).unwrap() {
                i -= 1;
            } else {
                j -= 1;
            }
        }
    }
    seq.reverse();
    seq
}

Here is the call tree for:

assert_eq!(
    lcs(&[1, 4, 5, 6, 9, 10, 11], &[6, 4, 5, 9, 11]),
    [4, 5, 9, 11]
);
Call tree of `lcs(..)`
Memo of `lcs(..)` (last node)

If we look into the memo of the last node, we can see that we only used 13 elements, instead of 8 x 6 = 48 as in the video (or pretty much every other tutorial), and that the call tree only has 35 nodes. So there is some trade-off between top-down vs bottom-up dynamic programming here.
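
For reference, a bottom-up version of the length computation would fill the whole table unconditionally. Here is a minimal sketch (not from the video; the function name is illustrative) to make the trade-off concrete:

fn lcs_len_bottom_up<T: Eq>(left: &[T], right: &[T]) -> usize {
    // (left.len() + 1) x (right.len() + 1) table, row 0 and column 0 stay 0
    let mut table = vec![vec![0usize; right.len() + 1]; left.len() + 1];
    for i in 1..=left.len() {
        for j in 1..=right.len() {
            table[i][j] = if left[i - 1] == right[j - 1] {
                table[i - 1][j - 1] + 1
            } else {
                table[i - 1][j].max(table[i][j - 1])
            };
        }
    }
    table[left.len()][right.len()]
}

It always touches every cell, whereas the memoized top-down version above only filled 13 of them for this input.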

Conclusion

I hope you learnt something today! FireDBG is your best companion in writing, debugging and analyzing algorithms. You don't have to install the Rust toolchain to interact with the call trees, as I have bundled them in the testbench with full source code here. You should be able to just open the dynamic-programming folder, install the VS Code extension and start playing.

Rustacean Sticker Pack 🦀

The Rustacean Sticker Pack is the perfect way to express your passion for Rust. Our stickers are made with a premium water-resistant vinyl with a unique matte finish. Stick them on your laptop, notebook, or any gadget to show off your love for Rust!

Moreover, all proceeds contribute directly to the ongoing development of SeaQL projects.

Sticker Pack Contents:

  • Logo of SeaQL projects: SeaQL, SeaORM, SeaQuery, Seaography, FireDBG
  • Mascot of SeaQL: Terres the Hermit Crab
  • Mascot of Rust: Ferris the Crab
  • The Rustacean word

Support SeaQL and get a Sticker Pack!

Rustacean Sticker Pack by SeaQL

· 15 min read
Billy Chan

In this blog post, we will cover the basic usage of FireDBG VS Code Extension ("the Extension") and FireDBG CLI ("the CLI"). By the end of this tutorial, you will learn:

  • How to install the Extension and the CLI
  • How to setup a Rust workspace for FireDBG debugger
  • How to debug Rust binary, example, unit test and integration test with FireDBG debugger
  • How to interpret and inspect visualized call tree, variables and timeline in the Extension
  • How to interpret and inspect breakpoint events of multi-threaded program in the Extension
  • How to trace any variable / expression of interest with the fire::dbg! trace macro
  • How to selectively enable / disable tracing of a local package
  • How to use the CLI to operate FireDBG debugger and then interpret and inspect breakpoint events via SQLite

If you're curious about the background and inner workings of FireDBG, check out the announcement post and the architecture overview further down this page.

Installation

Before we start, make sure you have VS Code and Rust installed.

  • VS Code (version 1.80.0 or later)
  • Rust (version 1.74.0 or later)

Rust & Cargo

# install rustup
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# install stable
rustup install stable

FireDBG VS Code Extension

The Extension provides seamless integration with FireDBG to enhance debugging experience and developer productivity. Search and install the FireDBG extension.

Windows Notes

We only support Windows under WSL 2 right now, so please follow these steps on Windows:

  1. Install Ubuntu 22.04
  2. Install the WSL extension
  3. Click >< on the bottom left, and select "Connect to WSL"
  4. Install the FireDBG extension

FireDBG Binaries

To keep the .vsix package small in size, we don't ship the platform-specific binaries with the Extension. Instead, we have a dedicated installer script for the FireDBG binaries. We provide prebuilt binaries for the following CPU and OS combinations:

         Linux1     macOS2       Windows (WSL 2)3
x64      ✅         ✅ Intel     ✅
arm64    ⛔️         ✅ Apple     ⛔️

Once the Extension has been installed, you should see a prompt hinting that FireDBG binaries are missing. Click on the "install" button to run the installer.

Or, you can run the installer manually.

curl https://raw.githubusercontent.com/SeaQL/FireDBG.for.Rust/main/install.sh -sSf | sh

FireDBG binaries will be installed in ~/.cargo/bin and a debugger self test will be conducted to verify the installation. Expect to see:

info: completed FireDBG self tests

In case you get error messages when performing the self test, read the Troubleshooting Guide for solutions to common errors.

GUI Tour

Download the zipped source code, or clone the Rust Testbench for FireDBG to your local machine, then follow the tour below to learn the basic usage of the Extension.

git clone git@github.com:SeaQL/FireDBG.Rust.Testbench.git

Note that the root directory of the testbench isn't a Cargo workspace. In the next section, please open each sub-folder (e.g. getting-started) in VS Code. For convenience, each example already includes a sample run.

Debug Targets and Runs

Where can I see the list of all debuggable Rust targets, how can I debug it and how to inspect previous runs?

  1. Click on the "Run and Debug" panel on your primary sidebar, you should see two new panels on the bottom
  2. Click on "Activate FireDBG" to enable the FireDBG debugger on this workspace
  3. The FIREDBG panel should display all binaries, examples, integration tests and unit tests in your current Rust workspace. Click on a list item to reveal the Rust source code. To debug it, hover over the list item and click on the play icon ▶️. A new debugger view will open and tail the progress in real time.
  4. All previous debug runs can be found in the FIREDBG RUNS panel; simply click on one to open it.

Alternatively, you can list all debuggable Rust targets with the CLI:

$ firedbg list-target

Available binaries are:
roll_dice

And, list all previous runs with the CLI:

$ firedbg list-run

Available `firedbg` runs are:
1) roll_dice-1701678002235.firedbg.ss

Visual Debugger

How to interpret and inspect visualized call tree, variables, timeline and threads in the Extension?

We can open the debugger view by clicking the items in the FIREDBG RUNS panel, or with the open command:

firedbg open
  1. Each node represents a function call; the depth of each node indicates the depth of the call stack; each node has a unique frame ID; there are two types of edges:
    • Function call with return value: -<->-
    • Function call only: -->--
  2. If the program exited with a panic, the panicking function will be highlighted in red with an exclamation mark.
  3. Click on the function name on the call tree node to reveal the Rust source code.
  4. Function Arguments: the name of the argument is shown as the label. The faded text on the bounding box denotes the type name, where hovering on it will reveal the fully-qualified name. The actual value is enclosed in the bounding box.
  5. Function Return Value: the return value will be shown on the far right with the label return.
  6. Timeline: toggle the timeline by checking the timeline checkbox on the bottom. There are two kinds of nodes:
    • Circle: function call
    • Square: function return
  7. Thread selector: If the program has more than one thread, a dropdown will be shown on the bottom. You can switch to inspect the execution of other threads. Bring up the timeline to view the execution of all threads in a single view.

Controls

How to navigate the program execution flow?

  1. Use the control buttons on the timebar to jump to the beginning or the end of program execution. Use J K on your keyboard or stepping buttons < > to step one frame backward or forward. The unit of time on the timebar is frame ID of the selected thread. Clicking on the timebar would jump to the exact function call.
  2. The visualization will be updated as you traverse the call tree. Use the W A S D keys on your keyboard, or click-and-drag with the left mouse button, to pan; click the +|- buttons on the bottom right, press - = on your keyboard, or use your mouse scroll wheel to zoom.

To resize a panel, hover the mouse over the panel gutter, then drag to resize. Tip: double-clicking the gutter reverts the panel to a pre-defined size.

FireDBG Sidebar

How to retrieve detailed debug info?

The FireDBG sidebar contains all debug info. It will be updated as you traverse the call tree.

  1. Debugger Info: FireDBG debugger info, program executable info and runtime info
  2. Frame Info: frame metadata of the inspected function call
  3. Parameters: Rust-like representation of the inspected function call's arguments
  4. Return Value: Rust-like representation of the inspected function's return value
  5. Call Stack (Ancestors): ancestors of the inspected function; up until root
  6. Callee (Children): immediate children of the inspected function

FireDBG CLI

There are two ways to tell firedbg where the root directory of a cargo workspace is:

  1. By default, the current directory is taken as the root directory of the cargo workspace
  2. Or, override it with the --workspace-root option, i.e. firedbg --workspace-root <WORKSPACE-ROOT>

Some common sub-commands include:

  • cache: Parse all .rs source files in the current workspace
  • clean: Cleanup the firedbg/ folder
  • list-target: List all runnable targets
  • run: Run a binary target with debugging enabled
  • example: Run an example with debugging enabled
  • test: Run an integration test with debugging enabled
  • unit-test: Run a unit test with debugging enabled
  • index: Run indexer on the latest run and save it as a .sqlite db file
  • list-run: List all firedbg runs
  • open: Open debugger view in VS Code
  • help: Print help message or the help of the given subcommand(s)

You can get the help messages by appending the --help flag.

FireDBG Workspace

A Cargo workspace is a set of crates sharing the same Cargo.lock and target directory. FireDBG relies on Cargo to locate source files and targets for debugging.

Now you have a basic understanding on the usage of FireDBG. Let's create a Cargo workspace and practice debugging with FireDBG!

Full source code is available on our testbench.

Start by creating a getting-started workspace.

$ mkdir getting-started
$ cd getting-started

For now, we only have a single quicksort package in this workspace. We will add one more crate later.

Cargo.toml
[workspace]
members = ["quicksort"]

To create the quicksort library, we can use the convenient cargo command.

$ cargo new --lib quicksort
$ cd quicksort

Debugging Unit Tests

Replace the content of lib.rs with our "faulty" quick sort library code.

quicksort/src/lib.rs
pub fn run<T: PartialOrd>(arr: &mut [T]) {
    let len = arr.len();
    quick_sort(arr, 0, (len - 1) as isize);
}

fn quick_sort<T: PartialOrd>(arr: &mut [T], low: isize, high: isize) {
    if low < high {
        let p = partition(arr, low, high);
        quick_sort(arr, low, p - 1);
        quick_sort(arr, p + 1, high);
    }
}

fn partition<T: PartialOrd>(arr: &mut [T], low: isize, high: isize) -> isize {
    let pivot = high as usize;
    let mut store_index = low; // Shouldn't this be `low - 1`?
    let mut last_index = high;

    loop {
        store_index += 1;
        while arr[store_index as usize] < arr[pivot] {
            store_index += 1;
        }
        last_index -= 1;
        while last_index >= 0 && arr[last_index as usize] > arr[pivot] {
            last_index -= 1;
        }
        if store_index >= last_index {
            break;
        } else {
            arr.swap(store_index as usize, last_index as usize);
        }
    }
    arr.swap(store_index as usize, pivot as usize);
    store_index
}

Then add some unit tests to the end of lib.rs file.

quicksort/src/lib.rs
#[cfg(test)]
mod test {
    use super::*;

    #[test]
    fn test_quicksort_1() {
        let mut numbers = [4, 65, 2, -31, 0, 99, 2, 83, 782, 1];
        run(&mut numbers);
        assert_eq!(numbers, [-31, 0, 1, 2, 2, 4, 65, 83, 99, 782]);
    }

    #[test]
    fn test_quicksort_2() {
        let mut numbers = [1, 2, 2];
        run(&mut numbers);
        assert_eq!(numbers, [1, 2, 2]);
    }
}

Click on the debug icon on the left to start debugging the unit test. Or with the CLI:

firedbg unit-test quicksort test::test_quicksort_1

Oooops... assertion failure!

We found that it's quite hard to inspect which elements are swapped in each partition. To help with debugging, we can add a swap function and rewrite the original code:

fn partition<T: PartialOrd>(arr: &mut [T], low: isize, high: isize) -> isize {
    let pivot = high as usize;
    let mut store_index = low - 1;
    let mut last_index = high;

    loop {
        store_index += 1;
        while arr[store_index as usize] < arr[pivot] {
            store_index += 1;
        }
        last_index -= 1;
        while last_index >= 0 && arr[last_index as usize] > arr[pivot] {
            last_index -= 1;
        }
        if store_index >= last_index {
            break;
        } else {
-           arr.swap(store_index as usize, last_index as usize);
+           swap(&mut arr[store_index as usize..=last_index as usize]);
        }
    }
-   arr.swap(store_index as usize, pivot as usize);
+   swap(&mut arr[store_index as usize..=pivot as usize]);
    store_index
}

+ fn swap<T: PartialOrd>(arr: &mut [T]) {
+     arr.swap(0, arr.len() - 1);
+ }

Tip: you can add #[cfg_attr(not(debug_assertions), inline)] to the swap function to inline it in release builds, so that it will not incur any overhead.
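
Applied to the swap helper from the rewrite above, the tip looks like this:

// Inlined in release builds, so the helper adds no overhead outside of debugging.
#[cfg_attr(not(debug_assertions), inline)]
fn swap<T: PartialOrd>(arr: &mut [T]) {
    arr.swap(0, arr.len() - 1);
}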

Now, we can clearly see what was swapped and how many times swap was called. Upon closer inspection we see a pattern, i.e. the first element is always untouched in all partition operations. That's the bug 🐛!

fire::dbg! Trace Macro

Let's try to debug the same program with a different approach. A non-invasive approach: this time we only trace the swap without modifying the program structure.

FireDBG provides a fire::dbg! macro similar to std::dbg! to capture the variable of interest.

We can trace the swap actions with the help of fire::dbg!. The main advantage compared to std::dbg! is that the trace data is associated with the stack frame of the calling function.

Undo the previous change and go back to the original implementation.

Cargo.toml
[dependencies]
+ firedbg-lib = "0.1"

quicksort/src/lib.rs
+ use firedbg_lib::fire;

fn partition<T: PartialOrd>(arr: &mut [T], low: isize, high: isize) -> isize {
    let pivot = high as usize;
    let mut store_index = low - 1;
    let mut last_index = high;

    loop {
        store_index += 1;
        while arr[store_index as usize] < arr[pivot] {
            store_index += 1;
        }
        last_index -= 1;
        while last_index >= 0 && arr[last_index as usize] > arr[pivot] {
            last_index -= 1;
        }
        if store_index >= last_index {
            break;
        } else {
+           fire::dbg!("swap", &arr[store_index as usize..=last_index as usize]);
            arr.swap(store_index as usize, last_index as usize);
        }
    }
+   fire::dbg!("swap", &arr[store_index as usize..=pivot as usize]);
    arr.swap(store_index as usize, pivot as usize);
    store_index
}

This time the swap is shown in the variables of the partition function. As expected, we see 3 swaps. This should help us to observe the pattern, locate and fix the bug 🔎!

Debugging Integration Tests

Similar to unit test, we can debug integration test with FireDBG. Let's add an integration test file:

quicksort/tests/bookshelf.rs
#[test]
fn test_quicksort_1() {
    let mut books = [
        "The Rust Programming Language",
        "Beginning Rust: From Novice to Professional",
        "Rust in Action",
        "Programming Rust: Fast, Safe Systems Development",
        "Rust Programming Language for Beginners",
    ];
    quicksort::run(&mut books);
    assert_eq!(
        books,
        [
            "Beginning Rust: From Novice to Professional",
            "Programming Rust: Fast, Safe Systems Development",
            "Rust Programming Language for Beginners",
            "Rust in Action",
            "The Rust Programming Language",
        ]
    );
}

Alternatively, you can debug integration test with the CLI:

firedbg test bookshelf test_quicksort_1

Debugging Binary Targets

Let's create an executable program. We need to add some dependencies first.

Cargo.toml
[dependencies]
firedbg-lib = "0.1"
+ fastrand = "2"
+ structopt = "0.3"

quicksort/src/main.rs
use firedbg_lib::fire;
use std::iter::repeat_with;
use structopt::StructOpt;

#[derive(StructOpt, Debug)]
struct Opt {
    /// Random seed
    #[structopt(long, default_value = "2525")]
    seed: u64,
    /// Number of random numbers to be sorted
    #[structopt(default_value = "10")]
    n: usize,
}

fn main() {
    let Opt { seed, n } = Opt::from_args();

    fire::dbg!(&seed);
    fire::dbg!(&n);

    fastrand::seed(seed);

    let max = if n <= 10 { 100 } else { 1000 };

    println!("Sort {n} numbers in ascending order");
    let mut numbers: Vec<_> = repeat_with(|| fastrand::i32(1..max)).take(n).collect();

    println!("Input: {:?}", numbers);
    quicksort::run(&mut numbers);
    println!("Sorted: {:?}", numbers);

    let mut c = 0;
    for n in numbers {
        assert!(n >= c);
        c = n;
    }
}

We can add a [[targets]] entry in firedbg.toml to create additional profiles:

[[targets]]
name = "quicksort_100"
target.type = "binary"
target.name = "quicksort"
argv = ["100", "--seed", "1212"]

See the full example.

Or use the FireDBG CLI to pass additional parameters to the Rust binary:

firedbg run quicksort -- 100 --seed 1212

Debugging Examples

Examples work the same as binary targets, just that they are located under the examples/ directory.

We can also debug an example with the CLI:

firedbg example random100

firedbg/ Output Folder

A firedbg folder will be created for storing the symbols, debug runs and other supporting files. You should ignore this folder from your source control, i.e. add firedbg/ to .gitignore.

firedbg.toml Config

Let's try and add one more crate to the workspace.

$ cargo new --lib book-store
$ cd book-store

Update the Cargo.toml at workspace root, adding our new book-store package.

Cargo.toml
[workspace]
members = ["quicksort", "book-store"]

Below we have a simple function to list the inventory in alphabetical order.

book-store/src/lib.rs
use anyhow::Result;
use std::fs::File;
use std::io::{BufRead, BufReader};

pub fn inventory(path: &str) -> Result<Vec<String>> {
    let file = File::open(path)?;
    let reader = BufReader::new(file);

    let mut books = Vec::new();
    for line in reader.lines() {
        let book = line?.trim().to_owned();
        books.push(book);
    }
    quicksort::run(&mut books);
    Ok(books)
}

To put it in action we can add a books.txt file to the package root and then write a unit test to invoke the inventory function.

book-store/books.txt
The Rust Programming Language
Rust Programming Language for Beginners
Programming Rust: Fast, Safe Systems Development
Beginning Rust: From Novice to Professional
Rust in Action
book-store/src/lib.rs
#[cfg(test)]
mod test {
    use super::*;
    use anyhow::Result;

    #[test]
    fn test_inventory_1() -> Result<()> {
        let path = concat!(env!("CARGO_MANIFEST_DIR"), "/books.txt");
        let books = inventory(path)?;
        assert_eq!(
            books,
            [
                "Beginning Rust: From Novice to Professional",
                "Programming Rust: Fast, Safe Systems Development",
                "Rust Programming Language for Beginners",
                "Rust in Action",
                "The Rust Programming Language",
            ]
        );
        Ok(())
    }
}

Umm... we see that function calls to the quicksort crate are missing in the call tree.

By default FireDBG will only trace the functions of the debug target. If you want to trace other crates in your local workspace, you will need to create a firedbg.toml config file in your workspace root.

firedbg.toml
[workspace.members]
quicksort = { trace = "full" }
# Syntax: <PACKAGE> = { trace = "<full | none>" }

Now, we can see the function calls of the quicksort crate!

The Event Index

When you open a .firedbg.ss file, FireDBG indexer will create a .sqlite file to store the analyzed debug info. You can also run the indexer manually with the firedbg index sub-command. You can now write SQL queries to your heart's content!
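
As an example, here is a minimal sketch of poking at the index from Rust with the rusqlite crate. The file path below is purely illustrative, and since the table layout isn't documented here, we only list the tables rather than query them:

use rusqlite::Connection;

fn main() -> rusqlite::Result<()> {
    // hypothetical path: the indexer writes a .sqlite next to the .firedbg.ss run file
    let conn = Connection::open("firedbg/roll_dice-1701678002235.sqlite")?;
    let mut stmt = conn.prepare("SELECT name FROM sqlite_master WHERE type = 'table'")?;
    let tables: Vec<String> = stmt
        .query_map([], |row| row.get(0))?
        .collect::<Result<_, _>>()?;
    println!("{tables:?}");
    Ok(())
}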


  1. Supported Windows (WSL 2) distributions: Ubuntu 22.04, Ubuntu 20.04↩
  2. Supported macOS versions: macOS 13 (Ventura), macOS 14 (Sonoma)↩
  3. Supported Linux distributions: Ubuntu 22.04, Ubuntu 20.04, Debian 12, Fedora 39↩

· 13 min read
Chris Tsang

Debugging programs is hard

Debugging programs is hard. It is extremely mind boggling when something does not work as expected. More often than not, 90% of the work goes into tracking down where the bug is, and 10% into actually solving it!

But why don't programmers use debuggers more often these days? Instead, we all enjoy adding println all over the place, running the program, adding more prints, rinse and repeat, just to find out that we still haven't printed the key variable and may even need to make guesses about the program state while drowning in the logs.

A running program is like a blackbox to us, and the console is our only window into its internal states. However, at the end of the day, it is still the programmer's responsibility to print the relevant information.

How does a debugger help us debug a program? We identified that it usually comes down to:

  • Where did the control flow of my program go?
  • Where do my errors come from?
  • What are inside my variables?
  • What happened in what order?

That's why we created FireDBG - a Time Travel Visual Debugger for Rust 🎬

Time travel debugging

According to Wikipedia, time travel debugging is the process of stepping back in time through source code to understand what is happening during execution of a computer program. Unlike "traditional" debuggers which can only step forward, time travel debuggers allow us to step backward and see the cause and effects.

Time travel debuggers exist1, and they usually work by recording data on the instruction level. But that would result in a gigantic amount of data, so now the question is: how do we make sense of this data and navigate through time effectively?

Call tree - the trail across the forest

Let's take a look at the following call tree for an implementation of quicksort:

pub fn run<T: PartialOrd>(arr: &mut [T]) {
    let len = arr.len();
    quick_sort(arr, 0, (len - 1) as isize);
}

fn quick_sort<T: PartialOrd>(arr: &mut [T], low: isize, high: isize) {
    if low < high {
        let p = partition(arr, low, high);
        quick_sort(arr, low, p - 1);
        quick_sort(arr, p + 1, high);
    }
}

fn partition<T: PartialOrd>(arr: &mut [T], low: isize, high: isize) -> isize {
    let pivot = high as usize;
    let mut store_index = low - 1;
    let mut last_index = high;

    loop {
        store_index += 1;
        while arr[store_index as usize] < arr[pivot] {
            store_index += 1;
        }
        last_index -= 1;
        while last_index >= 0 && arr[last_index as usize] > arr[pivot] {
            last_index -= 1;
        }
        if store_index >= last_index {
            break;
        } else {
            arr.swap(store_index as usize, last_index as usize);
        }
    }
    arr.swap(store_index as usize, pivot as usize);
    store_index
}
FireDBG screenshot for an implementation of quicksort

If you instantly get it, you can skip this section! If not, let's go through it step by step:

"A call tree is all the state transitions of a program's call stack laid out on a 2D plane, organized in form of a tree"

Every time a function call is made, we create a child node; when the function returns, we go back to the parent node. So subsequent function calls become sibling nodes.

At any given node, traversing up to the root node gives us a chain - which is the call stack. We have two notations, -->-- for function calls; --<->-- for function calls with return values. We assign a unique frame ID to each node and this is the time sequence used in the timebar.

Call tree visualization and algorithms

FireDBG is built around the call tree. You may ask, wouldn't this call tree become very large? Yes, that's why we put enormous effort into linearizing everything. We developed a streaming file format to process the data in real time, and figured out a layout algorithm that can render infinite trees.

What's so interesting about call trees? Their shapes reveal a lot about the properties of algorithms. To illustrate, here's a side-by-side comparison of quicksort and mergesort, and we can see that quicksort yields a deeper tree than mergesort2. So quicksort is using more stack memory3 than mergesort, which makes sense because mergesort uses some memory on the heap!

Since both are binary trees, they are not tail-call optimizable. And since both algorithms claim to be O(n log n), statistically only one of the two branches should grow deeper. There is a lot more to observe, and I have been drawing this on paper by hand for many years.

Comparing quicksort and mergesort side by side with the same input

Tracing errors - the error path

This might be a Rust specific problem: Rust doesn't have exceptions. Instead, Rust uses a Result system coupled with the ergonomic ? operator to effectively signal and handle errors. This has both pros and cons - the good thing is that errors propagate back to the parent along the exact same path through which the function was called, which is deterministic and predictable. The bad thing is, when you finally unwrap an error and decide to panic, the context of the error has long been lost.

This echoes our second question: where do my errors come from? Consider the following toy program, where we get to roll some dice and each time we could be rolling one, two or three dice. Once in a while, throwing a die can cause an error. This is deliberately randomized, so there is no way to statically analyze where the error might come from - such is life!

fn roll(i: i32) -> Result<(), ()> {
    let (a, b, c) = (dice(i), dice(i), dice(i)); a?; b?; c?;
    if fire::dbg!(fastrand::u32(0..4)) == 0 {
        roll(i - 1)
    } else {
        throw(i - 1)
    }
}

fn throw(i: i32) -> Result<(), ()> {
    match fire::dbg!(fastrand::i32(1..=3)) {
        1 => { let a = dice(i); a?; }
        2 => { let (a, b) = (dice(i), dice(i)); a?; b?; }
        3 => { let (a, b, c) = (dice(i), dice(i), dice(i)); a?; b?; c?; }
        _ => unreachable!(),
    }
    if fire::dbg!(fastrand::bool()) {
        roll(i - 1)
    } else {
        throw(i - 1)
    }
}

fn dice(i: i32) -> Result<(), ()> {
    if fire::dbg!(fastrand::i32(0..i)) == 0 { Err(()) } else { Ok(()) }
}

fn main() {
    roll(25).unwrap();
}

FireDBG captures all functions' return values and so is able to locate where the error originated from and how it propagated up! Here is the rendered call tree of this roll_dice program, where FireDBG is able to highlight the error path up until the point of panic, which is conveniently marked by !:

A screenshot of the roll_dice call tree

Collecting parameters and return values

As you have seen in the above screenshots, FireDBG captures all functions' parameters and return values. Capturing parameters is the easy part, because it's a basic feature of any debugger4. But capturing return values is a little intricate. The current implementation is: we set a breakpoint at every ret instruction and capture the return value at that particular point in time. The next instruction will immediately jump back to the parent function, which may overwrite the value.

Let's talk about ABI. There are different calling conventions, and Rust is particularly flexible in whether a value is returned via stack or register. In some cases complex data can be packed into one register, because Rust can return complex types like tuples and unions5. We tried super hard, but return value capture is still hit-or-miss; still, some visibility is better than no visibility!

Inspecting the parameters and return value of the partition function

A type system for the Rust type system

We rely on lldb's awesome SBType API to parse Rust's debug symbols, and capture variables' values in binary forms - not all things are printable, in particular, pointers (references) and containers. We have to reference the memory locations to get their values. This becomes non-trivial if the references form a diamond or worse, cycles6.

For pointers, our brilliant intern7 has a great idea - capture and store the pointees as context for each variable and dereference those pointers on replay. Each memory address will only be serialized once, but can be copied many times on deserialization.

As such, we were able to construct a type system that can represent Rust values, which are serialized as binary data. We can then visualize them beautifully!

The Car data structure with visualization; left panel: Rust definition

For containers, it is more murky. For vectors, we can simply read a blob of memory from the heap, because the items are contiguous. But ... how about hash maps? Luckily, in Rust, the standard library's hashmap is a flat hashmap, which means items are densely packed, although not contiguous. So with some memory fiddling, we are able to capture the contents of hashmaps!

Note that in the screenshot below, the hash key is &str, which means after extracting the hash keys from the hash map, we have to dereference the pointers to retrieve the string content.

Left panel: captured hashmap; middle panel: rendered hashmap; right panel: raw representation of a hashmap

A new lens into our code

Visualization is a new lens into our code. Above are just a few simple examples, and FireDBG opens the door to domain-specific visualizations that are meaningful to us. Below are a few more examples:

A Sudoku solver
A SQL tokenizer
My personal favourite - symbolic recognition

I want to debug my code - not system libraries!

We tried our best to boost the signal-to-noise ratio. Instead of alloc::sync::Arc { core::ptr::non_null::NonNull { alloc::sync::ArcInner { strong: core::sync::atomic::AtomicUsize { .. }, weak: core::sync::atomic::AtomicUsize { .. }, data: pointer::Object { .. } } } }, we try to prettify standard library types like Arc as RefCounted { strong: 1usize, weak: 1usize, value: pointer::Object { .. } }.

FireDBG only traces user code - not standard library, not vendor libraries - only the code inside the specific crate you are currently programming. Which means the call tree will only contain functions you wrote. No implementation details like <I as alloc::vec::in_place_collect::SpecInPlaceCollect<T,I>>::collect_in_place.

Which also means that the overhead of program tracing is confined - even with a very complex program, the subset of user code that we want to trace would only be a fraction of the program binary.

Multi-threaded programming 🎬

Rust's slogan is "Fearless Concurrency", so how do we debug multi-threaded Rust programs? It's possible to pause and step through a multi-threaded program, but doing so manually would affect the timing and synchronization of other threads.

FireDBG can track multiple threads and keep them running as smoothly as possible, allowing us to observe how multiple threads work together. It's a proof-of-concept, but let's take a look at a small Rust program that spawns multiple worker threads to perform matrix computation:

fn inverse(m: &Matrix<D, D>) -> Option<Matrix<D, D>> {
    fire::dbg!("return", Matrix::inv(m))
}

fn runner(receiver: Receiver<Matrix<D, D>>, sender: Sender<(Matrix<D, D>, Option<Matrix<D, D>>)>) {
    while let Ok(m) = receiver.recv() {
        // send back the input with solution
        let mm = inverse(&m);
        sender.send((m, mm)).unwrap();
    }
}

fn main() {
    let (result, collector) = channel();
    let mut senders = Vec::new();
    for _ in 0..THREADS {
        // spawn worker threads
        let (sender, receiver) = channel();
        senders.push(sender);
        let result = result.clone();
        std::thread::spawn(move || runner(receiver, result));
    }

    for c in 0..ITEMS {
        // spmc; fan out
        let m = Matrix::<D, D>::random();
        senders[c % THREADS].send(m).unwrap();
    }

    for _ in 0..ITEMS {
        // mpsc; consolidate
        let (m, mm) = collector.recv().unwrap();
        println!("Source = {m:?}");
        println!("Inverse = {mm:?}");
    }
}
Top panel: a matrix and its inverse computed by a worker thread; bottom panel: timeline showing events across all threads within a specific time window

The vision

What you see today is an MVP of FireDBG. It may be slow and buggy, and notably, async Rust support is still a work-in-progress. But it outlines the vision I have for what a debugger could be and how it can level up our debugging experience. I still have a dozen visualization ideas: channels, mutexes, arc/rc graph, async timeline, etc.

We aim to bring FireDBG to other programming languages and engineering domains. Our ultimate goal is to improve the debugging experience for all developers!

Let's work together!

Call for stargazers

Please star our GitHub repo! Your support is vital. Even if you are not a Rust developer, let us know your thoughts too.

Call for early adopters

If you think your use case can be drastically improved by having a visual debugger, let's collaborate and shape the product together8.

Call for co-founder

I am looking for a co-founder. If you are like me, deeply passionate about developer experience and willing to commit to this idea, let's team up and embark on a journey to transform software engineering. You can find my profile on YC Co-Founder Matching9.

Acknowledgement

Many great pieces of software inspired me, but two of them are worth mentioning: Firebug and Flame Graph. Firebug changed web development forever, and is the predecessor to the awesome browser developer tools we enjoy today.

Flame Graph is the classic example of how a simple but powerful idea became ubiquitous, and it really demonstrates the magic of visualization. Good visualization enlightens our brains.

A huge thanks to the CodeLLDB project, which powers FireDBG's lldb driver. In case you're wondering, FireDBG uses both SeaORM and SeaStreamer under the hood. And of course none of this could be possible without the Rust programming language and the awesome Rust community!


  1. WinDBG, rr↩
  2. Probably because of bad pivot selection↩
  3. A deep call stack could harm performance or at least pose a risk of overflow↩
  4. Except not really, in some cases the function prologue is wrong↩
  5. A future blog post will be published to explain the intricacies↩
  6. Although normal, safe Rust programs should not have reference cycles↩
  7. Who would rather not be named↩
  8. Feel free to reach us via GitHub or email↩
  9. Please do not DM me; let's connect on the platform↩

· 8 min read
Chris Tsang

Under the hood of FireDBG

Component diagram of FireDBG

firedbg-cli

firedbg-cli drives everything. It acts as a proxy to cargo, so a cargo run command becomes firedbg run, cargo test becomes firedbg test and so on. firedbg-cli also relies on cargo to list all the executable targets shown in the "Run and Debug" tab of VS Code.

parser

firedbg-parser parses all source files in the workspace and outputs a symbol table per file. These .map files are cached, so they will be reused if the source files are unchanged since the last parse.

debugger

firedbg-debugger is the debugging engine of FireDBG. It is configured according to firedbg.toml. The debugger drives the target process via lldb and streams the breakpoint events in real-time. The output file, with the extension .firedbg.ss, follows the binary file format defined in SeaStreamer File.

It sometimes uses rustc on the host for miscellaneous things.

indexer

firedbg-indexer is a streaming indexer. It can stream events from the .firedbg.ss file and process them in real-time. It outputs a .sqlite file with the same name, using SeaORM to drive SQLite. The index enables the GUI to quickly look up the call chain and frame info.

Mode of operation

Overview

Data flow diagram of FireDBG

The basic idea of FireDBG is to automate the actions done by a user on a debugger CLI/TUI/GUI. For example, a user would usually set some breakpoints at some strategic locations and inspect all local variables every time a breakpoint is hit. FireDBG does the same! But our goal is to make each breakpoint hit as brief as possible, in order to keep the program-under-debug running in real-time. This is important because some resources like sockets and timers are time sensitive.

This mode of operation is called "galloping"1, as it only breaks on user code - library code and system calls are all skipped. In other words, the call tree we construct is not the full process call tree; it's down-sampled. In theory, we can use a VM to execute and record2 the process, then run it through FireDBG to condense the data.

The thesis is: "too much details obfuscate our understanding", and more often than not, we want to see the big picture. FireDBG lays out the call tree on a plane, so our brain can make sense of the two-dimensional space. The UX of the GUI is designed based on modern interactive maps3.

Static Analysis

We parse the Rust source files with syn, looking for all functions, methods, and trait implementation blocks. The location and span of these functions are then dumped into .map files. In the future, we're hoping to support constructing a static call graph, so as to allow the debugger to only set breakpoints at functions reachable from main (or the program entry point, whatever it is).
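
For a rough idea of what such a pass can look like (this is not firedbg-parser's actual code), here is a minimal sketch using syn's visitor API. It assumes syn's "full" and "visit" features and proc-macro2's "span-locations" feature so that spans carry real line numbers:

use syn::{visit::Visit, ItemFn};

struct FnCollector {
    functions: Vec<(String, usize)>, // (function name, start line)
}

impl<'ast> Visit<'ast> for FnCollector {
    fn visit_item_fn(&mut self, node: &'ast ItemFn) {
        let name = node.sig.ident.to_string();
        let line = node.sig.ident.span().start().line;
        self.functions.push((name, line));
        // keep walking, so nested functions are collected too
        syn::visit::visit_item_fn(self, node);
    }
}

fn collect_functions(source: &str) -> syn::Result<Vec<(String, usize)>> {
    let file = syn::parse_file(source)?;
    let mut collector = FnCollector { functions: Vec::new() };
    collector.visit_file(&file);
    Ok(collector.functions)
}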

Runtime Debugging

On startup, the symbol tables are read. After loading the target executable into memory, the debugger loads the corresponding symbol tables and sets breakpoints at those functions according to the configuration. We also set breakpoints on the panic handler and heap allocators.

The program will then be run. The debugger keeps a logical stack model for each thread. On each function call, a new frame ID will be assigned. The tuple (thread ID, frame ID, function call) uniquely identifies any point in program execution.
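
As an illustration only (these are not FireDBG's actual types), the logical stack model described above might be sketched like this:

use std::collections::HashMap;

#[derive(Debug)]
struct Frame {
    frame_id: u64,
    function: String,
}

#[derive(Default)]
struct StackModel {
    next_frame_id: u64,
    threads: HashMap<u64, Vec<Frame>>, // thread id -> call stack
}

impl StackModel {
    fn on_call(&mut self, thread_id: u64, function: &str) -> u64 {
        // every call gets a fresh frame ID; (thread_id, frame_id) pins the event
        let frame_id = self.next_frame_id;
        self.next_frame_id += 1;
        self.threads.entry(thread_id).or_default().push(Frame {
            frame_id,
            function: function.to_string(),
        });
        frame_id
    }

    fn on_return(&mut self, thread_id: u64) -> Option<Frame> {
        self.threads.get_mut(&thread_id)?.pop()
    }
}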

When a function is first called, we disassemble it. Then breakpoints are set at all ret instructions, so that whenever the function returns, the breakpoints will be hit, and a function return event is recorded. (We also cache the SBType definition of the function, with which the function return handler can salvage the return value from the registers, but this is an implementation detail). All parameters of the function are captured once the program has gone past the prologue4.

All breakpoint events happening meanwhile will be tagged with the current frame ID in the current thread.

All events will be streamed out in real-time. The format of the stream events is defined in firedbg-protocol.

Multi-threading

It is actually possible to debug multiple threads by hand using a conventional GUI debugger, you just need to know this one trick ;)

Multiple threads can be hitting the same breakpoint at the same time, so we need to inspect all the threads each time a breakpoint is hit. We look at the PC address to determine whether this thread was actually paused on a breakpoint, and if so, record the event. All threads are resumed as soon as possible.

Value Capture

Value capture is currently done via lldb's excellent SBValue / SBType API. There are some edge cases, particularly around Rust's "complex enums" aka union types. There are many hacks5 done by firedbg-debugger to capture various Rust standard types, including but not limited to Vec<u8>, &str, &dyn T, Rc/Arc, HashMap.

Return value capture is currently done by looking at the return type, guessing whether the value will be placed in registers or on the stack, and salvaging it from there. Ideally, we would query the calling convention and extract accordingly.

All values are serialized as binary blobs in native endian. There are several motivations: 1) faithfulness 2) avoiding floating point and utf-8 idiosyncrasies6 3) avoiding serialization to strings which is slow 4) smaller file size.

Event Index

The indexer reconstructs the call stack for each thread from the event stream. It then represents the call trees in SQL by self-references7. It also performs basic analysis like counting hits for each breakpoint.

The indexer deserializes the value blobs and transforms them into pretty-printed strings and JSON. They will then be queried by the GUI for display and visualization.

The SeaORM schema is defined in firedbg-indexer.

Parallelism

Parallelism in FireDBG; outer boxes: processes; inner boxes: threads

A lot of effort has been put into making FireDBG responsive. The previous diagram shows the pipeline where data is streamed in real time. This diagram gives a different perspective: how the components work in parallel.

The Debugger, Indexer and GUI are separate processes, and each uses multiple threads for stream producer and consumer. Except in node.js the streamer is a subprocess instead of a thread.

The Call Tree Renderer is incremental: nodes are added on canvas as they arrive.

Support for other languages

Our vision is to bring FireDBG to all programming languages. Some possible candidates are:

  • C++: supported by lldb; but it is difficult to distinguish user code from library code
  • Swift: supported by lldb; but need to support a lot of Apple system stuff
  • Go: delve seems very API drivable!
  • node.js: we can use the DevTools Protocol
  • Python: debugpy seems very promising
  • Your favourite language: suggestions are welcome!

When designing this architecture, we have been keeping in mind how we'd piece in other languages. Each language would have its own firedbg-xxx-debugger, outputting the same .firedbg.ss stream protocol. Primitives can more or less be shared so we can reuse the same PValue. RValue actually stands for "Rust Value", so you can assume we'd have GoValue for Go, JsValue for Javascript, etc.

We will ship multiple indexer binaries, but they will likely share the same codebase.

Each CLI will be implemented in its own language.

The GUI will all share the same codebase, but of course each language will have its own VS Code extension.

Rust probably has the most complex type system8, hopefully it will not get much more complicated than what we have already implemented.

The async programming model should be similar among languages, so we should be able to visualize them under the same model.

Pure functional languages

The call tree visualization is universal to all programming languages9. Other than that I have not thought about other parts yet.


  1. Or "tiptoeing", which is better?↩
  2. Or rr, but we don't have a gdb driver yet↩
  3. You can pan with click-and-drag (or three finger drag on macOS), scroll to zoom; we also have (x, y) coordinates: x-axis is the depth, y-axis is the breadth in the tree↩
  4. Sometimes the function prologue is wrong and FireDBG currently has some logic to guess the prologue↩
  5. Many; maybe in the next version we will abandon debug symbols altogether↩
  6. They are not interpreted on serialization; only on deserialization which is the job of the indexer↩
  7. If we have a MySQL backend, a single recursive CTE query can reconstruct the call chain of a given frame↩
  8. Probably↩
  9. We can probably make one for Lisp too↩