Speed Up Your JavaScript with Rust

For a recent personal project, I needed a fairly simple Node.js server that also had to run some heavy, computationally expensive tasks. I could have switched the entire tech stack, but I estimated that the development time of such a choice wasn’t worth it… still, I had some functions taking ages to compute. So I had a look around and decided to hand that work to a more appropriate language, in this case Rust. This let me dedicate the right task to the right language: my otherwise simple server kept handling routes and calls, which Node does with ease, while Rust took care of all the tedious computation.

What is Rust?

Rust is a low-level, safety-focused language designed at Mozilla, and it has topped the Stack Overflow Developer Survey as the most loved programming language for four years in a row.

It runs blazingly fast compared to most other languages, with performance generally in the same league as C.

It’s neither a functional nor an object-oriented language, and its syntax is close to C++’s.

Its npm equivalent is called Cargo, and its packages are called crates.

How Do You Mix Rust with Node.js?

Fortunately for me, I wasn’t the first person who wished to mix Rust with Node.js. The problem has already been solved by far more talented people, through what is called a Foreign Function Interface (FFI) and dynamic libraries (the files ending in .dylib, for example). An FFI allows a program running in one language (the host language) to call functions written in another (the guest language), just as if they came from a host-language library. And inside those guest-language functions, you have access to any useful third-party library of the guest language!

So How Do We Write It?

Let’s get started with the basics then. First we will need Rust and Cargo:

curl https://sh.rustup.rs -sSf | sh

Once we’re done, we can create a new project, in this case a library:

cargo new --lib <libraryname>

This sets up a new directory with a src folder and a Cargo.toml (the equivalent of package.json).
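The generated layout looks like this:

<libraryname>/
├── Cargo.toml
└── src/
    └── lib.rs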

Now that we’re set up, let’s write our code:

For this example, in order to keep it simple yet explicit, we’ll just create a recursive Fibonacci number function. It is very simple, commonly used for benchmarking (it runs in O(2^n)), and it leans heavily on recursion, which is a weak point for JavaScript (all the more on the front end, where very few browsers implement the proper tail calls specified in ES2015).

So let’s open src/lib.rs and write our function:

// Plain Rust version, as we would write it for use within the same Rust program:
fn fibonacci(n: u64) -> u64 {
    if n <= 2 { return 1 }
    return fibonacci(n - 1) + fibonacci(n - 2);
}

// Exported version, adapted for use from a dynamic library:
#[no_mangle]
pub extern "C" fn fibonacci(n: u64) -> u64 {
    if n <= 2 { return 1 }
    return fibonacci(n - 1) + fibonacci(n - 2);
}

The first function is how we would write it if it were destined for the same Rust program. However, since we are building a dynamic library, we need to make a few changes, which I’ll now review:

#[no_mangle]

This line is an attribute: it gives the compiler instructions to modify the program at compile time. In this case, it prevents the compiler from changing the name of the function through name mangling. In short, name mangling is the way your compiler renames functions to make sure each call resolves to the correct one (differentiating List.get() from Array.get(), for example). The output often looks like _Z4xfunction1HgF7jffibonacci. But that would be the name we’d have to call from Node, so we want to keep it simple.

Then we have pub. This means the function is publicly available, so we can call it from outside this module.

Finally, the extern "C". This indicates that we are using the C ABI (Application Binary Interface). In our case we could shorten it to a plain extern, since extern defaults to the C ABI; the string is there to target other foreign calling conventions, such as those of the Windows API.

We can then compile it and try it within a small Node app. We’ll build with the --release flag, as we want Rust to optimise the binary (without this flag, Cargo builds in debug mode, which can be surprisingly slow).

cargo build --release

This will create the dynamic library in ./target/release/ (a lib<libraryname>.dylib file on macOS; .so on Linux, .dll on Windows).

In a Node.js app, let’s try to call our function. For this, we will need node-ffi:

npm i ffi

In our code, let’s import our dynamic library. The result is a variable, much like what a require() gives us.

After giving the path to the library, we also need to list the functions we wish to import from it, specifying their return type and the types of the parameters they take.

var lib = ffi.Library(path.join(__dirname, './target/release/libdemo-rust-node.dylib'), {
  fibonacci: ['int', ['int']]
});

A function like pow, taking a double and an integer and returning a double, would be imported like this:

pow: [ 'double', [ 'double', 'int' ] ]
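For instance, here is a minimal sketch of declaring and calling such a function (the library libmymath.dylib and its pow export are hypothetical):

var ffi = require('ffi');

// Hypothetical library exporting pow(double, int) -> double
var mathlib = ffi.Library('./libmymath.dylib', {
  pow: ['double', ['double', 'int']]
});

console.log(mathlib.pow(2.5, 3)); // 15.625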

We’ll declare an equivalent function in JS and call both of them with console.time to benchmark them:

var ffi = require('ffi');
var path = require('path');

var lib = ffi.Library(path.join(__dirname, './target/release/libdemo-rust-node.dylib'), {
  fibonacci: ['int', ['int']]
});

// Equivalent pure-JavaScript implementation, for comparison
function fibonacci(n) {
  if (n <= 2) {
    return 1;
  }
  return fibonacci(n - 1) + fibonacci(n - 2);
}

console.time();
var rustFibonacci = lib.fibonacci(30);
console.timeEnd();

console.time();
var nodeFibonacci = fibonacci(30);
console.timeEnd();

console.log(rustFibonacci, nodeFibonacci);

Let’s run it:

user$ node index.js 
default: 2.850ms
default: 10.805ms
832040 832040

As we can see, both returned the same result, but with a noticeable difference in computing time. Keep in mind, however, that this microbenchmark does not account for the loading time of the library.
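To get a feel for that loading cost, you can time the ffi.Library call itself; a minimal sketch reusing the setup above (the 'library load' label is only for readability):

var ffi = require('ffi');
var path = require('path');

// Time how long it takes just to open the dynamic library and bind
// its symbols -- a one-off cost the benchmark above does not include.
console.time('library load');
var lib = ffi.Library(path.join(__dirname, './target/release/libdemo-rust-node.dylib'), {
  fibonacci: ['int', ['int']]
});
console.timeEnd('library load');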

Restrictions

There are still some restrictions on using FFIs. First, keep in mind that an FFI call is very costly, as stated in the readme of node-ffi, for example:

There is non-trivial overhead associated with FFI calls. Comparing a hard-coded binding version of strtoul() to an FFI version of strtoul() shows that the native hard-coded binding is orders of magnitude faster. So don't just use the C version of a function just because it's faster. There's a significant cost in FFI calls, so make them worth it.

Loading a dynamic library also comes at a cost, so you might not reach the performance you expect. If you are after low-level, optimised code, the best you can do is load the library only once for many uses; otherwise, a C extension would be the better option.
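One simple way to load it only once in Node is to put the binding in its own module, since require() caches modules; a minimal sketch (the file name fib-lib.js is hypothetical):

// fib-lib.js -- open the dynamic library once; Node's module cache
// guarantees every require('./fib-lib') reuses the same handle.
var ffi = require('ffi');
var path = require('path');

module.exports = ffi.Library(
  path.join(__dirname, './target/release/libdemo-rust-node.dylib'),
  { fibonacci: ['int', ['int']] }
);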

Note: since this example is trivial, it only uses simple integer types in its function signatures. If you want to work with JavaScript objects and types directly from Rust, have a look at Neon.

More importantly, you can’t make code running in a browser handle such calls…

How About Front-end Then?

You might have heard about WebAssembly (wasm). It is a binary format for a stack-based virtual machine, designed to run C/C++, Rust, and other fast languages at near-native speed alongside JavaScript. In effect, and with very little setup, it implements what we did previously at a higher level, using cross-language standards.

Rust treats the WebAssembly target as a core part of its toolchain. You can write and publish your npm module in Rust, then install it and run it through most module bundlers, though the popular webpack is the best documented of them all. Let’s take a quick tour of how to redo the previous example:

First we install wasm-pack, in order to compile our code and produce an npm package from it.

$ cargo install wasm-pack

Also, in order to publish our package, we’ll assume you have an npm account already set up. Then, let’s create the project:

$ cargo new --lib wasm-fibo
Created library `wasm-fibo` project

In the Cargo.toml, we need to add a few things:

[lib]
# A C-compatible dynamic library: the crate type wasm-pack expects
crate-type = ["cdylib"]

[dependencies]
wasm-bindgen = "0.2"

In the newly generated src/lib.rs, let’s write our function:

extern crate wasm_bindgen;
use wasm_bindgen::prelude::*;

#[wasm_bindgen]
pub fn fibonacci(n: i32) -> i32 {
    // Same base case as the FFI version, so both return fib(30) = 832040
    if n <= 2 { 1 } else { fibonacci(n - 1) + fibonacci(n - 2) }
}

We are using wasm_bindgen, a bridge between Rust and JavaScript. It takes care of the mangling problem, but we still need to mark our function as publicly available with the pub keyword in front.

Let’s build our package now:

$ wasm-pack build [--scope <mynpmusername>] --release

That creates a pkg directory at the root, with a number of files in it. Together they describe everything a consumer needs to know about your package: which functions it exports, which arguments they require, and which types they return.

Now, let’s publish it and use it:

$ cd ./pkg && npm publish

Now, in our webpack application, we’ll just have to install it through npm:

$ npm i [@<mynpmusername>/]wasm-fibo

(If you published it under your username’s scope, you will need to carry the scope in all imports.)

Done! Now we can use it like any other npm package, here with ES6 syntax:

import { fibonacci } from "wasm-fibo";
console.log('This is the wasm result: ', fibonacci(23));
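Note that some bundler setups require WebAssembly modules to be loaded asynchronously. In that case, a dynamic import does the job; a minimal sketch of the same call:

// Dynamic import: the wasm module is fetched and instantiated
// asynchronously, then exposes the same exports.
import("wasm-fibo").then(wasm => {
  console.log('This is the wasm result: ', wasm.fibonacci(23));
});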

To Conclude

FFIs and WebAssembly are two practical solutions for faster processing without a huge cost in development time and comfort: you keep developing in a higher-level language, the right tool handles the right work, and you gain access to libraries that don’t exist in your host language. Between the two, the nuances can be subtle.

In short, WebAssembly is platform agnostic: it runs in browsers, on servers, inside PHP, anywhere. A WebAssembly module exports symbols, such as functions or memories, and you can call those functions from the outside. Beyond the wider range of environments, a wasm module can also interact easily with JavaScript features such as the console or alert(), but it has limitations of its own, including those of the bundler and, on the front end, of the browser you run in. Most of the time, if not carefully designed, the runtime performance gain for a single call is very small, and not as fast as an FFI call (loading aside).

In both cases, there is a price to pay for calling external functions. A single call to a very “heavy” function will often be worth it over an FFI, whereas the WebAssembly solution starts to pay off after at least a moderate number of calls.
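To make that trade-off concrete, here is a hypothetical micro-benchmark sketch reusing the lib FFI binding from earlier: one heavy call amortises the boundary cost, many light calls do not.

// One heavy call: almost all the time is spent inside Rust,
// so the FFI overhead is negligible.
console.time('one heavy call');
lib.fibonacci(40);
console.timeEnd('one heavy call');

// Many light calls: the boundary crossing dominates the cost.
console.time('many light calls');
for (var i = 0; i < 100000; i++) lib.fibonacci(1);
console.timeEnd('many light calls');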
