Week 4: WebAssembly

Welcome to Week 4 of the 10-week series on learning Rust! This week, I am going to talk about WebAssembly in the context of Rust and its use cases on the web.

If you have never heard of WebAssembly, perhaps the best way to explain it is with the description from the group that created it:

WebAssembly (abbreviated Wasm) is a binary instruction format for a stack-based virtual machine. Wasm is designed as a portable compilation target for programming languages, enabling deployment on the web for client and server applications. - source

Wow, that’s a mouthful, but also a really succinct way of describing the vast use cases and applications of wasm. Let’s break this quote down.

  • binary instruction format: As you may have guessed from the “Assembly” in its name, wasm is a compiled instruction format. Sweet, that means it can be deployed just about anywhere: supercomputer clusters, servers, browsers, Raspberry Pis, tiny microchips, etc. It also means it can run on any instruction set: ARM, RISC-V, x86, and it can even take advantage of instruction-set features such as SIMD (there is a small Rust sketch after this list).
  • for a stack-based virtual machine: Whenever I think of a VM, I think of Docker, hypervisors, and Kubernetes. However, you and I are probably forgetting the most popular VM in the world: your browser. JavaScript is not compiled ahead of time, so modern JavaScript engines use a JIT (just-in-time) compiler to turn it into machine code while a webpage runs, and that code executes inside a virtual machine for every tab you have open. Wasm, by contrast, is a compact stack-based format: instructions push and pop values on an implicit stack rather than naming registers, which makes it easy to validate and quick to translate into machine code. While JITs are extremely fast these days, nothing beats handing the engine an already-compiled format like that, which is exactly what WebAssembly gives the browser to execute.
  • enabling deployment on the web for client and server applications: There are many interesting applications for WebAssembly outside of a zippy-fast environment for the web. For example, a popular wasm runtime for cloud-native applications called WasmEdge claims to be “100x faster at start-up and 20% faster at runtime” than Linux containers. Consider what that fast startup means for serverless functions, i.e. a literal “function” of code, triggered by a webhook or API call, that receives a request and returns a response within a single body of code. You could redesign a serverless datacenter to be nothing more than a reverse proxy and a registry of executable wasm binaries. When a request routes to a certain function, the proxy simply executes that binary and returns the result.
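
To make the “portable compilation target” point concrete, here is a minimal sketch in Rust (the language this series is about) of a function exported to wasm. The function name add, the file name, and the exact compiler invocation are my own choices for illustration; any function with a C-compatible signature works the same way once the wasm32-unknown-unknown target is installed (rustup target add wasm32-unknown-unknown).

// lib.rs -- compile to a .wasm binary with, for example:
//   rustc --target wasm32-unknown-unknown --crate-type cdylib -O lib.rs
// #[no_mangle] keeps the exported symbol name stable so any host (a browser,
// a server-side runtime, an embedded device) can look it up and call it.
#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}

The resulting .wasm file is the same set of bytes whether it ends up in a browser tab or behind the kind of serverless proxy described above; only the host environment changes.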

So what does WASM look like?

The cool part is that WASM can be generated from most compiled (and JIT!) languages: C/C++, Rust, Python, Go, Java, and more! Let’s look at an example:

At WasmFiddle, we can try out an online compiler for wasm. In this case, the source language is C.

int main() { 
  return 42;
}

The compiled output, shown here as the x86-64 machine code generated from the wasm, should look familiar if you have seen assembly before.

wasm-function[0]:
  sub rsp, 8                            ; 0x000000 48 83 ec 08
  mov eax, 0x2a                         ; 0x000004 b8 2a 00 00 00
  nop                                   ; 0x000009 66 90
  add rsp, 8                            ; 0x00000b 48 83 c4 08
  ret                                   ; 0x00000f c3
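
The browser is only one possible host for a binary like this. As a rough sketch of the “server applications” side, a Rust program can embed a wasm runtime, load a module, and call its exports directly. I am using the wasmtime crate here purely as an example runtime (WasmEdge, mentioned above, is another option), and demo.wasm is a hypothetical module that exports the zero-argument main function from the C snippet.

// A sketch of embedding a wasm module in a server-side Rust host,
// assuming `wasmtime` and `anyhow` are listed as dependencies in Cargo.toml.
use wasmtime::{Engine, Instance, Module, Store};

fn main() -> anyhow::Result<()> {
    let engine = Engine::default();
    // demo.wasm is a hypothetical module exporting a `main: () -> i32` function.
    let module = Module::from_file(&engine, "demo.wasm")?;
    let mut store = Store::new(&engine, ());
    // This simple module needs no imports, so the import list is empty.
    let instance = Instance::new(&mut store, &module, &[])?;
    let answer = instance.get_typed_func::<(), i32>(&mut store, "main")?;
    println!("wasm returned {}", answer.call(&mut store, ())?); // prints 42
    Ok(())
}

Starting one of these instances is a matter of loading a module and calling a function, which is the property the WasmEdge start-up numbers above are getting at.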

The main reasons for using WebAssembly are performance and portability. However, to be portable (and also practical), WebAssembly must be able to reach native functionality on whatever platform it is deployed to. In my use case, for example, I needed to create a REST API interface inside of a client that would be compiled to wasm.

This is where WASI comes in. WASI stands for WebAssembly System Interface, and it bridges a wasm binary to non-wasm functionality such as threads, networking, and filesystems. I was impressed that most HTTP libraries in the Rust ecosystem support wasm through such an interface (in my case, the bridge was the browser’s JavaScript fetch API). However, this system is not without its limitations, which I will discuss in more detail in Week 9.
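
To give a flavor of what that looks like in practice, here is a hedged sketch of making an HTTP request from Rust code built for a wasm target. I am using reqwest as the example client because its wasm32 build delegates to the browser’s fetch API; the function name and the shape of the call are my own for illustration, not necessarily the exact library or code from my project.

// A minimal sketch, assuming the crate is compiled for a wasm32 browser target
// and that reqwest is a dependency in Cargo.toml.
pub async fn fetch_status(url: &str) -> Result<u16, reqwest::Error> {
    // On wasm32, reqwest performs this request through the browser's fetch().
    let response = reqwest::get(url).await?;
    Ok(response.status().as_u16())
}

The calling code looks the same as it would in a native build; it is the layer underneath (fetch in the browser, OS sockets natively) that gets swapped out, and the gaps in that layer are exactly the limitations Week 9 will cover.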