<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[bikes and bytes]]></title><description><![CDATA[Thoughts, stories and ideas.]]></description><link>https://siwiec.us/blog/</link><image><url>https://siwiec.us/blog/favicon.png</url><title>bikes and bytes</title><link>https://siwiec.us/blog/</link></image><generator>Ghost 5.39</generator><lastBuildDate>Thu, 30 Apr 2026 04:56:25 GMT</lastBuildDate><atom:link href="https://siwiec.us/blog/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Week 10: Cheers to Rust!]]></title><description><![CDATA[<p>Thanks for reading through this rollercoaster series documenting my learning process with Rust!</p><p>I have to admit that Rust is now one of my favorite languages, especially as I was learning it while simultaneously taking CS 212, rated the <a href="https://www.quora.com/What-is-the-most-difficult-computer-science-course-at-Stanford?ref=siwiec.us">most challenging CS class at Stanford</a>, where I had to implement</p>]]></description><link>https://siwiec.us/blog/week-10-rust-summary/</link><guid isPermaLink="false">6419988cb543a30b0acaa938</guid><dc:creator><![CDATA[Adam Siwiec]]></dc:creator><pubDate>Tue, 21 Mar 2023 11:44:45 GMT</pubDate><content:encoded><![CDATA[<p>Thanks for reading through this rollercoaster series documenting my learning process with Rust!</p><p>I have to admit that Rust is now one of my favorite languages, especially as I was learning it while simultaneously taking CS 212, rated the <a href="https://www.quora.com/What-is-the-most-difficult-computer-science-course-at-Stanford?ref=siwiec.us">most challenging CS class at Stanford</a>, where I had to implement a Unix kernel (threads, syscalls, virtual memory, file system, cache, etc.) in C. 
While it was a really challenging project that I should write about at another point, there were many instances where I could compile code that, to a certain extent, looked like this:</p><pre><code class="language-c">struct thread* main_thread = malloc(sizeof(struct thread));
// set up the thread!
char* name = main_thread-&gt;name;
free(main_thread);

// time passes
printf(&quot;thread %s&quot;, name);
</code></pre><p>Ouch. Kernel panic.</p><p>Every time I debugged a simple semantic mistake like this, I would ask myself why the compiler would let me do this to myself, simultaneously muttering:</p><blockquote>&#x201C;Rust would never let me do that.&#x201D;</blockquote><p>The Rust compiler became the invisible mother who would scold me every morning for not making my bed. It made me think about the code I was writing before I even hit the build button; I knew I would get hit by the proverbial red-error flip-flop.</p><p>The paradigm of ultra-strictness and genuine helpfulness that the compiler offers has made me grateful for the deep insights that modern compilers give to developers, and it makes me genuinely look over each warning (both in C and in Rust).</p><p>I also underestimated how beneficial a type system can be, not just for memory safety but also for functionality. For example, the serde package quickly allowed for the serialization/deserialization of most structs I made by simply tagging the definition with a <code>#[derive(Default, Deserialize, Serialize)]</code> macro. This enabled me to parse HTTP requests with ease, needing only a few lines of simple, readable code. On the other hand, serialization and deserialization have no obvious solution in Python or JavaScript; everything is just <strong>some</strong> object, and it&#x2019;s up to you as a developer to ensure exactly what object that is. Of course, there are ways to serialize and deserialize in all languages, but in Rust the solution is obvious, practical, and simple.</p><p>I hope to keep seeing the Rust community grow and hope that the growth of the language keeps pushing the frontier of development (especially for <code>async/await</code>). I really hope I can do more web development with Rust in the future, building robust micro-services that can perform well with limited resources and scale to infinity. 
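</p><p>As a footnote, here is what that C bug from the top of this post looks like when attempted in safe Rust (a minimal, hypothetical sketch, not code from the actual kernel project):</p>

```rust
// The same use-after-free pattern as the C snippet, attempted in safe Rust.
// Ownership rules make the bug unrepresentable: once `name` is moved out of
// `thread`, the compiler refuses any further use of the moved-out field.
struct Thread {
    name: String,
}

fn main() {
    let thread = Thread { name: String::from("worker-1") };
    // Move the name out *before* the struct goes away.
    let name = thread.name;
    // drop(thread); // error[E0382]: use of partially moved value: `thread`
    println!("thread {}", name);
    assert_eq!(name, "worker-1");
}
```

<p>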
Cheers to Rust in 2023!</p>]]></content:encoded></item><item><title><![CDATA[Week 9: Limitations]]></title><description><![CDATA[<p>Welcome to Week 9, the penultimate entry in this series (and normally the &#x201C;calm before the storm&#x201D; week at Stanford). In this article, I thought I would discuss some general gripes I had with Rust as well as specific limitations that I encountered.</p><h1 id="sometimes-bleeding-edge-really-does-mean-bleeding">Sometimes bleeding edge really does</h1>]]></description><link>https://siwiec.us/blog/week-9-limitations/</link><guid isPermaLink="false">64198f36b543a30b0acaa925</guid><dc:creator><![CDATA[Adam Siwiec]]></dc:creator><pubDate>Tue, 21 Mar 2023 11:05:15 GMT</pubDate><content:encoded><![CDATA[<p>Welcome to Week 9, the penultimate entry in this series (and normally the &#x201C;calm before the storm&#x201D; week at Stanford). In this article, I thought I would discuss some general gripes I had with Rust as well as specific limitations that I encountered.</p><h1 id="sometimes-bleeding-edge-really-does-mean-bleeding">Sometimes bleeding edge really does mean bleeding</h1><p>The biggest issue I encountered with Rust is in the async department. One of the primary goals of my development this quarter was building a way to separate the frontend GUI from the backend data source providing the logging details and metadata. This would allow a server with a larger memory pool to be able to load a multi-gigabyte log into memory for quick access but leave the frontend with a smaller memory footprint, small enough to be quickly loaded as a wasm web-app. This meant creating a datasource that was able to make HTTP requests to fetch data. Easy enough, right? 
Rust has plenty of production-ready networking libraries like <a href="https://actix.rs/?ref=siwiec.us">actix</a> (our server framework), <a href="https://github.com/seanmonstar/reqwest?ref=siwiec.us">reqwest</a> (our initial frontend HTTP client), <a href="https://tokio.rs/?ref=siwiec.us">tokio</a>, <a href="https://hyper.rs/?ref=siwiec.us">hyper</a>, and the low-level <code>http</code> crate.</p><p>I quickly scaffolded out a prototype implementation of our <em>DataSource</em> <strong>trait</strong> (keep this word in mind) that would simply make an HTTP request and then deserialize the response into our data types using the <a href="https://serde.rs/?ref=siwiec.us">serde</a> package. We utilized the <strong>blocking</strong> feature of reqwest to create a synchronous HTTP request. This should not be used for a real-world workload, for one big reason:</p><blockquote>Since we only have a single thread in our application, any blocking requests we make will stall all other code from running, including the renderer that is responsible for generating the next frame.</blockquote><p>Users with a slow internet connection would experience multi-second lag between each frame that made a request, and there are many requests! &#x274C; Unacceptable solution &#x274C;</p><p>Because I only tested the client/server dimension of our app through <a href="http://localhost/?ref=siwiec.us">localhost</a>, which essentially provides a zero-latency, infinite-bandwidth fabric between services, I experienced frame renders that I couldn&#x2019;t distinguish from the data-native version of the app.</p><p>Given this situation, it made sense to try to implement an async approach to data loading. 
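</p><p>For context, the blocking prototype described above can be reduced to a sketch like this (the trait and types are simplified and hypothetical, and the network round-trip is simulated by a sleep rather than a real reqwest call):</p>

```rust
use std::thread::sleep;
use std::time::Duration;

// Simplified stand-in for our DataSource trait.
trait DataSource {
    // Note: declaring this as `async fn` is exactly what triggers
    // `error[E0706]: trait fns cannot be declared async` on stable Rust.
    fn fetch_summary(&self) -> String;
}

struct HttpSource;

impl DataSource for HttpSource {
    fn fetch_summary(&self) -> String {
        // Simulate a blocking HTTP round-trip; on a slow connection this
        // stalls the entire render loop, since everything shares one thread.
        sleep(Duration::from_millis(50));
        String::from("{\"tasks\": 100000}")
    }
}

fn main() {
    let source = HttpSource;
    // Each frame that needs data pays the full round-trip latency.
    let summary = source.fetch_summary();
    assert_eq!(summary, "{\"tasks\": 100000}");
}
```

<p>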
Sure enough, all the HTTP libraries already use the async approach by default (blocking was a feature I added), so it made sense for a datasource to produce a Rust <code>Future</code>, which is similar to a <code>Promise</code> in JavaScript. However, as soon as I tried to declare an async trait function, I got a blaring error from the compiler: <code>error[E0706]: trait fns cannot be declared async</code></p><p>The official docs confirmed this:</p><blockquote>Currently, <code>async fn</code> cannot be used in traits on the stable release of Rust.</blockquote><p>Ouch. I quickly looked for workarounds, but there were only two options:</p><ul><li>Use a hacky external dependency with a macro, <code>#[async_trait]</code>, to apply a wrapper to the trait functions.</li><li>Upgrade to the <code>nightly</code> version of the Rust toolchain, which had just added (November 2022) support for async functions in traits.</li></ul><p>The second option seems good, right? Async is relatively new for Rust, so it would make sense that the core developers are still working on supporting everything. However, there was one major caveat: if we applied this new toolchain and its new <code>#![feature(async_fn_in_trait)]</code> flag, we wouldn&#x2019;t be able to create a <code>dyn DataSource</code> object, which is how we tell the compiler to dispatch dynamically to a trait implementation whose concrete type (and size) is not known at compile time. To be able to apply <code>dyn</code>, a trait must be &#x201C;object-safe&#x201D;, but the new feature voided the object safety that our app needed to function. It seems like the bleeding edge of Rust killed this approach, for now.</p><h1 id="wasm-troubles">Wasm Troubles</h1><p>You might ask:</p><blockquote>&#x201C;Well, synchronous isn&#x2019;t that bad, right? 
It still WORKS!&#x201D;</blockquote><p>Sure, but I ran into an immediate issue when trying to compile prof-viewer with WebAssembly as the target platform.</p><p>When compiling, I got vague errors about HTTP libraries being missing or unsupported, and after doing some quick research, I immediately saw why: many Rust libraries only support HTTP on WebAssembly through bindings into the browser. This is because, in the browser, HTTP is performed through JavaScript&#x2019;s native <code>fetch</code> interface. If you recall, <code>fetch</code> is an asynchronous function, and therefore it is also an async function when bound into Rust. Ouch, I am back at the same problem I walked through above. Well, maybe there is a way I could spin up a separate thread to handle the async request, and then apply the returned response independently of the synchronous render thread. The issue: threads are complicated or non-existent in WebAssembly as well. The traditional idea of a thread does not exist at all there. In JavaScript, the closest analog of a thread is a &#x201C;web worker&#x201D;, which enables asynchronous execution of code, independent from the main JS thread. Web workers are reachable from WebAssembly, but they provide an extremely complicated and low-level abstraction for developers trying to integrate them into their WASM-compatible codebase.</p><p>For example, the simplest web-worker WASM example I could find (source: <a href="https://www.tweag.io/blog/2022-11-24-wasm-threads-and-messages/?ref=siwiec.us">https://www.tweag.io/blog/2022-11-24-wasm-threads-and-messages/</a>) is 4 months old (this is also the bleeding edge for WASM):</p><pre><code class="language-rust">// A function imitating `std::thread::spawn`.
pub fn spawn(f: impl FnOnce() + Send + &apos;static) -&gt; Result&lt;web_sys::Worker, JsValue&gt; {
  let worker = web_sys::Worker::new(&quot;./worker.js&quot;)?;
  // Double-boxing because `dyn FnOnce` is unsized and so `Box&lt;dyn FnOnce()&gt;` is a fat pointer.
  // But `Box&lt;Box&lt;dyn FnOnce()&gt;&gt;` is just a plain pointer, and since wasm has 32-bit pointers,
  // we can cast it to a `u32` and back.
  let ptr = Box::into_raw(Box::new(Box::new(f) as Box&lt;dyn FnOnce()&gt;));
  let msg = js_sys::Array::new();
  // Send the worker a reference to our memory chunk, so it can initialize a wasm module
  // using the same memory.
  msg.push(&amp;wasm_bindgen::memory());
  // Also send the worker the address of the closure we want to execute.
  msg.push(&amp;JsValue::from(ptr as u32));
  worker.post_message(&amp;msg)?;
  Ok(worker)
}

#[wasm_bindgen]
// This function is here for `worker.js` to call.
pub fn worker_entry_point(addr: u32) {
  // Interpret the address we were given as a pointer to a closure to call.
  let closure = unsafe { Box::from_raw(addr as *mut Box&lt;dyn FnOnce()&gt;) };
  (*closure)();
}
</code></pre><p>Followed by equivalently opaque, polyfill-esque JavaScript code to import the WASM binding:</p><pre><code class="language-javascript">importScripts(&quot;./path/to/wasm_bindgen/module.js&quot;);
self.onmessage = async event =&gt; {
  // event.data[0] should be the Memory object, and event.data[1] is the value to pass into worker_entry_point
  const { worker_entry_point } = await wasm_bindgen(
    &quot;./path/to/wasm_bindgen/module_bg.wasm&quot;,
    event.data[0]
  );
  worker_entry_point(Number(event.data[1]));
} 
</code></pre><p>All of this, only to imitate the native version of <code>channels</code> in Rust (the traditional way of passing messages between threads):</p><pre><code class="language-rust">let (to_worker, from_main) = std::sync::mpsc::channel();
let (to_main, from_worker) = std::sync::mpsc::channel();
spawn(move || { to_main.send(from_main.recv().unwrap() + 1.0); });
to_worker.send(1.0);
assert_eq!(from_worker.recv().unwrap(), 2.0);
</code></pre><p>Yikes! Hundreds of lines of low-level code simply to share a number between JavaScript and Rust asynchronously. The two kickers? You can&#x2019;t code yourself out of this situation. First, you must compile your Rust project with several extra flags:</p><pre><code class="language-bash">RUSTFLAGS=&quot;-C target-feature=+atomics,+bulk-memory,+mutable-globals&quot; cargo build --target=wasm32-unknown-unknown --release -Z build-std=panic_abort,std
</code></pre><p>Then, you must</p><blockquote>configure your web server with some <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/SharedArrayBuffer?ref=siwiec.us#security_requirements">special headers</a>, because shared WASM memory builds on <code><a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/SharedArrayBuffer?ref=siwiec.us">SharedArrayBuffer</a></code></blockquote><p>This train of requirements doesn&#x2019;t seem practical for your average Rust developer to have to juggle on top of the many other components of their stack they have to understand.</p><p>While these were my main roadblocks when working on this project, I also hit many smaller speed bumps with the Rust compiler. For example:</p><p>Some expressions like <code>some_func(another_funcs_return())</code> would give errors, saying that the value returned from <code>another_funcs_return</code> would go out of scope before the function returned. I would have to move this code into something like</p><pre><code class="language-rust">let val = another_funcs_return();
some_func(val);
</code></pre><p>to satiate the compiler. While I am sure the compiler has a valid reason for this that is beyond my knowledge, it seems odd that the compiler forces this level of verbosity on developers.</p><p>I also ran into issues with lifetimes and mutexes. This is because the idea of a pointer is more opaque in Rust, as everything in Rust is abstracted into references and lifetimes. In this way, I wasn&#x2019;t able to easily pass data between functions in different threads as I would have liked to.</p><p>Again, there is probably a good reason the compiler was complaining, but the solution as a beginner Rust developer was not obvious and seemed like a roadblock at times.</p><p>My last gripe has to do with the disk space required for development. I was extremely happy to hear about Yarn&apos;s Plug N&apos; Play functionality that eliminates the need for <code>node_modules/</code>, because that would eliminate a large chunk of repeated downloads and files for each NodeJS project I worked on. Cargo uses a structure similar to the Plug N&apos; Play feature, creating a global environment cache for packages. This keeps directories clean and backups simple; however, I noticed that debug builds with Rust made node_modules look TINY.</p><figure class="kg-card kg-image-card"><img src="https://siwiec.us/blog/content/images/2023/03/SCR-20230321-cuud.png" class="kg-image" alt loading="lazy" width="936" height="1028"></figure><p>A debug build would take about 1.5 GB for a mid-size project, and a release build would be about 700 MB. If I were working on multiple projects simultaneously, my already filled-to-the-brim 256GB M2 MacBook Air would be complaining constantly. To solve this, you can easily run a <code>cargo clean</code> in each project. Or just get more storage (I am too cheap to pay $200 for 256 more gigs of storage). 
I wonder what the breakdown of those files is, but that&#x2019;s a topic for another post.</p>]]></content:encoded></item><item><title><![CDATA[Week 8: Rust Performance]]></title><description><![CDATA[<p>When working with Rust during my 10-week research project, I was more than impressed with Rust in the performance department. From compilation time to run-time, Rust just seemed to keep up.</p><h2 id="garbage-collection">Garbage Collection</h2><p>Because of Rust&#x2019;s guaranteed memory safety, ownership model, and robust checks from the compiler, the</p>]]></description><link>https://siwiec.us/blog/week-8-rust-performance/</link><guid isPermaLink="false">641980ecb543a30b0acaa90e</guid><dc:creator><![CDATA[Adam Siwiec]]></dc:creator><pubDate>Tue, 21 Mar 2023 10:06:04 GMT</pubDate><content:encoded><![CDATA[<p>When working with Rust during my 10-week research project, I was more than impressed with Rust in the performance department. From compilation time to run-time, Rust just seemed to keep up.</p><h2 id="garbage-collection">Garbage Collection</h2><p>Because of Rust&#x2019;s guaranteed memory safety, ownership model, and robust checks from the compiler, the language has no need for a garbage collector.</p><p>A garbage collector is a program that automatically frees up memory that is no longer being used by a program. It identifies and removes objects that are no longer needed, allowing the program to reclaim memory that can be used for other purposes. Garbage collection is used in Java and Python to manage memory automatically, but it can come with performance costs; every so often, the program must be paused to reclaim memory that is not in use. Especially in a graphics context, that small delay may be unacceptable. If we are rendering frames to create a smooth view for the user and a garbage collector decides to pause even for a few milliseconds, our app may hitch around, which is obnoxious. 
Even if the garbage collector is in a different thread, this adds more overhead to the consumer&#x2019;s CPU and uses more energy.</p><h2 id="search">Search</h2><p>In the context of prof-viewer, an expensive computation that I was afraid would cause bottlenecks was search.</p><p>Every time a user adds a character to the search box, I have to recompute the entire search state of the app. While it would be acceptable to have a small delay in the search and show a small loader/skeleton while the computation and search were being performed, this wouldn&#x2019;t be as simple to create as it would be in an event-driven model. I wanted to see what Rust was capable of, so I decided to shoot for a 60+ FPS target even while searching through 100,000+ elements in state. That&#x2019;s 6 million string searches a second! I ended up implementing search using the <a href="https://en.wikipedia.org/wiki/Aho%E2%80%93Corasick_algorithm?ref=siwiec.us">Aho-Corasick</a> algorithm to build a finite-state machine based on the patterns that we wanted to match (i.e. each word the user was searching for).</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://siwiec.us/blog/content/images/2023/03/aho.png" class="kg-image" alt loading="lazy" width="440" height="621"><figcaption>State machine matching different versions of an &quot;abc&quot; string</figcaption></figure><p>This state machine would then be applied to the metadata strings of each task, counting the number of matches in the state machine. This allows us to run each title search in O(n) time. With this algorithm, Rust was able to provide a near-instantaneous search over an entire profile with a large number of tasks on a single thread, all while rendering the GUI too. 
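</p><p>To make the approach concrete, here is a minimal, std-only sketch of an Aho-Corasick automaton that counts pattern matches (illustrative only, not our production implementation):</p>

```rust
use std::collections::{HashMap, VecDeque};

// Minimal Aho-Corasick automaton: a trie over the patterns plus
// failure links, so the text is scanned in a single O(n) pass.
struct Matcher {
    next: Vec<HashMap<char, usize>>, // trie transitions per state
    fail: Vec<usize>,                // failure links (longest proper suffix state)
    out: Vec<usize>,                 // number of patterns ending at each state
}

impl Matcher {
    fn new(patterns: &[&str]) -> Self {
        let mut next = vec![HashMap::new()];
        let mut out = vec![0usize];
        // 1. Build the trie of patterns.
        for pat in patterns {
            let mut state = 0;
            for ch in pat.chars() {
                if !next[state].contains_key(&ch) {
                    let s = next.len();
                    next.push(HashMap::new());
                    out.push(0);
                    next[state].insert(ch, s);
                }
                state = next[state][&ch];
            }
            out[state] += 1;
        }
        // 2. Breadth-first pass to compute failure links.
        let mut fail = vec![0usize; next.len()];
        let mut queue: VecDeque<usize> = next[0].values().copied().collect();
        while let Some(u) = queue.pop_front() {
            for (&ch, &v) in next[u].iter() {
                queue.push_back(v);
                // Walk u's failure chain to find the longest suffix with a ch-edge.
                let mut f = fail[u];
                fail[v] = loop {
                    if let Some(&t) = next[f].get(&ch) {
                        if t != v { break t; }
                    }
                    if f == 0 { break 0; }
                    f = fail[f];
                };
                // Matches ending at the suffix state also end here.
                out[v] += out[fail[v]];
            }
        }
        Matcher { next, fail, out }
    }

    // Count all occurrences of all patterns in `text` in one pass.
    fn count_matches(&self, text: &str) -> usize {
        let mut state = 0;
        let mut count = 0;
        for ch in text.chars() {
            loop {
                if let Some(&s) = self.next[state].get(&ch) {
                    state = s;
                    break;
                }
                if state == 0 { break; }
                state = self.fail[state];
            }
            count += self.out[state];
        }
        count
    }
}

fn main() {
    let m = Matcher::new(&["he", "she", "his", "hers"]);
    // "ushers" contains "she", "he", and "hers".
    assert_eq!(m.count_matches("ushers"), 3);
}
```

<p>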
Mighty impressive!</p>]]></content:encoded></item><item><title><![CDATA[Week 7: Rust Tooling]]></title><description><![CDATA[<p>This might be one of my favorite articles to write because I immediately fell in love with the Rust tooling ecosystem.</p><h3 id="tldr">TL;DR</h3><p>Imagine if <code>yarn</code>, <code>make</code>, and a cousin of <code>package.json</code> merged into one behemoth of a stack. Welcome to <code>cargo</code>.</p><p>This article is framed as a beginner</p>]]></description><link>https://siwiec.us/blog/week-7-rust-tooling/</link><guid isPermaLink="false">64196160b543a30b0acaa8ec</guid><dc:creator><![CDATA[Adam Siwiec]]></dc:creator><pubDate>Tue, 21 Mar 2023 09:36:38 GMT</pubDate><content:encoded><![CDATA[<p>This might be one of my favorite articles to write because I immediately fell in love with the Rust tooling ecosystem.</p><h3 id="tldr">TL;DR</h3><p>Imagine if <code>yarn</code>, <code>make</code>, and a cousin of <code>package.json</code> merged into one behemoth of a stack. Welcome to <code>cargo</code>.</p><p>This article is framed as a beginner&#x2019;s guide to setting up a basic Rust project, as this process will expose most users to the most prominent and useful features of the Rust ecosystem.</p><h2 id="toolchain">Toolchain</h2><p>First and foremost is the Rust toolchain, <code>rustup</code>. I mentioned this briefly in Week 3, but to add the toolchain to your computer, you can use a command like <code>brew install rustup-init</code> to install and set up the default toolchain.</p><h2 id="build-tools">Build Tools</h2><p>With rustup comes <code>rustc</code> and <code>cargo</code>. The other language equivalents would be <code>python</code> and <code>pip</code> for Python, and <code>node</code> and <code>npm</code> for NodeJS (though neither is a compiled language like Rust). <code>rustc</code> is the native compiler for Rust and, simply put, turns code into an executable. 
You can pass flags to enable optimizations and debugging symbols, just like in other compilers. You will not need to invoke <code>rustc</code> manually in most cases, as this is the job of <code>cargo</code>! Cargo is the native package manager and build tool for Rust. A key component of the Rust ecosystem is the package, called a &#x201C;crate&#x201D;, which represents a project written in Rust that can be compiled. Every crate has a single <code>Cargo.toml</code>, which is the equivalent of a <code>package.json</code> in a JavaScript project. Here is an example of a <code>Cargo.toml</code>:</p><pre><code class="language-toml">[package]
name = &quot;legion_prof&quot;
version = &quot;0.1.0&quot;
edition = &quot;2021&quot;

[dependencies]
actix-web = &quot;4&quot;
clap = &quot;2.33&quot;
csv = &quot;1.1&quot;
derive_more = { version = &quot;0.99&quot;, default-features = false, features = [&quot;add&quot;, &quot;display&quot;, &quot;from&quot;] }
flate2 = &quot;1&quot;
nom = &quot;7&quot;
num_enum = &quot;0.5&quot;
rayon = &quot;1.5&quot;
serde = { version = &quot;1.0&quot;, features = [&quot;derive&quot;] }
serde_json = &quot;1.0&quot;
legion_prof_viewer = { path = &quot;/Users/adam/legion/prof-viewer&quot;}

[profile.dev]
opt-level = 2
debug-assertions = true
</code></pre><p>In this .toml file, you can see that we define the crate&#x2019;s metadata under the <code>[package]</code> header, such as the name, version, and edition. We also declare the dependencies of our crate, which utilize a registry called <a href="https://crates.io/?ref=siwiec.us">crates.io</a> (or can point to a local path, to link local development).</p><p>One notable feature of Rust crates is that they are compiled statically, which means that all of their dependencies are included in the final binary or library. This makes it easy to distribute and use Rust code without worrying about version conflicts or external dependencies.</p><p>One of the benefits of interpreted languages is that dependencies are loaded at runtime (i.e. Node just looks up a module every time you run your program). In a compiled language like Rust, dependency management becomes more complex; however, Cargo does a great job of optimizing builds by only rebuilding the components that have changed since the last build. This saves a great deal of time, since you don&#x2019;t have to recompile the 300+ crates in a medium-sized project every time you change a line.</p><h2 id="crate-features">Crate Features</h2><p>Another characteristic of Rust crates is that they are typically designed to be highly composable. This means that they can be easily combined with other crates to create larger, more complex programs. Something I enjoyed about crates is their use of &#x201C;features&#x201D;. A feature is an optional flag that you can set on a dependency to enable extra functionality, which is extremely useful for reducing the dependencies of your project. For example, a perfect use case was splitting server-side code from the frontend egui viewer. In the same crate, we provide both the server-side rendering features that a server would need (along with all the dependencies associated with that) as well as the egui GUI that we built. 
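</p><p>To illustrate, here is a hypothetical sketch of what such a feature split could look like in a <code>Cargo.toml</code> (the feature and dependency names are illustrative, not our actual manifest):</p>

```toml
[features]
# The GUI is always available; server support is opt-in.
default = ["client"]
client = ["dep:egui"]
server = ["dep:actix-web"]

[dependencies]
# Optional dependencies are only compiled when a feature pulls them in.
egui = { version = "0.21", optional = true }
actix-web = { version = "4", optional = true }
```

<p>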
Cargo is smart enough to detect the dependencies that only exist under a feature and to leave them out if the feature is not requested.</p><h2 id="neat-features">Neat Features</h2><ul><li><code>Cargo.lock</code>: this lock file records the exact dependency tree needed to build a project, saving Cargo vast amounts of computation and network lookups when downloading the dependencies needed to build a project.</li><li>Cargo manages multiple versions of the same crate to prevent conflicts.</li><li><code>cargo test</code> executes native Rust tests on a crate. TDD is baked into Rust!</li><li><code>cargo doc</code> generates static HTML documentation pages very similar to Python package documentation.</li><li>Cargo can create a new project template with <code>cargo new</code>.</li><li>Cargo has a built-in system for managing and publishing crates to the <a href="http://crates.io/?ref=siwiec.us">crates.io</a> registry.<br></li></ul><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://siwiec.us/blog/content/images/2023/03/image-1.png" class="kg-image" alt loading="lazy" width="2550" height="1518"><figcaption>Cargo docs made me feel comfortable diving into crates&apos; codebases.</figcaption></figure><p>Once you have finished development on your product, deployment is the easiest part. 
You are already executing debug binaries locally to test your app, so you can simply bundle everything together with production-level optimizations via <code>cargo build --release</code>, and you now have a working executable.</p><p>To deploy to a specific target (macOS in my case), I found <a href="https://github.com/burtonageo/cargo-bundle?ref=siwiec.us">cargo-bundle</a> to be extremely useful.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://siwiec.us/blog/content/images/2023/03/SCR-20230321-cyaf.png" class="kg-image" alt loading="lazy" width="1466" height="698"><figcaption>Tiny!</figcaption></figure><p>I was able to deploy prof-viewer as a 5.7 MB native <code>.app</code>. A more traditional way of deploying a desktop app would be Electron, which uses a Chromium base to simply render a webpage as an app. This can lead to a several-hundred-megabyte binary that users have to download for <strong>each</strong> app they install that includes Electron. Rust, on the other hand, is much more portable.</p>]]></content:encoded></item><item><title><![CDATA[Week 6: The Egui Rust Framework]]></title><description><![CDATA[<p>Welcome back to the sixth entry in this series. This week, I discuss <a href="https://egui.rs/?ref=siwiec.us">Egui</a>, the open-source GUI framework built in Rust that we use to build the Legion Prof-Viewer.</p><h2 id="what%E2%80%99s-e-gooey">What&#x2019;s E-gooey?</h2><p>Egui is a lightweight, <strong>immediate</strong> mode GUI framework for Rust that is designed to be fast and</p>]]></description><link>https://siwiec.us/blog/week-6-the-egui-rust-framework/</link><guid isPermaLink="false">64196075b543a30b0acaa8c9</guid><dc:creator><![CDATA[Adam Siwiec]]></dc:creator><pubDate>Tue, 21 Mar 2023 07:47:52 GMT</pubDate><content:encoded><![CDATA[<p>Welcome back to the sixth entry in this series. 
This week, I discuss <a href="https://egui.rs/?ref=siwiec.us">Egui</a>, the open-source GUI framework built in Rust that we use to build the Legion Prof-Viewer.</p><h2 id="what%E2%80%99s-e-gooey">What&#x2019;s E-gooey?</h2><p>Egui is a lightweight, <strong>immediate</strong> mode GUI framework for Rust that is designed to be fast and easy to use. With <a href="https://github.com/emilk/egui/?ref=siwiec.us">14,000+ stars on GitHub</a>, Egui offers an impressive set of graphics tools for developers wanting to create graphical user interfaces with minimal effort.</p><p>I first found out about Egui from Elliott, as it was the base framework for the existing prof-viewer codebase. After looking through the examples, I could see how the framework was especially useful for our &#x201C;custom&#x201D; needs: the rendering model was unique, the API was extensive, and the widget ecosystem had plenty of baked-in features that we could use.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://siwiec.us/blog/content/images/2023/03/SCR-20230321-bifw.png" class="kg-image" alt="Untitled" loading="lazy" width="2644" height="1674"><figcaption>Egui has an impressive set of demos that enable you to instantly plug and play (with) a template GUI on your desktop/browser.</figcaption></figure><p>One of the most appealing features of Egui is its performance. Unlike traditional GUI toolkits, Egui does not rely on an event-driven model. Instead, it uses an immediate mode approach, which means that the user interface is rebuilt every frame. This approach has several advantages, including lower memory usage and faster rendering times.</p><h2 id="immediate-mode">Immediate Mode?</h2><p>While most people are familiar with the DOM and JavaScript&#x2019;s event-driven design, immediate mode might be a new concept, as it was for me. 
In the eyes of JavaScript&#x2019;s creators, the web page was meant to be both static and interactive.</p><p>99% of the time, the content and layout of a webpage stay the same, whereas the other 1% of the time the user is clicking, dragging, swiping, or giving some other input to the webpage. Only in that 1% of cases does the website need to react and change. This reactive philosophy has driven web development for years (ever heard of &#x201C;React&#x201D;?) and largely suits the needs of web developers.</p><p>On the other hand, more performance-driven applications, like ours, need the ability to render many frames quickly. This is even more the case for game developers or visualization tools that need to make the screen appear as smooth as possible to their users (rather than the jittery loading of a webpage).</p><p>We found that the typical JavaScript web libraries, while very well-supported, did not provide the flexibility we needed to handle stateful, GUI-like menus/buttons and other desktop-app-like features at a 60+ FPS view. Immediate mode plus a high-performance runtime like Rust helps us deliver anywhere from 200-400 FPS while doing most activities inside prof-viewer. Sweeet!</p><p>This performance carries over to the web, as Egui supports WASM deployment (to learn more about WebAssembly, check out Week 5&#x2019;s post).</p><p>Another advantage of Egui is its simplicity. The API is designed to be easy to use and understand, even for developers who are new to Rust or GUI programming. The framework provides a set of basic widgets, such as buttons, text boxes, and sliders, as well as more advanced features like layout management and event handling. 
Having to write immediate-mode code helped me think outside the box when rendering GUI layouts.</p><figure class="kg-card kg-image-card"><img src="https://siwiec.us/blog/content/images/2023/03/SCR-20230321-bnwe.png" class="kg-image" alt="Untitled" loading="lazy" width="584" height="212"></figure><p>For example, when building an editable text field for the interval to display, I realized that I couldn&#x2019;t simply apply the change to the interval each time the textbox changed.</p><p>If I want to set the start interval to &#x201C;100 ns&#x201D;, an immediate-mode approach would try to render every change: &#x201C;1&#x201D;, &#x201C;10&#x201D;, &#x201C;100&#x201D;, &#x201C;100 &#x201D;, &#x201C;100 n&#x201D;, &#x201C;100 ns&#x201D;. Every state but the last is invalid, so how do you program around this? </p><p>You can use buffers to store a temporary state as a user is writing something, then only apply the new state when the user is done modifying (i.e. hits enter, the textbox loses focus, etc.). The idea of buffering input while still re-rendering the page hundreds of times a second with the old state was an interesting paradigm to pick up and made me think deeply about how this applies to user input validation in any input-driven context (e.g. a web form, a search bar, etc.).</p><p>During our exploration of Egui, we found that the framework was well-documented and had an active community. The official website provides detailed tutorials and examples, and there are several third-party libraries and tools available for Egui development. Overall, we found Egui to be a promising Rust GUI framework that offers a balance of simplicity and performance. While it may not feel like it has every feature I would want, the codebase was very easy to jump into (especially thanks to the VS Code Rust plugin, rust-analyzer), and had enough flexibility in the design of the API to craft our own widgets for our specific needs. 
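</p><p>The buffered-input pattern described above can be sketched in plain Rust (hypothetical names, not the actual prof-viewer widget code): edits land in a temporary buffer every frame, and the committed value only changes once editing finishes.</p>

```rust
// A text field that re-renders every frame: keystrokes land in a temporary
// buffer, and the committed value only changes when editing finishes.
struct IntervalField {
    buffer: String,         // what the user is currently typing
    committed: Option<u64>, // last valid, applied start interval (in ns)
}

impl IntervalField {
    fn new() -> Self {
        IntervalField { buffer: String::new(), committed: None }
    }

    // Called on any frame in which the textbox content changed.
    fn on_edit(&mut self, text: &str) {
        self.buffer = text.to_string();
    }

    // Called when the user finishes editing (hits enter, loses focus, ...).
    // Only now do we try to parse and apply the buffered state.
    fn on_commit(&mut self) {
        if let Some(number) = self.buffer.trim().strip_suffix("ns") {
            if let Ok(value) = number.trim().parse::<u64>() {
                self.committed = Some(value);
            }
        }
    }
}

fn main() {
    let mut field = IntervalField::new();
    // The intermediate, invalid states are buffered but never applied:
    for state in ["1", "10", "100", "100 ", "100 n", "100 ns"] {
        field.on_edit(state);
        assert_eq!(field.committed, None);
    }
    field.on_commit(); // the user hits enter
    assert_eq!(field.committed, Some(100));
}
```

<p>Invalid intermediate states simply never escape the buffer, even though the UI keeps redrawing with the old committed state many times a second.</p><p>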
Its lightweight approach and ease of use make it a good choice for many projects.</p>]]></content:encoded></item><item><title><![CDATA[Week 5: What Am I Building?]]></title><description><![CDATA[<p>Welcome to Week 5! I&#x2019;m putting this short post out there because it would be great to see how my education in Rust is contributing to a real-world problem and product.</p><p>Under Elliott Slaughter, I am working on tooling for the <a href="https://legion.stanford.edu/?ref=siwiec.us">Legion</a> project which is an HPC programming</p>]]></description><link>https://siwiec.us/blog/week-5-what-am-i-building/</link><guid isPermaLink="false">640f27abb543a30b0acaa8b5</guid><dc:creator><![CDATA[Adam Siwiec]]></dc:creator><pubDate>Mon, 13 Mar 2023 13:42:34 GMT</pubDate><content:encoded><![CDATA[<p>Welcome to Week 5! I&#x2019;m putting this short post out there because it would be great to see how my education in Rust is contributing to a real-world problem and product.</p><p>Under Elliott Slaughter, I am working on tooling for the <a href="https://legion.stanford.edu/?ref=siwiec.us">Legion</a> project, which is an HPC programming system written to be used on large-scale supercomputers. With a supercomputer comes the overhead of debugging a supercomputer, which is where a large effort in collecting and visualizing terabytes of logs comes into play.</p><p>&#x201C;Terabytes?&#x201D; you ask. 100% I say, as you must consider the massive scale a cluster can run at. Even if a single GPU produced 1MB of metadata/logs over the course of an hour, if you scale to 10,000 GPUs, you have now produced 10GB. You can also factor in that the GPU is not the only device running on a cluster; you will also have the CPU running tasks, memory channels being constantly rewritten to and flushed, framebuffers driving computational display output, etc. To throw an even larger wrench into the works, imagine a job running for a whole week instead of a whole hour. 
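</p><p>The back-of-the-envelope math behind that claim can be checked directly (the 1MB/GPU/hour rate is the same illustrative figure as above):</p>

```rust
fn main() {
    // Illustrative rate from above: 1 MB of logs per GPU per hour.
    let mb_per_gpu_hour: u64 = 1;
    let gpus: u64 = 10_000;
    let hours_in_week: u64 = 24 * 7; // 168

    // One hour across the whole cluster:
    let per_hour_mb = mb_per_gpu_hour * gpus;
    assert_eq!(per_hour_mb, 10_000); // 10,000 MB = 10 GB

    // A week-long job, counting GPUs alone:
    let per_week_mb = per_hour_mb * hours_in_week;
    assert_eq!(per_week_mb, 1_680_000); // 1.68 TB, before CPUs, memory, etc.
}
```

<p>Counting only the GPUs, a week-long job already lands in terabyte territory.</p><p>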
This creates a multi-dimensional visualization task for the engineers who need to inspect, understand, and optimize the performance of such behemoth clusters. That&#x2019;s where prof-viewer comes in. Originally a JavaScript-based visualization tool, prof-viewer emulated the likes of a multi-device <a href="https://www.brendangregg.com/flamegraphs.html?ref=siwiec.us">flame graph</a>.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://siwiec.us/blog/content/images/2023/03/SCR-20230313-gmmo.png" class="kg-image" alt="Untitled" loading="lazy" width="2002" height="1256"><figcaption>Source: <a href="https://www.brendangregg.com/FlameGraphs/cpu-mysql-updated.svg?ref=siwiec.us">https://www.brendangregg.com/FlameGraphs/cpu-mysql-updated.svg</a></figcaption></figure><p>Flame graphs are a way of visualizing dependent tasks on a timescale for the purpose of identifying bottlenecks, loops, and other code anomalies that may be unintended or ill-performing. The goal of prof-viewer is to give an engineer a large magnifying glass to be able to zoom in on the state of a cluster running a job and inspect exactly how tasks behave across different machines.</p><p>Here&#x2019;s a sample view of what prof-viewer looks like.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://siwiec.us/blog/content/images/2023/03/SCR-20230313-gmuh.png" class="kg-image" alt="Untitled" loading="lazy" width="2938" height="1834"><figcaption>A web demo can also be found <a href="https://elliottslaughter.github.io/test-egui/?ref=siwiec.us">here</a></figcaption></figure><p>An engineer can get rich metadata on each running task with a tooltip hover, but can also stay zoomed out to understand a job&#x2019;s runtime.</p><p>The system is built entirely in Rust, from the backend that takes raw log output and produces a proprietary format for displaying logs, to the graphical user interface (which I will discuss next week, in Week 6). 
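</p><p>To give a flavor of the kind of data involved (hypothetical types here, not the actual proprietary format), a flame-graph-style viewer essentially draws and filters nested interval records like these:</p>

```rust
// A simplified task interval: the basic unit a flame-graph-style viewer draws.
struct Task {
    name: String,
    start_ns: u64,
    stop_ns: u64,
}

impl Task {
    fn duration_ns(&self) -> u64 {
        self.stop_ns - self.start_ns
    }
}

// Collect the names of tasks whose name contains a keyword.
fn matching_names(tasks: &[Task], keyword: &str) -> Vec<String> {
    tasks
        .iter()
        .filter(|t| t.name.contains(keyword))
        .map(|t| t.name.clone())
        .collect()
}

fn main() {
    let tasks = vec![
        Task { name: "copy_to_gpu".into(), start_ns: 0, stop_ns: 500 },
        Task { name: "kernel_launch".into(), start_ns: 500, stop_ns: 2_000 },
        Task { name: "copy_to_host".into(), start_ns: 2_000, stop_ns: 2_400 },
    ];
    assert_eq!(tasks[1].duration_ns(), 1_500);
    assert_eq!(matching_names(&tasks, "copy").len(), 2);
}
```

<p>The real data structures are far richer, but this timeline-of-intervals shape is the essence of what gets drawn.</p><p>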
A primary feature that I worked on was a search feature that lets you quickly find tasks pertaining to a certain keyword and highlights them in the viewer. Since there can be a lot of data on the screen at once, it is important to have an accordion-style expansion of panels to hide the massive amount of 2D space tasks can take up. Regardless, a search should be able to query outside the bounds of your eyesight to give you clues towards a related task you are thinking about. The sidebar gives you a quick way to jump through the large hierarchy of nodes and devices that a task could be under.</p><p>It was really exciting to dive into a relatively verbose codebase that allowed for hands-on access to the UI and data structures that represent both the logs and the interface. Even though the features may be simple and intuitive to an end user, there was plenty of thought that went into the design of the data structures and the querying/layout of the tasks to increase performance and reduce load on the backend.</p>]]></content:encoded></item><item><title><![CDATA[Week 4: WebAssembly]]></title><description><![CDATA[<p>Welcome to Week 4 of the 10-week series on learning Rust! This week, I am going to talk about WebAssembly in the context of Rust and its use cases on the web.</p><p>If you have never heard of WebAssembly, maybe the best way to explain it is from the</p>]]></description><link>https://siwiec.us/blog/week-4-webassembly/</link><guid isPermaLink="false">640f1d79b543a30b0acaa8a7</guid><dc:creator><![CDATA[Adam Siwiec]]></dc:creator><pubDate>Mon, 13 Mar 2023 12:57:44 GMT</pubDate><content:encoded><![CDATA[<p>Welcome to Week 4 of the 10-week series on learning Rust! 
This week, I am going to talk about WebAssembly in the context of Rust and its use cases on the web.</p><p>If you have never heard of WebAssembly, maybe the best way to explain it is from the group that created it:</p><blockquote>WebAssembly (abbreviated <em>Wasm</em>) is a binary instruction format for a stack-based virtual machine. Wasm is designed as a portable compilation target for programming languages, enabling deployment on the web for client and server applications. - <a href="https://webassembly.org/?ref=siwiec.us">source</a></blockquote><p>Wow, that&#x2019;s a mouthful, but also a really succinct way of describing the vast use cases and applications of wasm. Let&#x2019;s break this quote down.</p><ul><li><em><strong>binary instruction format:</strong></em> As you may have guessed from the &#x201C;Assembly&#x201D; in its name, wasm is a compiled instruction format. Sweet, that means it can be deployed just about anywhere: supercomputer clusters, servers, browsers, Raspberry Pis, tiny microchips, etc. It also means that it can work with any instruction set: ARM, RISC-V, x86. This means you can access rich features of instruction sets, such as SIMD.</li><li><em><strong>for a stack-based virtual machine:</strong></em> Whenever I think of a VM, I think Docker, hypervisors, and Kubernetes. However, you and I are probably forgetting the most popular VM in the world: your browser. As JavaScript is not a compiled language, it must be interpreted at runtime. Modern JavaScript engines use a JIT (just-in-time) compiler to run JavaScript whenever you load a webpage. These instructions are then executed inside of a virtual machine for every tab you have open. 
While JITs are extremely fast these days, nothing can beat feeding instructions directly to a CPU, which is exactly what WebAssembly gives a browser to execute.</li><li><em><strong>enabling deployment on the web for client and server applications</strong></em>: There are many interesting applications for WebAssembly outside of a zippy-fast environment for the web. For example, a popular wasm runtime for cloud native applications called <a href="https://wasmedge.org/?ref=siwiec.us">WasmEdge</a> claims to be &#x201C;100x faster at start-up and 20% faster at runtime&#x201D; than Linux containers. Consider the application of this fast startup for serverless functions, i.e. a literal &#x201C;function&#x201D; of code that runs as a webhook or API call, receiving a request and returning a response all within one body of code. You could redesign a serverless datacenter to simply be a reverse proxy and a registry of executable wasm binaries. When a request routes to a certain function, the proxy simply executes the binary and returns the result.</li></ul><h2 id="so-what-does-wasm-look-like">So what does WASM look like?</h2><p>The cool part is that WASM can be written in most compiled (and JIT!) languages: C/C++, Rust, Python, Go, Java, and more! Let&#x2019;s look at an example:</p><p>At <a href="https://wasdk.github.io/WasmFiddle/?ref=siwiec.us">WasmFiddle</a>, we can try out an online compiler for wasm. In this case, it is in C.</p><pre><code class="language-c">int main() { 
  return 42;
}
</code></pre><p>The compiled output (here, the x86 machine code a browser&#x2019;s JIT produces from the wasm) should look familiar, if you have seen assembly before.</p><pre><code class="language-wasm">wasm-function[0]:
  sub rsp, 8                            ; 0x000000 48 83 ec 08
  mov eax, 0x2a                         ; 0x000004 b8 2a 00 00 00
  nop                                   ; 0x000009 66 90
  add rsp, 8                            ; 0x00000b 48 83 c4 08
  ret                                   ; 0x00000f c3
</code></pre><p>The main reasons behind using WebAssembly are performance and portability. However, to be portable (and also practical), WebAssembly must support native operations on whatever platform it is being deployed on. For example, in my use case, I needed to create a REST API interface inside of a client that would be compiled to wasm. This is where the idea of <a href="https://wasi.dev/?ref=siwiec.us">WASI</a> comes in. WASI stands for WebAssembly System Interface and allows for native bridging between non-wasm functionality (i.e. threads, networking, filesystems, etc.) and wasm binaries. I was impressed with the fact that most HTTP libraries in the Rust ecosystem support WASM through a WASI interface (in my case, the interface was through the JavaScript fetch API). However, this system is not without its limitations, which I will discuss in more detail in Week 9.</p>]]></content:encoded></item><item><title><![CDATA[Week 3: Learning Rust]]></title><description><![CDATA[<p>Hello! This week, Week 3, I will dive into my plan for learning Rust, how it panned out, and give some pointers and resources for those also learning Rust!</p><h2 id="baby-steps">Baby Steps</h2><p>Installing any new tool or language is my first step in learning it locally. That way, any examples I</p>]]></description><link>https://siwiec.us/blog/week-3-learning-rust/</link><guid isPermaLink="false">640efdb2b543a30b0acaa895</guid><dc:creator><![CDATA[Adam Siwiec]]></dc:creator><pubDate>Mon, 13 Mar 2023 10:52:27 GMT</pubDate><content:encoded><![CDATA[<p>Hello! This week, Week 3, I will dive into my plan for learning Rust, how it panned out, and give some pointers and resources for those also learning Rust!</p><h2 id="baby-steps">Baby Steps</h2><p>Installing any new tool or language is my first step in learning it locally. 
That way, any examples I find can be modified in an environment that lets my curious mind comment out lines, refactor code to see if it still works, and interact with a compiler/debugger sooner rather than later.</p><p>That being said, my first step was to install the Rust toolchain. As a Mac user, I am blessed to have Homebrew, which provides easy access to a Rust installer:</p><pre><code class="language-bash">brew install rustup-init
</code></pre><p>This command installs <code>rustup</code>, the official way of installing the toolchains published by the Rust team. While I will dig more into Rust tooling in Week 7, this installer has given you access to three important tools:</p><ul><li><code>rustup</code>: Lets you update, install, delete, and switch your default Rust toolchains.</li><li><code>cargo</code>: Rust&apos;s package manager and build tool. Think npm, but for Rust.</li><li><code>rustc</code>: This is the Rust compiler. You will learn to love/hate it. It&apos;s the equivalent of gcc for C.</li></ul><h2 id="education">Education</h2><p>Since Rust is relatively new, the documentation for it is both a blessing and a curse. The blessing is that the Rust team has put together many &quot;books&quot; walking you through literally every part of the Rust language. The books are easy to read, straightforward, and instructive. However, it is also a curse because each chapter of the book is so detailed and encompassing that even a tiny portion of a language, like variables, can take an hour to read and fully understand. For those with less time, there is also Rust By Example, an official book with a learn-by-example approach:</p><ul><li>Many coded examples.</li><li>Great comments and annotations.</li><li>A handy crab that explains why things don&apos;t compile.</li></ul><p>This book is where I found a lot of use in copying and modifying examples in my local environment, which helped me understand why something would or wouldn&apos;t compile.</p><p>Let&apos;s walk through a basic example written out in Rust By Example to see what parts of Rust are unique at first glance! While hello world is trivial in Rust, let&apos;s bump up our example to the classic FizzBuzz:</p><pre><code class="language-rust">// Unlike C/C++, there&apos;s no restriction on the order of function definitions
fn main() {
    // We can use this function here, and define it somewhere later
    fizzbuzz_to(100);
}

// Function that returns a boolean value
fn is_divisible_by(lhs: u32, rhs: u32) -&gt; bool {
    // Corner case, early return
    if rhs == 0 {
        return false;
    }

    // This is an expression, the `return` keyword is not necessary here
    lhs % rhs == 0
}

// Functions that &quot;don&apos;t&quot; return a value, actually return the unit type `()`
fn fizzbuzz(n: u32) -&gt; () {
    if is_divisible_by(n, 15) {
        println!(&quot;fizzbuzz&quot;);
    } else if is_divisible_by(n, 3) {
        println!(&quot;fizz&quot;);
    } else if is_divisible_by(n, 5) {
        println!(&quot;buzz&quot;);
    } else {
        println!(&quot;{}&quot;, n);
    }
}

// When a function returns `()`, the return type can be omitted from the
// signature
fn fizzbuzz_to(n: u32) {
    for n in 1..=n {
        fizzbuzz(n);
    }
}
</code></pre><p>Source: <a href="https://doc.rust-lang.org/rust-by-example/fn.html?ref=siwiec.us">https://doc.rust-lang.org/rust-by-example/fn.html</a></p><p>We can see that there are many similarities to other languages you may have learned! Phew, what a relief that you do not have to relearn all your keywords, conditional syntax, and import mechanics!</p><p>Let&#x2019;s dig into a few lines of code in particular:</p><pre><code class="language-rust">fn is_divisible_by(lhs: u32, rhs: u32) -&gt; bool { 
</code></pre><ul><li>We can declare functions (in any order!) with the keyword <code>fn</code> (<code>def</code> in Python)</li><li><code>-&gt;</code> leads to the return type, in this case a boolean, <code>bool</code> (same as in TypeScript)</li><li>Variables and arguments have types. In this case, the variable <code>lhs</code> is a 32-bit unsigned integer, <code>u32</code>.</li></ul><pre><code class="language-rust">if is_divisible_by(n, 15) {
</code></pre><ul><li>Conditions, even ones using conditional operators (e.g. <code>&amp;&amp;</code> or <code>||</code>), do not need to be wrapped in parentheses; however, the body of a conditional or function is delimited by curly braces (just like in C).</li></ul><pre><code class="language-rust">    for n in 1..=n {
</code></pre><ul><li>For loops use the <code>in</code> syntax (just like Python), but have an interesting dot notation to identify ranges: <code>..</code> for a range that excludes its end, and <code>..=</code> for a range that includes it (similar to Julia).</li></ul><p>It is so interesting to see a new language like Rust build off of the best (IMO) parts of other languages that I already use and love. As a busy developer learning a new language for work or a student having to learn a new language for a class quickly, it&#x2019;s reassuring to jump into the language and feel like you already have a large tool belt full of battle-hardened idioms that other languages pioneered decades ago. This is how I felt coming to Rust, and I quickly got up to speed on the functionality built into the base language.</p><p>There are more dimensions of Rust than I have room in this article to write, but here are some other unique topics in Rust that a beginner might want to explore:</p><ul><li>Traits</li><li>Structs / Enums</li><li>References, borrows, and lifetimes</li><li>Advanced control flow: match, collections, iterators</li><li>Basic data structures: str, String, Vec, etc.</li></ul><h2 id="resources">Resources</h2><p>I found these resources super helpful in learning Rust. 
Sometimes I needed to have a topic explained in a different format or context to truly see the intricacy and necessity of some of Rust&#x2019;s features.</p><ul><li><a href="https://doc.rust-lang.org/book/?ref=siwiec.us">The official Rust book</a></li><li><a href="https://doc.rust-lang.org/stable/rust-by-example/?ref=siwiec.us">The official &#x201C;Rust By Example&#x201D;</a></li><li><a href="https://lborb.github.io/book/official.html?ref=siwiec.us">The (un)official Rust book directories</a></li><li><a href="https://www.youtube.com/watch?v=Az3jBd4xdF4&amp;3Blist=PLLqEtX6ql2EyPAZ1M2_C0GgVd4A-_L4_5&amp;ref=siwiec.us">Doug Milford&#x2019;s Rust Intro series on YouTube</a> (extremely useful for more advanced topics)</li></ul>]]></content:encoded></item><item><title><![CDATA[Week 2: Who Am I?]]></title><description><![CDATA[<p>Welcome back to the second installment in my series documenting my journey with Rust development. In this post, I will add context to my journey as a software engineer regarding my education, experience, and interests. This way, I can shed light on how I see development, system design, and</p>]]></description><link>https://siwiec.us/blog/week-2-who-am-i/</link><guid isPermaLink="false">6409d07a5022dd59660ee538</guid><dc:creator><![CDATA[Adam Siwiec]]></dc:creator><pubDate>Thu, 09 Mar 2023 12:26:49 GMT</pubDate><content:encoded><![CDATA[<p>Welcome back to the second installment in my series documenting my journey with Rust development. In this post, I will add context to my journey as a software engineer regarding my education, experience, and interests. This way, I can shed light on how I see development, system design, and opportunities where I would like to see Rust work well.</p><p>So, hi! I&apos;m Adam Siwiec, 21 years old and a senior studying computer science at Stanford University. I am originally from Rogers, Arkansas, where I grew up running cross country, lawn-mowing, and writing poetry. 
I grew fond of computers early, whether through playing Battlefield 2 with my dad, witnessing my uncle&apos;s Flash website, or inspecting the code of Yahoo! Finance&apos;s graph to see if I could make my favorite stock (AAPL) go any higher. I quickly grew into an entrepreneurial mind, building websites for my P.E. coach and myself. I started a successful lawn-mowing business that offered me disposable income to build a PC (&quot;mom, I will do homework on it too!&quot;). I grew into scripting in my favorite video game (Team Fortress 2) and became interested in game-server hosting and Linux.</p><p>Soon enough, I was in high school competing in programming competitions and hackathons, studying Java and C++ in class, and working with NodeJS to host projects at home. I was (and still am) super interested in networking and virtualization, so I invested in an old Dell R610 1U server and an enterprise network card. I built up a home lab from scratch. Starting with a hypervisor, I built up a whole stack of network services, ranging from routing my entire home&apos;s internet to running the Christmas tree lights from school.</p><p>While I love many things outside of tech, for the sake of brevity (and topics for future posts!), I will teleport us to college. My passion for technology fueled my application to Stanford, and it was the paramount moment of my life so far to be able to attend. The access to a top-tier education, uplifting and ferociously curious colleagues that I can call friends, and adjacency to opportunities in industry and research have pushed me to pursue excellence in my craft and help hone what the bleeding edge of technology is. At Stanford, I study systems, ranging from distributed systems, parallel computing, and networking to A.I., blockchain, and cybersecurity. Most of my in-class programming work is in C/C++ and Python. 
C and C++ were thoroughly drilled into our heads early on to demonstrate fundamentals in a confined, &quot;safe&quot; environment, whereas Python has been the primary tool for tackling more abstract, theoretical concepts in higher-level elective classes, like mining massive datasets in CS 246 and implementing ray-tracing in CS 148. Working in operating systems and higher-level systems classes has given me experience in exploring Assembly, implementing filesystems and threads, and creating stack-overflow exploits. While truly useful for learning, these projects tended to feel pre-packaged and dumbed down to limit time spent on setup and debugging.<br>On the other hand, I interned with a tech company every summer between high school and now, giving me a much more creative and open-ended way of pursuing software development. My first job was at J.B. Hunt, a Fortune 500 logistics company, where I helped build out a mobile app for their truck drivers to help automate logistics paperwork using geofencing, chatbots, and a voice assistant inside of React Native. The following year, I worked on modernizing an order and load processing system that scheduled tens of thousands of truckloads moving across the country daily, streaming data from multiple microservices to feed an Elasticsearch cache for faster order lookup, all in Java and Spring. During my sophomore and junior summers, I worked at NVIDIA, where I helped build projects inside of NGC (Nvidia GPU Cloud), a competitor to AWS/Azure in providing compute to companies and scientists needing enterprise A.I. infrastructure. Here I got the full-blown Java experience, helping write and deploy microservices in Spring, but I also gained exceptional experience in testing, deploying to production with service-level dependencies, and business development. 
All these enterprise experiences helped me employ the sys-admin&apos;y skills I&apos;ve kept within the basement walls.</p><p>The combination of the two sides of my experiences, in school and at work, has led to that itch I mentioned in Week 1: I wanted to find a middle ground where I could build interesting and practical tools from scratch but without the low-level complexity, age-old idioms, and the slow-paced development lifecycles of the traditional languages I am learning at school. Rust, are you love at first sight?</p>]]></content:encoded></item><item><title><![CDATA[Week 1: A CS Major Discovers Rust]]></title><description><![CDATA[<p></p><p><strong>Hello!</strong> Welcome to the first installment in my 10-week developer deep dive into learning and using Rust!</p><p>This series is the first (hopefully one of many) series I will be writing for my new blog, &#x201C;<a href="https://siwiec.us/blog?ref=siwiec.us">bikes and bytes</a>&#x201D;, which will explore the intersection of technology, current events, and</p>]]></description><link>https://siwiec.us/blog/week-1-a-cs-major-discovers-rust-2/</link><guid isPermaLink="false">64084f905022dd59660ee4bf</guid><dc:creator><![CDATA[Adam Siwiec]]></dc:creator><pubDate>Wed, 08 Mar 2023 10:42:13 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1611704302692-166b6a4d7cd3?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDN8fHN0YW5mb3JkfGVufDB8fHx8MTY3ODI3MjI1Mg&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1611704302692-166b6a4d7cd3?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDN8fHN0YW5mb3JkfGVufDB8fHx8MTY3ODI3MjI1Mg&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="Week 1: A CS Major Discovers Rust"><p></p><p><strong>Hello!</strong> Welcome to the first installment in my 10-week developer deep dive into learning and using Rust!</p><p>This series is the 
first (hopefully one of many) series I will be writing for my new blog, &#x201C;<a href="https://siwiec.us/blog?ref=siwiec.us">bikes and bytes</a>&#x201D;, which will explore the intersection of technology, current events, and my personal projects and hobbies.</p><p>As an <a href="https://github.com/adamsiwiec?ref=siwiec.us">avid programmer</a>, I am always looking for opportunities to cut into a different piece of the technological pie that is so prevalent in our world. While I will discuss my background in development in greater detail in Week 2, in my senior year of studying computer science at Stanford, I got an itch to start working on practical research and scientific applications of computer systems that triangulate the theory (CS classes) and business development (internships) that I have experienced so far as a burgeoning developer.</p><p>While reading <a href="https://news.ycombinator.com/?ref=siwiec.us">Hacker News</a> (a pastime since I was a freshman in high school!), I came across the monthly &#x201C;Who&#x2019;s Hiring&#x201D; posts that always seem to pique my interest. While not exactly in the job market yet, I am always curious to see what companies are hiring and what the in-demand candidates look like. One of the first posts I came across happened to be from Elliott Slaughter, a Staff Scientist working at SLAC, the National Accelerator Laboratory, located just a mile away from campus at Stanford and adjacent to my favorite bike climb out of Palo Alto, Sand Hill Road. 
(If that sounds familiar, your hunch is right: it&#x2019;s the venture capital hotspot of the world.)</p><figure class="kg-card kg-image-card kg-card-hascaption"><a href="https://news.ycombinator.com/item?id=33424570&amp;ref=siwiec.us"><img src="https://siwiec.us/blog/content/images/2023/03/image.png" class="kg-image" alt="Week 1: A CS Major Discovers Rust" loading="lazy" width="2162" height="970"></a><figcaption><em>&#x201C;high-performance and distributed computing, programming languages, compilers, networks, operating systems&#x201D;</em> had me like a child in a candy store.</figcaption></figure><p>In the past, I have had great experiences connecting with users on the platform, where discussion is rich and diverse, so I thought reaching out would lead to a fun discussion and potentially an interesting research opportunity to work on. I went to Prof. Alex Aiken, head of the research group promoted on HN, and had an eye-opening discussion on the future of distributed systems, the intersection of science and industry (especially in regard to new graduates), and Legion, a distributed computing paradigm that he was working on with Elliott and other researchers. While the scientific application of Legion was out of the scope of my experience or interest, I realized that there was always a need for development on these teams. Sure enough, Prof. Aiken put me in touch with Elliott to see what kind of help I could offer. The next quarter at school, I started as a research assistant under Elliott, working on profiling tooling for Legion. The one caveat? He told me:<br>Learn Rust.</p><!--kg-card-begin: html--><div style="width:100%;height:0;padding-bottom:83%;position:relative;"><iframe src="https://giphy.com/embed/9Fticsj7froxbpd5Sg" width="100%" height="100%" style="position:absolute" frameborder="0" class="giphy-embed" allowfullscreen></iframe></div><!--kg-card-end: html-->]]></content:encoded></item></channel></rss>