List of videos

WebAssembly at Google by Thomas Steiner & Thomas Nattestad @ Wasm I/O 2024
Wasm I/O 2024 / 14-15 March 2024 Slides: https://goo.gle/wasmio24 From the V8 team to the Emscripten toolchain team to the many product teams that benefit from the advantages of Wasm in their libraries and flagship apps, WebAssembly plays a crucial role in Google’s strategy. This talk gives a comprehensive overview of the many ways Wasm is used at Google. Google is a large company whose products typically need to support multiple platforms, but the cost of supporting all features on all platforms has become untenable for team velocity. Most product teams see Wasm as the missing piece that lets them align investments and hit all platforms with all features. On the product side there is, for example, Google Photos, whose team says that with Wasm the old dream of “write once, run anywhere” has come true. Google Maps uses Wasm in several components. Google Earth was one of the first products to ship Wasm in production on the Web. Google Meet uses Wasm to optimize several hot paths. Google Sheets compiles its Java calculation engine to Wasm and was a crucial driver of WasmGC. In the libraries space, TensorFlow.js has a Wasm-based CPU backend that is faster for some workloads than the GPU backend. Ink is a low-latency freehand drawing library used by many products, including Canvas, Keep, YouTube, and more. Kotlin compiled to WasmGC, combined with Compose Multiplatform, promises to bring Android apps to the Web. The Flutter framework compiles Dart to WasmGC and, via CanvasKit, enables consistent, pixel-perfect UI across all platforms. On the server side, various teams in Google Cloud are exploring Wasm options to power, for example, cloud functions. For toolchains, Emscripten and ultimately the V8 team keep pushing the boundary of what’s possible by implementing new standards like WasmGC, JS Promise Integration, multiple memories, and much more. After attending the talk, the audience will have a better understanding of the manifold ways in which WebAssembly is being used at Google.
Watch
Programmable Embedded Vision Sensors by Dan Mihai Dumitriu @ Wasm I/O 2024
Wasm I/O 2024 / 14-15 March 2024 Slides: https://2024.wasmio.tech/slides/programmable-embedded-vision-sensors-wasmio24.pdf WebAssembly is perfect for embedded systems that need to be programmed over the air; AI-powered sensors are one such example. We use Wasm for isolating third-party code as well as enabling polyglot development on embedded vision sensors. – Embedded software development has traditionally relied on a monolithic approach, with firmware written by a single vendor and updated infrequently. Many IoT devices lack a full Linux OS and hardware-based memory isolation, so safety is an issue. As IoT devices become increasingly connected to the cloud, there is a need for customization and frequent updates. We believe that WebAssembly (Wasm) has the potential to change this paradigm by enabling the creation of truly customizable devices. Our runtime agent and cloud management service are like Kubernetes for embedded devices. Using ahead-of-time (AOT) compilation to native code, we can achieve very good performance. We have also developed a model for vision AI, called Vision Sensing Application, on top of the Wasm layer. A cloud-based service automatically specializes the application for the deployment targets, removing the need for developers to be concerned with device architecture or capabilities. To further streamline the development process, we have added a REST API and a visual programming interface inspired by Node-RED. A brief demo will be shown.
Watch
Extism 1.0, your framework to build with WebAssembly by Steve Manuel @ Wasm I/O 2024
Wasm I/O 2024 / 14-15 March 2024 Extism launched in December 2022, and this talk shares how things have taken shape; 1.0 launched in January 2024. The talk will culminate in a demo showcasing just how portable WebAssembly can be (the same code deployed to Cloudflare, Vercel, the browser, Shuttle.rs, Encore, Val.town, and many more) – Extism was created to smooth out the rough edges of working with Wasm. It’s a flexible framework that eliminates the lower-level concerns of loading and executing Wasm code, as well as interoperating with numerous languages. In the talk, I’ll also cover how various other ecosystem projects compare, and draw some analogies to other ecosystem frameworks like Python/Flask, Docker/Kubernetes, etc. The audience should leave with a solid understanding of when to reach for Extism vs. the Component Model, based on what they already know about other abstraction layers like Flask and GraphQL. Extism makes great use of one of WebAssembly’s most useful features: portability. As such, I’ll take the audience on a journey to see just how far we can take Wasm into all sorts of places… WebAssembly inside Excel? You bet!
Watch
Nobody Knows the Trouble I've Seen: Debugging Wasm for web and server by N. Venditto & R. Squillace
Wasm I/O 2024 / 14-15 March 2024 Speakers: Natalia Venditto & Ralph Squillace Debugging WebAssembly systematically, across widely varying runtimes and with very different code paths inside modules, is a total pain. Most usage in the browser focuses on features enabled in F12, which assumes a JavaScript engine is hosting the assembly and, often but not always, the Chrome DevTools Protocol. And that’s just the front end; outside the browser, there’s no standard debugger API (yet), protocols between languages vary widely, and not all languages treat debug symbols the same way. It’s HARD, so hard that most services based on WebAssembly implement their own debugging (because otherwise you’d get mad at the service for low productivity). But there’s hope! In this session, we’ll tour the requirements for debugging into Wasm, show the paths available depending on your usage of the technology, and demonstrate some tool sets that, taken as a whole, give you native-feeling, interactive step-through debugging – the thing you want. We’ll discuss stepping through Wasm mapped to source code – both for the browser and interpreted languages like Python, but also for standalone runtimes and Rust, Go, C, and Zig. We’ll wrap up with the debugging UX issues yet to be solved and, more importantly, where you can dive in and contribute. This is the year of WebAssembly on the Desktop. (Can I say that?)
Watch
Building Durable Microservices with WebAssembly by John A. De Goes @ Wasm I/O 2024
Wasm I/O 2024 / 14-15 March, Barcelona In this talk we’ll look at how we have used WASM to create durable microservices that pick up right where they left off after hardware failures or redeployments. We’ll also talk about how we have been building an ecosystem around this to let these services talk to each other, all powered by WASM. – WebAssembly (“WASM”) is still in its early stages, and one of its challenges is how to go from a compelling technology to a compelling solution for businesses outside the existing WASM ecosystem. At my company we have been working to provide at least one solution to this in the form of Golem. Golem allows you to deploy WASM-based microservices that not only scale up automatically but are also durable, meaning that if the server they are running on fails or is redeployed, the application will automatically be restarted on another server and resume execution in a way that is seamless to the outside world. Taking advantage of some of the features of WASM, including its sandboxed execution environment, we are able to provide this guarantee regardless of what language the user writes their code in. In this talk we’ll dive into some of the details of how we do this, including using our own implementations of WASI to track all interactions a program has with the outside world in an “operations log”, and smart snapshotting to address some of the common problems that can arise in durable computing. We’ll also look at how much more is needed for users to be able to deploy their applications with WASM, and how we’re working to contribute to an open-source ecosystem that makes it easy to communicate with WASM-based workers, let them talk to each other, and monitor them, among other things.
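The durability guarantee described above rests on logging every interaction with the outside world and replaying the log after a restart. A minimal Python sketch of that idea (names like `OpLog` and `record_or_replay` are illustrative, not Golem's actual API):

```python
# Hypothetical sketch of an "operations log": every external operation is
# recorded, so a restarted worker replays recorded results deterministically
# instead of redoing side effects.

class OpLog:
    def __init__(self):
        self.entries = []   # recorded results of external operations
        self.cursor = 0     # replay position after a restart

    def record_or_replay(self, op_name, perform):
        if self.cursor < len(self.entries):          # replaying
            name, result = self.entries[self.cursor]
            assert name == op_name, "divergent replay"
            self.cursor += 1
            return result
        result = perform()                           # live execution
        self.entries.append((op_name, result))
        self.cursor += 1
        return result

def workflow(log, fetch_price):
    # The external call goes through the log, so a crash after this point
    # does not re-fetch on restart.
    price = log.record_or_replay("fetch_price", fetch_price)
    return price * 2

log = OpLog()
first = workflow(log, lambda: 10)     # live run: the fetch actually happens
log.cursor = 0                        # simulate a restart on another server
second = workflow(log, lambda: 99)    # replay: the recorded 10 is reused
print(first, second)                  # -> 20 20
```

On replay, the worker consumes the recorded result instead of performing the side effect again, so the restarted execution is indistinguishable from the original to the outside world.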
Watch
Running JS via Wasm faster with JIT by Dmitry Bezhetskov @ Wasm I/O 2024
Wasm I/O 2024 / 14-15 March, Barcelona Slides: https://docs.google.com/presentation/d/1nZsIf4O0xEjkw1RvjVkYtfRCNaOQKz3QVSkobekItPk/edit?usp=sharing So, we are adding a backend to SpiderMonkey’s codegen to enable JIT support for JavaScript running through Wasm. That sounds a bit cryptic, so let’s break it down. SpiderMonkey is the JavaScript engine used to run JavaScript inside the Firefox browser. SpiderMonkey is written in C++ and supports compilation into a Wasm module; see the live demo: https://mozilla-spidermonkey.github.io/sm-wasi-demo/. However, SpiderMonkey compiled into a Wasm module supports execution of JavaScript only in interpreter-only mode; it doesn’t support just-in-time compilation because there is no Wasm backend for that. There are backends for Arm, X86, X64, etc., but none for Wasm. Why do we want to add JIT support? Because we want speed. Right now there is no way to run JS scripts via Wasm fast; there are only interpreters. Why does a JIT improve performance? For the same reasons an interpreter is slower than a compiler: the JIT eliminates the interpreter loop, uses a more efficient ABI and, more importantly, can specialize polymorphic operations in JavaScript. So we not only enable the JIT tier in SpiderMonkey for Wasm, but also provide support for inline caches. Inline caching is a mechanism for specializing the behavior of particular operations, like addition or a call, to the specific arguments provided at runtime. With all that we can generate Wasm modules on the fly, instantiate them, and link them to provide a ~2x to ~11x speedup over the interpreter. In the talk we will cover how the whole scheme works with SpiderMonkey: 1. How to link modules on the fly into SpiderMonkey.wasm. 2. How to add an exotic Wasm backend to SpiderMonkey’s supported backend line-up – X64, X86, Arm, Wasm. 3. How to use the whole solution in the cloud instead of QuickJS. 4. How to get a speedup for your JS over Wasm, with test data.
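The inline-cache mechanism described above can be sketched in a few lines of Python. This is a toy monomorphic cache for a polymorphic `+` site, illustrating the specialization idea only, not SpiderMonkey's actual implementation:

```python
# A call site for "+" that caches a fast path for the last-seen operand
# types, so repeated calls with the same types skip generic dispatch.

class AddSite:
    def __init__(self):
        self.cached_types = None
        self.fast_path = None

    def add(self, a, b):
        types = (type(a), type(b))
        if types == self.cached_types:           # cache hit: no dispatch
            return self.fast_path(a, b)
        # Cache miss: choose a specialized handler and remember it.
        if types == (int, int):
            self.fast_path = lambda x, y: x + y  # integer add
        elif types == (str, str):
            self.fast_path = lambda x, y: x + y  # string concatenation
        else:
            self.fast_path = lambda x, y: x + y  # generic fallback
        self.cached_types = types
        return self.fast_path(a, b)

site = AddSite()
print(site.add(1, 2))      # miss: specializes for (int, int) -> 3
print(site.add(3, 4))      # hit: cached fast path -> 7
print(site.add("a", "b"))  # miss: respecializes for (str, str) -> ab
```

A real JIT goes one step further: instead of caching a closure, it patches specialized machine code (here, freshly generated Wasm) into the call site.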
Watch
WebAssembly Component Model: What? How? And why you should not ignore it! by Thorsten Hans @ Wasm I/O 2024
Wasm I/O 2024 / 14-15 March, Barcelona Slides: https://2024.wasmio.tech/slides/webassembly-component-model-what-how-and-why-you-should-not-ignore-it-wasmio24.pdf Uncover the advancements in the WebAssembly Component Model, revolutionizing language integration and eliminating boilerplate code. Explore its potential to shape WebAssembly usage beyond 2024, with practical examples demonstrating immediate benefits for you, your team, and customers. – In recent months, the WebAssembly Component Model has made significant progress and is gaining increasing traction. It allows developers to mix and match languages, compose bigger systems, and focus on solving real problems instead of writing boilerplate code again and again. In this talk, you will learn what the WebAssembly Component Model is and why it will drive overall WebAssembly usage in 2024 and beyond. We’ll explore simple but practical examples that demonstrate how you, your team, and your customers can benefit from adopting the Component Model today.
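For context, components declare their imports and exports in WIT (the WebAssembly Interface Type language), and bindings generated from such a file replace the hand-written glue code the abstract refers to. A minimal, hypothetical interface might look like this (package and names are illustrative):

```wit
package docs:example;

interface greeter {
  // Implemented in any source language; callable from any other.
  greet: func(name: string) -> string;
}

world app {
  export greeter;
}
```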
Watch
Paint by Numbers: High Performance Drawing in Wasm by Sean Isom @ Wasm I/O 2024
Wasm I/O 2024 / 14-15 March, Barcelona Slides: https://github.com/renderlet/wasmio-2024/blob/main/PaintByNumbers.pdf Have you ever tried to use WebAssembly to draw something? Complicated, isn’t it? We all know that WebAssembly was originally created as a way to bring compiled code into the browser. That compiled code typically needs a UI, but none of this is built into Wasm itself. Various parts of the web stack polyfill and supplement Wasm by exposing WebGL through Emscripten bindings or wasm-bindgen; sometimes this leads to separate web UIs that communicate with a Wasm app core via linear memory. Although this is a complicated setup, it can work well enough for apps built for a browser. But what about applications intended to run outside the browser? In pure WASI there’s no way to access the GPU, and much of the code written for the browser is not reusable here. We could use pure Wasm to render directly into a framebuffer stored in linear memory and copy that into VRAM, but this is neither performant nor scalable. We could also bypass WASI and call host functions that wrap the underlying hardware or graphics APIs, but this negates the security guarantees of Wasm for arbitrary code. None of these tradeoffs scales particularly well or leads to a reasonable developer experience. By building the wander toolkit, we’re on a mission to build universal rendering in Wasm – and learning a lot of lessons along the way. Come listen to some of these lessons about efficiently using linear memory and orchestrating effective task-level parallelism, and get an early preview of how you could access GPUs in a standardized way through WASI, bridging the gap between native and web code.
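The "framebuffer in linear memory" approach mentioned above is easy to sketch: the guest writes RGBA bytes into a flat buffer, and the host copies the whole buffer out every frame. A Python toy (all names illustrative) showing where the per-frame copy cost comes from:

```python
# Guest side: a flat bytearray standing in for Wasm linear memory,
# holding a WIDTH x HEIGHT RGBA framebuffer.

WIDTH, HEIGHT, BPP = 320, 240, 4                 # RGBA, 4 bytes per pixel
framebuffer = bytearray(WIDTH * HEIGHT * BPP)    # "linear memory"

def put_pixel(x, y, rgba):
    offset = (y * WIDTH + x) * BPP
    framebuffer[offset:offset + BPP] = bytes(rgba)

def present(fb):
    # Host side: in a real system this is the copy into VRAM that happens
    # every frame, proportional to the full buffer size regardless of how
    # little actually changed.
    return bytes(fb)

put_pixel(10, 20, (255, 0, 0, 255))    # draw a single red pixel
vram = present(framebuffer)
print(len(vram))                       # full-buffer copy: 307200 bytes
```

Even for one changed pixel, the host pays for the whole 300 KB copy each frame, which is exactly the scalability problem that motivates direct, standardized GPU access from WASI.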
Watch
Chicory: Creating a Language-Native Wasm Runtime by Benjamin Eckel & Andrea Peruffo @ Wasm I/O 2024
Wasm I/O 2024 / 14-15 March, Barcelona Slides: https://andreaperuffo.com/chicory-wasmio-deck/ Repo: https://github.com/dylibso/chicory This talk will outline how and why we created Chicory: a JVM-native Wasm runtime. You should walk away with an understanding of what it takes to create a language-native runtime and why you might want to create one for your own language. – There are a number of mature Wasm runtimes to choose from to execute a Wasm module – to name a few: V8, Wasmtime, Wasmer, WasmEdge. Although these can be great choices for running a Wasm application, embedding them into your existing application has some downsides. Because these runtimes are written in C/C++/Rust/etc., they must be distributed and run as native code. This can cause a lot of additional friction and restrictions in a JVM application. Similar problems exist in other ecosystems as well (see Golang and Wazero). In this talk we will outline what these problems are and how building a language-native runtime can solve them. We’ll also discuss what work is involved in creating a new runtime and what we have learned from the Wazero project.
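To make "language-native runtime" concrete: at its core, such a runtime is an interpreter loop over Wasm opcodes written entirely in the host language, with no native code to distribute. A toy Python sketch of that idea (a teaching aid, not Chicory's actual design):

```python
# A minimal stack-machine interpreter for a few Wasm-like i32 opcodes.
# Everything runs in the host language itself, which is the property that
# lets a runtime like Chicory live inside the JVM without JNI or bundled
# native binaries.

def run(code, args):
    stack = list(args)
    for op in code:
        if op[0] == "i32.const":
            stack.append(op[1])
        elif op[0] == "i32.add":
            b, a = stack.pop(), stack.pop()
            stack.append((a + b) & 0xFFFFFFFF)   # wrap like 32-bit Wasm
        elif op[0] == "i32.mul":
            b, a = stack.pop(), stack.pop()
            stack.append((a * b) & 0xFFFFFFFF)
        else:
            raise ValueError(f"unknown opcode: {op[0]}")
    return stack.pop()

# (x + 2) * 3 with x = 5
program = [("i32.const", 2), ("i32.add",), ("i32.const", 3), ("i32.mul",)]
print(run(program, [5]))   # -> 21
```

A real runtime adds module decoding, validation, linear memory, traps, and imports on top of this loop, but the loop itself is the part that must exist in the host language.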
Watch