List of videos

Multi-Platform Libraries With Swift for WebAssembly by Max Desiatov @ Wasm I/O 2024
Wasm I/O 2024 - Barcelona, 14-15 March
WebAssembly support in Swift started as a community project and over the years evolved into an ecosystem of libraries and developer tools. The talk showcases features of Swift, including its ability to seamlessly interoperate with C and C++ libraries, all supported by the WebAssembly toolchain. I’ll demo these capabilities by calling directly into a lower-level C++ library, using programmatic electronic music composition as an example. When compiled to Wasm, the library can run in the browser or on the edge, as well as natively on macOS, Linux, and Windows, with few changes to its code.
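The browser half of that story comes down to instantiating the compiled module behind a WASI-compatible shim. Below is a minimal TypeScript sketch, not code from the talk: it assumes the Swift library was built as a WASI command module named music.wasm and uses the community @bjorn3/browser_wasi_shim package; the file name and the shim choice are assumptions.

```ts
// A hedged sketch of running a Swift-built WASI module in the browser.
// "music.wasm" is a placeholder; @bjorn3/browser_wasi_shim is one community shim option.
import { WASI, File, OpenFile, ConsoleStdout } from "@bjorn3/browser_wasi_shim";

const wasi = new WASI(
  ["music.wasm"],                                        // argv
  [],                                                    // environment variables
  [
    new OpenFile(new File([])),                          // stdin (empty)
    ConsoleStdout.lineBuffered((l) => console.log(l)),   // stdout -> console
    ConsoleStdout.lineBuffered((l) => console.error(l)), // stderr -> console
  ]
);

const { instance } = await WebAssembly.instantiateStreaming(fetch("music.wasm"), {
  wasi_snapshot_preview1: wasi.wasiImport,
});

// Invokes the module's _start entry point (Swift's main), which drives the C++ library.
wasi.start(instance as any); // cast: the shim expects an instance exporting _start and memory
```

A SwiftWasm project would often rely on its own tooling and generated JavaScript glue instead; this sketch only shows the raw shim path for orientation.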
Watch
Bringing honest computing to the WASM world by Etienne Bossé @ Wasm I/O 2025
Wasm I/O 2025 - Barcelona, 27-28 March
Slides: https://2025.wasm.io/slides/bringing-honest-computing-to-the-wasm-world-wasmio25.pdf
In this talk, we will introduce the concept of honest computing and why the need for it has never been more critical. Honest computing envisions systems that prioritise data and code integrity, enable accountability, and provide confidentiality by design and by default, in an attestable and demonstrable manner. These systems operate transparently, ensuring reliability without hidden agendas or biases. We will then see how the WebAssembly (Wasm) ecosystem fits perfectly into this vision. Its portable, sandboxed execution environment and deterministic builds provide the foundation for secure, verifiable computation. By combining the Wasm ecosystem with technologies like Trusted Execution Environments (TEEs) and distributed ledgers, developers can build “honest applications” that uphold integrity and accountability at all times. This talk explores how Wasm’s unique characteristics, such as its format, memory sandboxing, and portability, make it an ideal building block for developers and an essential tool for broader adoption of honest computing. We will explore practical architectures, demonstrate how the Wasm ecosystem integrates within honest computing, and showcase some real-world examples. By the end, you will have a better understanding of what honest computing is, why it matters, and how to leverage Wasm to build honest applications in your software.
Watch
Smarter Operating Systems Will Use Wasm - The Coming OS Revolution by Jonas Kruckenberg @ Wasm I/O
Wasm I/O 2025 - Barcelona, 27-28 March
Slides: https://2025.wasm.io/slides/smarter-operating-systems-will-use-wasm-the-coming-os-revolution-wasmio25.pdf
Repo: https://github.com/JonasKruckenberg/k23
Operating systems are the ultimate generic challenge: they have to run on a wide variety of hardware and support a multitude of programs. This puts them in the awkward position of being very conservative and making less-than-ideal decisions in the name of portability, reliability, and security. k23 started with the realization that Wasm lets us flip the script on this: by embedding a Wasm just-in-time compiler directly into a microkernel, we can aggressively optimize components for the hardware at hand and tune memory allocations to the running programs. We can also be much more helpful, automatically checking for use-after-free bugs, printing stack traces, snapshotting application state, tracing program execution, and more. In my talk, I want to present k23, show how strong isolation and capability-based security lead to more secure software and how embedded JIT compilers lead to a more performant and helpful OS, and talk about what the future might hold.
Watch
Empowering the Future of WebAssembly: The Bytecode Alliance Mission
Wasm I/O 2025 - Barcelona, 27-28 March
By Natalia Venditto - Microsoft, Oscar Spencer - F5 NGINX, Till Schneidereit - Fermyon, Bailey Hayes - Cosmonic, and Ralph Squillace - Microsoft.
Watch
Code Anywhere, Share Everywhere: Wasm-Powered Dev Environments by Danny Macovei @ Wasm I/O 2025
Wasm I/O 2025 - Barcelona, 27-28 March
Leveraging compilers and runtimes that have themselves been compiled to Wasm, we can create Wasm from source code directly in the browser. With the Component Model and WASI, we can even author Wasm components that make use of the network. Moreover, with service workers, we can run the backend services that comprise a service mesh in the browser as we develop them. In this talk, we’ll cover the nuts and bolts of how this is possible with WebAssembly, as well as how these tools enable sharing with links at each phase of the development cycle, from writing your code, to debugging and testing it, to deployment.
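To make the service-worker piece of that workflow concrete, here is a small TypeScript sketch (not from the talk): a service worker intercepts requests to an /api/ route and answers them inside the browser, which is where an in-browser-compiled Wasm component would be invoked. The route and the handleInBrowser helper are made up for the example.

```ts
// sw.ts — compile with TypeScript's "webworker" lib so FetchEvent and
// ServiceWorkerGlobalScope are available. Illustrative only.
export {}; // treat this file as a module
declare const self: ServiceWorkerGlobalScope;

self.addEventListener("fetch", (event) => {
  const url = new URL(event.request.url);
  if (url.pathname.startsWith("/api/")) {
    // In the setup described above, a Wasm component compiled in the browser
    // would be invoked here; this stub just returns a canned JSON response.
    event.respondWith(handleInBrowser(event.request));
  }
  // All other requests fall through to the network as usual.
});

async function handleInBrowser(request: Request): Promise<Response> {
  const payload = { path: new URL(request.url).pathname, servedBy: "service-worker" };
  return new Response(JSON.stringify(payload), {
    headers: { "Content-Type": "application/json" },
  });
}
```

Because the worker owns the fetch path, the "backend" it emulates is fully shareable as a link, which is the property the talk builds on.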
Watch
Do You Want to Play Doom in Your Browser? by Bruce Gain @ Wasm I/O 2025
Wasm I/O 2025 - Barcelona, 27-28 March
Slides: https://docs.google.com/presentation/d/1hORpnPsTxS0_zBTVZcL-GvWcLjdU6-yLEWU1DTV7e8w/edit#slide=id.p1
WebAssembly (Wasm) is increasingly playing a key role in video game distribution and online gameplay. Additionally, Wasm serves as a pivotal enabler for media streaming by industry giants such as Microsoft, Disney, Netflix, and others. While the intricate “under-the-hood” workings and infrastructure of media streaming remain largely opaque due to proprietary services, this talk provides an overview of how gaming and other content are delivered today and how that delivery should evolve in the future. As an example of how game distribution and play can work, we demonstrate how WebAssembly, in conjunction with Emscripten, is used to compile the original Doom’s C code into a format executable within a browser environment. The fork we created on GitHub proved more challenging to work with than anticipated; during the talk, we discuss the struggles we faced and the solutions we implemented to successfully get Doom running in a browser. By dissecting this process, attendees will gain valuable insight into the inner workings of WebAssembly for game streaming. Attendees will also have the chance to play Doom directly in their browser using a link provided during the session. Join us to explore how WebAssembly is shaping the future of gaming and media streaming.
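For a sense of what the Emscripten route looks like from the page's side, here is a hedged TypeScript sketch rather than the speakers' actual setup. It assumes the C sources were built with Emscripten's MODULARIZE/ES6 output into doom.js plus doom.wasm, with a shareware WAD preloaded into the virtual filesystem at build time; all file names, flags, and arguments are illustrative.

```ts
// Loading an Emscripten-built game in the browser — a sketch, not the talk's code.
// Assumes a build along the lines of:
//   emcc src/*.c -O2 -sMODULARIZE -sEXPORT_ES6 --preload-file doom1.wad -o doom.js
// (doom.js is generated glue without type declarations, so a .d.ts stub may be needed.)
import createDoom from "./doom.js";

const canvas = document.getElementById("screen") as HTMLCanvasElement;

const Module = await createDoom({
  canvas,                                    // Emscripten's SDL glue renders here
  print: (line: string) => console.log(line),
  printErr: (line: string) => console.error(line),
  arguments: ["-iwad", "doom1.wad"],         // argv handed to the game's main()
});
// Module now exposes the Emscripten runtime (FS, memory, etc.) for further interaction.
```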
Watch
Making PHP apps in Wasm incredibly fast by Syrus Akbary & Edoardo Marangoni @ Wasm I/O 2025
Wasm I/O 2025 - Barcelona, 27-28 March
Slides: https://speakerdeck.com/syrusakbary/making-php-extremely-fast-in-webassembly
In this talk, we will share the story of how we leveraged WebAssembly to make PHP perform nearly as fast as native code, and in many cases even faster. We’ll walk through the steps we took, the challenges we faced, and the tangible impact it had on our services. Whether you’re a developer, architect, or performance enthusiast, this talk will showcase practical insights into bridging the gap between interpreted languages and native performance.
KEY TAKEAWAYS
- Learn how WebAssembly can be a game-changer for PHP and other interpreted languages.
- Discover the optimizations and strategies we employed to achieve near-native speed.
- Understand the real-world impact on scalability, latency, and resource efficiency.
WHO SHOULD ATTEND
This talk is for developers, architects, and anyone interested in WebAssembly, PHP, or high-performance web applications. Whether you’re working on modernizing legacy systems or exploring cutting-edge web technologies, this session will provide actionable insights.
WHY THIS TALK?
The convergence of WebAssembly and traditional languages like PHP opens up a new frontier for web development. Our experience demonstrates how embracing Wasm can redefine performance expectations, making this talk a must-attend for anyone looking to push boundaries.
Watch
Privacy First: Building LLM-Powered Web Apps with client side WASM by Shivay Lamba & Saiyam Pathak
Wasm I/O 2025 - Barcelona, 27-28 March
It’s no secret that machine learning has long been mostly a Python game, but the recent surge in popularity of ChatGPT has brought many new developers into the field. With JavaScript being the most widely used programming language, it’s no surprise that this has included many web developers, who have naturally tried to build web apps. There’s been a ton of ink spilled on building with LLMs via API calls to the likes of OpenAI, Anthropic, Google, and others, but in those cases the user sends their data and prompts to these providers’ servers, which is not fully secure. Relying solely on cloud APIs also raises issues of cost, latency, and privacy, and some companies and organizations require a privacy-first approach: building web apps using exclusively local models and technologies, preferably ones that run in the browser. This is where open source tools like LangChain and Voy come into the picture.
In this talk, we demonstrate building real-time conversational agents using local machine learning that address these concerns while unlocking new capabilities. We detail constructing a complete language model pipeline that runs fully in the browser. This includes ingesting documents, embedding text into vectors, indexing them in a local vector store, and interfacing with a state-of-the-art local model (for example, via Ollama) for text generation. By using lightweight packages like Transformers.js and Voy, an open source vector store that runs in the browser with the help of WebAssembly, we can quantize and compile models to run efficiently on the user’s device, all thanks to WASM. This allows us to build complex conversational workflows like retrieval-augmented generation entirely on-device. We handle ingesting external knowledge sources, “dereferencing” conversational context, and chaining local models together to enable contextual, multi-turn conversations.
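To make the embed-index-retrieve step of that pipeline concrete, here is a compressed TypeScript sketch using Transformers.js and Voy. It is an illustration based on the libraries' public examples, not the speakers' code: the model name, the toy document chunks, and the exact Voy resource shape are assumptions.

```ts
// In-browser embedding and retrieval — a sketch; names and shapes may differ slightly
// from the talk's actual pipeline.
import { pipeline } from "@xenova/transformers";
import { Voy } from "voy-search";

// 1. Embed document chunks locally; the model runs in the browser via WASM/ONNX.
const embed = await pipeline("feature-extraction", "Xenova/all-MiniLM-L6-v2");

const chunks = [
  "WebAssembly is a binary instruction format for a stack-based virtual machine.",
  "Voy is a vector store that runs in the browser via WebAssembly.",
];
const resources = await Promise.all(
  chunks.map(async (text, i) => {
    const output = await embed(text, { pooling: "mean", normalize: true });
    return {
      id: String(i),
      title: text,
      url: `#chunk-${i}`,
      embeddings: Array.from(output.data as Float32Array),
    };
  })
);

// 2. Index the vectors in Voy and retrieve the nearest chunk for a question.
const index = new Voy({ embeddings: resources });
const question = await embed("What is Voy?", { pooling: "mean", normalize: true });
const { neighbors } = index.search(question.data as Float32Array, 1);
// neighbors[0] points at the most relevant chunk, which would be stitched into
// the prompt for the local text-generation model — the RAG loop described above.
```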
Watch
The Future of Write Once, Run Anywhere: From Java to WebAssembly by Patrick Ziegler & Fabio Niephaus
Wasm I/O 2025 - Barcelona, 27-28 March
Slides: https://2025.wasm.io/slides/the-future-of-write-once-run-anywhere-from-java-to-webassembly-wasmio25.pdf
Programming languages designed to be statically compiled, such as Rust, Go, and C++, already provide good support for Wasm, unlike languages such as Java or Python. But what does it take to compile Java to Wasm? In this session, we’ll introduce the brand new Wasm backend for GraalVM Native Image, which allows developers to compile Java applications into efficient Wasm modules leveraging the new Wasm GC proposal. We explain how Native Image and the new backend work, outline possible use cases, and show live demos. We also discuss current limitations and provide an overview of what to expect next. If you’re interested in the future of Java and WebAssembly, this talk is for you!
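The consumption side is deliberately unremarkable: once Java has been compiled ahead of time to a Wasm module, a page loads it like any other module. The TypeScript sketch below is generic and hedged; it is not GraalVM's own loader (the Native Image Wasm backend ships its own generated JavaScript support code, which this does not reproduce), and the file name, import object, and entry-point name are placeholders.

```ts
// Generic loading of an ahead-of-time compiled Wasm module — placeholder names throughout.
// A real Native Image Wasm build comes with generated JS support code that supplies the
// imports the module needs; the empty import object here merely stands in for that.
const { instance } = await WebAssembly.instantiateStreaming(fetch("app.wasm"), {
  // imports from the generated support code would go here
});

// Call the module's entry point if it is exported under this (assumed) name.
const start = instance.exports._start as (() => void) | undefined;
start?.();
```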
Watch