We’re incredibly excited to announce the release of JetStream 3, built in close collaboration with Apple, Mozilla, and other partners in the web ecosystem!
While we’ve covered the high-level details of this release in our shared announcement blog post, we wanted to take a moment here to dive a little deeper. In this post, we’ll pull back the curtain on the benchmark itself, explore the methodology behind our choices, and share the motivations driving these major updates.
Before we get into the "what," it helps to talk about the "why." Why do browser engineers care so much about benchmarks?
At its core, benchmarking serves as a critical safety net for catching performance regressions before they ever reach users. But beyond that, benchmarks act as a powerful motivation function—a sort of "gamification" for browser engineers. Having a clear target helps us prioritize our efforts and decide exactly which optimizations deserve our focus. It also drives healthy competitiveness between different browser engines, which ultimately lifts the entire web ecosystem.
Of course, the ultimate goal isn't just to make a number on a chart go up; it's to meaningfully improve user experience and real-world performance.
Just like Speedometer 3, JetStream 3 is the result of a massive collaborative effort across all major browser engines, including Apple, Mozilla, and Google.
We adopted a strict consensus model for this release. This means we only added new workloads when everyone agreed they were valuable and representative. This open governance model has led to an incredibly productive collaboration with buy-in from multiple parties, ensuring the benchmark serves the best interests of the overall Web ecosystem.
The last major release, JetStream 2, came out in 2019. In the technology space—and especially on the Web—six years is an eternity.
There's a well-known concept in economics called Goodhart's Law, which states that when a measure becomes a target, it ceases to be a good measure. Over time, engines naturally optimize for the specific patterns of a benchmark, and the metrics slowly lose their correlation with real-world performance. Speedometer recently received a massive update to account for this, and it only makes sense that JetStream is next in line.
You might be wondering: with the recent release of Speedometer 3, why do we need another benchmark?
While Speedometer is fantastic for measuring UI rendering and DOM manipulation, JetStream has a different focus: the computationally intensive parts of Web applications. We're talking about use cases like browser-based games, physics simulations, framework cores, cryptography, and complex algorithms.
There are also practical engineering considerations. JetStream is designed so that it can run in engine shells—like d8, the standalone shell for V8. For engine developers, this is a massive advantage. Building a shell is significantly quicker than compiling a full browser like Chrome, allowing engineers to iterate faster. Because d8 is single-process, it also produces far less background noise, leading to more stable testing. This shell-compatibility also makes JetStream highly valuable for hardware and device vendors running simulators. It is a trade-off—a shell is slightly further removed from a full, real-world browser environment—but the engineering velocity it unlocks is well worth it.
Building a benchmark requires a delicate balance between microbenchmarks and real applications.
Microbenchmarks are great engineering tools; they have a high signal-to-noise ratio and make it easy to see the effects of one specific optimization. While they make sense for early improvements of new features, they also often encourage overfitting in the long run. Engines might optimize heavily for a tiny loop that looks great on the benchmark but does absolutely nothing to help real users.
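To make the overfitting risk concrete, here is a hypothetical illustration (not an actual JetStream workload): a microbenchmark concentrates all execution time in one tiny, monomorphic kernel, while an application-shaped task spreads time across parsing, business logic, and aggregation.

```javascript
// Hypothetical microbenchmark: one hot loop dominates 100% of execution,
// so an engine heuristic tuned to this exact shape "wins" the benchmark
// without helping larger applications with flat, diverse profiles.
function microbenchmark() {
  let sum = 0;
  for (let i = 0; i < 1e6; i++) {
    sum += i * 2; // a single tiny, monomorphic kernel
  }
  return sum;
}

// A more application-shaped task spreads time across many call sites:
function endToEndish(input) {
  const parsed = input.split(",").map(Number);        // parsing
  const evens = parsed.filter((n) => n % 2 === 0);    // business logic
  return evens.reduce((a, b) => a + b, 0);            // aggregation
}
```

An optimization that triples the speed of the first function may do nothing for the second, which is closer to what real pages look like under a profiler.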
Because of this, a primary criterion for inclusion in JetStream 3 is that a workload should represent a real, end-to-end use case (or at least a highly abstracted form of one).
We also heavily prioritized diversity. We don’t want workloads that all exercise the exact same hot loop. We want coverage across different frameworks, varied libraries, diverse source languages, and distinct toolchains.
Finally, we had to lay down some practical ground rules for which workloads could be included.
One of the most significant shifts in JetStream 3 is a greatly increased focus on WebAssembly (Wasm), along with a major update to how we benchmark it.
When JetStream 2 was created, Wasm was still in its infancy. Fast forward to today, and Wasm is significantly more widespread.
Because the language has evolved so rapidly, JetStream 2 became outdated quickly. It only tested the Wasm MVP (Minimum Viable Product). Today, the Wasm spec includes powerful features like SIMD (single instruction, multiple data), WasmGC, and Exception Handling—none of which were being properly benchmarked.
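For a concrete sense of how feature coverage is probed from JavaScript: WebAssembly.validate reports whether an engine accepts a given byte sequence as a valid module. The sketch below only validates the minimal MVP header; real feature detection (for example, the wasm-feature-detect library) encodes a tiny module that uses one SIMD, GC, or exception-handling instruction and checks whether it validates.

```javascript
// Sketch: WebAssembly.validate tells you whether the engine accepts a
// module under the features it implements. Feature-detection libraries
// build one tiny module per post-MVP instruction and validate each.

// The smallest valid module: just the magic bytes "\0asm" and version 1.
const mvpHeader = new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00]);

console.log(WebAssembly.validate(mvpHeader)); // true in any Wasm-capable engine
```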
The ecosystem of tools has also completely transformed. The old workloads relied almost entirely on ancient versions of Emscripten compiling C/C++, often utilizing the deprecated asm.js backend via asm2wasm. Furthermore, some of the old microbenchmarks mis-incentivized the wrong optimizations. For example, the old HashSet-wasm workload rewarded aggressive inlining that actually hurt performance in real-world user scenarios.
To fix this, we sought out entirely new Wasm workloads, introducing 12 in total.
We expanded our toolchain coverage from just C++ to include five new toolchains: J2CL, Dart2wasm, Kotlin/Wasm, Rust, and .NET. This means we are now actively benchmarking Wasm generated from Java, Dart, Kotlin, Rust, and C#!
These workloads represent actual end-to-end tasks.
These aren't tiny, kilobyte-sized modules anymore. These are multi-megabyte applications that produce diverse, complex flamegraphs, pushing engines to their limits. Reflecting its heightened importance on the modern web, Wasm now makes up 15-20% of the overall benchmark suite, up from just 7% in JetStream 2. Beyond new workloads, JetStream 3 also overhauls scoring to ensure that runtime performance—not just instantiation—is accurately reflected in the total score.
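To illustrate the instantiation-versus-runtime distinction, here is a minimal sketch (not JetStream's actual harness or scoring formula) that times the two phases separately. The bytes encode a trivial module exporting an `add` function; real workloads are, as noted above, multi-megabyte modules.

```javascript
// A hand-assembled module exporting add(a, b) -> a + b.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                                // one function of type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                          // code section
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                    // local.get 0/1; i32.add; end
]);

// Phase 1: compilation + instantiation.
const t0 = performance.now();
const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));
const t1 = performance.now();

// Phase 2: runtime execution of the exported function.
let result = 0;
for (let i = 0; i < 100000; i++) result = instance.exports.add(result, 1);
const t2 = performance.now();

console.log({ instantiateMs: t1 - t0, runtimeMs: t2 - t1, result });
```

Scoring the two phases separately keeps an engine from looking fast simply by deferring work from instantiation into the first calls.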
We have many new, larger JavaScript workloads that better represent how JS is used in the wild. In addition to measuring pure execution speed, we have "startup" workloads that include parsing and framework setup code, more closely matching what happens on initial page load.
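A simple way to picture the startup-versus-execution difference (a hypothetical harness, not JetStream's actual driver): the first iteration of a workload pays parse, compile, and warm-up costs, while later iterations mostly measure optimized code.

```javascript
// runWorkload is a stand-in for a real workload's main function.
function runWorkload() {
  let s = 0;
  for (let i = 0; i < 100000; i++) s += i;
  return s;
}

// Time several iterations of the same workload.
const times = [];
for (let i = 0; i < 5; i++) {
  const t0 = performance.now();
  runWorkload();
  times.push(performance.now() - t0);
}

const startupMs = times[0]; // includes warm-up and compilation cost
const steadyMs =
  times.slice(1).reduce((a, b) => a + b, 0) / (times.length - 1);
console.log({ startupMs, steadyMs });
```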
With JetStream 3, the browser benchmarking space has taken another big step forward, giving browsers a new tool for improving performance for their users. Alongside Speedometer and MotionMark, these benchmarks give browser vendors and users alike a clear view of each engine's performance.
If you’d like to contribute to the benchmark with your own workloads or have suggestions for how we can make it better, feel free to join the repository on GitHub. We’re continually iterating on these benchmarks and will have more updates on each in the future as well.
A core part of the Android experience is the web. Whether you are browsing in Chrome or using one of the >90% of Android apps that utilize WebView, the speed of the web defines the speed of your phone. Today, we are proud to celebrate a major milestone: Android is now the fastest mobile platform for web browsing.
Through deep vertical integration across hardware, the Android OS, and the Chrome engine, the latest flagship Android devices are setting new performance records, outperforming all other mobile competitors in the key web performance benchmarks Speedometer and LoadLine and providing a level of responsiveness previously unseen on mobile.
Android flagship phones reach new high scores in web performance benchmarks (Chrome 146, March 2026)
Web performance isn't just about high scores—it’s about how your device feels every day. On Android, web content and its performance is central to the user experience.
Whether searching for information, catching up on the latest news, or shopping online, Android users spend a significant portion of their daily screen time interacting with web content. Chrome is one of the most popular Android apps in the US and worldwide. Furthermore, this usage increases sharply on tablets and foldables, where productivity use cases are key.
While the web is clearly important, a great web experience necessitates a fast browser and device: Modern websites are highly complex, with more than 200 million active sites serving everything from blog posts with dynamic ad auctions to desktop-class productivity tools. This complexity makes for a demanding workload that can stress even powerful devices.
To ensure a high-quality user experience, we focus on two critical pillars when evaluating web performance: responsiveness and page load speed.
Speedometer is the collaborative industry standard used by all major browser engine developers to measure web app responsiveness. It simulates real-world user actions—like adding items to a to-do list—to measure interaction latency.
While synthetic, Speedometer's workloads offer high consistency and are built with relevant, state-of-the-art web frameworks such as React, Angular, and jQuery; they include to-do apps, text editors, chart rendering, and a mock news portal.
Speedometer scores have a strong negative correlation (-0.8) with 99th-percentile interaction latency (INP) in the field. Thus, a higher Speedometer score directly translates to a more fluid, snappy feeling when you tap, scroll, or type on a website.
While interaction responsiveness is vital, it’s only half of the story. Users also care about how fast a page appears after they click a link. To measure this, Chrome and Android teams worked with Android SoC and OEM partners to develop LoadLine, an emerging end-to-end benchmark that simulates the complete process of loading a website.
Where traditional benchmarks often focus on synthetic tasks, LoadLine uses recorded, stable versions of select real-world websites. This includes simpler and more complex sites with varied characteristics, reflecting the most important types of mobile web content, such as shopping, search, and news portals.
LoadLine has proven that Android's page load performance is world-class: Top tier Android phones score up to 47% higher than non-Android competitors. And this matters: LoadLine scores also correlate well (-0.8) with median and high-percentile page load latency in the field.
Speedometer (left) and examples of LoadLine workloads (right)
Android’s current lead is the result of a concerted effort to tune the entire "stack"—from silicon to software.
We encouraged our Android partners to evaluate and tune their devices against Speedometer and LoadLine. While advances in SoCs' core performance build the foundation for fast web experiences, tuning of the OS and browser software stack is critical to utilizing the hardware effectively. Collaborating with select SoC and OEM partners, we utilized Speedometer and LoadLine to optimize Chrome and kernel scheduler policies.
As a result of these improvements, some Android flagship phones improved their Speedometer and LoadLine scores by 20-60% year-over-year, compared to their respective predecessor models. And these improvements translate to faster real-world web performance: Today, page loads are 4-6% faster and high-percentile interactions 6-9% faster on these newer models, for real users in the field.
We invite all developers and hardware partners to join us in using these benchmarks to push the boundaries of what’s possible on the mobile web.
We’re excited to announce that Google will launch Chrome for ARM64 Linux devices in Q2 2026, following the successful expansion of Chrome to Arm-powered macOS devices in 2020 and Arm-powered Windows devices in 2024.
Launching Chrome for ARM64 Linux devices allows more users to enjoy the seamless integration of Google’s most helpful services into their browser. This move addresses the growing demand for a browsing experience that combines the benefits of the open-source Chromium project with the Google ecosystem of apps and features.
This release represents a significant undertaking to ensure that ARM64 Linux users receive the same secure, stable, and rich Chrome experience found on other platforms.
Get the best of the Google ecosystem
With Chrome, you are able to leverage the full power of the Google ecosystem, providing a more cohesive and feature-rich environment designed for convenience and cross-device continuity. By signing into a Google Account, your bookmarks, browsing history, and open tabs follow you across devices. You can easily access the best extensions the Chrome Web Store has to offer, without needing to use specialized tools or alter developer settings. And you can effortlessly translate webpages with a single click.
Use the browser that is secure by design
Chrome also offers the added benefit of Google’s strongest security protections. Enabling Enhanced Protection in Safe Browsing offers real-time protection against phishing and malware by leveraging AI alongside Google’s list of known threats. With the Google Pay integration you can easily and securely manage your payments, using Chrome autofill for an added level of convenience. And the Google Password Manager lets you securely store, generate, and sync complex passwords across all your devices, eliminating the need to memorize multiple logins. It goes beyond simple storage by actively monitoring your credentials for data breaches and providing "Password Checkup" alerts if any of your accounts are compromised.
Partnering with the industry
Last year, NVIDIA introduced the DGX Spark, an AI supercomputing device that packs its Grace Blackwell architecture into a compact, 1-liter form factor. Google is partnering with NVIDIA to make it easier for DGX Spark users to install Chrome. Users with other Linux distributions can also install the ARM64 version of Chrome by visiting chrome.com/download.
This launch marks a major milestone in our commitment to the Linux community and the Arm ecosystem. We look forward to seeing how developers and power-users leverage Chrome on this next generation of high-performance devices.