Bun v1.3.14


Jarred Sumner · May 13, 2026

To install Bun

curl
curl -fsSL https://bun.sh/install | bash
npm
npm install -g bun
powershell
powershell -c "irm bun.sh/install.ps1|iex"
scoop
scoop install bun
brew
brew tap oven-sh/bun
brew install bun
docker
docker pull oven/bun
docker run --rm --init --ulimit memlock=-1:-1 oven/bun

To upgrade Bun

bun upgrade

Bun.Image — Built-in Image Processing

Bun now ships a built-in image processing API that handles JPEG, PNG, WebP, GIF, and BMP — plus HEIC, AVIF, and TIFF on macOS and Windows — with zero native module installs.

Bun.Image provides a chainable pipeline for decoding, transforming, and encoding images, designed as a drop-in alternative to sharp for common server-side image operations.

// Resize and convert a photo to WebP
await Bun.file("photo.jpg")
  .image()
  .resize(1024, 1024, { fit: "inside" })
  .rotate(90)
  .webp({ quality: 85 })
  .write("thumb.webp");

// Generate a thumbnail from an upload in a single expression
return new Response(new Bun.Image(upload).resize(200).jpeg());

Input sources

Bun.Image accepts path strings, ArrayBuffer/TypedArray (zero-copy), Blob/BunFile/S3File, and data: URLs. You can also use Bun.file("photo.jpg").image() or blob.image() to start a pipeline.

Chainable transforms

The pipeline supports .resize(w, h?, {filter, fit, withoutEnlargement}), .rotate(90|180|270), .flip(), .flop(), and .modulate({brightness, saturation}). Output format is set with .jpeg(), .png(), .webp(), .heic(), or .avif() — each with format-specific quality/compression options.

Resize filters

All sharp filters are supported: nearest, box, bilinear, cubic, mitchell, lanczos2, lanczos3, plus mks2013 and mks2021.

Terminal methods

All processing runs off the main thread (except metadata()). Output via .bytes(), .buffer(), .blob(), .toBase64(), .dataurl(), .placeholder() (thumbhash), .metadata(), or .write(dest).

const meta = await new Bun.Image(buf).metadata();
// { width: 1920, height: 1080, format: "jpeg", ... }

const placeholder = await Bun.file("hero.jpg").image().placeholder(); // thumbhash data URL for blur-up

Body integration

Bun.Image instances work directly as response/request bodies with automatic Content-Type:

return new Response(new Bun.Image(upload).resize(200).jpeg());

Platform-specific formats

| Format | macOS | Windows | Linux |
|---|---|---|---|
| JPEG | ✅ | ✅ | ✅ |
| PNG | ✅ | ✅ | ✅ |
| WebP | ✅ | ✅ | ✅ |
| GIF | ✅ | ✅ | ✅ |
| BMP (simple) | ✅ | ✅ | ✅ |
| TIFF | ✅ decode | ✅ decode | |
| HEIC | ✅ decode + encode | ✅ decode + encode | |
| AVIF | ✅ decode (+ encode on Apple Silicon) | ✅ decode + encode | |

JPEG, PNG, WebP, GIF, and BMP use statically linked codecs and produce identical output across all platforms. HEIC, AVIF, and TIFF use OS system backends (ImageIO + vImage on macOS, WIC on Windows) with lazy symbol resolution for zero startup cost.

Performance vs sharp 0.34.5

Benchmarked on linux/x64 with 50 iterations and sharp.concurrency(1):

| Operation | Bun.Image | sharp | Speedup |
|---|---|---|---|
| metadata() | 0.004 ms | 0.28 ms | 70× |
| 1080p PNG → 400×400 → JPEG | 28.6 ms | 39.5 ms | 1.38× |
| 1080p PNG → 800×600 → WebP | 82.7 ms | 110.1 ms | 1.33× |
| 4K JPEG → 800×450 → JPEG | 35.8 ms | 45.5 ms | 1.27× |
| 4K JPEG → 1920×1080 → JPEG | 57.2 ms | 69.9 ms | 1.22× |
| 12MP JPEG → 1024×768 → WebP | 138 ms | 165 ms | 1.20× |

The performance comes from i16 fixed-point SIMD resize kernels, JPEG IDCT scaling to the smallest sufficient size, zero-copy ArrayBuffer borrowing, and a single pre-allocated arena for resize scratch memory.
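As a sketch of the fixed-point idea (an illustrative scalar version, not Bun's actual SIMD kernel): resize weights in [0, 1] are scaled to 16-bit integers, the multiply-accumulate runs entirely in integer arithmetic, and a final shift converts back.

```typescript
// Illustrative i16 fixed-point convolution (NOT Bun's SIMD kernel):
// weights in [0, 1] are scaled by 2^14 so they fit in an i16, the
// multiply-accumulate stays in integers, and a shift converts back.
const SHIFT = 14;
const ONE = 1 << SHIFT; // 16384

function toFixed(w: number): number {
  return Math.round(w * ONE);
}

// Weighted sum of source pixels (e.g. one output pixel of a 1-D resize).
function convolve(pixels: number[], weights: number[]): number {
  let acc = 0;
  for (let i = 0; i < pixels.length; i++) {
    acc += pixels[i] * toFixed(weights[i]); // integer multiply-accumulate
  }
  // Round-to-nearest, then shift back out of fixed point.
  return (acc + (ONE >> 1)) >> SHIFT;
}

// A 2-tap bilinear blend of 100 and 200 at t = 0.25 → 125.
console.log(convolve([100, 200], [0.75, 0.25]));
```

SIMD versions of this loop process many pixels per instruction, which is where the bulk of the resize speedup comes from.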

Global Virtual Store

bun install --linker=isolated now supports a shared global virtual store via the install.globalStore = true option in bunfig.toml. Instead of cloning every package from the cache into each project's node_modules on every install, packages are materialized once into a global <cache>/links/ directory, and each project's node_modules/.bun/<pkg>@<ver> becomes a symlink into it.

Warm installs — lockfile present, cache warm, node_modules wiped (the common CI path) — now perform ~1 symlink() per package instead of ~1 clonefileat() per file copy. On macOS APFS, clonefileat() holds a volume-wide kernel lock that made parallelization ineffective. The global store eliminates those calls entirely on the warm path.
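A rough sketch of the resulting layout (the package name and exact paths here are illustrative, not Bun's precise naming scheme):

```shell
# Illustrative layout only — exact paths/naming are Bun internals.
base=$(mktemp -d)
store="$base/cache/links"                 # shared global virtual store
proj="$base/project/node_modules/.bun"    # per-project install dir
mkdir -p "$store/left-pad@1.3.0/node_modules/left-pad" "$proj"

# Warm install: one symlink per package instead of one clone per file.
ln -s "$store/left-pad@1.3.0" "$proj/left-pad@1.3.0"
readlink "$proj/left-pad@1.3.0"
```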

Benchmarks — warm install of a ~1,400-package fixture on Apple Silicon macOS (hyperfine --warmup 3 --runs 10):

| | Wall time | System time | clonefileat calls |
|---|---|---|---|
| --linker hoisted | 823 ms | 478 ms | 1,387 |
| --linker isolated (before) | 841 ms | 1,256 ms | 1,387 |
| --linker isolated (after) | 115 ms | 94 ms | 0 |

This is still experimental, so the global store is off by default with the isolated linker.

To enable:

# bunfig.toml
[install]
globalStore = true

Or via environment variable:

BUN_INSTALL_GLOBAL_STORE=1 bun install

A package is eligible for the global store only when it comes from an immutable cache source (npm registry, git, tarball — unpatched, no trusted lifecycle scripts) and all of its transitive dependencies are also eligible. Ineligible packages fall back to per-project copies automatically.

The entry hash encodes the package's resolved dependency closure, so two projects that resolve a package to the same transitive versions share one on-disk entry, while a project with different resolutions gets its own.
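One way such a closure hash could work (a sketch, not Bun's exact scheme): canonicalize the resolved transitive versions so the digest is independent of resolution order.

```typescript
import { createHash } from "node:crypto";

// Illustrative sketch (not Bun's exact scheme): hash a package's resolved
// transitive dependency closure so projects that resolve the same versions
// share one store entry.
function closureHash(pkg: string, closure: Record<string, string>): string {
  const canonical = Object.entries(closure)
    .sort(([a], [b]) => (a < b ? -1 : 1)) // order-independent
    .map(([name, version]) => `${name}@${version}`)
    .join("\n");
  return createHash("sha256").update(`${pkg}\0${canonical}`).digest("hex");
}

// Same resolutions → same entry, regardless of enumeration order:
const a = closureHash("lib", { foo: "1.0.0", bar: "2.1.0" });
const b = closureHash("lib", { bar: "2.1.0", foo: "1.0.0" });
console.log(a === b); // true

// Different transitive resolution → its own entry:
const c = closureHash("lib", { foo: "1.0.1", bar: "2.1.0" });
console.log(a === c); // false
```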

This release also fixes a pre-existing issue: Bun now synthesizes an implicit "*" optional peer dependency for entries that appear in peerDependenciesMeta but not in peerDependencies (matching pnpm/yarn behavior). This fixes compatibility with packages like webpack-cli.

HTTP/3 (QUIC) support in Bun.serve

⚠️ Highly experimental. HTTP/3 support is new and likely has bugs. Do not deploy http3: true to production yet.

Bun.serve now supports HTTP/3 over QUIC. Enable it with a single flag:

Bun.serve({
  port: 443,
  tls: { cert, key },
  http3: true, // also listen on UDP/443 for HTTP/3
  fetch(req) {
    return new Response("hi");
  },
});

When http3: true is set alongside tls, Bun binds TCP for HTTP/1.1+2 and UDP for HTTP/3 on the same port. Your existing fetch handler and routes work identically across all three protocols — no code changes needed. HTTP/1.1 and HTTP/2 responses automatically include Alt-Svc: h3=":<port>"; ma=86400 so browsers discover the QUIC endpoint.

You can also serve HTTP/3 only:

Bun.serve({
  port: 443,
  tls: { cert, key },
  http3: true,
  http1: false, // disable HTTP/1.1
  fetch(req) {
    return new Response("h3 only");
  },
});

Everything you'd expect works over HTTP/3: new Response(readableStream) for streaming, new Response(Bun.file("large.bin")), new Response(req.body) passthrough, req.url/req.headers/req.method across await boundaries, requestIP(), server.reload(), and graceful server.stop().

Performance

On Linux x64 (single process, loopback), HTTP/3 is significantly faster than HTTPS/1.1 from the same server instance:

| Benchmark | HTTP/3 | HTTPS/1.1 | HTTP/1.1 |
|---|---|---|---|
| Static route (routes) | 509,135 req/s | 189,130 req/s | 239,476 req/s |
| Dynamic fetch handler | 283,485 req/s | 142,323 req/s | 171,696 req/s |

~50% of HTTP/3 CPU time is spent inside lsquic; further optimizations may come in a future release.

Limitations

  • WebSocket over HTTP/3 is not supported yet (server.upgrade() returns false). WebTransport is a separate project.
  • 0-RTT is disabled.
  • unix: socket addresses skip the H3 listener (QUIC over Unix sockets is non-standard).
  • No trailer support, no Expect: 100-continue (matching HTTP/1.1 behavior).

Powered by lsquic v4.6.2.

Experimental HTTP/2 Client for fetch()

fetch() now supports HTTP/2 as an experimental feature. When enabled, Bun negotiates h2 via TLS ALPN — multiple concurrent fetches to the same origin share a single multiplexed TCP+TLS connection instead of opening separate HTTP/1.1 connections.

Enable it globally with an environment variable or CLI flag, or opt in per-request:

// Opt in globally:
//   BUN_FEATURE_FLAG_EXPERIMENTAL_HTTP2_CLIENT=1 bun run app.js
//   bun run --experimental-http2-fetch app.js

// Or per-request (works without the env flag):
const res = await fetch("https://example.com", { protocol: "http2" });

Multiplexing & connection coalescing

Parallel fetches to the same origin share one TLS handshake and one connection. The first request opens the socket; subsequent requests attach to the same HTTP/2 session up to the server's MAX_CONCURRENT_STREAMS limit, with overflow queued automatically.
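The slot accounting can be sketched as a tiny limiter (illustrative only, not Bun's internals): at most MAX_CONCURRENT_STREAMS requests are in flight per session, and the overflow waits in a queue.

```typescript
// Illustrative sketch (not Bun's internals): cap in-flight requests per
// session at the server's MAX_CONCURRENT_STREAMS, queueing the overflow.
class StreamLimiter {
  private inFlight = 0;
  private queue: Array<() => void> = [];
  constructor(private maxConcurrentStreams: number) {}

  async run<T>(task: () => Promise<T>): Promise<T> {
    if (this.inFlight >= this.maxConcurrentStreams) {
      // Overflow: wait until a stream slot frees up.
      await new Promise<void>((resolve) => this.queue.push(resolve));
    }
    this.inFlight++;
    try {
      return await task();
    } finally {
      this.inFlight--;
      this.queue.shift()?.(); // wake the next queued request
    }
  }
}

const limiter = new StreamLimiter(2);
let peak = 0, active = 0;
await Promise.all(
  Array.from({ length: 5 }, () =>
    limiter.run(async () => {
      peak = Math.max(peak, ++active);
      await new Promise((r) => setTimeout(r, 10));
      active--;
    }),
  ),
);
console.log(peak); // never exceeds 2
```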

Per-request protocol control

The new protocol option in RequestInit lets you pin the HTTP version:

// Force HTTP/2 — fails with HTTP2Unsupported if the server doesn't support it
await fetch("https://example.com", { protocol: "http2" });

// Force HTTP/1.1 — ignores the experimental flag
await fetch("https://example.com", { protocol: "http1.1" });

Accepted values: "http2", "h2", "http1.1", "h1".
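The alias mapping amounts to a simple normalization (a hypothetical helper, not a Bun API; "http3"/"h3" are the values accepted by the experimental HTTP/3 client):

```typescript
// Hypothetical helper (not a Bun API): normalize the accepted
// protocol aliases to canonical values.
type HttpProtocol = "http1.1" | "http2" | "http3";

function normalizeProtocol(value: string): HttpProtocol {
  switch (value) {
    case "h1":
    case "http1.1":
      return "http1.1";
    case "h2":
    case "http2":
      return "http2";
    case "h3":
    case "http3":
      return "http3";
    default:
      throw new TypeError(`Unknown protocol: ${value}`);
  }
}

console.log(normalizeProtocol("h2")); // "http2"
```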

What works

  • Keep-alive pooling — idle HTTP/2 sessions (with HPACK state) are reused by subsequent requests
  • Streaming request bodies — ReadableStream bodies are sent as DATA frames with proper flow control
  • REFUSED_STREAM and graceful GOAWAY — transparently retried (up to 5 attempts) for replayable bodies
  • Content-Length enforcement per RFC 9113 §8.1.1
  • Expect: 100-continue support

Hardening

The HTTP/2 client also includes RFC 9113 conformance and denial-of-service protections:

  • CONTINUATION flood / HPACK bomb mitigation: 256 KiB cap on both header-block accumulation and decoded header lists, advertised via SETTINGS_MAX_HEADER_LIST_SIZE.
  • PING reflection attack mitigation: 1 MiB cap on queued PING/SETTINGS-ACK control frames prevents unbounded memory growth from malicious servers.
  • The first server frame must be SETTINGS per RFC 9113 — connections that violate this are immediately terminated.
  • RST_STREAM(NO_ERROR) mid-body now correctly fails the request instead of silently truncating the response.
  • REFUSED_STREAM retries only when no data has been delivered to the caller.
  • Content-Length mismatches with actual DATA frame bytes are now detected and rejected.
  • Trailers without END_STREAM are now rejected per spec.
  • GOAWAY no longer drops already-completed streams.

Not yet supported

HTTP proxies/CONNECT tunneling, Unix sockets, server push, and cleartext HTTP/2 (h2c) are not yet supported. The HTTP/1.1 path is completely unchanged when the flag is off and protocol is not set.

Experimental HTTP/3 Client for fetch()

fetch() now supports an experimental HTTP/3 client using the protocol option. This uses an lsquic-backed QUIC transport that runs alongside the existing HTTP/1.1 and HTTP/2 paths.

⚠️ Highly experimental. This is an early preview — the API may change in future releases.

const res = await fetch("https://example.com/", { protocol: "http3" });
console.log(await res.text());

Both "http3" and "h3" are accepted as protocol values. The HTTP/3 client shares the same redirect, decompression, and response handling pipeline as HTTP/1.1 and HTTP/2, so existing fetch() behavior is preserved.

What's supported:

  • All standard HTTP methods (GET, POST, PUT, DELETE, HEAD)
  • Request and response headers, JSON bodies, gzip compression
  • Redirects
  • Large request/response bodies (1 MB+ round-trips)
  • Concurrent multiplexed requests over a single QUIC connection
  • Connection pooling and sequential reuse
  • ReadableStream request body uploads
  • Full-duplex bidirectional streaming (server can respond while upload is still in progress)
  • rejectUnauthorized TLS option (defaults to true)
  • AbortSignal support

Alt-Svc HTTP/3 upgrades

The HTTP/3 client can also automatically upgrade fetch() requests to HTTP/3 via the Alt-Svc header (RFC 7838). When a server advertises Alt-Svc: h3 in an HTTPS response, subsequent requests to that origin are routed over QUIC instead of TCP.

This is opt-in while the HTTP/3 client matures. Enable it with the CLI flag or environment variable:

# CLI flag
bun --experimental-http3-fetch app.ts

# Environment variable
BUN_FEATURE_FLAG_EXPERIMENTAL_HTTP3_CLIENT=1 bun app.ts

// First request goes over TCP/TLS as usual
const res1 = await fetch("https://example.com/api");
// If the response includes `Alt-Svc: h3=":443"`,
// the next request to the same origin uses QUIC/HTTP-3
const res2 = await fetch("https://example.com/api");

The upgrade is transparent and per-origin — cross-origin redirects re-evaluate from HTTP/1.1, and requests that aren't eligible (proxied, unix socket, sendfile, or pinned to a specific protocol) gracefully fall back to TCP.
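For illustration, parsing the kind of Alt-Svc value involved looks like this (a hypothetical helper, not a Bun API):

```typescript
// Hypothetical helper (not a Bun API): parse an Alt-Svc value like
//   h3=":443"; ma=86400
// into the advertised protocol, port, and max-age (RFC 7838).
function parseAltSvc(
  header: string,
): { proto: string; port: number; ma?: number } | null {
  const m = header.match(/^\s*([\w-]+)="(?:[^":]*):(\d+)"\s*(?:;\s*ma=(\d+))?/);
  if (!m) return null;
  return { proto: m[1], port: Number(m[2]), ma: m[3] ? Number(m[3]) : undefined };
}

console.log(parseAltSvc('h3=":443"; ma=86400'));
// → { proto: "h3", port: 443, ma: 86400 }
```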

Rewritten fs.watch() backend on Linux, macOS, and FreeBSD

Bun's fs.watch() implementation on POSIX platforms has been completely rewritten to talk directly to the OS file-watching APIs (inotify, FSEvents, kqueue) instead of routing through Bun's internal bundler watcher. This fixes several long-standing bugs and reduces complexity significantly.

Recursive watching now tracks new directories (Linux)

Previously, fs.watch("dir", { recursive: true }) only registered the directory tree that existed at the time watch() was called. Directories created after the watch started were never tracked, so files inside them were invisible to the watcher.

import fs from "node:fs";

// Now correctly detects changes inside directories created after watch() starts
fs.watch("./src", { recursive: true }, (event, filename) => {
  console.log(event, filename);
});

// mkdir src/newDir && touch src/newDir/file.txt
// Previously: only "rename newDir" — file.txt was missed
// Now: "rename newDir", "rename newDir/file.txt", "change newDir/file.txt"

Deleted-and-recreated files emit change events again (Linux)

When a watched file was deleted and recreated, subsequent modifications to the recreated file would silently stop emitting change events. This is now fixed — the new inotify watch descriptor is correctly registered on recreation.

macOS no longer spins up two watcher threads

Previously, fs.watch() on a directory on macOS would start both a kqueue watcher (via the bundler watcher) and an FSEvents CFRunLoop thread. The new implementation uses FSEvents exclusively for both files and directories, matching libuv's behavior and halving the thread overhead.

--no-orphans — exit when the parent process dies

Bun now supports an opt-in mode that automatically exits when its parent process dies — even if the parent was SIGKILLed and never had a chance to forward a signal. On exit, Bun also recursively SIGKILLs every descendant process it spawned.

This is useful when Bun is launched by a supervisor (Electron, a CI runner, a thin shim) that may be force-killed. Without this option, Bun would be silently reparented to launchd/init and keep running forever, along with anything it spawned.

There are three equivalent ways to enable it:

# CLI flag (new)
bun --no-orphans run my-script

# bunfig.toml
[run]
noOrphans = true

# Environment variable
BUN_FEATURE_FLAG_NO_ORPHANS=1 bun run my-script

The flag is automatically inherited by nested Bun processes, so enabling it once at the top level is sufficient.

How it works:

  • Linux: Uses prctl(PR_SET_PDEATHSIG, SIGKILL) — kernel-delivered, no polling, no thread. Children spawned from the main thread also inherit PDEATHSIG by default so non-Bun descendants are covered.
  • macOS: Registers a EVFILT_PROC/NOTE_EXIT watch for the original parent pid on the existing event loop's kqueue — the same mechanism Bun already uses to watch child process exits. No dedicated thread, no extra file descriptor.

On clean exit, Bun walks its process tree and uses a stop-verify-kill strategy for pid-reuse safety: each descendant is SIGSTOPped, its ppid is re-verified, and only then is it SIGKILLed. This prevents accidentally killing an unrelated process that recycled a stale pid.
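The strategy can be sketched with injected OS primitives (illustrative, not Bun's implementation — Bun does this natively with real syscalls):

```typescript
// Illustrative sketch of stop-verify-kill (not Bun's code): SIGSTOP the
// candidate, re-check it is still our descendant, only then SIGKILL.
// OS calls are injected so the logic is testable.
interface ProcOps {
  kill(pid: number, sig: "SIGSTOP" | "SIGKILL" | "SIGCONT"): void;
  ppidOf(pid: number): number;
}

function stopVerifyKill(pid: number, expectedPpid: number, os: ProcOps): boolean {
  os.kill(pid, "SIGSTOP"); // freeze it so the pid cannot change hands mid-check
  if (os.ppidOf(pid) !== expectedPpid) {
    os.kill(pid, "SIGCONT"); // stale/recycled pid — not ours, let it run
    return false;
  }
  os.kill(pid, "SIGKILL");
  return true;
}

// Fake OS: pid 42 is our child, pid 99 was recycled by another parent.
const log: string[] = [];
const fake: ProcOps = {
  kill: (pid, sig) => log.push(`${sig}:${pid}`),
  ppidOf: (pid) => (pid === 42 ? 1000 : 7),
};
console.log(stopVerifyKill(42, 1000, fake)); // true  → killed
console.log(stopVerifyKill(99, 1000, fake)); // false → spared
```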

macOS coverage is now comprehensive. Previously, bun run <script> and bunx on macOS had no parent-death watching — if the parent was killed, spawned scripts could be left orphaned. Bun now uses a dedicated kqueue watcher for these paths, monitoring both the parent process and child stdio. bun run --filter and bun run --parallel on macOS are also now covered.

| | Linux | macOS |
|---|---|---|
| bun <file> | ✅ prctl | ✅ Event loop watcher |
| bun run / bunx | ✅ prctl | ✅ kqueue watcher (new) |
| --filter / --parallel | ✅ prctl | ✅ MiniEventLoop watcher (new) |

Linux and macOS only (no-op on Windows).

process.execve() support

Bun now implements process.execve(execPath, args, env), matching the API added in Node.js v24. This POSIX syscall replaces the current process image in-place — it never returns on success.

// Replace the current process with a new one
process.execve("/usr/bin/echo", ["echo", "hello from execve"], {
  PATH: process.env.PATH,
});

// ^ If successful, this line is never reached

Key details:

  • stdio is inherited — file descriptors 0/1/2 are preserved across the exec boundary, while all other descriptors are marked close-on-exec to prevent leaks.
  • Signal mask is reset before calling execve(2).
  • Throws ERR_WORKER_UNSUPPORTED_OPERATION when called from a worker thread.
  • Throws ERR_FEATURE_UNAVAILABLE_ON_PLATFORM on Windows.
  • Emits an ExperimentalWarning once per process, matching Node.js behavior.
  • If execve fails, the process prints the error to stderr and aborts (consistent with Node.js behavior, since process state has already been mutated).

Bun.Terminal on Windows via ConPTY

Bun.Terminal and Bun.spawn({ terminal }) now work on Windows, powered by the Windows ConPTY API (CreatePseudoConsole). Previously, Bun.Terminal was only available on macOS and Linux.

const terminal = new Bun.Terminal({
  cols: 80,
  rows: 24,
  onData(data) {
    process.stdout.write(data);
  },
});

const proc = Bun.spawn({
  cmd: ["cmd.exe", "/c", "echo", "hello from ConPTY"],
  terminal,
});

await proc.exited;
terminal.close();

Platform differences

The core behavior — child sees a TTY, write() reaches the child's stdin, child output reaches the data callback, resize() updates the child's window size — is the same on every platform. A few details differ on Windows:

  • No termios on Windows. inputFlags, outputFlags, localFlags, and controlFlags always read as 0 and setting them is a no-op.
  • No echo without a child process. On POSIX, the kernel line discipline echoes write() input back to the data callback even with no process attached. ConPTY has no line discipline, so input is buffered for the next reader.
  • ConPTY re-encodes output. The data callback receives semantically equivalent — but not byte-identical — escape sequences compared to what the child emitted. Colors and text are preserved; cursor-positioning sequences may be reordered or coalesced.

Thanks to @dylan-conway for the contribution!

using / await using no longer lowered when targeting Bun

Bun's underlying JavaScript engine (JavaScriptCore) natively supports the Explicit Resource Management proposal (using and await using). Starting in this release, Bun no longer transpiles these declarations into __using / __callDispose helper calls wrapped in try/catch/finally when the target is Bun.

This applies to:

  • bun run / bun <file>
  • Bun.Transpiler({ target: "bun" })
  • bun build --target=bun (including --compile and --bytecode)

Other targets (browser, node) continue to lower using as before.

Before:

// bun build --target=bun entry.js
var __using = (stack, value, async) => {
  /* ... */
};
var __callDispose = (stack, error, hasError) => {
  /* ... */
};
{
  let __stack = [];
  try {
    const x = __using(
      __stack,
      {
        [Symbol.dispose]() {
          /* ... */
        },
      },
      0,
    );
    console.log("hi");
  } catch (_catch) {
    var _err = _catch,
      _hasErr = 1;
  } finally {
    __callDispose(__stack, _err, _hasErr);
  }
}

After:

// bun build --target=bun entry.js
{
  using x = {
    [Symbol.dispose]() {
      /* ... */
    },
  };
  console.log("hi");
}

This also fixes a bug where using inside a CommonJS module (.cjs) would inject an ESM import … from "bun:wrap" inside the CommonJS function wrapper, causing an Expected CommonJS module to have a function wrapper error instead of the expected TypeError for non-disposable values.

SIGHUP and SIGBREAK signal handling on Windows

process.on('SIGHUP', …) and process.on('SIGBREAK', …) now correctly receive Windows console-control events, matching Node.js behavior:

| Console event | Signal |
|---|---|
| CTRL_CLOSE_EVENT (closing the console window) | SIGHUP |
| CTRL_BREAK_EVENT (Ctrl+Break) | SIGBREAK |

Previously, these signal names were missing from Bun's Windows signal map, so registering a listener was treated as a plain EventEmitter event — no uv_signal_t was created, and the default handler would terminate the process immediately.

// Gracefully handle console window close on Windows
process.on("SIGHUP", () => {
  cleanup();
  process.exit();
});

// Handle Ctrl+Break
process.on("SIGBREAK", () => {
  console.log("Ctrl+Break received");
  process.exit();
});

Thanks to @ig-ant for the contribution!

WebSocket perMessageDeflate: false now respected in upgrade requests

Previously, setting perMessageDeflate: false when creating a WebSocket connection was silently ignored — Bun always sent the Sec-WebSocket-Extensions: permessage-deflate; client_max_window_bits header in the upgrade request. This broke deployments where gateways or proxies reject upgrade requests that advertise unwanted extensions.

Now, passing perMessageDeflate: false correctly suppresses the extension header, matching the behavior of Node.js and the ws package.

const WebSocket = require("ws");

// Extension header is now correctly omitted
const ws = new WebSocket("ws://localhost:3000", {
  perMessageDeflate: false,
});

// Also works with globalThis.WebSocket
const ws2 = new WebSocket("ws://localhost:3000", {
  perMessageDeflate: false,
});

Additionally, if the server responds with a Sec-WebSocket-Extensions header when the client did not offer any extensions, the handshake is now correctly failed per RFC 6455 §9.1 — matching upstream ws behavior.
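That handshake rule boils down to a check like this (illustrative, not Bun's internals):

```typescript
// Illustrative check (not Bun's internals): per RFC 6455 §9.1, if the
// client offered no extensions, a server response carrying
// Sec-WebSocket-Extensions must fail the handshake.
function validateExtensions(
  clientOffered: string | undefined,
  serverResponded: string | undefined,
): void {
  if (!clientOffered && serverResponded) {
    throw new Error("Server sent Sec-WebSocket-Extensions we did not offer");
  }
}

validateExtensions(undefined, undefined); // ok — perMessageDeflate: false path
validateExtensions("permessage-deflate", "permessage-deflate"); // ok
```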

FreeBSD and Android Support

Bun now has first-party native builds for FreeBSD and Android.

Reduced memory usage for MongoDB & Mongoose

All TLS-using APIs in Bun — Bun.connect, Bun.SQL (Postgres & MySQL), Valkey, upgradeTLS, new WebSocket(), and node:tls — now share a single native SSL_CTX cache per VM. Connections with identical TLS configurations reuse the same SSL_CTX instead of allocating a fresh one (~50 KB of BoringSSL state + cert/key parsing) per connection.

This is especially impactful for database connection pools: a Postgres or MySQL pool with sslmode=require and N connections previously created N separate SSL_CTX objects. Now it creates one.

import { SQL } from "bun";

// All connections in this pool now share a single SSL_CTX
const db = new SQL("postgres://user:pass@host/db?sslmode=require");

// These also share the same cached SSL_CTX since the config is identical
const conn1 = await Bun.connect({
  hostname: "example.com",
  port: 443,
  tls: true,
  socket: {
    /* ... */
  },
});
const conn2 = await Bun.connect({
  hostname: "example.com",
  port: 443,
  tls: true,
  socket: {
    /* ... */
  },
});

The cache is keyed by a SHA-256 digest of the TLS configuration fields. servername and ALPNProtocols are excluded from the digest (they're per-connection, not per-context), so Bun.connect({ tls: { servername: "x" } }) correctly shares the default SSL_CTX with tls: true.
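Conceptually, the key derivation looks something like this (a sketch, not Bun's exact scheme):

```typescript
import { createHash } from "node:crypto";

// Illustrative sketch (not Bun's exact scheme): derive an SSL_CTX cache
// key from the TLS config, excluding per-connection fields like
// servername and ALPNProtocols.
function sslCtxCacheKey(tls: Record<string, unknown>): string {
  const { servername, ALPNProtocols, ...contextFields } = tls;
  const canonical = Object.keys(contextFields)
    .sort()
    .map((k) => `${k}=${JSON.stringify(contextFields[k])}`)
    .join(";");
  return createHash("sha256").update(canonical).digest("hex");
}

// servername differs, but both connections share one cached context:
const a = sslCtxCacheKey({ rejectUnauthorized: true, servername: "a.example" });
const b = sslCtxCacheKey({ rejectUnauthorized: true, servername: "b.example" });
console.log(a === b); // true
```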

This was the root cause behind long-standing memory leak reports when using tls.connect(), Bun.connect({tls}), socket.upgradeTLS(), and any library built on top of them (MongoDB, Mongoose, mysql2, etc.). Under connection churn — Postgres pools, Redis, fetch keepalive expiry, MongoDB heartbeats — RSS would grow rapidly until the garbage collector eventually freed the contexts. Bun now avoids allocating these duplicate contexts in the first place.

// Before: each iteration allocated a fresh SSL_CTX (~50 KB+)
for (let i = 0; i < 1000; i++) {
  const sock = tls.connect({
    host: "localhost",
    port: 5432,
    rejectUnauthorized: false,
  });
  sock.on("secureConnect", () => sock.destroy());
}
// RSS after: ~1 GB

// After: one shared SSL_CTX, RSS stays flat
// RSS after: ~168 MB

Upgraded JavaScriptCore engine

Bun's underlying JavaScript engine (JavaScriptCore) has been upgraded with 565 upstream commits, bringing numerous performance improvements, bug fixes, and new capabilities.

JavaScript performance & correctness

  • Faster async functions — When an async function returns a value without any await, the returned promise is now optimized to avoid unnecessary overhead.
  • Faster Array.prototype.shift — New fastShift implementation for arrays.
  • Faster JSON.parse for short strings — JSString cells are now cached for short string values returned by JSON.parse.
  • Faster String.prototype.startsWith/endsWith — Single-character checks now avoid resolving rope strings.
  • Faster Intl.NumberFormat creation — Optimized construction and improved external memory reporting for Intl.NumberFormat and Intl.PluralRules.
  • Faster Array.prototype.indexOf on NodeList — New fast path added.

Bug fixes from upstream

  • Fixed Promise.prototype.finally throwing in SpeciesConstructor before calling then, matching spec behavior.
  • Fixed Object.defineProperties Proxy trap ordering to match the spec.
  • Fixed megamorphic inline cache property ownership check.
  • Fixed TypedArray toSorted/toReversed/with to correctly snapshot the span.
  • Fixed Intl.Segmenter isWordLike off-by-one error.
  • Fixed Intl.Locale to canonicalize before overriding language.
  • Fixed Intl.DateTimeFormat to preserve original legacy [[TimeZone]].
  • Fixed several RegExp JIT issues.
  • Fixed JIT compiler issues with hole-handling when rematerializing sunk double arrays and escaping MultiGetByOffset constants not convertible to double.

WebAssembly

  • Relaxed SIMD support — Implements the relaxed SIMD proposal, adding instructions like f32x4.relaxed_madd, i8x16.relaxed_swizzle, and more.
  • Memory64 improvements — Atomics, bulk memory operations, and grow/size in the OMG tier now support 64-bit memory.
  • Fixed integer division/remainder with INT_MIN/-1 in BBQ JIT.
  • Fixed floating-point min/max negative-zero handling in BBQ JIT.
  • Fixed crash on wide-arithmetic instructions.

Thanks to @sosukesuzuki for the upgrade!

bun publish now sends README metadata to the registry

Previously, bun publish included README.md in the published tarball but didn't populate the readme or readmeFilename fields in the JSON body sent to the npm registry. This meant packages published with Bun showed an empty README when queried via the registry API (e.g. npm view <pkg> readme), even though the tarball contained one.

Now, bun publish matches npm publish behavior by automatically finding the first README or README.* file (case-insensitive) in your workspace and including its contents in the version metadata sent to the registry. This works for both workspace publishes (bun publish) and tarball publishes (bun publish path.tgz). A readme field already present in package.json takes precedence.

# Before: registry API returned empty readme
npm view @my-scope/my-package readme  # ""

# After: readme contents are properly sent
npm view @my-scope/my-package readme  # "# my-package\n\nA great package..."
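The discovery rule amounts to something like this (an illustrative sketch, not Bun's code):

```typescript
// Illustrative sketch (not Bun's code): pick the first README or README.*
// file, case-insensitively, from a directory listing.
function findReadme(files: string[]): string | undefined {
  return files.find((f) => /^readme(\.[^/]*)?$/i.test(f));
}

console.log(findReadme(["index.js", "ReadMe.md", "LICENSE"])); // "ReadMe.md"
console.log(findReadme(["index.js"])); // undefined
```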

Updated SQLite to 3.53.0

Bun's built-in SQLite has been updated from 3.51.2 to 3.53.0.

Notable changes in SQLite 3.53.0 include:

  • New SQLITE_DBCONFIG_FP_DIGITS option for controlling floating-point precision when converting doubles to text
  • New SQLITE_LIMIT_PARSER_DEPTH limit for controlling the maximum depth of the SQL parser stack
  • New SQLITE_PREPARE_FROM_DDL flag for enforcing schema-level security constraints during statement preparation

Cross-language LTO for Zig ↔ C++ on Linux

Bun's binary is now built with full link-time optimization (LTO) across the Zig and C++ boundaries on Linux. Previously, the Zig-compiled object file was a native ELF object that the linker could link but not optimize across — meaning hundreds of small cross-language function calls (Zig → C++, C → Zig, allocator calls) were never inlined.

By emitting the Zig object as LLVM bitcode and participating in the same LTO link pass as the C/C++ side, LLVM can now inline and optimize across language boundaries:

| Boundary | Functions declared | Functions eliminated by inlining | % |
|---|---|---|---|
| Zig export fn → C++ | 336 | 142 | 42% |
| C us_* (µSockets) ← Zig | 115 | 79 | 69% |
| C++ uws_* (µWebSockets) ← Zig | 108 | 76 | 70% |
| mi_free (mimalloc) | | all | 100% |

Measured impact (linux-x64):

| Benchmark | Before | After | Improvement |
|---|---|---|---|
| Bun.escapeHTML | 183.2 ns | 171.3 ns | 6.5% faster |
| TextDecoder.decode | 106.8 ns | 104.0 ns | 2.6% faster |
| HTTP throughput (oha -n 1M -c 50) | ~193,800 req/s | ~200,600 req/s | 3.5% faster |

This is a broad improvement — any hot path that crosses the Zig/C++ boundary benefits, including HTTP serving, text encoding/decoding, and HTML escaping.

Faster ESM module loading

Fixed an internal parser oversight where an ~8KB struct was being copied by value on every AST node allocation, causing unnecessary memcpy overhead during transpilation. Passing it by pointer instead eliminates the redundant copies, reducing _platform_memmove overhead from 7.5% to 2.9% of self time in profiling.

On a benchmark loading 500 ESM files, this results in approximately 12% faster module loading (~140ms → ~123ms).

Thanks to @sosukesuzuki for the contribution!

Reduced GC overhead for built-in objects

Bun's incremental garbage collector previously re-scanned ~63 types of built-in objects (Request, Response, Subprocess, Stats, Dirent, Timeout, and more) after every mutator yield during incremental GC — even though these objects already use write barriers that guarantee correctness without the extra pass.

This redundant work has been removed. Only visitChildren is called now for codegen'd classes, eliminating the overhead of re-walking every live instance of these common types during incremental GC cycles. Hand-written types that genuinely require output constraints (like EventTarget, AbortSignal, MessagePort, etc.) are unchanged.

This should reduce GC pause times, especially in applications with many live built-in objects.

Smaller binary size

Bun gets smaller on Windows and Linux. macOS binary size hasn't changed much.

| Target | Size change |
|---|---|
| Linux aarch64 | -9.07 MB |
| Linux aarch64-musl | -7.63 MB |
| Linux x64 | -8.58 MB |
| Linux x64-baseline | -8.64 MB |
| Linux x64-musl | -6.61 MB |
| Linux x64-musl-baseline | -6.75 MB |
| Windows aarch64 | -18.42 MB |
| Windows x64 | -17.66 MB |
| Windows x64-baseline | -17.67 MB |

tls.getCACertificates('system') now works without --use-system-ca

Previously, tls.getCACertificates('system') returned an empty array [] unless --use-system-ca or NODE_USE_SYSTEM_CA=1 was explicitly set. Node.js returns the OS trust store unconditionally for 'system' — the flag only affects 'default'. Bun now matches this behavior.

System certificates are lazy-loaded on first demand, so there's no startup cost unless 'system' is actually queried or --use-system-ca is set.

import tls from "node:tls";

// Previously returned [] without --use-system-ca, now returns system CA certs
const systemCerts = tls.getCACertificates("system");
console.log(systemCerts.length); // > 0 on Linux/Windows

This also fixes a data race that could cause segfaults or truncated certificate lists when multiple threads (e.g. Workers) accessed root certificates concurrently during initialization.

Thanks to @cirospaciari for the contribution!

tls.getCACertificates('system') no longer stalls on managed Macs

On macOS, tls.getCACertificates('system') previously evaluated every keychain certificate using SecTrustEvaluateWithError with an SSL policy, causing trustd to attempt OCSP/CRL/AIA network fetches for each cert. On managed Macs running a NetworkExtension content filter, this turned a local lookup into ~10 seconds of wall-clock time as hundreds of outbound flows were individually signed and policy-denied by the filter.

This release rewrites the macOS keychain enumeration to match how Node.js and Chromium handle it:

  • Removed kSecMatchTrustedOnly from the keychain query — this flag forced a redundant network-revocation-enabled evaluation of every cert before per-cert filtering even started.
  • Replaced the trust-settings stub with a full parser — the previous implementation always returned UNSPECIFIED, causing every cert to fall through to the expensive SecTrustEvaluateWithError path. The new parser (ported from Node's IsTrustSettingsTrustedForPolicy) resolves certs via cheap local XPC lookups.
  • Deferred SecTrustEvaluateWithError as a last resort — only certs with no decisive trust settings in any domain reach it, and when they do, it now uses SecPolicyCreateBasicX509 + SecTrustSetNetworkFetchAllowed(false) to avoid network access entirely.

The result is functionally equivalent — OpenSSL still enforces EKU and basic-constraint checks at handshake time — but enumeration no longer triggers any network I/O.

--use-system-ca on Windows now loads intermediate and TrustedPeople certificates

When using --use-system-ca or NODE_USE_SYSTEM_CA=1 on Windows, Bun now reads from the same certificate stores as Node.js, fixing "unable to get local issuer certificate" errors commonly seen in enterprise and intranet environments.

Previously, Bun only enumerated the ROOT store for CURRENT_USER and LOCAL_MACHINE. This meant that when a server only sent a leaf certificate without its intermediates (very common on intranets with corporate proxies or self-signed certificates), Bun couldn't build the certificate chain — even though Windows had the intermediates cached in its CA store.

Bun now mirrors Node.js's ReadWindowsCertificates behavior:

  • Store names: ROOT → ROOT, CA, TrustedPeople
  • Locations: LOCAL_MACHINE, CURRENT_USER → plus GROUP_POLICY and ENTERPRISE variants
  • CERT_STORE_OPEN_EXISTING_FLAG: now set (don't create missing stores)
  • EKU server-auth filter: now applied (skip certs restricted to e.g. code-signing only)

This brings --use-system-ca on Windows to parity with Node.js, making it significantly more reliable for enterprise environments with custom certificate authorities and proxy servers.

Event loop refactor

Large parts of the event loop have been refactored to improve reliability and simplify memory management.

Along the way, this fixed:

  • DuplexUpgradeContext was never freed — a full leak on every tls.connect({socket: duplex}) call
  • UpgradedDuplex.onEndCallback was incorrectly wired to onReceivedData
  • SSLWrapper.init leaked the strdup'd passphrase on error paths
  • TLSSocket.memoryCost now correctly reports off-heap SSL state

Bugfixes

Node.js compatibility improvements

  • Fixed: memory leak in node:http where NodeHTTPResponse and its associated buffers were never freed when ondata was re-registered after the request body had already been fully received, which also kept the event loop alive unnecessarily
  • Fixed: res.setTimeout() on client-side IncomingMessage no longer keeps the event loop alive after the response completes. Previously, calling res.setTimeout(90000) would prevent the process from exiting for the full timeout duration, even when there was nothing left to do. The timer is now unref'd to match Node.js behavior. Also fixed res.setTimeout() to return this, clear the timer when called with 0, and stack listeners via res.on("timeout", cb) for Node.js compatibility.
  • Fixed: use-after-free crash in HTTPS requests when checkServerIdentity rejects a certificate due to hostname mismatch
  • Fixed: checkServerIdentity callback passed to https.request() was ignored — the native check always ran instead
  • Fixed: https.createServer() with a ca option incorrectly required client certificates, even when requestCert was not set to true
  • Fixed: TLS certificate identity verification now falls back to the Subject Common Name (CN) when the certificate has no SAN entries, matching Node.js behavior
  • Fixed: use-after-free crash in node:zlib when an onerror callback issued a re-entrant write() followed by close() on native zlib/brotli/zstd handles
  • Fixed: heap-use-after-free crash in node:zlib when calling .reset() on a zlib, Brotli, or Zstd stream while an async .write() is still in progress on the threadpool
  • Fixed: memory leak in crypto.scrypt where the callback and protected password/salt buffers were never released when the output buffer allocation failed (e.g. with an extremely large keylen)
  • Fixed: crypto.randomFill and crypto.randomFillSync bounds-checking bugs that could cause a heap overflow when offset exceeded 2²⁴ due to f32 precision loss, and a unit mismatch that caused integer underflow or silent under-fill for multi-byte typed arrays (e.g. Float64Array) when using the 3-argument form
  • Fixed: crypto.subtle.unwrapKey('jwk', ...) promise never settling and leaking memory when the decrypted payload was valid JSON but not a valid JWK (e.g. missing the required kty field). The TypeError now correctly rejects the promise instead of escaping as an uncaught exception. Also fixed a smaller native memory leak when decrypted bytes weren't valid JSON at all.
  • Fixed: process.dlopen crash when a native addon's init callback re-entrantly calls napi_module_register() (e.g. nested dlopen or registering additional modules from within an init function), which could invalidate the internal iterator and cause a use-after-free
  • Fixed: napi_create_external_arraybuffer and napi_create_external_buffer now correctly return napi_pending_exception when a NAPI exception is already pending, matching Node.js behavior. Previously, calling these functions with a pending exception could lead to double-frees or orphaned GC cells with permanently-disarmed destructors.
  • Fixed: fuzzer-detected crash in process.setgroups() and process.hrtime() with unexpected input
  • Fixed: crash when lazy construction of process.stdin/process.stdout/process.stderr throws near the stack recursion limit and an uncaughtException listener is registered
  • Fixed: process.stdin hanging or spinning at 100% CPU when reading from a FIFO pipe and the parent process dies or a new writer reappears during the drain loop
  • Fixed: process.stdin.isRaw not updating after a successful setRawMode() call on Windows, which caused readline and other modules that check isRaw to incorrectly restore cooked mode
  • Fixed: out-of-bounds read in Buffer.from() with invalid input in a less common encoding
  • Fixed: Buffer.copyBytesFrom() producing incorrect results or throwing RangeError: Out of memory when called with a TypedArray view that has a non-zero byteOffset into its backing ArrayBuffer
  • Fixed: memory leak in Buffer.from(string, 'hex') and Buffer.from(string, 'base64') when the input contained no valid encoded characters (e.g. Buffer.from('zz', 'hex')). The internal staging allocation was never freed when decoding produced zero bytes, causing ~4KB to leak per call. Also fixed a related issue where ffi.toBuffer(ptr, 0, finalizer) would silently drop the user's finalizer.
  • Fixed: crash and out-of-bounds read in Buffer#copy and Buffer#fill when a valueOf callback detaches or resizes the underlying ArrayBuffer during argument coercion
  • Fixed: memory leak in child_process stdout when reading from spawned processes — the internal FileReader.onPull memcpy path failed to free the drained buffer, causing linear RSS growth under sustained reads
  • Fixed: memory leak in tlsSocket.setSession() where each call leaked one SSL_SESSION (~6.5 KB per call) due to a missing SSL_SESSION_free after d2i_SSL_SESSION
  • Fixed: crash when calling TLSSocket.getServername() after the socket was closed, due to a null SSL pointer dereference
  • Fixed: use-after-free in tls.connect({socket: duplex}) when a pre-open duplex error races with the queued StartTLS task, causing freed Handlers memory to be read
  • Fixed: memory leaks in getPeerCertificate() on server-side TLS sockets (mTLS) where an X509 reference from SSL_get_peer_certificate was never freed and a BIO was leaked (~800 bytes per call) in X509Certificate.raw due to an incorrect destructor capture
  • Fixed: memory leak in node:net when socket.connect() fails synchronously on reused handles (e.g. connecting to a nonexistent Unix socket path), which leaked one native socket struct per failed reconnect
  • Fixed: memory leak where fs.watch(path, { persistent: false }) watchers were never garbage collected after .close(), caused by a reference count underflow that permanently pinned each watcher as a GC root
  • Fixed: memory leak in fs.watch() on macOS where the resolved directory path was never freed, leaking ~path-length bytes on every fs.watch(<directory>) call
  • Fixed: a use-after-free race condition in fs.watch on macOS where closing a watcher while events were firing could crash due to the CoreFoundation thread reading freed memory
  • Fixed: a crash on macOS where FSEventStreamCreate could return NULL under rapid fs.watch().close() churn, causing a CoreServices crash when the NULL stream was passed to FSEventStreamScheduleWithRunLoop
  • Fixed a crash in fs.readdirSync() with { encoding: 'buffer', recursive: true } when a subdirectory fails to open (e.g. due to a self-referential symlink causing ELOOP). An internal memory management bug caused a use-after-free in the error cleanup path, leading to crashes or corrupted results.
  • Fixed: memory leak in fs.readdirSync() with { recursive: true, withFileTypes: true } when the call fails partway through (e.g. due to ELOOP or EACCES), where Dirent.path references were not properly released on the error path
  • Fixed: crash involving long file paths in fs.cp, fs.promises.cp, and fs.cpSync with { recursive: true }
  • Fixed: fs.cp / fs.cpSync on Linux and FreeBSD copied symlinks with the source symlink's own path as the target instead of the path the symlink actually pointed at, causing copied symlinks to point back into the source tree and dangle if the source was deleted. Also fixed cpSync throwing ENOENT when copying a symlink to a destination whose parent directory doesn't exist.
  • Fixed: fs.cp/fs.cpSync on Windows leaking an OS handle for every symlink or junction in the source tree, which could exhaust the process handle table when copying large trees (e.g. node_modules with pnpm-style junctions)
  • Fixed: memory leak in fs.symlinkSync, fs.linkSync, and fs.renameSync where path arguments were not freed when a later argument was rejected, causing RSS to grow by hundreds of MB in error-heavy loops
  • Fixed: memory leak in dns.lookup and dns.resolve* when more than 32 concurrent c-ares DNS requests are in flight, where overflow results were never freed
  • Fixed: use-after-free crashes in node:http2 when re-entrant JS callbacks (e.g. session.request() inside a timeout listener, an options getter, or a write callback) triggered a hashmap rehash, invalidating internal stream pointers. Streams are now heap-allocated and stable for the lifetime of the session. (@Jarred-Sumner)
  • Fixed: a crash in node:vm where SourceTextModule.link() with many imports could trigger a use-after-free when the garbage collector concurrently iterated the module's resolve cache during a HashMap rehash
  • Fixed: ESM module evaluation where sibling static imports could incorrectly skip waiting for async-pending strongly connected components, causing TDZ errors when accessing bindings from cyclic module graphs with top-level await
  • Fixed: deadlock when a non-entry module with top-level await dynamically imports a module that imports it back
  • Fixed: importing a module with top-level await from multiple sibling imports in the same module graph caused a ReferenceError: Cannot access before initialization because sibling modules skipped the spec-mandated wait for the shared TLA dependency to settle (@sosukesuzuki)
  • Fixed: node:test top-level test() ignoring { skip } and { todo } options — nested Suite#test honored them, but the top-level entry point did not
  • Fixed: child processes spawned by Bun on macOS could inherit an effectively infinite RLIMIT_NOFILE soft limit, causing programs that read the limit into an int to misbehave (e.g. failing socket reads). The startup file descriptor limit raise is now capped at 1 << 20, matching Node.js. (@alii)
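
Several of the Buffer fixes above involve TypedArray views with a non-zero byteOffset into their backing ArrayBuffer. A minimal sketch of the Buffer.copyBytesFrom case (sizes and values here are arbitrary):

```javascript
// A Float64Array view that starts 8 bytes into its backing buffer —
// exactly the shape that previously produced wrong results or a RangeError.
const backing = new ArrayBuffer(32);
const view = new Float64Array(backing, 8, 2); // non-zero byteOffset
view[0] = 1.5;
view[1] = 2.5;

// Copies only the view's 16 bytes, not bytes starting at offset 0.
const copy = Buffer.copyBytesFrom(view);
console.log(copy.length); // 16
console.log(copy.readDoubleLE(0)); // 1.5 (on little-endian hosts)
```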

Bun APIs

  • Fixed: use-after-free when reading .listener on a closed client socket created via Bun.connect()
  • Fixed: use-after-free when accessing handle.listener on a socket that failed to connect, where the native socket's handlers pointer was left dangling after the per-connection Handlers allocation was freed
  • Fixed a crash when calling socket.reload() or listener.reload() on active TCP/TLS sockets. Previously, reloading handlers reset the internal active connection counter to zero, which caused integer overflow panics in debug builds and segfaults (heap-use-after-free) in release builds. This affected reloads both inside and outside of socket event handlers.
  • Fixed: crash (NULL dereference / SIGSEGV) when closing a socket inside a TLS 1.2 renegotiation handshake callback (Bun.connect with TLS)
  • Fixed: use-after-free in TLS server ALPN callback when handling concurrent handshakes, which could cause crashes or garbage data during protocol negotiation on servers with ALPNProtocols configured (@Jarred-Sumner)
  • Fixed: TLS sockets could leak memory and stay open indefinitely when SSL_shutdown failed to flush the close_notify alert (e.g. kernel buffer full or peer already gone), because the socket would wait for a peer event that never arrives. Now the socket closes immediately in this case. (@cirospaciari)
  • Fixed: use-after-free crash when calling flush() on a TLS socket after end(), where the deferred TLS close_notify handshake could dereference already-freed handler memory
  • Fixed: DNS resolution hanging indefinitely on macOS when retrying without AI_ADDRCONFIG on loopback-only network configurations. The retry path was polling the wrong mach port, causing fetch(), Bun.connect(), and other network APIs to hang instead of completing the DNS lookup.
  • Fixed: Bun.dns.lookup() with an oversized hostname could cause a crash
  • Fixed: DNS resolution over TCP could trigger an assertion failure when c-ares requested both readable and writable polling on the same socket file descriptor. FilePoll now correctly merges both directions into a single epoll_ctl(CTL_MOD) call on Linux, and properly submits two EV_DELETE changes on kqueue (macOS/FreeBSD) when both directions are registered. (@Jarred-Sumner)
  • Fixed: when using Bun.spawn with stdin: "pipe" without ever reading the .stdin property, the stdin pipe file descriptor would leak and remain open until the Subprocess was garbage collected — even after the child process had already exited. The stdin pipe fd is now properly closed when the child process exits.
  • Fixed: Bun.spawn incorrectly closing caller-owned file descriptors passed as extra stdio slots (index ≥ 3) after the subprocess was garbage collected, causing EACCES/EBADF errors on subsequent reuse of those descriptors (@cirospaciari)
  • Fixed: Bun.spawn returning null for proc.stdio[N] when a caller-supplied file descriptor was passed as extra stdio (index ≥ 3), instead of returning the fd number back. Caller-owned fds are now correctly exposed via proc.stdio[N] while still never being closed by the subprocess.
  • Fixed: Bun.spawn reading uninitialized memory when stdio[N>=3] is undefined or a sparse array hole, which could expose garbage file descriptors or crash by dereferencing invalid pointers in release builds
  • Fixed: memory leak where Bun.spawn() subprocess objects were never garbage collected when stdout/stderr pipes drained asynchronously after the child process exited (e.g. when a grandchild process inherited the pipe)
  • Fixed: Bun.spawn({ ipc }) subprocesses were never garbage collected after the child exited, leaking the subprocess along with its stdout/stderr buffers, stdin FileSink, and other retained objects for the lifetime of the process
  • Fixed: memory leak in subprocess PipeReader when a stdout/stderr read failed with a real error (e.g. EBADF, EIO). The PipeReader struct, its buffered bytes, FilePoll, and pipe fd would all leak, and the leaked poll's keep-alive ref would prevent the event loop from ever exiting.
  • Fixed: subprocess 'exit' event not firing on Linux when multiple child processes exit simultaneously with stdio: 'ignore'. On Linux, the pidfd poll was registered with EPOLLONESHOT, which caused the kernel to disarm the fd before user-space could process it if a nested event loop tick occurred. The pidfd is now registered as level-triggered so a dropped event is harmlessly re-delivered.
  • Fixed: two rare crashes in Bun.serve()
  • Fixed: memory leak in Bun.serve() when a direct ReadableStream handler writes synchronously without returning a promise — each such request leaked the internal response sink and its buffer (~400 bytes per request)
  • Fixed: GC root leak in Bun.serve() when a "direct" response stream rejects while a controller.end() or controller.flush(true) promise is pending due to transport backpressure
  • Fixed: heap-use-after-free and RequestContext leak when a chunked request body exceeds maxRequestBodySize and the fetch handler returns a pending Promise
  • Fixed: memory leak in server.fetch(string) where the intermediate URL buffer was leaked on every call
  • Fixed: server.fetch() crashing with a segfault when passed a BigInt argument, now properly rejects with a TypeError
  • Fixed: memory leak when calling server.reload() with a WebSocket config that lacks open or message handlers — discarded handler functions were permanently rooted and never garbage collected
  • Fixed: crash in Bun.serve() when websocket.perMessageDeflate was set to a non-boolean primitive (e.g. number, string, bigint, or symbol). Now throws a TypeError with a descriptive message instead.
  • Fixed: Bun.serve with development: true and a [serve.static] plugin whose setup() throws would cause requests to hang forever instead of returning an error
  • Fixed: rare crash in server.upgrade() when an option getter (e.g. get data()) mutates req.headers during the upgrade
  • Fixed: server.upgrade() crashing or firing the open handler twice when a user-defined getter on the data or headers option re-entrantly called server.upgrade() on the same request
  • Fixed: Bun.FileSystemRouter crashing with a panic when route filenames contained certain byte values
  • Fixed: memory leak in Bun.FileSystemRouter when accessing .params on a MatchedRoute
  • Fixed: rare Bun.FileSystemRouter use-after-free crash
  • Fixed: rare heap corruption in Bun.file().json()
  • Fixed: memory leak in Bun.zstdDecompressSync where the partial output buffer was not freed when streaming decompression failed (e.g. corrupt or truncated zstd frames), and a double-free when known-size fast-path decompression failed
  • Fixed: file descriptor leak in Bun.Glob when encountering NAMETOOLONG errors during directory traversal (@alii)
  • Fixed: Bun.pathToFileURL() crashing with an out-of-bounds panic when given a relative path that, when joined with cwd, exceeded 4096 bytes
  • Fixed: memory leak in Bun.password.hash() and Bun.password.hashSync() where the hash output buffer was not freed after being copied into a JavaScript string
  • Fixed: crash in Bun.markdown.ansi() when input contained invalid UTF-8 lead bytes (lone continuation bytes 0x80-0xBF and bytes 0xF8-0xFF) by treating them as replacement characters instead of passing them to the multibyte decoder
  • Fixed: Bun.S3Client({ queueSize }) panicking when queueSize exceeded 255, and silently overriding any valid queueSize (1–255) to 255
  • Fixed: Bun.s3.list() panicking when prefix, delimiter, continuationToken, or startAfter exceeded ~341 characters after URL-encoding, since S3 keys can be up to 1024 bytes and percent-encoding can triple that size.
  • Fixed: Bun.Archive.prototype.files() memory leak when processing corrupted/truncated archives where previously-read entries were not freed on mid-stream readData failures
  • Fixed: Bun.RedisClient getting permanently stuck in a failed state after reconnection attempts were exhausted, close() was called, or a fatal socket error occurred. Previously, calling client.connect() would not recover the client. Now connect() properly resets internal state and replays the handshake, allowing the client to recover without replacing the instance.
  • Fixed: RedisClient TLS connections now properly verify hostnames against the server certificate when rejectUnauthorized: true is set. Previously, hostname mismatches and self-signed certificates were silently accepted due to unreachable verification code paths. Connections with mismatched hostnames now correctly reject with ERR_TLS_CERT_ALTNAME_INVALID, and untrusted certificates reject with the appropriate verification error.
  • Fixed: FileSink memory leak where native instances were never freed when a pending buffered write failed (e.g. EPIPE after the reader closes), causing the FileSink, its outgoing buffer, and associated refs to leak for the lifetime of the process
  • Fixed: memory leak when passing Bun.file() as cert, key, or ca in TLS options — each config parse leaked one buffer per file
  • Fixed: crash in new Bun.Terminal() when passing a non-object argument (e.g. a number, string, or boolean) instead of an options object
  • Fixed: Bun.udpSocket() leaked memory when creation failed (e.g. invalid port, bind failure, or throwing getter) because the internal strong reference prevented garbage collection of the wrapper object
  • Fixed: a use-after-free in UDPSocket.send() and UDPSocket.sendMany() where user code in valueOf() or toString() callbacks could detach an ArrayBuffer (via .transfer()) between payload capture and the actual send, causing reads from freed memory
  • Fixed: heap out-of-bounds write in UDPSocket.sendMany() when the socket's connection state changed mid-iteration via user JS callbacks (e.g. valueOf(), array index getters). This could cause memory corruption or crashes when connect() or disconnect() was called synchronously during a sendMany() call.
  • Fixed: UDPSocket.setTTL() and UDPSocket.setMulticastTTL() crashing with a null pointer dereference when the argument's valueOf closes the socket during coercion
  • Fixed: import("bun:main") was incorrectly resolved as the npm main package instead of the built-in bun:main module, due to a missing alias mapping in the runtime transpiler (@dylan-conway)
  • Fixed: use-after-free crash in HTMLRewriter.transform() when a document or element handler returns a rejected promise during the final end() chunk
  • Fixed: memory leak in HTMLRewriter where handler structs allocated by .on() and .onDocument() were never freed when the rewriter was garbage-collected, causing unbounded memory growth
  • Fixed: HTMLRewriter attribute iterators could read freed memory when saved outside an element handler callback — calling .next() on a leaked iterator now safely returns {done: true} instead of dereferencing a dangling pointer. Iterators are also correctly detached when setAttribute or removeAttribute mutates the underlying attribute buffer.
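
Several of these fixes share one hazard class: argument coercion can run arbitrary user JavaScript (valueOf, toString, getters) that detaches a buffer mid-operation. A plain-JS sketch of the bug pattern itself — not Bun's internal fix — using structuredClone only as a way to detach the buffer:

```javascript
// Native code that captures a pointer to this buffer before coercing
// arguments would be left reading freed memory after coercion runs.
const ab = new ArrayBuffer(8);
const view = new Uint8Array(ab);

const sneaky = {
  valueOf() {
    structuredClone(ab, { transfer: [ab] }); // detaches ab during coercion
    return 0;
  },
};

Number(sneaky); // coercion runs user JS as a side effect
console.log(view.byteLength); // 0 — the view is now detached
```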

bun:sql

  • Fixed: bun:sql PostgreSQL connections that entered a .failed state (e.g. ECONNREFUSED, SSL refused with sslmode=require, or a normal close()) were never garbage collected, leaking the entire native connection including buffers, statements, and SSL state. The request queue's backing buffer was also not freed during cleanup.
  • Fixed: sql.unsafe() with multiple semicolon-separated statements in simple mode returned wrong column names for all result sets after the first, and leaked memory from previous field descriptors
  • Fixed: memory leak in bun:sql when querying PostgreSQL array-typed columns (text[], int8[], json[], bytea[], bool[], etc.) that caused RSS to grow ~72 MB per 1,000 iterations instead of stabilizing after warmup
  • Fixed: a potential heap buffer overflow when parsing binary-format int4[]/float4[] arrays from a malicious or buggy PostgreSQL server. The server-provided len field is now validated against the actual column byte length before iterating, preventing out-of-bounds reads and writes.
  • Fixed: MySQL stored procedures called via prepared statements (tagged templates or sql.unsafe(..., params)) would resolve after only the first result set, causing the trailing OK packet to surface as an unhandled TypeError outside the caller's catch block
  • Fixed: heap buffer overflow in sql MySQL client when the user-supplied parameter array was mutated (e.g. via a side-effecting getter) between query preparation and binding, which could cause out-of-bounds writes in release builds
  • Fixed: MySQL sql client returning garbage error messages when a cached failed prepared statement was re-executed, caused by a dangling pointer into an overwritten socket read buffer
  • Fixed: MySQL .raw() returning length-prefix bytes in the buffer for length-encoded columns (JSON, VARCHAR, TEXT, BLOB, ENUM, SET, GEOMETRY, NEWDECIMAL), causing garbled output when decoded as UTF-8
  • Fixed: MySQL client could panic or silently read out-of-bounds memory when a server sent a short auth nonce during AuthSwitchRequest for mysql_native_password. The client now validates nonce length and rejects with ERR_MYSQL_MISSING_AUTH_DATA before accessing the buffer. Also fixed potential divide-by-zero with empty nonces for caching_sha2_password and invalid indexing with empty public key payloads.
  • Fixed: a use-after-free crash in bun:sql MySQL client when a prepared statement's column reallocation failed
  • Fixed: MySQL BLOB parameters could be corrupted when ArrayBuffer.transfer() or GC occurred during query parameter binding
  • Fixed: memory leak in MySQL adapter when using dynamic interpolation in SQL template literals
  • Fixed: a crash in Bun.sql MySQL client when a query's .catch() callback called connection.close()
  • Fixed: SSL_CTX leak in Postgres and MySQL connections when path coercion throws after SSL context creation
  • Fixed: us_listen_socket_add_server_name not propagating duplicate-hostname errors, preventing App.h's rollback from firing

Web APIs

  • Fixed: FormData multipart boundary format now matches WebKit exactly (----WebKitFormBoundary{hex} with 4 leading dashes and capital K), fixing compatibility issues with downstream multipart parsers including OpenAI's API
  • Fixed: memory leak in FormData serialization when a Bun.file() entry fails to read (e.g. ENOENT). Previously, constructing a Response or Request from a FormData containing a valid Bun.file() followed by an invalid one would leak the already-read file buffers on each failed attempt.
  • Fixed: TextDecoder.decode reading through a stale pointer when an options.stream getter detaches or transfers the input ArrayBuffer, which could cause incorrect output, non-deterministic behavior, or crashes due to heap corruption
  • Fixed: memory leak in TextDecoder when decoding UTF-16LE and UTF-16BE encoded buffers. Each successful call to .decode() leaked the decoded output buffer.
  • Fixed: empty Blob and File objects incorrectly displaying as [Blob detached] / [File detached] in console.log and after structuredClone
  • Fixed: use-after-free in Blob when a duplicated blob's content_type was heap-allocated, causing Response headers to read freed memory and return garbage values (e.g. after Bun.file(path, { type: "..." }) followed by new Response(file))
  • Fixed: memory leak in structured-clone deserialization of Blob/File objects where truncated or malformed payloads would leak allocated buffers for content_type, bytes payload, Store, and heap Blob on error paths
  • Fixed: structuredClone() crash when serializing large ArrayBuffer, SharedArrayBuffer
  • Fixed: memory leak in fetch() when following long HTTP redirect chains
  • Fixed: memory leak when using fetch() with percent-encoded data: URLs, where the intermediate decoded buffer was never freed on each call
  • Fixed: fetch() silently hanging against certain hosts due to ECH GREASE being enabled in the TLS ClientHello. Some servers and middleboxes treated the encrypted_client_hello extension as hostile—completing the TLS handshake but never sending a response. This aligns Bun's fetch TLS behavior with curl, Node.js, and Bun's own node:tls.
  • Fixed: WebSocket.close() and WebSocket.terminate() called during the CONNECTING state would leave the socket permanently stuck in CLOSING, never fire close/error events, prevent the process from exiting, and leak the WebSocket instance. Now correctly transitions to CLOSED, fires error then close (code 1006, wasClean: false) per the spec, and releases all internal references.
  • Fixed: memory leak in WebSocket when TLS options were provided but the connection failed during option parsing or validation
  • Fixed: memory leak where every new WebSocket("wss://…") routed through an HTTP CONNECT proxy (tunnel mode) leaked one internal HTTPUpgradeClient struct due to a missing reference count release
  • Fixed: per-connection memory leak when using WebSocket clients over wss:// through an HTTP CONNECT proxy (tunnel mode). The internal I/O-layer reference was never released because the tunnel path doesn't adopt a uSockets socket, so the close handler never fired — leaking send/receive FIFOs, deflate state, and poll refs for every connection.
  • Fixed: a TypeError [ERR_INVALID_STATE]: Controller is already closed error thrown when streaming small files (e.g. Bun.file(small).stream()) where the first pull returns data and EOF simultaneously, causing the controller's close() to be called twice
  • Fixed: a race condition where concurrent ReadableStream instances (e.g. process.stdin and fetch(file://...) bodies) could close each other due to a shared mutable closer array at the class factory scope. This caused stdin to be spuriously closed, breaking subsequent operations like setRawMode.
  • Fixed: concurrent ReadableStream instances (e.g. Bun.stdin.stream() and fetch(file://...) bodies) could spuriously close each other due to a shared mutable EOF flag, causing stdin to close unexpectedly with EBADF errors
  • Fixed: TransformStream instances that were dropped without being explicitly closed, errored, or aborted were never garbage collected, causing an out-of-memory crash in long-running applications. A GC-root cycle between the global object's guarded objects set and the internal writable stream kept the entire stream graph permanently reachable.
  • Fixed: a long-lived AbortSignal reused across many addEventListener/removeEventListener cycles would accumulate dead closures in memory indefinitely. Each call to addEventListener with a { signal } option registered an internal abort algorithm, but removing the listener (via removeEventListener, { once: true } firing, or removeAllEventListeners) never cleaned up that algorithm — causing unbounded memory growth on the signal.
  • Fixed: A non-303 redirect with a ReadableStream body now correctly rejects with TypeError instead of UnexpectedRedirect.
  • Fixed: AbortSignal.reason is now properly forwarded to the request body's ReadableStream.cancel(reason).
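
The AbortSignal fix above concerns this usage pattern: one long-lived signal reused across many addEventListener/removeEventListener cycles. The leak itself was internal (dead abort algorithms pinned on the signal), so this sketch just shows the now-safe pattern:

```javascript
// Before this release, each add/remove cycle left a dead abort algorithm
// registered on the signal forever; removal now cleans it up.
const controller = new AbortController();
const target = new EventTarget();
let calls = 0;

for (let i = 0; i < 1_000; i++) {
  const handler = () => calls++;
  target.addEventListener("ping", handler, { signal: controller.signal });
  target.removeEventListener("ping", handler);
}

controller.abort();
console.log(calls); // 0 — no removed listener ever fires
```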

Security

  • Fixed: HTTP request smuggling attack vector
  • Fixed: missing bounds check in maliciously-crafted Blob deserialization
  • Fixed: integer overflow in IPC advanced serialization mode with malicious input

Worker fixes

  • Fixed: stack overflow crash when closing a deep chain of nested transferred MessagePorts
  • Fixed: MessagePort memory leak when workers are terminated without explicitly closing their ports. When port.onmessage was assigned or port.ref() was called inside a Worker, the internal self-reference was never released during worker teardown, causing every such MessagePort to leak for the lifetime of the process.
  • Fixed: a race condition crash in MessageEvent when using BroadcastChannel or MessagePort where the GC marker thread could observe a torn variant in m_data during concurrent access, causing a SIGSEGV in release builds
  • Fixed: segfault in worker.getHeapSnapshot() caused by a cross-thread race condition where the parent VM's HandleSet was mutated from the worker thread without holding the parent VM's lock
  • Fixed: memory leak and crash when terminating Workers that have a PerformanceObserver without calling .disconnect() — a reference cycle between Performance and PerformanceObserver prevented both objects from being freed
  • Fixed: IPC subprocess cleanup on Windows when uv_read_start fails after pipe open — the embedded SendQueue and its pending close task are now properly cancelled before the allocation is freed, preventing a use-after-free crash
  • Fixed: a crash on Linux (glibc) during Worker teardown caused by a stale .eh_frame_hdr section reference in the stripped release binary
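The MessagePort teardown leak involved a common usage pattern. A minimal sketch with a standard `MessageChannel` (the leak itself required the port to live inside a terminated Worker; this only illustrates the API shape):

```javascript
// Assigning onmessage (or calling ref()) on a port inside a Worker
// created an internal self-reference that worker termination never
// released, leaking the port for the lifetime of the process.
const { port1, port2 } = new MessageChannel();

let received;
port1.onmessage = (event) => {
  received = event.data; // this assignment is what previously pinned the port
};

port2.postMessage("hello");

// Give the message a tick to be delivered, then clean up explicitly.
await new Promise((resolve) => setTimeout(resolve, 10));
port1.close();
port2.close();
```

With the fix, ports like `port1` are released on worker teardown even when `close()` is never called.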

Timers

  • Fixed: native memory leak in setTimeout when clearTimeout(t), t.refresh(), or t._repeat = N is called synchronously inside the timer's own callback. The native TimeoutObject struct was never freed because the post-callback cleanup only checked for the .FIRED state, missing transitions to .CANCELLED or .ACTIVE that occur when the timer is cleared, refreshed, or converted to an interval during execution.
  • Fixed: crash when calling clearImmediate on a setImmediate followed by garbage collection (Bun.gc(true)) — the cleared immediate's internal reference could panic during event loop cleanup if the GC had already finalized the JS wrapper
  • Fixed: timer.ref() on an already-fired setTimeout or setImmediate no longer incorrectly keeps the event loop alive, which previously caused the process to hang indefinitely
  • Fixed: setTimeout with an out-of-range delay no longer leaves a pending JS exception when the timeout overflow warning triggers user code that throws (e.g., a throwing getter on process._exiting), which previously caused crashes in debug builds and unexpected errors in release builds
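The setTimeout leak above is triggered by a specific pattern: mutating the timer from inside its own callback. A minimal sketch:

```javascript
// Clearing (or refreshing) a timer synchronously inside its own
// callback moves it out of the FIRED state mid-execution; the native
// TimeoutObject is now freed on all of these paths.
let fired = false;
const t = setTimeout(() => {
  fired = true;
  clearTimeout(t);  // FIRED -> CANCELLED during the callback
  // t.refresh();   // or FIRED -> ACTIVE — this path previously leaked too
}, 1);

// Wait long enough for the 1 ms timer to run.
await new Promise((resolve) => setTimeout(resolve, 20));
```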

bun install

  • Fixed: bun install no longer hangs on stalled TLS handshakes and will instead time out.
  • Fixed: bunx @scope/name no longer matches unrelated system binaries in $PATH. When running scoped packages with bunx, the bin name was guessed by stripping the scope (e.g. @uidotsh/install → install), then searched against the full system $PATH. If the guessed name collided with an existing system binary like /usr/bin/install, /usr/bin/git, or /usr/bin/find, the system binary would be silently executed instead of the package's actual bin.
  • Fixed: bun install --force now correctly replaces corrupted entries in the global store instead of silently keeping the broken version
  • Fixed: bun install hanging indefinitely when a tarball download returns a 4xx/5xx HTTP error (e.g. 404 Not Found) during the install phase, particularly when resolving from an existing lockfile with an empty cache. Both the hoisted and isolated linkers now fail fast with a clear error message instead of blocking forever. (@alii)
  • Fixed: bun install hanging indefinitely when a tarball integrity check fails with the isolated linker. When a tarball's SHA-512 didn't match the manifest (e.g. due to a registry redirect serving wrong content), the isolated installer would deadlock instead of reporting an error.
  • Fixed: a race condition in bun install where worker threads could read lockfile data (packages/string_bytes) that was simultaneously being reallocated by the main thread, potentially causing crashes or failed tarball resolution when installing workspaces with relative tarball paths
  • Fixed: bun add (and remove/link/unlink/bunx) crashing with a segfault when a positional argument exceeded 2048 bytes

JavaScript bundler

  • Fixed: a type confusion bug in the bundler plugin error handling that could cause a segfault when the plugin builtin threw synchronously
  • Fixed: memory leak in Bun.build() when using sourcemap: 'inline' without outdir — the intermediate sourcemap JSON buffer was not being freed after base64-encoding
  • Fixed: a crash in Bun.build() and Bun.Transpiler when the loader or define options contained an empty-string key (e.g., loader: { "": "js", ".ts": "ts" }) that left uninitialized memory slots, causing a segfault when the entries were later hashed or freed
  • Fixed: bun build --target=browser panic involving long "browser" field paths
  • Fixed: crash reading .referrer on a ResolveError after the resolving frame had returned, particularly with non-ASCII source paths
  • Fixed: bundler leaving behind empty else {} blocks after dead code elimination when not using --minify-syntax
  • Fixed: crash with malicious input in Bun.Transpiler().transformSync()
  • Fixed: crash in Bun.Transpiler when the loader argument contains non-Latin-1 (UTF-16) characters, now properly returns a TypeError instead of panicking

Module resolver

  • Fixed: wildcard exports patterns (e.g. "./*": "./dist/packages/*") failing to resolve subpaths containing @ characters, such as test-pkg/@scope/sub/index.js. The resolver incorrectly treated the @ in the subpath as a version delimiter, causing resolution to fail even though the file existed on disk. This notably affected ember-source@6.12 and its @ember/*, @glimmer/*, and @simple-dom/* subpackages.
  • Fixed: crash in the module resolver when a previously inaccessible directory (e.g. due to EACCES) becomes readable again, caused by reading a cached error entry as a valid pointer
  • Fixed: use-after-free when resolving module specifiers containing both a query string (?) and non-ASCII characters (e.g. import("./target.js?v=café")), which could cause crashes or corrupted resolved paths
  • Fixed: use-after-free crash when resolving http://, https://, or // specifiers containing non-ASCII characters
  • Fixed: a use-after-free in the runtime auto-install path where enqueueDependencyToRoot could read from a dangling pointer into the lockfile's dependency buffer after it was reallocated by fromNPM
  • Fixed: auto-install resolution corruption where resolving a package's subpath (e.g. nanoid/non-secure) could fail with "Cannot find module" after auto-installing a second package, due to cached directory info pointing at a reused threadlocal buffer
  • Fixed: memory leak when resolving tsconfig.json files with extends chains — intermediate TSConfigJSON structs and their paths maps were never freed, causing repeated leaks on every HMR reload or FileSystemRouter.reload() call
  • Fixed: memory leak in BunString__toThreadSafe where the original StringImpl reference was never released when creating an isolated copy for thread safety, causing one leaked StringImpl per call in code paths like Bun.file() and async fs.write()
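The wildcard-exports fix concerns `package.json` maps of the following shape (the package name matches the example subpath above; the `dist` layout is illustrative):

```json
{
  "name": "test-pkg",
  "exports": {
    "./*": "./dist/packages/*"
  }
}
```

With this map, importing `test-pkg/@scope/sub/index.js` now resolves through the wildcard to `./dist/packages/@scope/sub/index.js` instead of failing on the `@` in the subpath.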

Dev server / HMR

  • Fixed: stale asset index in Bun's dev server causing panics or crashes when one CSS file had a syntax error and another CSS file was subsequently edited
  • Fixed: use-after-free crash in Bun's dev server (Bun.serve with framework) when a "use client" directive is removed from a component that had a pending resolution failure, causing the directory watcher to read freed memory on the next file change
  • Fixed: double-free crash in Bun's dev server when shutting down after a directory watch resolves a previously-failing import dependency
  • Fixed: HMR bundles crashing with X is not a function when a file has multiple import { ... } from statements referencing the same barrel package with sideEffects: false (regression from barrel import optimization in v1.3.0)
  • Fixed: use-after-free crash in bun dev when saving files in editors that use atomic rename (vim, emacs, IntelliJ) — the dev server's incremental graph stored pointers into a buffer that could be reallocated by the file watcher thread before the bundler consumed them
  • Fixed: bun dev on Linux dropping file change notifications when multiple inotify events were coalesced into a single batch — e.g. an atomic-save creating a temp file and renaming it over the target in quick succession would silently lose the rename, leaving the file unwatched until restart
  • Fixed: bun --hot on macOS no longer detecting file changes after the first atomic write (temp file + rename) when watching multiple imported modules, causing the module graph to oscillate between old and new state
  • Fixed: --hot mode could print wrong source locations or silently drop errors when a file-watcher event arrived between a module's rejection and its error being reported — the reload now defers until the error is printed against the correct sourcemap
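Several of the watcher fixes above are triggered by the atomic-save pattern editors use. A minimal reproduction of that pattern (file names are illustrative):

```shell
# vim, emacs, and IntelliJ save by writing a temp file and renaming it
# over the original, so the change arrives as a single rename event —
# the event the watcher previously dropped or mis-handled.
printf 'export const value = 2;\n' > module.js.tmp
mv module.js.tmp module.js
content=$(cat module.js)   # the watcher must see the rename, not the temp write
rm module.js
echo "$content"
```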

bun build --compile

  • Fixed: ELF layout regression causing some executables on WSL1 to fail with ENOEXEC
  • Fixed: out-of-bounds write in toUTF16Alloc when handling invalid UTF-8 input in Windows bun build --compile metadata flag

CSS Parser

  • Fixed: CSS background-clip with vendor prefixes and multi-layer backgrounds could cause a double-free crash (≥2 background layers) and incorrect output where prefixed clip values were silently dropped instead of being flushed as separate declarations

bun test

  • Fixed: crash in bun test --isolate on macOS ARM64 caused by the concurrent garbage collector visiting a half-initialized global object during test isolation swaps
  • Fixed: bun test --isolate and bun test --parallel crashing with segfaults when test files load a native NAPI addon whose deferred finalizers outlive the file's global object. The --parallel coordinator also no longer silently retries crashed workers, which previously masked panics and let the run exit 0. Fatal signals (SIGSEGV, SIGABRT, etc.) now abort the entire test run with a clear error message.
  • Fixed: bun test --changed silently skipping test files whose only dependency path to a changed source file went through a tsconfig paths alias with a bare-looking key (e.g. @/*, ~/*, components/*). The resolver's packages=external short-circuit was incorrectly marking these aliased imports as external before tsconfig paths could resolve them.
  • Fixed: test.each() / describe.each() table arrays could be garbage collected between the .each(arr) call and the subsequent ("name", cb) invocation, causing callbacks to receive corrupted data or throw Expected array, got … errors
  • Fixed: crash when a custom asymmetric matcher's implementation throws an exception during .asymmetricMatch() — the exception is now properly propagated to JavaScript instead of triggering an internal assertion failure
  • Fixed: mock.module / vi.mock calling the module resolver before validating the callback argument, which caused unexpected behavior

Bun Shell

This release fixes over 70 bugs with the Bun Shell.

  • Fixed: cd in Bun's shell hanging forever when encountering errno values other than ENOTDIR/ENOENT/ENAMETOOLONG (e.g. EACCES, ELOOP, EIO). Now properly reports the error to stderr and exits with code 1.
  • Fixed: a crash in Bun's shell when cd or .cwd() received a path longer than 4096 bytes — oversized paths now correctly return ENAMETOOLONG instead of segfaulting
  • Fixed: [[ -f path ]] in Bun Shell incorrectly returned true for directories, character devices, sockets, and FIFOs instead of only regular files
  • Fixed: shell tilde expansion dropping path segments after command substitutions in compound words (e.g. echo ~/$(echo bin)/subdir now correctly outputs /home/user/bin/subdir instead of /home/user/bin)
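The tilde-expansion fix can be illustrated with a POSIX shell, whose expansion Bun Shell now matches for this case:

```shell
# A compound word mixing tilde expansion and command substitution:
# the segment after the substitution ("/subdir") is no longer dropped.
expanded=~/$(echo bin)/subdir
echo "$expanded"   # e.g. /home/user/bin/subdir
```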

TypeScript types

  • Fixed: bun-types FFI type declarations failing to compile under tsgo (TypeScript native preview) due to duplicate computed property keys in FFITypeToArgsType and FFITypeToReturnsType interfaces

Windows

  • Fixed: Bun.connect() on Windows named pipes could crash (heap-buffer-overflow under ASAN) or leak memory on connection close due to the socket handlers being incorrectly marked as server mode instead of client mode
  • Fixed: heap buffer overflows in Windows path normalization (normalizePathWindows) that could cause heap corruption when handling paths near or exceeding the 32,767 UTF-16 code unit limit. Affected code paths include UTF-8 to UTF-16 conversion, device paths (\\.\), absolute path normalization, separator-free paths, and relative path joining with dirfd. Now properly returns ENAMETOOLONG instead of writing out of bounds.
  • Fixed: crash on Windows (panic: integer overflow) when tearing down hundreds of spawned child processes during process exit, caused by an unsigned integer underflow in the libuv event loop's active_handles counter
  • Fixed: a panic on Windows when resolving error names from libuv error codes (e.g. ENOENT, EBADF) due to an integer overflow in internal errno translation (@dylan-conway)
  • Fixed: a crash on Windows (Panic: invalid enum value) when libuv returned unmapped error codes during file system operations like fs.readFile

CLI and runtime

Pipeline producer exit no longer clobbers downstream pager's terminal state

When piping Bun's output to tools like less, fzf, or fx (e.g. bun script.js | less), Bun would unconditionally restore its startup terminal (termios) snapshot at exit. Because termios state belongs to the underlying /dev/pts/* device — not the file descriptor — this would overwrite the raw mode that the downstream pager had already set, leaving it unresponsive to keypresses.

# Before: 'q' does nothing in less — terminal is stuck in cooked/line-buffered mode
bun app.js | less

# After: less, fzf, fx, and other pagers work correctly
bun app.js | less

The fix gates the exit-time tcsetattr on whether Bun actually modified the terminal during its lifetime. When Bun is a pipeline producer (stdout is a pipe), fds it never touched are now left alone at exit. When stdout is a TTY (e.g. bun run vim, crash handler, --watch reload), the unconditional restore is preserved so the shell prompt always comes back to a usable state.

Other CLI/runtime fixes

  • Fixed: bun -p with top-level await expressions now returns the final completion value instead of the first awaited value — e.g. bun -p '(await 1) + 1' now correctly prints 2 instead of 1
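The semantics the `bun -p` fix restores can be shown with an equivalent plain-JS expression (this mirrors what `bun -p` evaluates; it is not Bun's internal implementation):

```javascript
// bun -p now prints the final completion value of the program, the same
// value an async IIFE wrapping the expression would return — not the
// first awaited value.
const result = await (async () => (await 1) + 1)();
// result === 2, matching `bun -p '(await 1) + 1'`
```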

Thanks to 11 contributors!