To install Bun:

```sh
# curl (macOS/Linux)
curl -fsSL https://bun.sh/install | bash

# npm
npm install -g bun

# PowerShell (Windows)
powershell -c "irm bun.sh/install.ps1|iex"

# Scoop (Windows)
scoop install bun

# Homebrew
brew tap oven-sh/bun
brew install bun

# Docker
docker pull oven/bun
docker run --rm --init --ulimit memlock=-1:-1 oven/bun
```

To upgrade Bun:

```sh
bun upgrade
```

Bun.Image — Built-in Image Processing
Bun now ships a built-in image processing API that handles JPEG, PNG, WebP, GIF, and BMP — plus HEIC, AVIF, and TIFF on macOS and Windows — with zero native module installs.
Bun.Image provides a chainable pipeline for decoding, transforming, and encoding images, designed as a drop-in alternative to sharp for common server-side image operations.
```js
// Resize and convert a photo to WebP
await Bun.file("photo.jpg")
  .image()
  .resize(1024, 1024, { fit: "inside" })
  .rotate(90)
  .webp({ quality: 85 })
  .write("thumb.webp");
```
```js
// Generate a thumbnail from an upload in a single expression
return new Response(new Bun.Image(upload).resize(200).jpeg());
```
Input sources
Bun.Image accepts path strings, ArrayBuffer/TypedArray (zero-copy), Blob/BunFile/S3File, and data: URLs. You can also use Bun.file("photo.jpg").image() or blob.image() to start a pipeline.
Chainable transforms
The pipeline supports .resize(w, h?, {filter, fit, withoutEnlargement}), .rotate(90|180|270), .flip(), .flop(), and .modulate({brightness, saturation}). Output format is set with .jpeg(), .png(), .webp(), .heic(), or .avif() — each with format-specific quality/compression options.
Resize filters
All sharp filters are supported: nearest, box, bilinear, cubic, mitchell, lanczos2, lanczos3, plus mks2013 and mks2021.
Terminal methods
All processing runs off the main thread (except metadata()). Output via .bytes(), .buffer(), .blob(), .toBase64(), .dataurl(), .placeholder() (thumbhash), .metadata(), or .write(dest).
```js
const meta = await new Bun.Image(buf).metadata();
// { width: 1920, height: 1080, format: "jpeg", ... }

const placeholder = await Bun.file("hero.jpg").image().placeholder(); // thumbhash data URL for blur-up
```
Body integration
Bun.Image instances work directly as response/request bodies with automatic Content-Type:
```js
return new Response(new Bun.Image(upload).resize(200).jpeg());
```
Platform-specific formats
| Format | macOS | Windows | Linux |
|---|---|---|---|
| JPEG | ✅ | ✅ | ✅ |
| PNG | ✅ | ✅ | ✅ |
| WebP | ✅ | ✅ | ✅ |
| GIF | ✅ | ✅ | ✅ |
| BMP (simple) | ✅ | ✅ | ✅ |
| TIFF | ✅ decode | ✅ decode | — |
| HEIC | ✅ decode + encode | ✅ decode + encode | — |
| AVIF | ✅ decode (+ encode on Apple Silicon) | ✅ decode + encode | — |
JPEG, PNG, WebP, GIF, and BMP use statically linked codecs and produce identical output across all platforms. HEIC, AVIF, and TIFF use OS system backends (ImageIO + vImage on macOS, WIC on Windows) with lazy symbol resolution for zero startup cost.
Performance vs sharp 0.34.5
Benchmarked on linux/x64 with 50 iterations and sharp.concurrency(1):
| Operation | Bun.Image | sharp | Speedup |
|---|---|---|---|
| `metadata()` | 0.004 ms | 0.28 ms | 70× |
| 1080p PNG → 400×400 → JPEG | 28.6 ms | 39.5 ms | 1.38× |
| 1080p PNG → 800×600 → WebP | 82.7 ms | 110.1 ms | 1.33× |
| 4K JPEG → 800×450 → JPEG | 35.8 ms | 45.5 ms | 1.27× |
| 4K JPEG → 1920×1080 → JPEG | 57.2 ms | 69.9 ms | 1.22× |
| 12MP JPEG → 1024×768 → WebP | 138 ms | 165 ms | 1.20× |
The performance comes from i16 fixed-point SIMD resize kernels, JPEG IDCT scaling to the smallest sufficient size, zero-copy ArrayBuffer borrowing, and a single pre-allocated arena for resize scratch memory.
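As a rough sketch of what i16 fixed-point filtering means in practice — this is illustrative arithmetic only, not Bun's actual kernel, which is SIMD-vectorized native code:

```typescript
// Toy illustration of i16 fixed-point filtering: float filter weights are
// scaled by 2^14, accumulated in integers, then rounded back down.
const SHIFT = 14;
const ONE = 1 << SHIFT; // 16384

function fixedPointDot(pixels: number[], weights: number[]): number {
  // Quantize float weights to i16 fixed point
  const w16 = weights.map((w) => Math.round(w * ONE));
  let acc = 0;
  for (let i = 0; i < pixels.length; i++) acc += pixels[i] * w16[i];
  // Round-to-nearest, then clamp to the 8-bit pixel range
  const v = (acc + (ONE >> 1)) >> SHIFT;
  return Math.min(255, Math.max(0, v));
}

// A 4-tap average of equal pixels stays at that value
console.log(fixedPointDot([100, 100, 100, 100], [0.25, 0.25, 0.25, 0.25])); // 100
```

The integer accumulate-and-shift pattern is what makes these kernels amenable to wide SIMD lanes.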
Global Virtual Store
bun install --linker=isolated now supports a shared global virtual store via the install.globalStore = true option in bunfig.toml. Instead of cloning every package from the cache into each project's node_modules on every install, packages are materialized once into a global <cache>/links/ directory, and each project's node_modules/.bun/<pkg>@<ver> becomes a symlink into it.
Warm installs — lockfile present, cache warm, node_modules wiped (the common CI path) — now perform ~1 symlink() per package instead of ~1 clonefileat() per file copy. On macOS APFS, clonefileat() holds a volume-wide kernel lock that made parallelization ineffective. The global store eliminates those calls entirely on the warm path.
Benchmarks — warm install of a ~1,400-package fixture on Apple Silicon macOS (hyperfine --warmup 3 --runs 10):
| | Wall time | System time | `clonefileat` calls |
|---|---|---|---|
| `--linker hoisted` | 823 ms | 478 ms | 1,387 |
| `--linker isolated` (before) | 841 ms | 1,256 ms | 1,387 |
| `--linker isolated` (after) | 115 ms | 94 ms | 0 |
This is still experimental, so the global store is off by default with the isolated linker.
To enable:
```toml
# bunfig.toml
[install]
globalStore = true
```
Or via environment variable:
```sh
BUN_INSTALL_GLOBAL_STORE=1 bun install
```
A package is eligible for the global store only when it comes from an immutable cache source (npm registry, git, tarball — unpatched, no trusted lifecycle scripts) and all of its transitive dependencies are also eligible. Ineligible packages fall back to per-project copies automatically.
The entry hash encodes the package's resolved dependency closure, so two projects that resolve a package to the same transitive versions share one on-disk entry, while a project with different resolutions gets its own.
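A simplified sketch of closure-keyed hashing — the function name, fields, and key length here are illustrative, not Bun's actual scheme:

```typescript
import { createHash } from "node:crypto";

// Hypothetical sketch: hash a package's name/version together with the
// resolved versions of its transitive dependency closure. Two projects
// that resolve the closure identically produce the same store key.
function storeEntryKey(
  pkg: string,
  version: string,
  closure: Record<string, string>, // transitive dep -> resolved version
): string {
  const sorted = Object.keys(closure)
    .sort()
    .map((name) => `${name}@${closure[name]}`)
    .join(",");
  return createHash("sha256")
    .update(`${pkg}@${version}|${sorted}`)
    .digest("hex")
    .slice(0, 16); // short key for the store directory name
}

// Same resolved closure -> shared on-disk entry
const a = storeEntryKey("left-pad", "1.3.0", { "dep-x": "2.0.0" });
const b = storeEntryKey("left-pad", "1.3.0", { "dep-x": "2.0.0" });
// Different resolution -> distinct entry
const c = storeEntryKey("left-pad", "1.3.0", { "dep-x": "2.1.0" });
console.log(a === b, a === c); // true false
```

Sorting the closure before hashing makes the key independent of resolution order.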
This release also fixes a pre-existing issue: Bun now synthesizes an implicit "*" optional peer dependency for entries that appear in peerDependenciesMeta but not in peerDependencies (matching pnpm/yarn behavior). This fixes compatibility with packages like webpack-cli.
HTTP/3 (QUIC) support in Bun.serve
⚠️ Highly experimental. HTTP/3 support is new and likely has bugs. Do not deploy http3: true to production yet.
Bun.serve now supports HTTP/3 over QUIC. Enable it with a single flag:
```js
Bun.serve({
  port: 443,
  tls: { cert, key },
  http3: true, // also listen on UDP/443 for HTTP/3
  fetch(req) {
    return new Response("hi");
  },
});
```
When http3: true is set alongside tls, Bun binds TCP for HTTP/1.1+2 and UDP for HTTP/3 on the same port. Your existing fetch handler and routes work identically across all three protocols — no code changes needed. HTTP/1.1 and HTTP/2 responses automatically include Alt-Svc: h3=":<port>"; ma=86400 so browsers discover the QUIC endpoint.
You can also serve HTTP/3 only:
```js
Bun.serve({
  port: 443,
  tls: { cert, key },
  http3: true,
  http1: false, // disable HTTP/1.1
  fetch(req) {
    return new Response("h3 only");
  },
});
```
Everything you'd expect works over HTTP/3: new Response(readableStream) for streaming, new Response(Bun.file("large.bin")), new Response(req.body) passthrough, req.url/req.headers/req.method across await boundaries, requestIP(), server.reload(), and graceful server.stop().
Performance
On Linux x64 (single process, loopback), HTTP/3 is significantly faster than HTTPS/1.1 from the same server instance:
| Benchmark | HTTP/3 | HTTPS/1.1 | HTTP/1.1 |
|---|---|---|---|
| Static route (`routes`) | 509,135 req/s | 189,130 req/s | 239,476 req/s |
| Dynamic `fetch` handler | 283,485 req/s | 142,323 req/s | 171,696 req/s |
~50% of HTTP/3 CPU time is inside lsquic; further optimizations may come in a future release.
Limitations
- WebSocket over HTTP/3 is not supported yet (`server.upgrade()` returns `false`). WebTransport is a separate project.
- 0-RTT is disabled.
- `unix:` socket addresses skip the H3 listener (QUIC over Unix sockets is non-standard).
- No trailer support, no `Expect: 100-continue` (matching HTTP/1.1 behavior).
Powered by lsquic v4.6.2.
Experimental HTTP/2 Client for fetch()
fetch() now supports HTTP/2 as an experimental feature. When enabled, Bun negotiates h2 via TLS ALPN — multiple concurrent fetches to the same origin share a single multiplexed TCP+TLS connection instead of opening separate HTTP/1.1 connections.
Enable it globally with an environment variable or CLI flag, or opt in per-request:
```js
// Opt in globally:
//   BUN_FEATURE_FLAG_EXPERIMENTAL_HTTP2_CLIENT=1 bun run app.js
//   bun run --experimental-http2-fetch app.js

// Or per-request (works without the env flag):
const res = await fetch("https://example.com", { protocol: "http2" });
```
Multiplexing & connection coalescing
Parallel fetches to the same origin share one TLS handshake and one connection. The first request opens the socket; subsequent requests attach to the same HTTP/2 session up to the server's MAX_CONCURRENT_STREAMS limit, with overflow queued automatically.
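The queueing behavior can be sketched with a generic concurrency limiter — illustrative only; Bun implements this natively inside the HTTP/2 session:

```typescript
// Sketch: at most `maxConcurrentStreams` requests run at once per session;
// overflow waits in a FIFO queue and attaches when a stream slot frees up.
class StreamLimiter {
  private active = 0;
  private queue: Array<() => void> = [];
  constructor(private maxConcurrentStreams: number) {}

  async run<T>(task: () => Promise<T>): Promise<T> {
    if (this.active >= this.maxConcurrentStreams) {
      // No free stream slot: park until a running request finishes
      await new Promise<void>((resolve) => this.queue.push(resolve));
    }
    this.active++;
    try {
      return await task();
    } finally {
      this.active--;
      this.queue.shift()?.(); // wake exactly one queued request
    }
  }
}

const limiter = new StreamLimiter(2);
const order: number[] = [];
await Promise.all(
  [1, 2, 3, 4].map((i) =>
    limiter.run(async () => {
      order.push(i);
      await new Promise((r) => setTimeout(r, 10));
    }),
  ),
);
console.log(order.length); // 4
```

In the real client, `maxConcurrentStreams` comes from the server's SETTINGS frame rather than a constructor argument.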
Per-request protocol control
The new protocol option in RequestInit lets you pin the HTTP version:
```js
// Force HTTP/2 — fails with HTTP2Unsupported if the server doesn't support it
await fetch("https://example.com", { protocol: "http2" });

// Force HTTP/1.1 — ignores the experimental flag
await fetch("https://example.com", { protocol: "http1.1" });
```
Accepted values: "http2", "h2", "http1.1", "h1".
What works
- Keep-alive pooling — idle HTTP/2 sessions (with HPACK state) are reused by subsequent requests
- Streaming request bodies — `ReadableStream` bodies are sent as DATA frames with proper flow control
- `REFUSED_STREAM` and graceful `GOAWAY` — transparently retried (up to 5 attempts) for replayable bodies
- Content-Length enforcement per RFC 9113 §8.1.1
- `Expect: 100-continue` support
Hardening
The HTTP/2 client also includes RFC 9113 conformance and denial-of-service protections:
- CONTINUATION flood / HPACK bomb mitigation: 256 KiB cap on both header-block accumulation and decoded header lists, advertised via `SETTINGS_MAX_HEADER_LIST_SIZE`.
- PING reflection attack mitigation: 1 MiB cap on queued PING/SETTINGS-ACK control frames prevents unbounded memory growth from malicious servers.
- The first server frame must be SETTINGS per RFC 9113 — connections that violate this are immediately terminated.
- `RST_STREAM(NO_ERROR)` mid-body now correctly fails the request instead of silently truncating the response.
- `REFUSED_STREAM` retries only when no data has been delivered to the caller.
- Content-Length mismatches with actual DATA frame bytes are now detected and rejected.
- Trailers without `END_STREAM` are now rejected per spec.
- `GOAWAY` no longer drops already-completed streams.
Not yet supported
HTTP proxies/CONNECT tunneling, Unix sockets, server push, and cleartext HTTP/2 (h2c) are not yet supported. The HTTP/1.1 path is completely unchanged when the flag is off and protocol is not set.
Experimental HTTP/3 Client for fetch()
fetch() now supports an experimental HTTP/3 client using the protocol option. This uses an lsquic-backed QUIC transport that runs alongside the existing HTTP/1.1 and HTTP/2 paths.
⚠️ Highly experimental. This is an early preview — the API may change in future releases.
```js
const res = await fetch("https://example.com/", { protocol: "http3" });
console.log(await res.text());
```
Both "http3" and "h3" are accepted as protocol values. The HTTP/3 client shares the same redirect, decompression, and response handling pipeline as HTTP/1.1 and HTTP/2, so existing fetch() behavior is preserved.
What's supported:
- All standard HTTP methods (GET, POST, PUT, DELETE, HEAD)
- Request and response headers, JSON bodies, gzip compression
- Redirects
- Large request/response bodies (1 MB+ round-trips)
- Concurrent multiplexed requests over a single QUIC connection
- Connection pooling and sequential reuse
- `ReadableStream` request body uploads
- Full-duplex bidirectional streaming (server can respond while upload is still in progress)
- `rejectUnauthorized` TLS option (defaults to `true`)
- `AbortSignal` support
Alt-Svc HTTP/3 upgrades
The HTTP/3 client can also automatically upgrade fetch() requests to HTTP/3 via the Alt-Svc header (RFC 7838). When a server advertises Alt-Svc: h3 in an HTTPS response, subsequent requests to that origin are routed over QUIC instead of TCP.
This is opt-in while the HTTP/3 client matures. Enable it with the CLI flag or environment variable:
```sh
# CLI flag
bun --experimental-http3-fetch app.ts

# Environment variable
BUN_FEATURE_FLAG_EXPERIMENTAL_HTTP3_CLIENT=1 bun app.ts
```
```js
// First request goes over TCP/TLS as usual
const res1 = await fetch("https://example.com/api");

// If the response includes `Alt-Svc: h3=":443"`,
// the next request to the same origin uses QUIC/HTTP-3
const res2 = await fetch("https://example.com/api");
```
The upgrade is transparent and per-origin — cross-origin redirects re-evaluate from HTTP/1.1, and requests that aren't eligible (proxied, unix socket, sendfile, or pinned to a specific protocol) gracefully fall back to TCP.
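A minimal sketch of what parsing an Alt-Svc value involves — `parseAltSvc` is a hypothetical helper; real RFC 7838 parsing also handles multiple comma-separated alternatives, host components, and the `clear` token:

```typescript
// Parse a single Alt-Svc alternative like `h3=":443"; ma=86400`
interface AltService {
  protocol: string; // ALPN protocol id, e.g. "h3"
  port: number;
  maxAgeSeconds: number; // defaults to 86400 per RFC 7838
}

function parseAltSvc(header: string): AltService | null {
  const m = header.match(/^\s*([\w-]+)="(?:[^":]*)?:(\d+)"\s*(?:;\s*ma=(\d+))?/);
  if (!m) return null;
  return {
    protocol: m[1],
    port: Number(m[2]),
    maxAgeSeconds: m[3] ? Number(m[3]) : 86400,
  };
}

console.log(parseAltSvc('h3=":443"; ma=86400'));
// { protocol: "h3", port: 443, maxAgeSeconds: 86400 }
```

A client caches the advertised alternative per origin for `ma` seconds and routes eligible requests over it.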
Rewritten fs.watch() backend on Linux, macOS, and FreeBSD
Bun's fs.watch() implementation on POSIX platforms has been completely rewritten to talk directly to the OS file-watching APIs (inotify, FSEvents, kqueue) instead of routing through Bun's internal bundler watcher. This fixes several long-standing bugs and reduces complexity significantly.
Recursive watching now tracks new directories (Linux)
Previously, fs.watch("dir", { recursive: true }) only registered the directory tree that existed at the time watch() was called. Directories created after the watch started were never tracked, so files inside them were invisible to the watcher.
```js
import fs from "node:fs";

// Now correctly detects changes inside directories created after watch() starts
fs.watch("./src", { recursive: true }, (event, filename) => {
  console.log(event, filename);
});

// mkdir src/newDir && touch src/newDir/file.txt
// Previously: only "rename newDir" — file.txt was missed
// Now: "rename newDir", "rename newDir/file.txt", "change newDir/file.txt"
```
Deleted-and-recreated files emit change events again (Linux)
When a watched file was deleted and recreated, subsequent modifications to the recreated file would silently stop emitting change events. This is now fixed — the new inotify watch descriptor is correctly registered on recreation.
macOS no longer spins up two watcher threads
Previously, fs.watch() on a directory on macOS would start both a kqueue watcher (via the bundler watcher) and an FSEvents CFRunLoop thread. The new implementation uses FSEvents exclusively for both files and directories, matching libuv's behavior and halving the thread overhead.
--no-orphans — exit when the parent process dies
Bun now supports an opt-in mode that automatically exits when its parent process dies — even if the parent was SIGKILLed and never had a chance to forward a signal. On exit, Bun also recursively SIGKILLs every descendant process it spawned.
This is useful when Bun is launched by a supervisor (Electron, a CI runner, a thin shim) that may be force-killed. Without this option, Bun would be silently reparented to launchd/init and keep running forever, along with anything it spawned.
There are three equivalent ways to enable it:
```sh
# CLI flag (new)
bun --no-orphans run my-script
```

```toml
# bunfig.toml
[run]
noOrphans = true
```

```sh
# Environment variable
BUN_FEATURE_FLAG_NO_ORPHANS=1 bun run my-script
```
The flag is automatically inherited by nested Bun processes, so enabling it once at the top level is sufficient.
How it works:
- Linux: Uses `prctl(PR_SET_PDEATHSIG, SIGKILL)` — kernel-delivered, no polling, no thread. Children spawned from the main thread also inherit `PDEATHSIG` by default so non-Bun descendants are covered.
- macOS: Registers an `EVFILT_PROC`/`NOTE_EXIT` watch for the original parent pid on the existing event loop's kqueue — the same mechanism Bun already uses to watch child process exits. No dedicated thread, no extra file descriptor.
On clean exit, Bun walks its process tree and uses a stop-verify-kill strategy for pid-reuse safety: each descendant is SIGSTOPped, its ppid is re-verified, and only then is it SIGKILLed. This prevents accidentally killing an unrelated process that recycled a stale pid.
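The stop-verify-kill sequence can be sketched with injected primitives — illustrative only; Bun performs this natively with real SIGSTOP/SIGKILL and platform-specific ppid lookups:

```typescript
// Injectable signal/ppid primitives so the pid-reuse guard is visible
// without actually signalling processes.
type Ops = {
  stop(pid: number): void;     // SIGSTOP
  kill(pid: number): void;     // SIGKILL
  resume(pid: number): void;   // SIGCONT
  ppidOf(pid: number): number; // re-read the current parent pid
};

function stopVerifyKill(pid: number, expectedParent: number, ops: Ops): boolean {
  ops.stop(pid); // freeze first so the pid can't exit and be recycled mid-check
  if (ops.ppidOf(pid) !== expectedParent) {
    ops.resume(pid); // pid was reused by an unrelated process — leave it alone
    return false;
  }
  ops.kill(pid); // verified: still our descendant, safe to kill
  return true;
}

const killed: number[] = [];
const ops: Ops = {
  stop() {},
  kill(pid) { killed.push(pid); },
  resume() {},
  // pid 101 is still parented to us (42); pid 102 was recycled
  ppidOf(pid) { return pid === 101 ? 42 : 1; },
};
console.log(stopVerifyKill(101, 42, ops), stopVerifyKill(102, 42, ops)); // true false
console.log(killed); // [101]
```

Freezing before the ppid re-check is the key step: it closes the window in which the target could exit and its pid be handed to a stranger.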
macOS coverage is now comprehensive. Previously, bun run <script> and bunx on macOS had no parent-death watching — if the parent was killed, spawned scripts could be left orphaned. Bun now uses a dedicated kqueue watcher for these paths, monitoring both the parent process and child stdio. bun run --filter and bun run --parallel on macOS are also now covered.
| | Linux | macOS |
|---|---|---|
| `bun <file>` | ✅ prctl | ✅ Event loop watcher |
| `bun run` / `bunx` | ✅ prctl | ✅ kqueue watcher (new) |
| `--filter` / `--parallel` | ✅ prctl | ✅ MiniEventLoop watcher (new) |
Linux and macOS only (no-op on Windows).
process.execve() support
Bun now implements process.execve(execPath, args, env), matching the API added in Node.js v24. This POSIX syscall replaces the current process image in-place — it never returns on success.
```js
// Replace the current process with a new one
process.execve("/usr/bin/echo", ["echo", "hello from execve"], {
  PATH: process.env.PATH,
});
// ^ If successful, this line is never reached
```
Key details:
- stdio is inherited — file descriptors 0/1/2 are preserved across the exec boundary, while all other descriptors are marked close-on-exec to prevent leaks.
- Signal mask is reset before calling `execve(2)`.
- Throws `ERR_WORKER_UNSUPPORTED_OPERATION` when called from a worker thread.
- Throws `ERR_FEATURE_UNAVAILABLE_ON_PLATFORM` on Windows.
- Emits an `ExperimentalWarning` once per process, matching Node.js behavior.
- If `execve` fails, the process prints the error to stderr and aborts (consistent with Node.js behavior, since process state has already been mutated).
Bun.Terminal on Windows via ConPTY
Bun.Terminal and Bun.spawn({ terminal }) now work on Windows, powered by the Windows ConPTY API (CreatePseudoConsole). Previously, Bun.Terminal was only available on macOS and Linux.
```js
const terminal = new Bun.Terminal({
  cols: 80,
  rows: 24,
  onData(data) {
    process.stdout.write(data);
  },
});

const proc = Bun.spawn({
  cmd: ["cmd.exe", "/c", "echo", "hello from ConPTY"],
  terminal,
});

await proc.exited;
terminal.close();
```
Platform differences
The core behavior — child sees a TTY, write() reaches the child's stdin, child output reaches the data callback, resize() updates the child's window size — is the same on every platform. A few details differ on Windows:
- No termios on Windows. `inputFlags`, `outputFlags`, `localFlags`, and `controlFlags` always read as `0`, and setting them is a no-op.
- No echo without a child process. On POSIX, the kernel line discipline echoes `write()` input back to the `data` callback even with no process attached. ConPTY has no line discipline, so input is buffered for the next reader.
- ConPTY re-encodes output. The `data` callback receives semantically equivalent — but not byte-identical — escape sequences compared to what the child emitted. Colors and text are preserved; cursor-positioning sequences may be reordered or coalesced.
Thanks to @dylan-conway for the contribution!
using / await using no longer lowered when targeting Bun
Bun's underlying JavaScript engine (JavaScriptCore) natively supports the Explicit Resource Management proposal (using and await using). Starting in this release, Bun no longer transpiles these declarations into __using / __callDispose helper calls wrapped in try/catch/finally when the target is Bun.
This applies to:
- `bun run` / `bun <file>`
- `Bun.Transpiler({ target: "bun" })`
- `bun build --target=bun` (including `--compile` and `--bytecode`)
Other targets (browser, node) continue to lower using as before.
Before:
```js
// bun build --target=bun entry.js
var __using = (stack, value, async) => {
  /* ... */
};
var __callDispose = (stack, error, hasError) => {
  /* ... */
};
{
  let __stack = [];
  try {
    const x = __using(
      __stack,
      {
        [Symbol.dispose]() {
          /* ... */
        },
      },
      0,
    );
    console.log("hi");
  } catch (_catch) {
    var _err = _catch,
      _hasErr = 1;
  } finally {
    __callDispose(__stack, _err, _hasErr);
  }
}
```
After:
```js
// bun build --target=bun entry.js
{
  using x = {
    [Symbol.dispose]() {
      /* ... */
    },
  };
  console.log("hi");
}
```
This also fixes a bug where using inside a CommonJS module (.cjs) would inject an ESM import … from "bun:wrap" inside the CommonJS function wrapper, causing an Expected CommonJS module to have a function wrapper error instead of the expected TypeError for non-disposable values.
SIGHUP and SIGBREAK signal handling on Windows
process.on('SIGHUP', …) and process.on('SIGBREAK', …) now correctly receive Windows console-control events, matching Node.js behavior:
| Console event | Signal |
|---|---|
| `CTRL_CLOSE_EVENT` (closing the console window) | `SIGHUP` |
| `CTRL_BREAK_EVENT` (Ctrl+Break) | `SIGBREAK` |
Previously, these signal names were missing from Bun's Windows signal map, so registering a listener was treated as a plain EventEmitter event — no uv_signal_t was created, and the default handler would terminate the process immediately.
```js
// Gracefully handle console window close on Windows
process.on("SIGHUP", () => {
  cleanup();
  process.exit();
});

// Handle Ctrl+Break
process.on("SIGBREAK", () => {
  console.log("Ctrl+Break received");
  process.exit();
});
```
Thanks to @ig-ant for the contribution!
WebSocket perMessageDeflate: false now respected in upgrade requests
Previously, setting perMessageDeflate: false when creating a WebSocket connection was silently ignored — Bun always sent the Sec-WebSocket-Extensions: permessage-deflate; client_max_window_bits header in the upgrade request. This broke deployments where gateways or proxies reject upgrade requests that advertise unwanted extensions.
Now, passing perMessageDeflate: false correctly suppresses the extension header, matching the behavior of Node.js and the ws package.
```js
const WebSocket = require("ws");

// Extension header is now correctly omitted
const ws = new WebSocket("ws://localhost:3000", {
  perMessageDeflate: false,
});

// Also works with globalThis.WebSocket
const ws2 = new WebSocket("ws://localhost:3000", {
  perMessageDeflate: false,
});
```
Additionally, if the server responds with a Sec-WebSocket-Extensions header when the client did not offer any extensions, the handshake is now correctly failed per RFC 6455 §9.1 — matching upstream ws behavior.
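The acceptance rule can be sketched as follows — `validateExtensionsHeader` is a hypothetical helper; real handshake validation also checks extension parameters, not just names:

```typescript
// RFC 6455 §9.1: a server may only accept extensions the client offered.
function validateExtensionsHeader(
  offered: string[],             // extension names sent in the upgrade request
  responseHeader: string | null, // Sec-WebSocket-Extensions from the server
): boolean {
  if (!responseHeader) return true; // server accepted none: always fine
  if (offered.length === 0) return false; // accepted something never offered
  const accepted = responseHeader
    .split(",")
    .map((ext) => ext.split(";")[0].trim()); // strip parameters, keep the name
  return accepted.every((name) => offered.includes(name));
}

console.log(validateExtensionsHeader([], "permessage-deflate")); // false — fail the handshake
console.log(
  validateExtensionsHeader(
    ["permessage-deflate"],
    "permessage-deflate; client_max_window_bits",
  ),
); // true
```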
FreeBSD and Android Support
Bun now has 1st-party native builds for FreeBSD and Android.
Reduced memory usage for MongoDB & Mongoose
All TLS-using APIs in Bun — Bun.connect, Bun.SQL (Postgres & MySQL), Valkey, upgradeTLS, new WebSocket(), and node:tls — now share a single native SSL_CTX cache per VM. Connections with identical TLS configurations reuse the same SSL_CTX instead of allocating a fresh one (~50 KB of BoringSSL state + cert/key parsing) per connection.
This is especially impactful for database connection pools: a Postgres or MySQL pool with sslmode=require and N connections previously created N separate SSL_CTX objects. Now it creates one.
```js
import { SQL } from "bun";

// All connections in this pool now share a single SSL_CTX
const db = new SQL("postgres://user:pass@host/db?sslmode=require");

// These also share the same cached SSL_CTX since the config is identical
const conn1 = await Bun.connect({
  hostname: "example.com",
  port: 443,
  tls: true,
  socket: {
    /* ... */
  },
});
const conn2 = await Bun.connect({
  hostname: "example.com",
  port: 443,
  tls: true,
  socket: {
    /* ... */
  },
});
```
The cache is keyed by a SHA-256 digest of the TLS configuration fields. servername and ALPNProtocols are excluded from the digest (they're per-connection, not per-context), so Bun.connect({ tls: { servername: "x" } }) correctly shares the default SSL_CTX with tls: true.
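Conceptually, the cache key looks something like this — the field names and canonicalization here are illustrative; Bun hashes its actual native TLS config struct:

```typescript
import { createHash } from "node:crypto";

// Hash only per-context TLS fields; deliberately exclude per-connection
// ones (servername, ALPNProtocols) so they don't fragment the cache.
interface TLSConfig {
  cert?: string;
  key?: string;
  ca?: string;
  rejectUnauthorized?: boolean;
  servername?: string;      // excluded from the key
  ALPNProtocols?: string[]; // excluded from the key
}

function contextKey(cfg: TLSConfig): string {
  const { servername, ALPNProtocols, ...contextFields } = cfg;
  const canonical = Object.keys(contextFields)
    .sort() // key order must not affect the digest
    .map((k) => `${k}=${JSON.stringify((contextFields as any)[k])}`)
    .join(";");
  return createHash("sha256").update(canonical).digest("hex");
}

// Same context fields, different servername -> same cached SSL_CTX
console.log(
  contextKey({ rejectUnauthorized: true, servername: "a.example" }) ===
    contextKey({ rejectUnauthorized: true, servername: "b.example" }),
); // true
```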
This was the root cause behind long-standing memory leak reports when using tls.connect(), Bun.connect({tls}), socket.upgradeTLS(), and any library built on top of them (MongoDB, Mongoose, mysql2, etc.). Under connection churn — Postgres pools, Redis, fetch keepalive expiry, MongoDB heartbeats — RSS would grow rapidly until the garbage collector eventually frees the context. Now it avoids allocating unnecessary duplicate contexts.
```js
// Before: each iteration allocated a fresh SSL_CTX (~50 KB+)
for (let i = 0; i < 1000; i++) {
  const sock = tls.connect({
    host: "localhost",
    port: 5432,
    rejectUnauthorized: false,
  });
  sock.on("secureConnect", () => sock.destroy());
}
// RSS after: ~1 GB

// After: one shared SSL_CTX, RSS stays flat
// RSS after: ~168 MB
```
Upgraded JavaScriptCore engine
Bun's underlying JavaScript engine (JavaScriptCore) has been upgraded with 565 upstream commits, bringing numerous performance improvements, bug fixes, and new capabilities.
JavaScript performance & correctness
- Faster async functions — When an async function returns a value without any `await`, the returned promise is now optimized to avoid unnecessary overhead.
- Faster `Array.prototype.shift` — New `fastShift` implementation for arrays.
- Faster `JSON.parse` for short strings — `JSString` cells are now cached for short string values returned by `JSON.parse`.
- Faster `String.prototype.startsWith`/`endsWith` — Single-character checks now avoid resolving rope strings.
- Faster `Intl.NumberFormat` creation — Optimized construction and improved external memory reporting for `Intl.NumberFormat` and `Intl.PluralRules`.
- Faster `Array.prototype.indexOf` on NodeList — New fast path added.
Bug fixes from upstream
- Fixed `Promise.prototype.finally` throwing in `SpeciesConstructor` before calling `then`, matching spec behavior.
- Fixed `Object.defineProperties` Proxy trap ordering to match the spec.
- Fixed megamorphic inline cache property ownership check.
- Fixed TypedArray `toSorted`/`toReversed`/`with` to correctly snapshot the span.
- Fixed `Intl.Segmenter` `isWordLike` off-by-one error.
- Fixed `Intl.Locale` to canonicalize before overriding language.
- Fixed `Intl.DateTimeFormat` to preserve the original legacy `[[TimeZone]]`.
- Fixed several RegExp JIT issues.
- Fixed JIT compiler issues with hole-handling when rematerializing sunk double arrays and escaping `MultiGetByOffset` constants not convertible to double.
WebAssembly
- Relaxed SIMD support — Implements the relaxed SIMD proposal, adding instructions like `f32x4.relaxed_madd`, `i8x16.relaxed_swizzle`, and more.
- Memory64 improvements — Atomics, bulk memory operations, and grow/size in the OMG tier now support 64-bit memory.
- Fixed integer division/remainder with `INT_MIN / -1` in BBQ JIT.
- Fixed floating-point min/max negative-zero handling in BBQ JIT.
- Fixed crash on wide-arithmetic instructions.
Thanks to @sosukesuzuki for the upgrade!
bun publish now sends README metadata to the registry
Previously, bun publish included README.md in the published tarball but didn't populate the readme or readmeFilename fields in the JSON body sent to the npm registry. This meant packages published with Bun showed an empty README when queried via the registry API (e.g. npm view <pkg> readme), even though the tarball contained one.
Now, bun publish matches npm publish behavior by automatically finding the first README or README.* file (case-insensitive) in your workspace and including its contents in the version metadata sent to the registry. This works for both workspace publishes (bun publish) and tarball publishes (bun publish path.tgz). A readme field already present in package.json takes precedence.
```sh
# Before: registry API returned empty readme
npm view @my-scope/my-package readme # ""

# After: readme contents are properly sent
npm view @my-scope/my-package readme # "# my-package\n\nA great package..."
```
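The selection rule can be sketched as follows — `pickReadme` is a hypothetical helper operating on a directory listing, not Bun's implementation:

```typescript
// First file named README or README.* (case-insensitive) wins; an explicit
// `readme` field in package.json takes precedence over any file lookup.
function pickReadme(
  files: string[],
  packageJson: { readme?: string },
): string | undefined {
  if (packageJson.readme) return undefined; // package.json field wins
  return files.find((f) => /^readme(\..+)?$/i.test(f));
}

console.log(pickReadme(["index.js", "ReadMe.md", "README.txt"], {})); // "ReadMe.md"
console.log(pickReadme(["README.md"], { readme: "inline readme" })); // undefined
```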
Updated SQLite to 3.53.0
Bun's built-in SQLite has been updated from 3.51.2 to 3.53.0.
Notable changes in SQLite 3.53.0 include:
- New `SQLITE_DBCONFIG_FP_DIGITS` option for controlling floating-point precision when converting doubles to text
- New `SQLITE_LIMIT_PARSER_DEPTH` limit for controlling the maximum depth of the SQL parser stack
- New `SQLITE_PREPARE_FROM_DDL` flag for enforcing schema-level security constraints during statement preparation
Cross-language LTO for Zig ↔ C++ on Linux
Bun's binary is now built with full link-time optimization (LTO) across the Zig and C++ boundaries on Linux. Previously, the Zig-compiled object file was a native ELF object that the linker could link but not optimize across — meaning hundreds of small cross-language function calls (Zig → C++, C → Zig, allocator calls) were never inlined.
By emitting the Zig object as LLVM bitcode and participating in the same LTO link pass as the C/C++ side, LLVM can now inline and optimize across language boundaries:
| Boundary | Functions declared | Functions eliminated by inlining | % |
|---|---|---|---|
| Zig `export fn` → C++ | 336 | 142 | 42% |
| C `us_*` (µSockets) ← Zig | 115 | 79 | 69% |
| C++ `uws_*` (µWebSockets) ← Zig | 108 | 76 | 70% |
| `mi_free` (mimalloc) | — | all | 100% |
Measured impact (linux-x64):
| Benchmark | Before | After | Improvement |
|---|---|---|---|
| `Bun.escapeHTML` | 183.2 ns | 171.3 ns | 6.5% faster |
| `TextDecoder.decode` | 106.8 ns | 104.0 ns | 2.6% faster |
| HTTP throughput (`oha -n 1M -c 50`) | ~193,800 req/s | ~200,600 req/s | 3.5% faster |
This is a broad improvement — any hot path that crosses the Zig/C++ boundary benefits, including HTTP serving, text encoding/decoding, and HTML escaping.
Faster ESM module loading
Fixed an internal parser oversight where an ~8KB struct was being copied by value on every AST node allocation, causing unnecessary memcpy overhead during transpilation. Passing it by pointer instead eliminates the redundant copies, reducing _platform_memmove overhead from 7.5% to 2.9% of self time in profiling.
On a benchmark loading 500 ESM files, this results in approximately 12% faster module loading (~140ms → ~123ms).
Thanks to @sosukesuzuki for the contribution!
Reduced GC overhead for built-in objects
Bun's incremental garbage collector previously re-scanned ~63 types of built-in objects (Request, Response, Subprocess, Stats, Dirent, Timeout, and more) after every mutator yield during incremental GC — even though these objects already use write barriers that guarantee correctness without the extra pass.
This redundant work has been removed. Only visitChildren is called now for codegen'd classes, eliminating the overhead of re-walking every live instance of these common types during incremental GC cycles. Hand-written types that genuinely require output constraints (like EventTarget, AbortSignal, MessagePort, etc.) are unchanged.
This should reduce GC pause times, especially in applications with many live built-in objects.
Smaller binary size
Bun gets smaller on Windows and Linux. macOS binary size hasn't changed much.
| Target | Size change |
|---|---|
| Linux aarch64 | -9.07 MB |
| Linux aarch64-musl | -7.63 MB |
| Linux x64 | -8.58 MB |
| Linux x64-baseline | -8.64 MB |
| Linux x64-musl | -6.61 MB |
| Linux x64-musl-baseline | -6.75 MB |
| Windows aarch64 | -18.42 MB |
| Windows x64 | -17.66 MB |
| Windows x64-baseline | -17.67 MB |
tls.getCACertificates('system') now works without --use-system-ca
Previously, tls.getCACertificates('system') returned an empty array [] unless --use-system-ca or NODE_USE_SYSTEM_CA=1 was explicitly set. Node.js returns the OS trust store unconditionally for 'system' — the flag only affects 'default'. Bun now matches this behavior.
System certificates are lazy-loaded on first demand, so there's no startup cost unless 'system' is actually queried or --use-system-ca is set.
```js
import tls from "node:tls";

// Previously returned [] without --use-system-ca, now returns system CA certs
const systemCerts = tls.getCACertificates("system");
console.log(systemCerts.length); // > 0 on Linux/Windows
```
This also fixes a data race that could cause segfaults or truncated certificate lists when multiple threads (e.g. Workers) accessed root certificates concurrently during initialization.
Thanks to @cirospaciari for the contribution!
tls.getCACertificates('system') no longer stalls on managed Macs
On macOS, tls.getCACertificates('system') previously evaluated every keychain certificate using SecTrustEvaluateWithError with an SSL policy, causing trustd to attempt OCSP/CRL/AIA network fetches for each cert. On managed Macs running a NetworkExtension content filter, this turned a local lookup into ~10 seconds of wall-clock time as hundreds of outbound flows were individually signed and policy-denied by the filter.
This release rewrites the macOS keychain enumeration to match how Node.js and Chromium handle it:
- Removed `kSecMatchTrustedOnly` from the keychain query — this flag forced a redundant network-revocation-enabled evaluation of every cert before per-cert filtering even started.
- Replaced the trust-settings stub with a full parser — the previous implementation always returned `UNSPECIFIED`, causing every cert to fall through to the expensive `SecTrustEvaluateWithError` path. The new parser (ported from Node's `IsTrustSettingsTrustedForPolicy`) resolves certs via cheap local XPC lookups.
- Deferred `SecTrustEvaluateWithError` as a last resort — only certs with no decisive trust settings in any domain reach it, and when they do, it now uses `SecPolicyCreateBasicX509` + `SecTrustSetNetworkFetchAllowed(false)` to avoid network access entirely.
The result is functionally equivalent — OpenSSL still enforces EKU and basic-constraint checks at handshake time — but enumeration no longer triggers any network I/O.
--use-system-ca on Windows now loads intermediate and TrustedPeople certificates
When using `--use-system-ca` or `NODE_USE_SYSTEM_CA=1` on Windows, Bun now reads from the same certificate stores as Node.js, fixing `unable to get local issuer certificate` errors commonly seen in enterprise and intranet environments.
Previously, Bun only enumerated the `ROOT` store for `CURRENT_USER` and `LOCAL_MACHINE`. This meant that when a server only sent a leaf certificate without its intermediates (very common on intranets with corporate proxies or self-signed certificates), Bun couldn't build the certificate chain — even though Windows had the intermediates cached in its `CA` store.
Bun now mirrors Node.js's `ReadWindowsCertificates` behavior:

| | Before | After |
|---|---|---|
| Store names | `ROOT` | `ROOT`, `CA`, `TrustedPeople` |
| Locations | `LOCAL_MACHINE`, `CURRENT_USER` | + `GROUP_POLICY`, `ENTERPRISE` variants |
| `CERT_STORE_OPEN_EXISTING_FLAG` | ✗ | ✓ (don't create missing stores) |
| EKU server-auth filter | ✗ | ✓ (skip certs restricted to e.g. code-signing only) |
This brings `--use-system-ca` on Windows to parity with Node.js, making it significantly more reliable for enterprise environments with custom certificate authorities and proxy servers.
Event loop refactor
Large parts of the event loop have been refactored to improve reliability and simplify memory management.
Along the way, this fixed:
- `DuplexUpgradeContext` was never freed (full leak per `tls.connect({socket: duplex})`)
- `UpgradedDuplex.onEndCallback` was incorrectly wired to `onReceivedData`
- `SSLWrapper.init` leaked the strdup'd passphrase on error paths
- `TLSSocket.memoryCost` now correctly reports off-heap SSL state
Bugfixes
Node.js compatibility improvements
- Fixed: memory leak in `node:http` where `NodeHTTPResponse` and its associated buffers were never freed when `ondata` was re-registered after the request body had already been fully received, also causing the event loop to stay alive unnecessarily
- Fixed: `res.setTimeout()` on client-side `IncomingMessage` no longer keeps the event loop alive after the response completes. Previously, calling `res.setTimeout(90000)` would prevent the process from exiting for the full timeout duration, even when there was nothing left to do. The timer is now unref'd to match Node.js behavior. Also fixed `res.setTimeout()` to return `this`, clear the timer when called with `0`, and stack listeners via `res.on("timeout", cb)` for Node.js compatibility.
- Fixed: use-after-free crash in HTTPS requests when `checkServerIdentity` rejects a certificate due to hostname mismatch
- Fixed: `checkServerIdentity` callback passed to `https.request()` was ignored — the native check always ran instead
- Fixed: `https.createServer()` with a `ca` option incorrectly required client certificates, even when `requestCert` was not set to `true`
- Fixed: TLS certificate identity verification now falls back to the Subject Common Name (CN) when the certificate has no SAN entries, matching Node.js behavior
- Fixed: use-after-free crash in `node:zlib` when an `onerror` callback issued a re-entrant `write()` followed by `close()` on native zlib/brotli/zstd handles
- Fixed: heap-use-after-free crash in `node:zlib` when calling `.reset()` on a zlib, Brotli, or Zstd stream while an async `.write()` is still in progress on the threadpool
- Fixed: memory leak in `crypto.scrypt` where the callback and protected password/salt buffers were never released when the output buffer allocation failed (e.g. with an extremely large `keylen`)
- Fixed: `crypto.randomFill` and `crypto.randomFillSync` bounds-checking bugs that could cause a heap overflow when `offset` exceeded 2²⁴ due to `f32` precision loss, and a unit mismatch that caused integer underflow or silent under-fill for multi-byte typed arrays (e.g. `Float64Array`) when using the 3-argument form
- Fixed: `crypto.subtle.unwrapKey('jwk', ...)` promise never settling and leaking memory when the decrypted payload was valid JSON but not a valid JWK (e.g. missing the required `kty` field). The `TypeError` now correctly rejects the promise instead of escaping as an uncaught exception. Also fixed a smaller native memory leak when decrypted bytes weren't valid JSON at all.
- Fixed: `process.dlopen` crash when a native addon's init callback re-entrantly calls `napi_module_register()` (e.g. nested `dlopen` or registering additional modules from within an init function), which could invalidate the internal iterator and cause a use-after-free
- Fixed: `napi_create_external_arraybuffer` and `napi_create_external_buffer` now correctly return `napi_pending_exception` when a NAPI exception is already pending, matching Node.js behavior. Previously, calling these functions with a pending exception could lead to double-frees or orphaned GC cells with permanently-disarmed destructors.
- Fixed: fuzzer-detected crash in `process.setgroups()` and `process.hrtime()` with unexpected input
- Fixed: crash when lazy construction of `process.stdin`/`process.stdout`/`process.stderr` throws near the stack recursion limit and an `uncaughtException` listener is registered
- Fixed: `process.stdin` hanging or spinning at 100% CPU when reading from a FIFO pipe and the parent process dies or a new writer reappears during the drain loop
- Fixed: `process.stdin.isRaw` not updating after a successful `setRawMode()` call on Windows, which caused `readline` and other modules that check `isRaw` to incorrectly restore cooked mode
- Fixed: out-of-bounds read in `Buffer.from()` with invalid input in a less common encoding
- Fixed: `Buffer.copyBytesFrom()` producing incorrect results or throwing `RangeError: Out of memory` when called with a TypedArray view that has a non-zero `byteOffset` into its backing `ArrayBuffer`
- Fixed: memory leak in `Buffer.from(string, 'hex')` and `Buffer.from(string, 'base64')` when the input contained no valid encoded characters (e.g. `Buffer.from('zz', 'hex')`). The internal staging allocation was never freed when decoding produced zero bytes, causing ~4KB to leak per call. Also fixed a related issue where `ffi.toBuffer(ptr, 0, finalizer)` would silently drop the user's finalizer.
- Fixed: crash and out-of-bounds read in `Buffer#copy` and `Buffer#fill` when a `valueOf` callback detaches or resizes the underlying `ArrayBuffer` during argument coercion
- Fixed: memory leak in `child_process` stdout when reading from spawned processes — the internal `FileReader.onPull` memcpy path failed to free the drained buffer, causing linear RSS growth under sustained reads
- Fixed: memory leak in `tlsSocket.setSession()` where each call leaked one `SSL_SESSION` (~6.5 KB per call) due to a missing `SSL_SESSION_free` after `d2i_SSL_SESSION`
- Fixed: crash when calling `TLSSocket.getServername()` after the socket was closed, due to a null SSL pointer dereference
- Fixed: use-after-free in `tls.connect({socket: duplex})` when a pre-open duplex error races with the queued `StartTLS` task, causing freed `Handlers` memory to be read
- Fixed: memory leaks in `getPeerCertificate()` on server-side TLS sockets (mTLS) where an X509 reference from `SSL_get_peer_certificate` was never freed and a BIO was leaked (~800 bytes per call) in `X509Certificate.raw` due to an incorrect destructor capture
- Fixed: memory leak in `node:net` when `socket.connect()` fails synchronously on reused handles (e.g. connecting to a nonexistent Unix socket path), which leaked one native socket struct per failed reconnect
- Fixed: memory leak where `fs.watch(path, { persistent: false })` watchers were never garbage collected after `.close()`, caused by a reference count underflow that permanently pinned each watcher as a GC root
- Fixed: memory leak in `fs.watch()` on macOS where the resolved directory path was never freed, leaking ~path-length bytes on every `fs.watch(<directory>)` call
- Fixed: a use-after-free race condition in `fs.watch` on macOS where closing a watcher while events were firing could crash due to the CoreFoundation thread reading freed memory
- Fixed: a crash on macOS where `FSEventStreamCreate` could return `NULL` under rapid `fs.watch().close()` churn, causing a CoreServices crash when the `NULL` stream was passed to `FSEventStreamScheduleWithRunLoop`
- Fixed a crash in `fs.readdirSync()` with `{ encoding: 'buffer', recursive: true }` when a subdirectory fails to open (e.g. due to a self-referential symlink causing `ELOOP`). An internal memory management bug caused a use-after-free in the error cleanup path, leading to crashes or corrupted results.
- Fixed: memory leak in `fs.readdirSync()` with `{ recursive: true, withFileTypes: true }` when the call fails partway through (e.g. due to `ELOOP` or `EACCES`), where `Dirent.path` references were not properly released on the error path
- Fixed: crash involving long file paths in `fs.cp`, `fs.promises.cp`, and `fs.cpSync` with `{ recursive: true }`
- Fixed: `fs.cp`/`fs.cpSync` on Linux and FreeBSD copied symlinks with the source symlink's own path as the target instead of the path the symlink actually pointed at, causing copied symlinks to point back into the source tree and dangle if the source was deleted. Also fixed `cpSync` throwing `ENOENT` when copying a symlink to a destination whose parent directory doesn't exist.
- Fixed: `fs.cp`/`fs.cpSync` on Windows leaking an OS handle for every symlink or junction in the source tree, which could exhaust the process handle table when copying large trees (e.g. `node_modules` with pnpm-style junctions)
- Fixed: memory leak in `fs.symlinkSync`, `fs.linkSync`, and `fs.renameSync` where path arguments were not freed when a later argument was rejected, causing RSS to grow by hundreds of MB in error-heavy loops
- Fixed: memory leak in `dns.lookup` and `dns.resolve*` when more than 32 concurrent c-ares DNS requests are in flight, where overflow results were never freed
- Fixed: use-after-free crashes in `node:http2` when re-entrant JS callbacks (e.g. `session.request()` inside a timeout listener, an options getter, or a write callback) triggered a hashmap rehash, invalidating internal stream pointers. Streams are now heap-allocated and stable for the lifetime of the session. (@Jarred-Sumner)
- Fixed: a crash in `node:vm` where `SourceTextModule.link()` with many imports could trigger a use-after-free when the garbage collector concurrently iterated the module's resolve cache during a HashMap rehash
- Fixed: ESM module evaluation where sibling static imports could incorrectly skip waiting for async-pending strongly connected components, causing TDZ errors when accessing bindings from cyclic module graphs with top-level `await`
- Fixed: deadlock when a non-entry module with top-level `await` dynamically imports a module that imports it back
- Fixed: importing a module with top-level `await` from multiple sibling imports in the same module graph caused a `ReferenceError: Cannot access before initialization` because sibling modules skipped the spec-mandated wait for the shared TLA dependency to settle (@sosukesuzuki)
- Fixed: `node:test` top-level `test()` ignoring `{ skip }` and `{ todo }` options — nested `Suite#test` honored them, but the top-level entry point did not
- Fixed: child processes spawned by Bun on macOS could inherit an effectively infinite `RLIMIT_NOFILE` soft limit, causing programs that read the limit into an `int` to misbehave (e.g. failing socket reads). The startup file descriptor limit raise is now capped at `1 << 20`, matching Node.js. (@alii)
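The `Buffer.copyBytesFrom()` fix above is easy to check with a TypedArray view at a non-zero `byteOffset` — a minimal sketch (runs on Node.js 19+ and Bun; the values are illustrative):

```javascript
// A Float64Array view starting 8 bytes into its backing ArrayBuffer.
const backing = new ArrayBuffer(16);
const view = new Float64Array(backing, 8, 1);
view[0] = 42;

// copyBytesFrom must copy the view's own 8 bytes, not bytes from offset 0.
const buf = Buffer.copyBytesFrom(view);
console.log(buf.length); // 8 — one float64
console.log(buf.readDoubleLE(0)); // 42
```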
Bun APIs
- Fixed: use-after-free when reading `.listener` on a closed client socket created via `Bun.connect()`
- Fixed: use-after-free when accessing `handle.listener` on a socket that failed to connect, where the native socket's `handlers` pointer was left dangling after the per-connection `Handlers` allocation was freed
- Fixed a crash when calling `socket.reload()` or `listener.reload()` on active TCP/TLS sockets. Previously, reloading handlers reset the internal active connection counter to zero, which caused integer overflow panics in debug builds and segfaults (heap-use-after-free) in release builds. This affected reloads both inside and outside of socket event handlers.
- Fixed: crash (NULL dereference / SIGSEGV) when closing a socket inside a TLS 1.2 renegotiation handshake callback (`Bun.connect` with TLS)
- Fixed: use-after-free in TLS server ALPN callback when handling concurrent handshakes, which could cause crashes or garbage data during protocol negotiation on servers with `ALPNProtocols` configured (@Jarred-Sumner)
- Fixed: TLS sockets could leak memory and stay open indefinitely when `SSL_shutdown` failed to flush the `close_notify` alert (e.g. kernel buffer full or peer already gone), because the socket would wait for a peer event that never arrives. Now the socket closes immediately in this case. (@cirospaciari)
- Fixed: use-after-free crash when calling `flush()` on a TLS socket after `end()`, where the deferred TLS close_notify handshake could dereference already-freed handler memory
- Fixed: DNS resolution hanging indefinitely on macOS when retrying without `AI_ADDRCONFIG` on loopback-only network configurations. The retry path was polling the wrong mach port, causing `fetch()`, `Bun.connect()`, and other network APIs to hang instead of completing the DNS lookup.
- Fixed: `Bun.dns.lookup()` with an oversized hostname could cause a crash
- Fixed: DNS resolution over TCP could trigger an assertion failure when c-ares requested both readable and writable polling on the same socket file descriptor. `FilePoll` now correctly merges both directions into a single `epoll_ctl(CTL_MOD)` call on Linux, and properly submits two `EV_DELETE` changes on kqueue (macOS/FreeBSD) when both directions are registered. (@Jarred-Sumner)
- Fixed: when using `Bun.spawn` with `stdin: "pipe"` without ever reading the `.stdin` property, the stdin pipe file descriptor would leak and remain open until the `Subprocess` was garbage collected — even after the child process had already exited. The stdin pipe fd is now properly closed when the child process exits.
- Fixed: `Bun.spawn` incorrectly closing caller-owned file descriptors passed as extra `stdio` slots (index ≥ 3) after the subprocess was garbage collected, causing `EACCES`/`EBADF` errors on subsequent reuse of those descriptors (@cirospaciari)
- Fixed: `Bun.spawn` returning `null` for `proc.stdio[N]` when a caller-supplied file descriptor was passed as extra stdio (index ≥ 3), instead of returning the fd number back. Caller-owned fds are now correctly exposed via `proc.stdio[N]` while still never being closed by the subprocess.
- Fixed: `Bun.spawn` reading uninitialized memory when `stdio[N>=3]` is `undefined` or a sparse array hole, which could expose garbage file descriptors or crash by dereferencing invalid pointers in release builds
- Fixed: memory leak where `Bun.spawn()` subprocess objects were never garbage collected when stdout/stderr pipes drained asynchronously after the child process exited (e.g. when a grandchild process inherited the pipe)
- Fixed: `Bun.spawn({ ipc })` subprocesses were never garbage collected after the child exited, leaking the subprocess along with its stdout/stderr buffers, stdin `FileSink`, and other retained objects for the lifetime of the process
- Fixed: memory leak in subprocess `PipeReader` when a stdout/stderr read failed with a real error (e.g. `EBADF`, `EIO`). The `PipeReader` struct, its buffered bytes, `FilePoll`, and pipe fd would all leak, and the leaked poll's keep-alive ref would prevent the event loop from ever exiting.
- Fixed: subprocess `'exit'` event not firing on Linux when multiple child processes exit simultaneously with `stdio: 'ignore'`. On Linux, the pidfd poll was registered with `EPOLLONESHOT`, which caused the kernel to disarm the fd before user-space could process it if a nested event loop tick occurred. The pidfd is now registered as level-triggered so a dropped event is harmlessly re-delivered.
- Fixed: 2 rare crashes in `Bun.serve()`
- Fixed: memory leak in `Bun.serve()` when a direct `ReadableStream` handler writes synchronously without returning a promise — each such request leaked the internal response sink and its buffer (~400 bytes per request)
- Fixed: GC root leak in `Bun.serve()` when a `"direct"` response stream rejects while a `controller.end()` or `controller.flush(true)` promise is pending due to transport backpressure
- Fixed: heap-use-after-free and `RequestContext` leak when a chunked request body exceeds `maxRequestBodySize` and the fetch handler returns a pending Promise
- Fixed: memory leak in `server.fetch(string)` where the intermediate URL buffer was leaked on every call
- Fixed: `server.fetch()` crashing with a segfault when passed a `BigInt` argument, now properly rejects with a `TypeError`
- Fixed: memory leak when calling `server.reload()` with a WebSocket config that lacks `open` or `message` handlers — discarded handler functions were permanently rooted and never garbage collected
- Fixed: crash in `Bun.serve()` when `websocket.perMessageDeflate` was set to a non-boolean primitive (e.g. number, string, bigint, or symbol). Now throws a `TypeError` with a descriptive message instead.
- Fixed: `Bun.serve` with `development: true` and a `[serve.static]` plugin whose `setup()` throws would cause requests to hang forever instead of returning an error
- Fixed: rare crash in `server.upgrade()` when an option getter (e.g. `get data()`) mutates `req.headers` during the upgrade
- Fixed: `server.upgrade()` crashing or firing the `open` handler twice when a user-defined getter on the `data` or `headers` option re-entrantly called `server.upgrade()` on the same request
- Fixed: `Bun.FileSystemRouter` crashing with a panic when route filenames contained certain byte values
- Fixed: memory leak in `Bun.FileSystemRouter` where accessing `.params` on a `MatchedRoute`
- Fixed: rare `Bun.FileSystemRouter` use-after-free crash
- Fixed: rare heap corruption in `Bun.file().json()`
- Fixed: memory leak in `Bun.zstdDecompressSync` where the partial output buffer was not freed when streaming decompression failed (e.g. corrupt or truncated zstd frames), and a double-free when known-size fast-path decompression failed
- Fixed: file descriptor leak in `Bun.Glob` when encountering `NAMETOOLONG` errors during directory traversal (@alii)
- Fixed: `Bun.pathToFileURL()` crashing with an out-of-bounds panic when given a relative path that, when joined with cwd, exceeded 4096 bytes
- Fixed: memory leak in `Bun.password.hash()` and `Bun.password.hashSync()` where the hash output buffer was not freed after being copied into a JavaScript string
- Fixed: crash in `Bun.markdown.ansi()` when input contained invalid UTF-8 lead bytes (lone continuation bytes `0x80`–`0xBF` and bytes `0xF8`–`0xFF`) by treating them as replacement characters instead of passing them to the multibyte decoder
- Fixed: `Bun.S3Client({ queueSize })` panicking when `queueSize` exceeded 255, and silently overriding any valid `queueSize` (1–255) to 255
- Fixed: `Bun.s3.list()` panicking when `prefix`, `delimiter`, `continuationToken`, or `startAfter` exceeded ~341 characters after URL-encoding, since S3 keys can be up to 1024 bytes and percent-encoding can triple that size.
- Fixed: `Bun.Archive.prototype.files()` memory leak when processing corrupted/truncated archives where previously-read entries were not freed on mid-stream `readData` failures
- Fixed: `Bun.RedisClient` getting permanently stuck in a failed state after reconnection attempts were exhausted, `close()` was called, or a fatal socket error occurred. Previously, calling `client.connect()` would not recover the client. Now `connect()` properly resets internal state and replays the handshake, allowing the client to recover without replacing the instance.
- Fixed: `RedisClient` TLS connections now properly verify hostnames against the server certificate when `rejectUnauthorized: true` is set. Previously, hostname mismatches and self-signed certificates were silently accepted due to unreachable verification code paths. Connections with mismatched hostnames now correctly reject with `ERR_TLS_CERT_ALTNAME_INVALID`, and untrusted certificates reject with the appropriate verification error.
- Fixed: `FileSink` memory leak where native instances were never freed when a pending buffered write failed (e.g. `EPIPE` after the reader closes), causing the `FileSink`, its outgoing buffer, and associated refs to leak for the lifetime of the process
- Fixed: memory leak when passing `Bun.file()` as `cert`, `key`, or `ca` in TLS options — each config parse leaked one buffer per file
- Fixed: crash in `new Bun.Terminal()` when passing a non-object argument (e.g. a number, string, or boolean) instead of an options object
- Fixed: `Bun.udpSocket()` leaked memory when creation failed (e.g. invalid port, bind failure, or throwing getter) because the internal strong reference prevented garbage collection of the wrapper object
- Fixed: a use-after-free in `UDPSocket.send()` and `UDPSocket.sendMany()` where user code in `valueOf()` or `toString()` callbacks could detach an ArrayBuffer (via `.transfer()`) between payload capture and the actual send, causing reads from freed memory
- Fixed: heap out-of-bounds write in `UDPSocket.sendMany()` when the socket's connection state changed mid-iteration via user JS callbacks (e.g. `valueOf()`, array index getters). This could cause memory corruption or crashes when `connect()` or `disconnect()` was called synchronously during a `sendMany()` call.
- Fixed: `UDPSocket.setTTL()` and `UDPSocket.setMulticastTTL()` crashing with a null pointer dereference when the argument's `valueOf` closes the socket during coercion
- Fixed: `import("bun:main")` was incorrectly resolved as the npm `main` package instead of the built-in `bun:main` module, due to a missing alias mapping in the runtime transpiler (@dylan-conway)
- Fixed: use-after-free crash in `HTMLRewriter.transform()` when a document or element handler returns a rejected promise during the final `end()` chunk
- Fixed: memory leak in `HTMLRewriter` where handler structs allocated by `.on()` and `.onDocument()` were never freed when the rewriter was garbage-collected, causing unbounded memory growth
- Fixed: `HTMLRewriter` attribute iterators could read freed memory when saved outside an element handler callback — calling `.next()` on a leaked iterator now safely returns `{done: true}` instead of dereferencing a dangling pointer. Iterators are also correctly detached when `setAttribute` or `removeAttribute` mutates the underlying attribute buffer.
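Several of the fixes above (the `UDPSocket.send` and `Buffer#fill` coercion bugs) share one hazard: argument coercion runs arbitrary user JavaScript, which can detach a buffer mid-call. A minimal sketch of the pattern — the detach here uses a `MessagePort` transfer list, and all names are illustrative:

```javascript
const { port1, port2 } = new MessageChannel();
const buf = new ArrayBuffer(8);

const tricky = {
  valueOf() {
    // Side effect during coercion: the transfer list detaches `buf`.
    port1.postMessage(buf, [buf]);
    return 0;
  },
};

Number(tricky); // triggers valueOf()
console.log(buf.byteLength); // 0 — detached; native code must re-validate

port1.close();
port2.close();
```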
bun:sql
- Fixed: `bun:sql` PostgreSQL connections that entered a `.failed` state (e.g. `ECONNREFUSED`, SSL refused with `sslmode=require`, or a normal `close()`) were never garbage collected, leaking the entire native connection including buffers, statements, and SSL state. The request queue's backing buffer was also not freed during cleanup.
- Fixed: `sql.unsafe()` with multiple semicolon-separated statements in simple mode returned wrong column names for all result sets after the first, and leaked memory from previous field descriptors
- Fixed: memory leak in `bun:sql` when querying PostgreSQL array-typed columns (`text[]`, `int8[]`, `json[]`, `bytea[]`, `bool[]`, etc.) that caused RSS to grow ~72 MB per 1,000 iterations instead of stabilizing after warmup
- Fixed: a potential heap buffer overflow when parsing binary-format `int4[]`/`float4[]` arrays from a malicious or buggy PostgreSQL server. The server-provided `len` field is now validated against the actual column byte length before iterating, preventing out-of-bounds reads and writes.
- Fixed: MySQL stored procedures called via prepared statements (tagged templates or `sql.unsafe(..., params)`) would resolve after only the first result set, causing the trailing OK packet to surface as an unhandled `TypeError` outside the caller's `catch` block
- Fixed: heap buffer overflow in `sql` MySQL client when the user-supplied parameter array was mutated (e.g. via a side-effecting getter) between query preparation and binding, which could cause out-of-bounds writes in release builds
- Fixed: MySQL `sql` client returning garbage error messages when a cached failed prepared statement was re-executed, caused by a dangling pointer into an overwritten socket read buffer
- Fixed: MySQL `.raw()` returning length-prefix bytes in the buffer for length-encoded columns (JSON, VARCHAR, TEXT, BLOB, ENUM, SET, GEOMETRY, NEWDECIMAL), causing garbled output when decoded as UTF-8
- Fixed: MySQL client could panic or silently read out-of-bounds memory when a server sent a short auth nonce during `AuthSwitchRequest` for `mysql_native_password`. The client now validates nonce length and rejects with `ERR_MYSQL_MISSING_AUTH_DATA` before accessing the buffer. Also fixed potential divide-by-zero with empty nonces for `caching_sha2_password` and invalid indexing with empty public key payloads.
- Fixed: a use-after-free crash in `bun:sql` MySQL client when a prepared statement's column reallocation failed
- Fixed: MySQL BLOB parameters could be corrupted when `ArrayBuffer.transfer()` or GC occurred during query parameter binding
- Fixed: memory leak in MySQL adapter when using dynamic interpolation in SQL template literals
- Fixed: a crash in `Bun.sql` MySQL client when a query's `.catch()` callback called `connection.close()`
- Fixed: `SSL_CTX` leak in Postgres and MySQL connections when path coercion throws after SSL context creation
- Fixed: `us_listen_socket_add_server_name` not propagating duplicate-hostname errors, preventing `App.h`'s rollback from firing
Web APIs
- Fixed: `FormData` multipart boundary format now matches WebKit exactly (`----WebKitFormBoundary{hex}` with 4 leading dashes and capital `K`), fixing compatibility issues with downstream multipart parsers including OpenAI's API
- Fixed: memory leak in `FormData` serialization when a `Bun.file()` entry fails to read (e.g. `ENOENT`). Previously, constructing a `Response` or `Request` from a `FormData` containing a valid `Bun.file()` followed by an invalid one would leak the already-read file buffers on each failed attempt.
- Fixed: `TextDecoder.decode` reading through a stale pointer when an `options.stream` getter detaches or transfers the input `ArrayBuffer`, which could cause incorrect output, non-deterministic behavior, or crashes due to heap corruption
- Fixed: memory leak in `TextDecoder` when decoding UTF-16LE and UTF-16BE encoded buffers. Each successful call to `.decode()` leaked the decoded output buffer.
- Fixed: empty `Blob` and `File` objects incorrectly displaying as `[Blob detached]`/`[File detached]` in `console.log` and after `structuredClone`
- Fixed: use-after-free in `Blob` when a duplicated blob's `content_type` was heap-allocated, causing `Response` headers to read freed memory and return garbage values (e.g. after `Bun.file(path, { type: "..." })` followed by `new Response(file)`)
- Fixed: memory leak in structured-clone deserialization of Blob/File objects where truncated or malformed payloads would leak allocated buffers for `content_type`, bytes payload, `Store`, and heap `Blob` on error paths
- Fixed: `structuredClone()` crash when serializing large `ArrayBuffer`, `SharedArrayBuffer`
- Fixed: memory leak in `fetch()` when following long HTTP redirect chains
- Fixed: memory leak when using `fetch()` with percent-encoded `data:` URLs, where the intermediate decoded buffer was never freed on each call
- Fixed: `fetch()` silently hanging against certain hosts due to ECH GREASE being enabled in the TLS ClientHello. Some servers and middleboxes treated the `encrypted_client_hello` extension as hostile — completing the TLS handshake but never sending a response. This aligns Bun's `fetch` TLS behavior with `curl`, Node.js, and Bun's own `node:tls`.
- Fixed: `WebSocket.close()` and `WebSocket.terminate()` called during the `CONNECTING` state would leave the socket permanently stuck in `CLOSING`, never fire `close`/`error` events, prevent the process from exiting, and leak the WebSocket instance. Now correctly transitions to `CLOSED`, fires `error` then `close` (code `1006`, `wasClean: false`) per the spec, and releases all internal references.
- Fixed: memory leak in `WebSocket` when TLS options were provided but the connection failed during option parsing or validation
- Fixed: memory leak where every `new WebSocket("wss://…")` routed through an HTTP CONNECT proxy (tunnel mode) leaked one internal `HTTPUpgradeClient` struct due to a missing reference count release
- Fixed: per-connection memory leak when using WebSocket clients over `wss://` through an HTTP CONNECT proxy (tunnel mode). The internal I/O-layer reference was never released because the tunnel path doesn't adopt a uSockets socket, so the close handler never fired — leaking send/receive FIFOs, deflate state, and poll refs for every connection.
- Fixed: a `TypeError [ERR_INVALID_STATE]: Controller is already closed` error thrown when streaming small files (e.g. `Bun.file(small).stream()`) where the first pull returns data and EOF simultaneously, causing the controller's `close()` to be called twice
- Fixed: a race condition where concurrent `ReadableStream` instances (e.g. `process.stdin` and `fetch(file://...)` bodies) could close each other due to a shared mutable `closer` array at the class factory scope. This caused `stdin` to be spuriously closed, breaking subsequent operations like `setRawMode`.
- Fixed: concurrent `ReadableStream` instances (e.g. `Bun.stdin.stream()` and `fetch(file://...)` bodies) could spuriously close each other due to a shared mutable EOF flag, causing `stdin` to close unexpectedly with `EBADF` errors
- Fixed: `TransformStream` instances that were dropped without being explicitly closed, errored, or aborted were never garbage collected, causing an out-of-memory crash in long-running applications. A GC-root cycle between the global object's guarded objects set and the internal writable stream kept the entire stream graph permanently reachable.
- Fixed: a long-lived `AbortSignal` reused across many `addEventListener`/`removeEventListener` cycles would accumulate dead closures in memory indefinitely. Each call to `addEventListener` with a `{ signal }` option registered an internal abort algorithm, but removing the listener (via `removeEventListener`, `{ once: true }` firing, or `removeAllEventListeners`) never cleaned up that algorithm — causing unbounded memory growth on the signal.
- Fixed: a non-303 redirect with a `ReadableStream` body now correctly rejects with `TypeError` instead of `UnexpectedRedirect`.
- Fixed: `AbortSignal.reason` is now properly forwarded to the request body's `ReadableStream.cancel(reason)`.
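The `AbortSignal` leak above came from the `{ signal }` listener option; a sketch of the usage pattern that no longer accumulates memory (plain `EventTarget`, illustrative names, runs on Node.js and Bun):

```javascript
const target = new EventTarget();
const controller = new AbortController();
let calls = 0;
const handler = () => { calls++; };

// Each add registers an internal abort algorithm on the signal;
// removing the listener must also release that algorithm.
for (let i = 0; i < 10_000; i++) {
  target.addEventListener("tick", handler, { signal: controller.signal });
  target.removeEventListener("tick", handler);
}

target.dispatchEvent(new Event("tick"));
console.log(calls); // 0 — every listener was removed
```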
Security
- Fixed: HTTP request smuggling attack vector
- Fixed: missing bounds check in maliciously-crafted `Blob` deserialization
- Fixed: integer overflow in IPC `advanced` serialization mode with malicious input
Worker fixes
- Fixed: stack overflow crash when closing a deep chain of nested transferred `MessagePort`s
- Fixed: `MessagePort` memory leak when workers are terminated without explicitly closing their ports. When `port.onmessage` was assigned or `port.ref()` was called inside a Worker, the internal self-reference was never released during worker teardown, causing every such `MessagePort` to leak for the lifetime of the process.
- Fixed: a race condition crash in `MessageEvent` when using `BroadcastChannel` or `MessagePort` where the GC marker thread could observe a torn variant in `m_data` during concurrent access, causing a `SIGSEGV` in release builds
- Fixed: segfault in `worker.getHeapSnapshot()` caused by a cross-thread race condition where the parent VM's `HandleSet` was mutated from the worker thread without holding the parent VM's lock
- Fixed: memory leak and crash when terminating Workers that have a `PerformanceObserver` without calling `.disconnect()` — a reference cycle between `Performance` and `PerformanceObserver` prevented both objects from being freed
- Fixed: IPC subprocess cleanup on Windows when `uv_read_start` fails after pipe open — the embedded `SendQueue` and its pending close task are now properly cancelled before the allocation is freed, preventing a use-after-free crash
- Fixed: a crash on Linux (glibc) during Worker teardown caused by a stale `.eh_frame_hdr` section reference in the stripped release binary
Timers
- Fixed: native memory leak in `setTimeout` when `clearTimeout(t)`, `t.refresh()`, or `t._repeat = N` is called synchronously inside the timer's own callback. The native `TimeoutObject` struct was never freed because the post-callback cleanup only checked for the `.FIRED` state, missing transitions to `.CANCELLED` or `.ACTIVE` that occur when the timer is cleared, refreshed, or converted to an interval during execution.
- Fixed: crash when calling `clearImmediate` on a `setImmediate` followed by garbage collection (`Bun.gc(true)`) — the cleared immediate's internal reference could panic during event loop cleanup if the GC had already finalized the JS wrapper
- Fixed: `timer.ref()` on an already-fired `setTimeout` or `setImmediate` no longer incorrectly keeps the event loop alive, which previously caused the process to hang indefinitely
- Fixed: `setTimeout` with an out-of-range delay no longer leaves a pending JS exception when the timeout overflow warning triggers user code that throws (e.g., a throwing getter on `process._exiting`), which previously caused crashes in debug builds and unexpected errors in release builds
bun install
- Fixed: `bun install` no longer hangs on stalled TLS handshakes and will instead time out.
- Fixed: `bunx @scope/name` no longer matches unrelated system binaries in `$PATH`. When running scoped packages with `bunx`, the bin name was guessed by stripping the scope (e.g. `@uidotsh/install` → `install`), then searched against the full system `$PATH`. If the guessed name collided with an existing system binary like `/usr/bin/install`, `/usr/bin/git`, or `/usr/bin/find`, the system binary would be silently executed instead of the package's actual bin.
- Fixed: `bun install --force` now correctly replaces corrupted entries in the global store instead of silently keeping the broken version
- Fixed: `bun install` hanging indefinitely when a tarball download returns a 4xx/5xx HTTP error (e.g. `404 Not Found`) during the install phase, particularly when resolving from an existing lockfile with an empty cache. Both the hoisted and isolated linkers now fail fast with a clear error message instead of blocking forever. (@alii)
- Fixed: `bun install` hanging indefinitely when a tarball integrity check fails with the isolated linker. When a tarball's SHA-512 didn't match the manifest (e.g. due to a registry redirect serving wrong content), the isolated installer would deadlock instead of reporting an error.
- Fixed: a race condition in `bun install` where worker threads could read lockfile data (`packages`/`string_bytes`) that was simultaneously being reallocated by the main thread, potentially causing crashes or failed tarball resolution when installing workspaces with relative tarball paths
- Fixed: `bun add` (and `remove`/`link`/`unlink`/`bunx`) crashing with a segfault when a positional argument exceeded 2048 bytes
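The scoped-bin collision is easiest to see as a sketch. The helper below is hypothetical (not Bun's actual implementation); it reproduces the old, buggy guess of stripping the scope from the specifier:

```javascript
// Hypothetical sketch of the old, buggy bin-name guess for scoped packages.
// Stripping the scope turns "@uidotsh/install" into "install", which then
// collided with /usr/bin/install when searched against the system $PATH.
function guessBinName(specifier) {
  if (specifier.startsWith("@")) {
    // "@scope/name" -> "name": the bug — scope information is discarded
    return specifier.split("/")[1];
  }
  return specifier;
}

console.log(guessBinName("@uidotsh/install")); // "install" — same name as /usr/bin/install
```

The fix resolves the package's actual `bin` entry instead of trusting a name guessed from the specifier.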
JavaScript bundler
- Fixed: a type confusion bug in the bundler plugin error handling that could cause a segfault when the plugin builtin threw synchronously
- Fixed: memory leak in `Bun.build()` when using `sourcemap: 'inline'` without `outdir` — the intermediate sourcemap JSON buffer was not being freed after base64-encoding
- Fixed: a crash in `Bun.build()` and `Bun.Transpiler` when the `loader` or `define` options contained an empty-string key (e.g., `loader: { "": "js", ".ts": "ts" }`) that left uninitialized memory slots, causing a segfault when the entries were later hashed or freed
- Fixed: `bun build --target=browser` panic involving long `"browser"` field paths
- Fixed: crash reading `.referrer` on a `ResolveError` after the resolving frame had returned, particularly with non-ASCII source paths
- Fixed: bundler leaving behind empty `else {}` blocks after dead code elimination when not using `--minify-syntax`
- Fixed: crash with malicious input in `Bun.Transpiler().transformSync()`
- Fixed: crash in `Bun.Transpiler` when the `loader` argument contains non-Latin-1 (UTF-16) characters; it now properly returns a `TypeError` instead of panicking
Module resolver
- Fixed: wildcard
exportspatterns (e.g."./*": "./dist/packages/*") failing to resolve subpaths containing@characters, such astest-pkg/@scope/sub/index.js. The resolver incorrectly treated the@in the subpath as a version delimiter, causing resolution to fail even though the file existed on disk. This notably affectedember-source@6.12and its@ember/*,@glimmer/*, and@simple-dom/*subpackages. - Fixed: crash in the module resolver when a previously inaccessible directory (e.g. due to
EACCES) becomes readable again, caused by reading a cached error entry as a valid pointer - Fixed: use-after-free when resolving module specifiers containing both a query string (
?) and non-ASCII characters (e.g.import("./target.js?v=café")), which could cause crashes or corrupted resolved paths - Fixed: use-after-free crash when resolving
http://,https://, or//specifiers containing non-ASCII characters - Fixed: a use-after-free in the runtime auto-install path where
enqueueDependencyToRootcould read from a dangling pointer into the lockfile's dependency buffer after it was reallocated byfromNPM - Fixed: auto-install resolution corruption where resolving a package's subpath (e.g.
nanoid/non-secure) could fail with "Cannot find module" after auto-installing a second package, due to cached directory info pointing at a reused threadlocal buffer - Fixed: memory leak when resolving
tsconfig.jsonfiles withextendschains — intermediateTSConfigJSONstructs and theirpathsmaps were never freed, causing repeated leaks on every HMR reload orFileSystemRouter.reload()call - Fixed: memory leak in
BunString__toThreadSafewhere the originalStringImplreference was never released when creating an isolated copy for thread safety, causing one leakedStringImplper call in code paths likeBun.file()and asyncfs.write()
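For illustration, a simplified wildcard matcher (not Bun's implementation) shows the intended `exports` semantics from the first item: the `*` must capture the subpath verbatim, including `@` characters, with no version-delimiter parsing.

```javascript
// Simplified sketch of package.json wildcard "exports" matching.
// The "*" captures everything between the pattern's prefix and suffix,
// and that capture is substituted verbatim into the target.
function matchExport(pattern, target, subpath) {
  const star = pattern.indexOf("*");
  if (star === -1) return pattern === subpath ? target : null;
  const prefix = pattern.slice(0, star);
  const suffix = pattern.slice(star + 1);
  if (!subpath.startsWith(prefix) || !subpath.endsWith(suffix)) return null;
  const captured = subpath.slice(prefix.length, subpath.length - suffix.length);
  return target.replace("*", captured);
}

// "@" in the captured subpath must pass through untouched:
console.log(matchExport("./*", "./dist/packages/*", "./@scope/sub/index.js"));
// "./dist/packages/@scope/sub/index.js"
```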
Dev server / HMR
- Fixed: stale asset index in Bun's dev server causing panics or crashes when one CSS file had a syntax error and another CSS file was subsequently edited
- Fixed: use-after-free crash in Bun's dev server (`Bun.serve` with framework) when a `"use client"` directive is removed from a component that had a pending resolution failure, causing the directory watcher to read freed memory on the next file change
- Fixed: double-free crash in Bun's dev server when shutting down after a directory watch resolves a previously-failing import dependency
- Fixed: HMR bundles crashing with `X is not a function` when a file has multiple `import { ... } from` statements referencing the same barrel package with `sideEffects: false` (regression from barrel import optimization in v1.3.0)
- Fixed: use-after-free crash in `bun dev` when saving files in editors that use atomic rename (vim, emacs, IntelliJ) — the dev server's incremental graph stored pointers into a buffer that could be reallocated by the file watcher thread before the bundler consumed them
- Fixed: `bun dev` on Linux dropping file change notifications when multiple inotify events were coalesced into a single batch — e.g. an atomic save creating a temp file and renaming it over the target in quick succession would silently lose the rename, leaving the file unwatched until restart
- Fixed: `bun --hot` on macOS no longer stops detecting file changes after the first atomic write (temp file + rename) when watching multiple imported modules, which caused the module graph to oscillate between old and new state
- Fixed: `--hot` mode could print wrong source locations or silently drop errors when a file-watcher event arrived between a module's rejection and its error being reported — the reload now defers until the error is printed against the correct sourcemap
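Several of the watcher fixes above involve "atomic saves." As a plain-shell sketch of what editors like vim do, the save writes a temp file and renames it over the target, which replaces the watched file's inode — the case the watcher must now handle:

```shell
# Atomic save, as many editors perform it: write a temp file, then rename it
# over the target. The target's inode is replaced, which is why naive
# inode-based watchers lose track of the file after the first save.
printf 'updated contents\n' > /tmp/app.js.tmp
mv /tmp/app.js.tmp /tmp/app.js
cat /tmp/app.js
```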
bun build --compile
- Fixed: ELF layout regression causing some executables on WSL1 to fail with `ENOEXEC`
- Fixed: out-of-bounds write in `toUTF16Alloc` when handling invalid UTF-8 input in the Windows `bun build --compile` metadata flag
CSS Parser
- Fixed: CSS `background-clip` with vendor prefixes and multi-layer backgrounds (≥2 background layers) could cause a double-free crash and incorrect output where prefixed clip values were silently dropped instead of being flushed as separate declarations
bun test
- Fixed: crash in `bun test --isolate` on macOS ARM64 caused by the concurrent garbage collector visiting a half-initialized global object during test isolation swaps
- Fixed: `bun test --isolate` and `bun test --parallel` crashing with segfaults when test files load a native NAPI addon whose deferred finalizers outlive the file's global object. The `--parallel` coordinator also no longer silently retries crashed workers, which previously masked panics and let the run exit 0. Fatal signals (SIGSEGV, SIGABRT, etc.) now abort the entire test run with a clear error message.
- Fixed: `bun test --changed` silently skipping test files whose only dependency path to a changed source file went through a tsconfig `paths` alias with a bare-looking key (e.g. `@/*`, `~/*`, `components/*`). The resolver's `packages=external` short-circuit was incorrectly marking these aliased imports as external before tsconfig `paths` could resolve them.
- Fixed: `test.each()`/`describe.each()` table arrays could be garbage collected between the `.each(arr)` call and the subsequent `("name", cb)` invocation, causing callbacks to receive corrupted data or throw `Expected array, got …` errors
- Fixed: crash when a custom asymmetric matcher's implementation throws an exception during `.asymmetricMatch()` — the exception is now properly propagated to JavaScript instead of triggering an internal assertion failure
- Fixed: `mock.module`/`vi.mock` calling the module resolver before validating the callback argument, which caused unexpected behavior
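The `--changed` fix hinges on alias keys that look like bare package names. A hypothetical sketch (not Bun's code) of the check that must run before anything is marked external:

```javascript
// Hypothetical sketch: a bare-looking specifier may still be a tsconfig
// "paths" alias, so alias matching must run before a packages=external
// short-circuit classifies it as an external package.
const paths = { "@/*": ["./src/*"], "components/*": ["./src/components/*"] };

function matchesPathsAlias(specifier) {
  return Object.keys(paths).some((key) => {
    const star = key.indexOf("*");
    if (star === -1) return specifier === key;
    // Wildcard alias: match on the prefix before "*"
    return specifier.startsWith(key.slice(0, star));
  });
}

console.log(matchesPathsAlias("components/Button")); // true — an alias, not an external package
console.log(matchesPathsAlias("react"));             // false — genuinely bare
```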
Bun Shell
This release fixes over 70 bugs in the Bun Shell.
- Fixed: `cd` in Bun's shell hanging forever when encountering errno values other than `ENOTDIR`/`ENOENT`/`ENAMETOOLONG` (e.g. `EACCES`, `ELOOP`, `EIO`). It now properly reports the error to stderr and exits with code 1.
- Fixed: a crash in Bun's shell when `cd` or `.cwd()` received a path longer than 4096 bytes — oversized paths now correctly return `ENAMETOOLONG` instead of segfaulting
- Fixed: `[[ -f path ]]` in Bun Shell incorrectly returned true for directories, character devices, sockets, and FIFOs instead of only regular files
- Fixed: shell tilde expansion dropping path segments after command substitutions in compound words (e.g. `echo ~/$(echo bin)/subdir` now correctly outputs `/home/user/bin/subdir` instead of `/home/user/bin`)
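The compound-word case from the last item can be checked against plain POSIX shell, whose expansion Bun Shell now matches: the tilde, the command substitution, and the literal suffix all survive within one word.

```shell
# Tilde + command substitution + trailing segment in a single compound word.
# POSIX shells expand this to "$HOME/bin/subdir"; Bun Shell now agrees
# (previously the "/subdir" segment after the substitution was dropped).
echo ~/$(echo bin)/subdir
```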
TypeScript types
- Fixed: `bun-types` FFI type declarations failing to compile under `tsgo` (TypeScript native preview) due to duplicate computed property keys in the `FFITypeToArgsType` and `FFITypeToReturnsType` interfaces
Windows
- Fixed: `Bun.connect()` on Windows named pipes could crash (heap-buffer-overflow under ASAN) or leak memory on connection close due to the socket handlers being incorrectly marked as server mode instead of client mode
- Fixed: heap buffer overflows in Windows path normalization (`normalizePathWindows`) that could cause heap corruption when handling paths near or exceeding the 32,767 UTF-16 code unit limit. Affected code paths include UTF-8 to UTF-16 conversion, device paths (`\\.\`), absolute path normalization, separator-free paths, and relative path joining with `dirfd`. It now properly returns `ENAMETOOLONG` instead of writing out of bounds.
- Fixed: crash on Windows (`panic: integer overflow`) when tearing down hundreds of spawned child processes during process exit, caused by an unsigned integer underflow in the libuv event loop's `active_handles` counter
- Fixed: a panic on Windows when resolving error names from libuv error codes (e.g. `ENOENT`, `EBADF`) due to an integer overflow in internal errno translation (@dylan-conway)
- Fixed: a crash on Windows (`Panic: invalid enum value`) when libuv returned unmapped error codes during file system operations like `fs.readFile`
CLI and runtime
Pipeline producer exit no longer clobbers downstream pager's terminal state
When piping Bun's output to tools like `less`, `fzf`, or `fx` (e.g. `bun script.js | less`), Bun would unconditionally restore its startup terminal (termios) snapshot at exit. Because termios state belongs to the underlying `/dev/pts/*` device — not the file descriptor — this would overwrite the raw mode that the downstream pager had already set, leaving it unresponsive to keypresses.
```sh
# Before: 'q' does nothing in less — terminal is stuck in cooked/line-buffered mode
bun app.js | less

# After: less, fzf, fx, and other pagers work correctly
bun app.js | less
```
The fix gates the exit-time `tcsetattr` on whether Bun actually modified the terminal during its lifetime. When Bun is a pipeline producer (stdout is a pipe), fds it never touched are now left alone at exit. When stdout is a TTY (e.g. `bun run vim`, the crash handler, a `--watch` reload), the unconditional restore is preserved so the shell prompt always comes back to a usable state.
Other CLI/runtime fixes
- Fixed: `bun -p` with top-level `await` expressions now returns the final completion value instead of the first awaited value — e.g. `bun -p '(await 1) + 1'` now correctly prints `2` instead of `1`