To install Bun:

```sh
# with curl
curl -fsSL https://bun.sh/install | bash

# with npm
npm install -g bun

# with PowerShell
powershell -c "irm bun.sh/install.ps1|iex"

# with Scoop
scoop install bun

# with Homebrew
brew tap oven-sh/bun
brew install bun

# with Docker
docker pull oven/bun
docker run --rm --init --ulimit memlock=-1:-1 oven/bun
```

To upgrade Bun:

```sh
bun upgrade
```

bun test --isolate and bun test --parallel
> In the next version of Bun: bun test gets experimental support for per-file test isolation, and we made it fast.
>
> Bun (@bunjavascript), April 16, 2026
Two new flags for bun test that dramatically speed up large test suites:
`--isolate` runs each test file in a fresh global environment within the same process. Between files, Bun drains microtasks, closes all sockets, cancels timers, kills subprocesses, and creates a clean global object. A VM-level transpilation cache means shared dependencies are only parsed once; subsequent files reuse the cached source, skipping redundant transpilation entirely.
`--parallel[=N]` distributes test files across up to N worker processes (defaults to CPU count). Files are partitioned for cache locality, and idle workers steal work from the busiest remaining queue. Workers automatically run with `--isolate` between files. Output remains identical to serial execution: per-test `console.log`/`console.error` output is buffered and flushed atomically, so files never interleave.
```sh
# Run tests with isolation (fresh global per file)
bun test --isolate ./tests

# Run tests in parallel across all CPU cores
bun test --parallel ./tests

# Run tests in parallel with 8 workers
bun test --parallel=8 ./tests
```
Both flags work with existing options including `--bail`, `--randomize`, `--dots`, JUnit reporting, LCOV coverage, and snapshots. All transpiler/resolver flags (`--define`, `--loader`, `--tsconfig-override`, `--conditions`, etc.) are forwarded to workers. In `bun test --parallel`, each worker also gets the `JEST_WORKER_ID` and `BUN_TEST_WORKER_ID` environment variables.
bun test --shard=M/N for splitting tests across CI jobs
Split test files across multiple CI runners with the new --shard flag, matching the syntax used by Jest, Vitest, and Playwright.
```sh
# In a GitHub Actions matrix with 3 jobs:
bun test --shard=1/3
bun test --shard=2/3
bun test --shard=3/3
```
Test files are sorted by path for determinism and distributed round-robin across shards, keeping each shard balanced to within one file of each other. The shard index is 1-based (1 <= index <= count).
Composes naturally with other flags:
- `--changed`: sharding is applied after the changed-files filter
- `--randomize`: shuffling happens after shard selection, within the shard
If a shard ends up with zero files (e.g. 2 test files with `--shard=5/5`), it exits 0 gracefully rather than erroring with "No tests found!". Invalid inputs like `0/3`, `4/3`, or `1/0` produce a clear error message and exit non-zero.
```
--shard=2/3: running 3/10 test files

f01.test.ts:
(pass) t

f04.test.ts:
(pass) t

f07.test.ts:
(pass) t
```
bun test --changed
bun test now supports a --changed flag that only runs test files affected by your git changes. This works by building the full import graph of your test files and filtering down to only those that transitively depend on a file that git reports as changed.
```sh
# Run tests affected by uncommitted changes (unstaged + staged + untracked)
bun test --changed

# Run tests affected by changes since a specific commit, branch, or tag
bun test --changed=HEAD~1
bun test --changed=main

# Combine with --watch to re-filter on every restart
bun test --changed --watch
```
When combined with `--watch`, editing any local source file (even one not currently imported by the selected tests) triggers a re-run. Each restart re-queries git, so the filtered set always tracks the working tree.
The graph analysis scans imports without entering node_modules and without linking or emitting code, so the overhead is minimal. If no changed files are found, `--watch` keeps the process alive; without `--watch`, `bun test --changed` exits cleanly.
bun install streams tarballs to disk
> In the next version of Bun: bun install streams tarballs to disk. In a large repo, this reduced memory by 17x.
>
> Jarred Sumner (@jarredsumner), April 18, 2026
bun install now extracts package tarballs while they are still downloading, instead of buffering the entire .tgz and decompressed .tar in memory before extraction. Only the in-flight HTTP chunks plus libarchive's fixed per-archive buffers are needed; the full archive is never materialized in memory.
Integrity hashing runs incrementally over the compressed bytes and is verified before the extracted tree is promoted into the cache. Error handling and retries remain unchanged.
Streaming extraction is enabled by default. If you encounter issues, it can be disabled by setting BUN_FEATURE_FLAG_DISABLE_STREAMING_INSTALL=1.
Faster bun install with isolated linker
In a peer-heavy monorepo, `bun install --linker=isolated`:
| Before | After |
|---|---|
| 20.5s | 2.4s |
The final .bun/ store layout is byte-identical to previous versions. Previously hanging installs now complete in seconds.
Thanks to @robobun for the contribution!
Source maps use up to 8x less memory
Bun's internal source map representation has been rearchitected. Instead of decoding VLQ mappings into a full in-memory list on first access, Bun now writes a compact bit-packed binary format directly during transpilation and reads it in place: no whole-file decode step, no VLQ round-trip.
The new format exploits the structure of transpiler output (most mappings share the same source index, and generated/original columns frequently match) to store mappings at ~2.4 bytes per mapping, down from 20 bytes previously.
Memory usage (TypeScript compiler, `_tsc.js`, 563k mappings):

| Representation | Resident after first `.stack` |
|---|---|
| `Mapping.List` (Bun v1.3.12) | ~11.3 MB (20 B/mapping) |
| LEB128 stream | 2.92 MB (5.4 B/mapping) |
| Bit-packed windows | 1.29 MB (2.4 B/mapping) |
Decoding now costs close to zero, lookups cost about 6% more, and encoding gets faster.
| Benchmark | This release | v1.3.12 | Δ |
|---|---|---|---|
| error-capturestack.mjs (mitata, multi-window) | 1.37–1.41 µs | 1.27–1.32 µs | +6–8% |
| plain `while(1) new Error().stack` loop | 657 ns | ~810 ns | −19% |
| 5-frame multi-window synthetic (1500-line file) | 818 ns | 769 ns | +6% |
| first `.stack` on a 150k-line module | ~0.1 ms | ~5 ms | −98% |
| RSS load → first stack (150k-line module) | +0.06 MB | +2.3 MB | |
The main tradeoff is that after compression the new format is ~20% larger than the VLQ-encoded compressed equivalent, but compressing source maps is unnecessary for server-side JavaScript.
bun build --compile binaries also benefit: the blob is embedded directly and loaded as a zero-copy view at runtime, shrinking compiled binaries by ~1.8 MB for large source maps.
Bun's runtime uses 5% less memory
> In the next version of Bun: Bun's JavaScript runtime uses 5% less memory.
>
> Bun (@bunjavascript), April 14, 2026
Bun's memory allocators have been upgraded:
- mimalloc moves from v2 to v3 (along with several bugfixes in our internal fork)
- We implemented libpas scavenger support for both Windows & Linux, reclaiming memory faster
Together these reduce baseline memory usage and fix a class of hangs and crashes in long-running processes across macOS, Linux, and Windows.
Upgraded JavaScriptCore engine
Bun's underlying JavaScript engine (WebKit's JavaScriptCore) has been upgraded, merging 1,316 upstream commits. This brings a wide range of performance improvements and bug fixes.
Performance improvements from upstream
- Inline cache for `array.length = N`: setting array length is now IC-cached
- Inline cache for `undefined`, `true`, `false`, `null` as property keys
- String length folding in DFG/FTL: the compiler can now constant-fold `.length` on known strings
- `toUpperCase` intrinsic: `toUpperCase()` is now JIT-intrinsified
- `String#indexOf` single-character fast path in DFG/FTL
- Redundant `mov` removed from await/yield bytecode
- Cached default date formatters: `Date.toLocaleString()` and friends are faster on repeat calls
- Wider bulk copy in GC-safe memcpy/memmove: faster garbage collector memory operations
- SIMD-accelerated `equalIgnoringASCIICase`: faster case-insensitive string comparisons
- SIMD fast path for identifier parsing
Thanks to @sosukesuzuki for doing the upgrade!
Faster addEventListener, dispatchEvent, and DOM events
We cherry-picked ~270 audited upstream WebKit commits into Bun's forked WebCore bindings layer, bringing performance wins to Bun's event system and promise internals.
File streaming improvements
When using `new Response(Bun.file(path))` or `routes: { "/route": new Response(Bun.file(path)) }` in `Bun.serve()`, file responses on SSL and Windows now stream incrementally instead of buffering the entire file into memory. Previously, incremental streaming was only supported over plain HTTP and only for static routes.
This significantly reduces memory usage for large file responses in those environments.
Range Request Support in Bun.serve()
`Bun.serve()` now supports Range requests for file-backed responses, both in static `routes:` entries and in `fetch` & dynamic handler responses. Incoming `Range: bytes=...` headers on whole-file 200 responses are automatically handled, returning `206 Partial Content` with the appropriate `Content-Range` header, or `416 Range Not Satisfiable` when the range is invalid.
```ts
const server = Bun.serve({
  port: 3000,
  routes: {
    "/video.mp4": new Response(Bun.file("./video.mp4")),
  },
  fetch(req) {
    return new Response(Bun.file("./large-file.bin"));
  },
});

// Clients can now request byte ranges:
const res = await fetch("http://localhost:3000/video.mp4", {
  headers: { Range: "bytes=0-1023" },
});
console.log(res.status); // 206
console.log(res.headers.get("Content-Range")); // "bytes 0-1023/..."
```
Suffix ranges (`bytes=-500`), open-ended ranges (`bytes=1024-`), and all standard forms from RFC 9110 are supported. Multi-range requests fall through to a full-body response.
Up to 5.5x faster gzip compression with zlib-ng
Bun's zlib dependency has been upgraded from the Cloudflare zlib fork (last updated Oct 2023) to zlib-ng 2.3.3, the same library used by Node.js 24+ and Chromium. zlib-ng is actively maintained and provides runtime-dispatched SIMD acceleration across AVX-512, AVX2, SSE2, NEON, SVE, and RISC-V vector extensions for CRC32, Adler32, longest-match, and chunk-copy operations.
This is a drop-in improvement: no API changes, no code changes required.
| Operation | Before | After | Speedup |
|---|---|---|---|
| gzipSync html-128K L1 | 275 µs | 107 µs | 2.59× |
| gzipSync html-1M L1 | 2.23 ms | 892 µs | 2.50× |
| gzipSync json-128K L6 | 897 µs | 483 µs | 1.86× |
| deflate 123K L6 (async) | 373 µs | 68 µs | 5.48× |
| gunzipSync html-1M | 561 µs | 522 µs | 1.07× |
| gunzipSync binary-128K | 31.6 µs | 26.7 µs | 1.18× |
| createGzip stream L1 1M | 3.76 ms | 2.68 ms | 1.40× |
| createGunzip stream 1M | 1.24 ms | 1.18 ms | 1.05× |
| fetch() 11KB gzip decode | 42.9 µs | 41.6 µs | parity |
Compression is significantly faster across the board, with decompression seeing modest gains. The only trade-off is ~2 µs higher per-stream initialization cost from larger internal state structs, which is amortized away on payloads ≥ 4 KB.
Faster array iteration in Bun's internals
Array iteration in Bun's internal C++/Zig code is now up to 1.43× faster for common cases. When iterating over a JavaScript array that uses simple Int32 or Contiguous storage (the common case), Bun now reads elements directly from JSC's butterfly memory instead of calling getIndex() per element.
This optimization is applied inside JSArrayIterator.next(), so every internal call site benefits automatically โ including expect().toContain(), expect().toBeOneOf(), new Blob([...]), and more.
| Benchmark | Before | After | Speedup |
|---|---|---|---|
| `expect(arr).toContain(last)` (1000 ints) | 11,493 ns | 8,031 ns | 1.43× |
| `expect(x).toBeOneOf(arr)` (1000 ints) | 13,736 ns | 10,643 ns | 1.29× |
| `new Blob([100 strings + 100 buffers])` | 9,703 ns | 8,301 ns | 1.17× |
| `new Blob([1000 strings])` | 56,817 ns | 49,630 ns | 1.14× |
The fast path safely revalidates the butterfly pointer before each read, falling back to the generic path if the array is mutated during iteration (e.g. by a getter or toString side effect).
Thanks to @sosukesuzuki for the contribution!
SHA3 support in WebCrypto and node:crypto
Bun now supports SHA3-224, SHA3-256, SHA3-384, and SHA3-512 hash algorithms across both the Web Crypto API and node:crypto.
This works with crypto.createHash, crypto.createHmac, crypto.getHashes, crypto.subtle.digest, and crypto.subtle.sign/verify with HMAC.
```js
import crypto from "crypto";

// node:crypto
const hash = crypto.createHash("sha3-256");
hash.update("Hello, world!");
console.log(hash.digest("hex"));
// => "f345a219da005ebe9c1a1eaad97bbf38a10c8473e41d0af7fb617caa0c6aa722"

const hmac = crypto.createHmac("sha3-256", "secret-key");
hmac.update("Hello, world!");
console.log(hmac.digest("hex"));

// Web Crypto API
const digest = await crypto.subtle.digest(
  "SHA3-256",
  new TextEncoder().encode("Hello, world!"),
);
console.log(Buffer.from(digest).toString("hex"));
```
This also includes an update to BoringSSL, which brings ML-KEM and ML-DSA (NIST FIPS 203/204) post-quantum algorithms into the underlying library for future support.
X25519 deriveBits support in SubtleCrypto
SubtleCrypto.deriveBits() now works with the X25519 algorithm, completing support for X25519-based key agreement in Bun's Web Crypto API.
Previously, calling crypto.subtle.deriveBits() with X25519 keys threw a NotSupportedError. This is now fully implemented per the spec, including proper rejection of small-order public keys per RFC 7748 § 6.1.
```js
const keyPair = await crypto.subtle.generateKey("X25519", false, [
  "deriveBits",
]);
const remoteKeyPair = await crypto.subtle.generateKey("X25519", false, [
  "deriveBits",
]);

const sharedSecret = await crypto.subtle.deriveBits(
  { name: "X25519", public: remoteKeyPair.publicKey },
  keyPair.privateKey,
  256,
);
console.log(new Uint8Array(sharedSecret)); // 32-byte shared secret
```
Passing null or 0 as the length returns the full 32-byte output:
```js
const bits = await crypto.subtle.deriveBits(
  { name: "X25519", public: remoteKeyPair.publicKey },
  keyPair.privateKey,
  null, // returns full 32-byte output
);
```
Thanks to @panva for the contribution!
WebSocket client: support ws+unix:// and wss+unix://
The WebSocket client now supports connecting over Unix domain sockets via the ws+unix:// and wss+unix:// URL schemes, matching the convention used by the popular npm ws package.
```js
// Connect to a Unix domain socket
const ws = new WebSocket("ws+unix:///tmp/app.sock");

// With a request path (split on first ':', same as the npm `ws` package)
const ws2 = new WebSocket("ws+unix:///tmp/app.sock:/api/stream?x=1");

// TLS over a Unix socket
const ws3 = new WebSocket("wss+unix:///tmp/app.sock", {
  tls: { rejectUnauthorized: false },
});
```
- The `Host` header defaults to `localhost`, matching Node's `http.request({ socketPath })` and the `ws` package.
- Proxies are automatically skipped for Unix socket URLs.
- `wss+unix://` runs a full TLS handshake over the domain socket.
Standalone HTML now inlines file-loader assets imported from JS
When using bun build --compile --target browser on an HTML entry point, assets imported from JavaScript via the file loader (e.g. import logo from "./logo.svg") are now correctly inlined as data: URIs in the standalone HTML output.
Previously, only assets referenced directly in HTML (like `<link rel="icon" href="./logo.svg">`) were inlined. JS-imported assets were emitted as relative file paths (e.g. `./logo-kygw735p.svg`), but no sidecar file was ever written, resulting in broken images and missing resources.
Now, all file-loader assets are inlined regardless of whether they're referenced from HTML or JS:
```ts
// src/entry.ts
import logo from "./logo.svg";
import reactLogo from "./react.svg";

const img = document.createElement("img");
img.src = logo; // now a data:image/svg+xml;base64,... URI
```

```sh
bun build --compile --target browser --outdir ./dist ./src/index.html
ls dist/
# index.html: fully self-contained, no sidecar files
```

bunx claude and bunx @anthropic-ai/claude-code fix
`bunx claude` now works as a shorthand for `bunx @anthropic-ai/claude-code`, matching the existing `bunx tsc` → `typescript` alias.
Bugfixes
Node.js compatibility improvements
- Fixed: `Worker` lifecycle crashes: calling `worker.terminate()`, sending messages to a worker, or letting a worker exit naturally could crash
- Fixed: `tls.connect({ host, port })` matches Node.js behavior when `servername` is not explicitly passed
- Fixed: rare crash involving the `passphrase` option in TLS configuration
- Fixed: `process.ppid` was cached at startup and never updated. It now calls `getppid()` on every access, matching Node.js behavior, so orphan-detection patterns (`if (process.ppid === 1)`) work correctly. (@Jarred-Sumner)
- Fixed: `--cpu-prof` output now matches the `.cpuprofile` format used by Node.js and Chrome DevTools: `lineNumber`/`columnNumber` reflect the function definition site and per-line samples are reported via `positionTicks`, so profiles load correctly in Chrome DevTools, VS Code, and speedscope. (@Jarred-Sumner)
- Fixed: `socket.setTimeout()` in `node:net` incorrectly firing the `timeout` event even while the socket was actively receiving data. The inactivity timer was only being reset on writes, not reads, which caused spurious timeouts for sockets consuming HTTP response bodies, database query results, or long-lived pipes. This also fixes `mongoose` timeout errors that occurred with Bun but not Node.js.
- Fixed: `node:http2.createServer` (h2c) compatibility with strict HTTP/2 peers (curl, Node.js `http2.connect`, Envoy proxy): the server was advertising an invalid `ENABLE_PUSH` setting and emitting a malformed end-of-stream sequence that nghttp2-based clients reject. This also likely fixes `@grpc/grpc-js` servers running on Bun behind Envoy. Thanks to @robobun for the contribution!
- Fixed: rare crash in `StringDecoder.prototype.write()`
- Fixed: crash during process exit when native N-API modules (sqlite3, duckdb, kuzu, node-llama-cpp) wrap parent objects before children: finalizers now run in LIFO order matching Node.js (@dylan-conway)
- Fixed: `node:dgram` `addSourceSpecificMembership` and `dropSourceSpecificMembership` socket options were inverted for both IPv4 and IPv6
- Fixed: `os.freemem()` on Linux returning significantly lower values than Node.js by reading `MemAvailable` from `/proc/meminfo` instead of using `sysinfo.freeram`, which excludes reclaimable page cache
- Fixed: rare `fs.watch()` deadlock
- Fixed: rare `path.win32.resolve` crash
- Fixed: rare `fs.writeSync` crash on Windows
- Fixed: a race condition in root certificate initialization involving workers
- Fixed: `stat`/`lstat`/`fstat`/`fstatat`/`statfs` syscalls on macOS could surface spurious `EINTR` errors to JavaScript (e.g. when accessing iCloud/FileProvider directories or autofs mounts) instead of automatically retrying (@dylan-conway)
- Fixed: `export { "a b c" } from './b.mjs'` re-export clauses with string-literal names (containing spaces or other non-identifier characters) produced invalid output during single-file transpilation, causing a `SyntaxError` at runtime
- Fixed: `Error.captureStackTrace()` crashing when `Error.stackTraceLimit` is set to a non-numeric value (e.g. `"foo"`) or deleted
- Fixed: `preventDefault()` not working on a non-passive event listener that follows a passive listener on the same event (@Jarred-Sumner)
- Fixed: `[Clamp]` integer conversion in WebIDL bindings now rounds half-to-even per spec (@Jarred-Sumner)
- Fixed: data races in `BroadcastChannel` when using workers that could cause crashes during channel registration/unregistration and worker termination.
Bun APIs
- Fixed: `Bun.inspect()` and `console.log()` showing `[class Function]` instead of the real class name for DOM and `node:stream/web` constructors like `ReadableStreamBYOBReader`, `URL`, `Request`, `Response`, `Blob`, and `Event`. (@Jarred-Sumner)
- Fixed: `server.stop()` hanging indefinitely in certain cases
- Fixed: file descriptor leaks in static file routes in certain cases
- Fixed: error handler returning a file-backed `Response` being silently dropped
- Fixed: duplicate `Content-Length` headers on file responses in certain cases
- Fixed: `Bun.SQL` MySQL adapter hanging indefinitely on multi-statement queries against ManticoreSearch's MySQL interface, a regression introduced in Bun v1.2.12.
- Fixed: crash in `Bun.YAML.parse` in rare cases
- Fixed: crash when reading sliced non-regular file blobs
- Fixed: panic when `fstat` reports an extremely large file size during `Bun.file()` reads, and improved error propagation for out-of-memory conditions in the same code path
- Fixed: hypothetical crash in `Bun.RedisClient` when the client was garbage collected after a command threw during argument validation
- Fixed: `Bun.pathToFileURL` crash on Windows in rare cases
- Fixed: `MKADDRESSBOOK` CardDAV HTTP method being silently dropped by `Bun.serve` and rewritten to `GET` by `fetch()`. The method is now recognized across all HTTP handling paths.
- Fixed: `AbortSignal` memory leak in WebSocket upgrade requests when using the `Bun.serve({ fetch, websocket })` catch-all path
- Fixed: ~1 KB memory leak per `--hot` reload cycle
- Fixed: memory leak in `Glob.scan()` and `Glob.scanSync()`
Web APIs
- Fixed: calling `controller.abort()` on a `fetch()` queued behind the maximum simultaneous requests limit left the promise pending forever. Aborted queued requests now reject immediately with `AbortError` without consuming a connection slot. (@Jarred-Sumner)
- Fixed: memory leak in `AbortSignal.timeout()` in certain cases
- Fixed: `fetch()` hanging forever when using `tls: { checkServerIdentity }` in certain error cases
- Fixed: rare crash when aborting a `fetch()` request
- Fixed: `fetch()` with `HTTP_PROXY` incorrectly injected `:80` or `:443` into the proxy request URI for URLs without an explicit port, breaking strict proxies like Charles, mitmproxy, and corporate middleboxes. Now matches `curl` and Node.js behavior per RFC 7230 § 5.3.2.
- Fixed: rare crash when calling `ws.close()` on a `wss://` WebSocket connecting through an HTTP CONNECT proxy during the TLS handshake (@dylan-conway)
- Fixed: tiny memory leak affecting WebSocket client send buffers and other internal queues that fully drain between writes
- Fixed: hypothetical crash when resolving a promise
- Fixed: hypothetical crash when converting records with numeric-index string keys, and resizable/growable-shared `ArrayBuffer` or `TypedArray` in certain Web APIs
bun install
- Fixed: `bunx @anthropic-ai/claude-code` failing with "could not determine executable to run"
JavaScript bundler
- Fixed: `bun build --compile` producing broken binaries in certain cases on macOS ARM64 since Bun v1.3.12
- Fixed: HTML import `etag` not updating when referenced JS/CSS chunks changed, causing browsers to 304-cache stale HTML that pointed at old chunk filenames, resulting in blank pages until hard refresh (@dylan-conway)
- Fixed: `bun build --compile` producing broken executables on NixOS/Guix hosts: the interpreter path rewrite is now skipped when running on a Nix/Guix-managed host.
- Fixed: crash in `Bun.build` when a bundler plugin's `onResolve` handler races with a failed sibling import in the same file. Now `Bun.build` correctly throws with the resolve errors.
- Fixed: a crash caused by the resolver attempting to auto-install invalid npm package names (e.g. strings containing spaces, newlines, or braces) passed to `mock.module()`, `Bun.resolveSync()`, `import()`, or `require.resolve()`
CSS Parser
- Fixed: CSS bundler incorrectly stripping top-level `@layer` ordering declarations (e.g. `@layer theme, base, components, utilities;`) when bundling, which broke Tailwind CSS and other cascade-layer-dependent stylesheets. This was a regression from v1.3.0.
bun test
- Fixed: `toMatchSnapshot()` failing with `--rerun-each`, `retry`, or `repeats` because the snapshot counter was not reset between iterations, causing subsequent runs to look for non-existent snapshot keys (e.g. `"test name 2"` instead of `"test name 1"`) and erroring with "Snapshot creation is disabled" in CI (@chrislloyd)
Windows
- Fixed: a race condition in the runtime transpiler that could cause a crash on Windows when multiple threads were transpiling concurrently (@dylan-conway)
- Fixed: crash on Windows caused by `NaN`/`+inf` values reaching the internal timer when the GC scheduler computed degenerate delay values (@dylan-conway)
- Fixed: rare crash in `readFile` on Windows when reading files larger than 2GB
- Fixed: rare crash in `fs.readdir` on Windows when a third-party filesystem or filter driver (e.g. network redirector, virtual FS, or AV minifilter) returned a malformed directory entry.
- Fixed: resource exhaustion in libpas that could cause crashes in long-running processes on Windows, `pthread_once` implementation issues, and a missing `mprotect` gate in memory decommit. (@Jarred-Sumner)
JavaScript engine
- Fixed: a RegExp correctness issue where `exec`/`test` didn't reload the internal RegExp after `ToLength(lastIndex)` coercion (@sosukesuzuki)
- Fixed: `import { "*" as x }` not being treated as a namespace import (@sosukesuzuki)
- Fixed: TypedArray iterator `.next()` behavior with detached buffers (test262 compliance) (@sosukesuzuki)
- Fixed: `Date.toLocaleString()` crash when called from a Worker (@sosukesuzuki)
- Fixed: multiple YARR (RegExp engine) backtracking and capture bugs (@sosukesuzuki)
- Fixed: class instance field `eval` context not propagating through arrow functions and nested scopes (@sosukesuzuki)
- Fixed: DFG constant folding crash and `isWithinPowerOfTwo` producing unsound results for `BitAnd` with negative masks (@sosukesuzuki)
Internal / Runtime
- Fixed: pausing at a breakpoint or `debugger;` statement pegged one CPU core at 100% for the entire pause. The paused thread now sleeps until the debugger sends a message. Closing the debugger's WebSocket while paused also now correctly resumes the program.
- Fixed: a deadlock between the memory allocator and Bun's thread pool that could permanently freeze long-running compiled executables on macOS arm64 after sustained async I/O. (@Jarred-Sumner)
- Fixed: 8 memory-accounting bugs in libpas
- Fixed: several potential crashes where early-exit control flow could leave internal state partially initialized, affecting HTTP decompression, `Request`/`Response` body handling, `Bun.build` plugin errors, and named pipe listeners on Windows.
- Fixed: a potential crash in the socket layer where dereferencing a context's loop pointer after unlinking could fail if the context was freed during the unlink (@cirospaciari)
- Fixed: Hypothetical crash involving the event loop timer sweep iterator
- Improved: HTTP chunked encoding parser hardening