Create a new instance of an S3 bucket so that credentials can be managed from a single instance instead of being passed to every method.
constructor
S3Client.constructor
The default options to use for the S3 client. These can be overridden by passing options to individual methods.
A new S3Client instance
Keep S3 credentials in a single instance
const bucket = new Bun.S3Client({
accessKeyId: "your-access-key",
secretAccessKey: "your-secret-key",
bucket: "my-bucket",
endpoint: "https://s3.us-east-1.amazonaws.com",
sessionToken: "your-session-token",
});
// Create an S3File instance for a path in the bucket:
const file = bucket.file("my-file.txt");
// Read and write through it:
await file.write("Hello Bun!");
await file.text();
// To delete the file:
await bucket.delete("my-file.txt");
// To write a file without returning the instance:
await bucket.write("my-file.txt", "Hello Bun!");
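Because the constructor options are only defaults, any individual call can override them. A minimal sketch (the second bucket name here is hypothetical):
// Override the default bucket for a single write
await bucket.write("report.csv", "id,name\n1,Bun", {
  bucket: "archive-bucket",
});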
Referenced types
interface S3Options
Configuration options for S3 operations
- accessKeyId?: string
The access key ID for authentication. Defaults to the S3_ACCESS_KEY_ID or AWS_ACCESS_KEY_ID environment variables.
- acl?: 'private' | 'public-read' | 'public-read-write' | 'aws-exec-read' | 'authenticated-read' | 'bucket-owner-read' | 'bucket-owner-full-control' | 'log-delivery-write'
The Access Control List (ACL) policy for the file. Controls who can access the file and what permissions they have.
// Setting public read access
const file = s3.file("public-file.txt", {
  acl: "public-read",
  bucket: "my-bucket",
});
- bucket?: string
The S3 bucket name. Defaults to the S3_BUCKET or AWS_BUCKET environment variables.
// Using explicit bucket
const file = s3.file("my-file.txt", {
  bucket: "my-bucket",
});
- endpoint?: string
The S3-compatible service endpoint URL. Defaults to the S3_ENDPOINT or AWS_ENDPOINT environment variables.
// AWS S3
const file = s3.file("my-file.txt", {
  endpoint: "https://s3.us-east-1.amazonaws.com",
});
- partSize?: number
The size of each part in multipart uploads (in bytes).
- Minimum: 5 MiB
- Maximum: 5120 MiB
- Default: 5 MiB
// Configuring multipart uploads
const file = s3.file("large-file.dat", {
  partSize: 10 * 1024 * 1024, // 10 MiB parts
  queueSize: 4, // Upload 4 parts in parallel
});
const writer = file.writer();
// ... write large file in chunks
- queueSize?: number
Number of parts to upload in parallel for multipart uploads.
- Default: 5
- Maximum: 255
Increasing this value can improve upload speeds for large files but will use more memory.
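As an illustrative sketch (the sizes and counts here are arbitrary, not recommendations), both multipart knobs can be tuned together for a large streaming upload, following the same pattern as the partSize example above:
// Trade memory for throughput on a large upload
const file = s3.file("backup.tar", {
  partSize: 8 * 1024 * 1024, // 8 MiB parts
  queueSize: 10, // keep up to 10 parts in flight (uses more memory)
});
const writer = file.writer();
// ... write chunks ...
await writer.end();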
- region?: string
The AWS region. Defaults to the S3_REGION or AWS_REGION environment variables.
const file = s3.file("my-file.txt", {
  bucket: "my-bucket",
  region: "us-west-2",
});
- retry?: number
Number of retry attempts for failed uploads.
- Default: 3
- Maximum: 255
// Setting retry attempts
const file = s3.file("my-file.txt", {
  retry: 5, // Retry failed uploads up to 5 times
});
- secretAccessKey?: string
The secret access key for authentication. Defaults to the S3_SECRET_ACCESS_KEY or AWS_SECRET_ACCESS_KEY environment variables.
- sessionToken?: string
Optional session token for temporary credentials. Defaults to the S3_SESSION_TOKEN or AWS_SESSION_TOKEN environment variables.
// Using temporary credentials
const file = s3.file("my-file.txt", {
  accessKeyId: tempAccessKey,
  secretAccessKey: tempSecretKey,
  sessionToken: tempSessionToken,
});
- storageClass?: 'STANDARD' | 'DEEP_ARCHIVE' | 'EXPRESS_ONEZONE' | 'GLACIER' | 'GLACIER_IR' | 'INTELLIGENT_TIERING' | 'ONEZONE_IA' | 'OUTPOSTS' | 'REDUCED_REDUNDANCY' | 'SNOW' | 'STANDARD_IA'
By default, Amazon S3 uses the STANDARD Storage Class to store newly created objects.
// Setting explicit storage class
const file = s3.file("my-file.json", {
  storageClass: "STANDARD_IA",
});
- type?: string
The Content-Type of the file. Automatically set based on file extension when possible.
// Setting explicit content type
const file = s3.file("data.bin", {
  type: "application/octet-stream",
});
- virtualHostedStyle?: boolean
Use a virtual hosted-style endpoint. Defaults to false. When true and an endpoint is provided, the bucket option is ignored, since the bucket name is expected to be part of the endpoint host.
// Using virtual hosted style
const file = s3.file("my-file.txt", {
  virtualHostedStyle: true,
  endpoint: "https://my-bucket.s3.us-east-1.amazonaws.com",
});
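For contrast, with the default path-style addressing the bucket is supplied separately from the endpoint host. A brief sketch reusing the values from the examples above:
// Path-style (default): bucket name passed separately from the endpoint
const pathStyle = s3.file("my-file.txt", {
  bucket: "my-bucket",
  endpoint: "https://s3.us-east-1.amazonaws.com",
});
// Virtual hosted-style: the bucket is part of the hostname, so bucket is ignored
const hostedStyle = s3.file("my-file.txt", {
  virtualHostedStyle: true,
  endpoint: "https://my-bucket.s3.us-east-1.amazonaws.com",
});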
class S3Client
A configured S3 bucket instance for managing files. The instance is callable to create S3File instances and provides methods for common operations.
// Basic bucket setup
const bucket = new S3Client({
bucket: "my-bucket",
accessKeyId: "key",
secretAccessKey: "secret"
});
// Get file instance
const file = bucket.file("image.jpg");
// Common operations
await bucket.write("data.json", JSON.stringify({hello: "world"}));
const url = bucket.presign("file.pdf");
await bucket.unlink("old.txt");
- delete(path: string, options?: S3Options): Promise<void>;
Delete a file from the bucket. Alias for S3Client.unlink.
@param path The path to the file in the bucket
@param options Additional S3 options to override defaults
@returns A promise that resolves when deletion is complete
// Simple delete
await bucket.delete("old-file.txt");

// With error handling
try {
  await bucket.delete("file.dat");
  console.log("File deleted");
} catch (err) {
  console.error("Delete failed:", err);
}
- exists(path: string, options?: S3Options): Promise<boolean>;
Check if a file exists in the bucket. Uses a HEAD request to check existence.
@param path The path to the file in the bucket
@param options Additional S3 options to override defaults
@returns A promise that resolves to true if the file exists, false otherwise
// Check existence
if (await bucket.exists("config.json")) {
  const file = bucket.file("config.json");
  const config = await file.json();
}

// With error handling
try {
  if (!await bucket.exists("required.txt")) {
    throw new Error("Required file missing");
  }
} catch (err) {
  console.error("Check failed:", err);
}
- file(path: string, options?: S3Options): S3File;
Creates an S3File instance for the given path.
@param path The path to the file in the bucket
@param options Additional S3 options to override defaults
@returns An S3File instance
const file = bucket.file("image.jpg");
await file.write(imageData);

const configFile = bucket.file("config.json", {
  type: "application/json",
  acl: "private",
});
- list(input?: S3ListObjectsOptions | null, options?: Pick<S3Options, 'accessKeyId' | 'secretAccessKey' | 'sessionToken' | 'region' | 'bucket' | 'endpoint'>): Promise<S3ListObjectsResponse>;
Returns some or all (up to 1,000) of the objects in a bucket with each request.
You can use the request parameters as selection criteria to return a subset of the objects in a bucket.
@param input Options for listing objects in the bucket
@param options Additional S3 options to override defaults
@returns A promise that resolves to the list response
// List (up to) 1000 objects in the bucket
const allObjects = await bucket.list();

// List (up to) 500 objects under `uploads/` prefix, with owner field for each object
const uploads = await bucket.list({
  prefix: 'uploads/',
  maxKeys: 500,
  fetchOwner: true,
});

// Check if more results are available
if (uploads.isTruncated) {
  // List next batch of objects under `uploads/` prefix
  const moreUploads = await bucket.list({
    prefix: 'uploads/',
    maxKeys: 500,
    startAfter: uploads.contents!.at(-1)!.key,
    fetchOwner: true,
  });
}
- presign(path: string, options?: S3FilePresignOptions): string;
Generate a presigned URL for temporary access to a file. Useful for generating upload/download URLs without exposing credentials.
@param path The path to the file in the bucket
@param options Options for generating the presigned URL
@returns A presigned URL string
// Download URL
const downloadUrl = bucket.presign("file.pdf", {
  expiresIn: 3600, // 1 hour
});

// Upload URL
const uploadUrl = bucket.presign("uploads/image.jpg", {
  method: "PUT",
  expiresIn: 3600,
  type: "image/jpeg",
  acl: "public-read",
});

// Long-lived public URL
const publicUrl = bucket.presign("public/doc.pdf", {
  expiresIn: 7 * 24 * 60 * 60, // 7 days
  acl: "public-read",
});
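A presigned upload URL can then be used by any HTTP client, with no Bun APIs or credentials involved. A minimal sketch, assuming uploadUrl was generated with method: "PUT" and type: "image/jpeg" as above:
// Hypothetical client-side upload; the body bytes are illustrative
const imageBytes = new Uint8Array([/* ...image data... */]);
const res = await fetch(uploadUrl, {
  method: "PUT",
  headers: { "Content-Type": "image/jpeg" }, // must match the presigned type
  body: imageBytes,
});
if (!res.ok) throw new Error(`Upload failed: ${res.status}`);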
- size(path: string, options?: S3Options): Promise<number>;
Get the size of a file in bytes. Uses a HEAD request to efficiently get the size.
@param path The path to the file in the bucket
@param options Additional S3 options to override defaults
@returns A promise that resolves to the file size in bytes
// Get size
const bytes = await bucket.size("video.mp4");
console.log(`Size: ${bytes} bytes`);

// Check if file is large
if (await bucket.size("data.zip") > 100 * 1024 * 1024) {
  console.log("File is larger than 100MB");
}
- stat(path: string, options?: S3Options): Promise<S3Stats>;
Get the stat of a file in an S3-compatible storage service.
@param path The path to the file in the bucket
@param options Additional S3 options to override defaults
@returns A promise that resolves to the file stats
const stat = await bucket.stat("my-file.txt");
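A sketch of consuming the result, assuming the stats object exposes size and lastModified fields (check the S3Stats type in your Bun version):
// Field names assumed, not confirmed by this page
console.log(`${stat.size} bytes, last modified ${stat.lastModified}`);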
- unlink(path: string, options?: S3Options): Promise<void>;
Delete a file from the bucket.
@param path The path to the file in the bucket
@param options Additional S3 options to override defaults
@returns A promise that resolves when deletion is complete
// Simple delete
await bucket.unlink("old-file.txt");

// With error handling
try {
  await bucket.unlink("file.dat");
  console.log("File deleted");
} catch (err) {
  console.error("Delete failed:", err);
}
- write(path: string, data: string | ArrayBuffer | SharedArrayBuffer | Blob | BunFile | Request | Response | File | ArrayBufferView<ArrayBufferLike> | S3File, options?: S3Options): Promise<number>;
Writes data directly to a path in the bucket. Supports strings, buffers, streams, and web API types.
@param path The path to the file in the bucket
@param data The data to write to the file
@param options Additional S3 options to override defaults
@returns The number of bytes written
// Write string
await bucket.write("hello.txt", "Hello World");

// Write JSON with type
await bucket.write(
  "data.json",
  JSON.stringify({ hello: "world" }),
  { type: "application/json" },
);

// Write from fetch
const res = await fetch("https://example.com/data");
await bucket.write("data.bin", res);

// Write with ACL
await bucket.write("public.html", html, {
  acl: "public-read",
  type: "text/html",
});
- static delete(path: string, options?: S3Options): Promise<void>;
Delete a file from the bucket. Alias for S3Client.unlink.
@param path The path to the file in the bucket
@param options S3 credentials and configuration options
@returns A promise that resolves when deletion is complete
// Simple delete
await S3Client.delete("old-file.txt", credentials);

// With error handling
try {
  await S3Client.delete("file.dat", credentials);
  console.log("File deleted");
} catch (err) {
  console.error("Delete failed:", err);
}
- static exists(path: string, options?: S3Options): Promise<boolean>;
Check if a file exists in the bucket. Uses a HEAD request to check existence.
@param path The path to the file in the bucket
@param options S3 credentials and configuration options
@returns A promise that resolves to true if the file exists, false otherwise
// Check existence
if (await S3Client.exists("config.json", credentials)) {
  const file = S3Client.file("config.json", credentials);
  const config = await file.json();
}

// With error handling
try {
  if (!await S3Client.exists("required.txt", credentials)) {
    throw new Error("Required file missing");
  }
} catch (err) {
  console.error("Check failed:", err);
}
- static file(path: string, options?: S3Options): S3File;
Creates an S3File instance for the given path.
@param path The path to the file in the bucket
@param options S3 credentials and configuration options
@returns An S3File instance
const file = S3Client.file("image.jpg", credentials);
await file.write(imageData);

const configFile = S3Client.file("config.json", {
  ...credentials,
  type: "application/json",
  acl: "private",
});
- static list(input?: S3ListObjectsOptions | null, options?: Pick<S3Options, 'accessKeyId' | 'secretAccessKey' | 'sessionToken' | 'region' | 'bucket' | 'endpoint'>): Promise<S3ListObjectsResponse>;
Returns some or all (up to 1,000) of the objects in a bucket with each request.
You can use the request parameters as selection criteria to return a subset of the objects in a bucket.
@param input Options for listing objects in the bucket
@param options S3 credentials and configuration options
@returns A promise that resolves to the list response
// List (up to) 1000 objects in the bucket
const allObjects = await S3Client.list(null, credentials);

// List (up to) 500 objects under `uploads/` prefix, with owner field for each object
const uploads = await S3Client.list({
  prefix: 'uploads/',
  maxKeys: 500,
  fetchOwner: true,
}, credentials);

// Check if more results are available
if (uploads.isTruncated) {
  // List next batch of objects under `uploads/` prefix
  const moreUploads = await S3Client.list({
    prefix: 'uploads/',
    maxKeys: 500,
    startAfter: uploads.contents!.at(-1)!.key,
    fetchOwner: true,
  }, credentials);
}
- static presign(path: string, options?: S3FilePresignOptions): string;
Generate a presigned URL for temporary access to a file. Useful for generating upload/download URLs without exposing credentials.
@param path The path to the file in the bucket
@param options S3 credentials and presigned URL configuration
@returns A presigned URL string
// Download URL
const downloadUrl = S3Client.presign("file.pdf", {
  ...credentials,
  expiresIn: 3600, // 1 hour
});

// Upload URL
const uploadUrl = S3Client.presign("uploads/image.jpg", {
  ...credentials,
  method: "PUT",
  expiresIn: 3600,
  type: "image/jpeg",
  acl: "public-read",
});

// Long-lived public URL
const publicUrl = S3Client.presign("public/doc.pdf", {
  ...credentials,
  expiresIn: 7 * 24 * 60 * 60, // 7 days
  acl: "public-read",
});
- static size(path: string, options?: S3Options): Promise<number>;
Get the size of a file in bytes. Uses a HEAD request to efficiently get the size.
@param path The path to the file in the bucket
@param options S3 credentials and configuration options
@returns A promise that resolves to the file size in bytes
// Get size
const bytes = await S3Client.size("video.mp4", credentials);
console.log(`Size: ${bytes} bytes`);

// Check if file is large
if (await S3Client.size("data.zip", credentials) > 100 * 1024 * 1024) {
  console.log("File is larger than 100MB");
}
- static stat(path: string, options?: S3Options): Promise<S3Stats>;
Get the stat of a file in an S3-compatible storage service.
@param path The path to the file in the bucket
@param options S3 credentials and configuration options
@returns A promise that resolves to the file stats
const stat = await S3Client.stat("my-file.txt", credentials);
- static unlink(path: string, options?: S3Options): Promise<void>;
Delete a file from the bucket.
@param path The path to the file in the bucket
@param options S3 credentials and configuration options
@returns A promise that resolves when deletion is complete
// Simple delete
await S3Client.unlink("old-file.txt", credentials);

// With error handling
try {
  await S3Client.unlink("file.dat", credentials);
  console.log("File deleted");
} catch (err) {
  console.error("Delete failed:", err);
}
- static write(path: string, data: string | ArrayBuffer | SharedArrayBuffer | Blob | BunFile | Request | Response | File | ArrayBufferView<ArrayBufferLike> | S3File, options?: S3Options): Promise<number>;
Writes data directly to a path in the bucket. Supports strings, buffers, streams, and web API types.
@param path The path to the file in the bucket
@param data The data to write to the file
@param options S3 credentials and configuration options
@returns The number of bytes written
// Write string
await S3Client.write("hello.txt", "Hello World", credentials);

// Write JSON with type
await S3Client.write(
  "data.json",
  JSON.stringify({ hello: "world" }),
  { ...credentials, type: "application/json" },
);

// Write from fetch
const res = await fetch("https://example.com/data");
await S3Client.write("data.bin", res, credentials);

// Write with ACL
await S3Client.write("public.html", html, {
  ...credentials,
  acl: "public-read",
  type: "text/html",
});