Creates an S3File instance for the given path.
Symbol: S3Client.file
@param path - The path to the file in the bucket
@param options - Additional S3 options to override defaults
@returns An S3File instance
const file = bucket.file("image.jpg");
await file.write(imageData);
const configFile = bucket.file("config.json", {
  type: "application/json",
  acl: "private"
});
Referenced types
interface S3Options
Configuration options for S3 operations
- accessKeyId?: string
The access key ID for authentication. Defaults to the S3_ACCESS_KEY_ID or AWS_ACCESS_KEY_ID environment variables.
- acl?: 'private' | 'public-read' | 'public-read-write' | 'aws-exec-read' | 'authenticated-read' | 'bucket-owner-read' | 'bucket-owner-full-control' | 'log-delivery-write'
The Access Control List (ACL) policy for the file. Controls who can access the file and what permissions they have.
// Setting public read access
const file = s3.file("public-file.txt", {
  acl: "public-read",
  bucket: "my-bucket"
});
- bucket?: string
The S3 bucket name. Defaults to the S3_BUCKET or AWS_BUCKET environment variables.
// Using explicit bucket
const file = s3.file("my-file.txt", { bucket: "my-bucket" });
- endpoint?: string
The S3-compatible service endpoint URL. Defaults to the S3_ENDPOINT or AWS_ENDPOINT environment variables.
// AWS S3
const file = s3.file("my-file.txt", {
  endpoint: "https://s3.us-east-1.amazonaws.com"
});
- partSize?: number
The size of each part in multipart uploads (in bytes).
- Minimum: 5 MiB
- Maximum: 5120 MiB
- Default: 5 MiB
// Configuring multipart uploads
const file = s3.file("large-file.dat", {
  partSize: 10 * 1024 * 1024, // 10 MiB parts
  queueSize: 4 // Upload 4 parts in parallel
});

const writer = file.writer();
// ... write large file in chunks
- queueSize?: number
Number of parts to upload in parallel for multipart uploads.
- Default: 5
- Maximum: 255
Increasing this value can improve upload speeds for large files but will use more memory.
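A small sketch combining queueSize with partSize; the key and values below are placeholders, not recommendations:
// Tuning parallelism for a large upload
const file = s3.file("backups/snapshot.tar.gz", {
  partSize: 16 * 1024 * 1024, // 16 MiB per part
  queueSize: 8                // up to 8 parts in flight at once
});
await file.write(Bun.file("./snapshot.tar.gz"));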
- region?: string
The AWS region. Defaults to the S3_REGION or AWS_REGION environment variables.
const file = s3.file("my-file.txt", {
  bucket: "my-bucket",
  region: "us-west-2"
});
- retry?: number
Number of retry attempts for failed uploads.
- Default: 3
- Maximum: 255
// Setting retry attempts
const file = s3.file("my-file.txt", {
  retry: 5 // Retry failed uploads up to 5 times
});
- secretAccessKey?: string
The secret access key for authentication. Defaults to the S3_SECRET_ACCESS_KEY or AWS_SECRET_ACCESS_KEY environment variables.
- sessionToken?: string
Optional session token for temporary credentials. Defaults to the S3_SESSION_TOKEN or AWS_SESSION_TOKEN environment variables.
// Using temporary credentials
const file = s3.file("my-file.txt", {
  accessKeyId: tempAccessKey,
  secretAccessKey: tempSecretKey,
  sessionToken: tempSessionToken
});
- storageClass?: 'STANDARD' | 'DEEP_ARCHIVE' | 'EXPRESS_ONEZONE' | 'GLACIER' | 'GLACIER_IR' | 'INTELLIGENT_TIERING' | 'ONEZONE_IA' | 'OUTPOSTS' | 'REDUCED_REDUNDANCY' | 'SNOW' | 'STANDARD_IA'
By default, Amazon S3 uses the STANDARD Storage Class to store newly created objects.
// Setting explicit storage class
const file = s3.file("my-file.json", {
  storageClass: "STANDARD_IA"
});
- type?: string
The Content-Type of the file. Automatically set based on file extension when possible.
// Setting explicit content type
const file = s3.file("data.bin", {
  type: "application/octet-stream"
});
- virtualHostedStyle?: boolean
Use a virtual hosted-style endpoint. Defaults to false. When true and an endpoint is provided, the bucket option is ignored (the bucket name is expected to be part of the endpoint hostname).
// Using virtual hosted style
const file = s3.file("my-file.txt", {
  virtualHostedStyle: true,
  endpoint: "https://my-bucket.s3.us-east-1.amazonaws.com"
});
interface S3File
Represents a file in an S3-compatible storage service. Extends the Blob interface for compatibility with web APIs.
- readonly bucket?: string
The bucket name containing the file.
const file = s3.file("s3://my-bucket/file.txt"); console.log(file.bucket); // "my-bucket"
- readonly name?: string
The name or path of the file in the bucket.
const file = s3.file("folder/image.jpg"); console.log(file.name); // "folder/image.jpg"
- readonly readable: ReadableStream
Gets a readable stream of the file's content. Useful for processing large files without loading them entirely into memory.
// Basic streaming read
const stream = file.stream();
for await (const chunk of stream) {
  console.log('Received chunk:', chunk);
}
- unlink: () => Promise<void>
Alias for delete() method. Provided for compatibility with Node.js fs API naming.
await file.unlink();
- arrayBuffer(): Promise<ArrayBuffer>
Returns a promise that resolves to the contents of the blob as an ArrayBuffer.
- bytes(): Promise<Uint8Array>
Returns a promise that resolves to the contents of the blob as a Uint8Array (array of bytes). It is the same as new Uint8Array(await blob.arrayBuffer()).
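A short usage sketch reading the same object with both methods (the object key is a placeholder):
const file = s3.file("data/payload.bin");  // hypothetical key
const buffer = await file.arrayBuffer();   // ArrayBuffer with the full contents
const bytes = await file.bytes();          // Uint8Array view of the same data
console.log(buffer.byteLength === bytes.byteLength); // true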
Deletes the file from S3.
@returns Promise that resolves when deletion is complete
// Basic deletion
await file.delete();
Checks if the file exists in S3. Uses HTTP HEAD request to efficiently check existence without downloading.
@returns Promise resolving to true if file exists, false otherwise
// Basic existence check
if (await file.exists()) {
  console.log("File exists in S3");
}
- formData(): Promise<FormData>
Read the data from the blob as a FormData object.
This first decodes the data from UTF-8, then parses it as a multipart/form-data body or an application/x-www-form-urlencoded body. The type property of the blob is used to determine the format of the body.
This is a non-standard addition to the Blob API, to make it conform more closely to the BodyMixin API.
- json(): Promise<any>
Read the data from the blob as a JSON object.
This first decodes the data from UTF-8, then parses it as JSON.
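As a small illustration (the key and JSON shape are hypothetical), json() pairs naturally with configuration-style objects stored in S3:
const configFile = s3.file("config/app.json"); // hypothetical key
const config = await configFile.json();        // parsed JavaScript object
console.log(config.version);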
Generates a presigned URL for the file. Allows temporary access to the file without exposing credentials.
@param options - Configuration for the presigned URL
@returns Presigned URL string
// Basic download URL
const url = file.presign({
  expiresIn: 3600 // 1 hour
});
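A presigned URL can also be generated for uploads. This sketch assumes the options object accepts method and acl fields alongside expiresIn:
// Presigned upload URL (assumes `method` and `acl` options are supported)
const uploadUrl = file.presign({
  expiresIn: 600,     // 10 minutes
  method: "PUT",      // allow uploading via this URL
  acl: "public-read"  // ACL applied to the uploaded object
});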
Creates a new S3File representing a slice of the original file. Uses HTTP Range headers for efficient partial downloads.
@param begin - Starting byte offset
@param end - Ending byte offset (exclusive)
@param contentType - Optional MIME type for the slice
@returns A new S3File representing the specified range
// Reading file header
const header = file.slice(0, 1024);
const headerText = await header.text();
- text(): Promise<string>
Returns a promise that resolves to the contents of the blob as a string.
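A minimal usage sketch (the key is a placeholder):
const notes = s3.file("docs/readme.txt"); // hypothetical key
console.log(await notes.text());          // prints the contents as UTF-8 text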
- write(data: string | ArrayBuffer | SharedArrayBuffer | Request | Response | Blob | BunFile | ArrayBufferView<ArrayBufferLike> | S3File, options?: S3Options): Promise<number>
Uploads data to S3. Supports various input types and automatically handles large files.
@param data - The data to upload
@param options - Upload configuration options
@returns Promise resolving to number of bytes written
// Writing string data
await file.write("Hello World", {
  type: "text/plain"
});
Creates a writable stream for uploading data. Suitable for large files as it uses multipart upload.
@param options - Configuration for the upload
@returns A NetworkSink for writing data
// Basic streaming write
const writer = file.writer({ type: "application/json" });
writer.write('{"hello": ');
writer.write('"world"}');
await writer.end();
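For larger uploads, the multipart options described under S3Options can also be passed to writer(); a sketch with placeholder key and values:
const big = s3.file("exports/large-report.csv"); // hypothetical key
const writer = big.writer({
  type: "text/csv",
  partSize: 8 * 1024 * 1024, // 8 MiB parts
  queueSize: 4               // 4 parts uploaded in parallel
});

for (let i = 0; i < 1_000_000; i++) {
  writer.write(`${i},example-row\n`); // stream rows without buffering the whole file
}
await writer.end(); // completes the multipart upload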