Creates a writable stream for uploading data. Suitable for large files as it uses multipart upload.
Symbol
S3File.writer
@param options Configuration for the upload
@returns A NetworkSink for writing data
// Basic streaming write
const writer = file.writer({
type: "application/json"
});
writer.write('{"hello": ');
writer.write('"world"}');
await writer.end();
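Because the writer buffers data and uploads it as multipart parts, it can also be fed incrementally from another stream. The following sketch is illustrative rather than part of the reference; it assumes file is an existing S3File and that a local file ./backup.tar.gz exists.
// Streaming a large local file to S3 in chunks (assumes `file` is an S3File)
const source = Bun.file("./backup.tar.gz");
const writer = file.writer({ type: "application/gzip" });
for await (const chunk of source.stream()) {
  writer.write(chunk); // buffered and uploaded as multipart parts
}
await writer.end(); // completes the multipart upload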
Referenced types
interface S3Options
Configuration options for S3 operations
- accessKeyId?: string
The access key ID for authentication. Defaults to the S3_ACCESS_KEY_ID or AWS_ACCESS_KEY_ID environment variables. (A combined example using explicit credentials appears after this property list.)
- acl?: 'private' | 'public-read' | 'public-read-write' | 'aws-exec-read' | 'authenticated-read' | 'bucket-owner-read' | 'bucket-owner-full-control' | 'log-delivery-write'
The Access Control List (ACL) policy for the file. Controls who can access the file and what permissions they have.
// Setting public read access
const file = s3.file("public-file.txt", {
  acl: "public-read",
  bucket: "my-bucket"
});
- bucket?: string
The S3 bucket name. Defaults to the S3_BUCKET or AWS_BUCKET environment variables.
// Using explicit bucket
const file = s3.file("my-file.txt", {
  bucket: "my-bucket"
});
- endpoint?: string
The S3-compatible service endpoint URL. Defaults to the S3_ENDPOINT or AWS_ENDPOINT environment variables.
// AWS S3
const file = s3.file("my-file.txt", {
  endpoint: "https://s3.us-east-1.amazonaws.com"
});
- partSize?: number
The size of each part in multipart uploads (in bytes).
- Minimum: 5 MiB
- Maximum: 5120 MiB
- Default: 5 MiB
// Configuring multipart uploads
const file = s3.file("large-file.dat", {
  partSize: 10 * 1024 * 1024, // 10 MiB parts
  queueSize: 4 // Upload 4 parts in parallel
});
const writer = file.writer();
// ... write large file in chunks
- queueSize?: number
Number of parts to upload in parallel for multipart uploads.
- Default: 5
- Maximum: 255
Increasing this value can improve upload speeds for large files but will use more memory.
- region?: string
The AWS region. Defaults to the S3_REGION or AWS_REGION environment variables.
const file = s3.file("my-file.txt", {
  bucket: "my-bucket",
  region: "us-west-2"
});
- retry?: number
Number of retry attempts for failed uploads.
- Default: 3
- Maximum: 255
// Setting retry attempts
const file = s3.file("my-file.txt", {
  retry: 5 // Retry failed uploads up to 5 times
});
- secretAccessKey?: string
The secret access key for authentication. Defaults to the S3_SECRET_ACCESS_KEY or AWS_SECRET_ACCESS_KEY environment variables.
- sessionToken?: string
Optional session token for temporary credentials. Defaults to the S3_SESSION_TOKEN or AWS_SESSION_TOKEN environment variables.
// Using temporary credentials
const file = s3.file("my-file.txt", {
  accessKeyId: tempAccessKey,
  secretAccessKey: tempSecretKey,
  sessionToken: tempSessionToken
});
- storageClass?: 'STANDARD' | 'DEEP_ARCHIVE' | 'EXPRESS_ONEZONE' | 'GLACIER' | 'GLACIER_IR' | 'INTELLIGENT_TIERING' | 'ONEZONE_IA' | 'OUTPOSTS' | 'REDUCED_REDUNDANCY' | 'SNOW' | 'STANDARD_IA'
By default, Amazon S3 uses the STANDARD Storage Class to store newly created objects.
// Setting explicit storage class
const file = s3.file("my-file.json", {
  storageClass: "STANDARD_IA"
});
- type?: string
The Content-Type of the file. Automatically set based on file extension when possible.
// Setting explicit content type
const file = s3.file("data.bin", {
  type: "application/octet-stream"
});
- virtualHostedStyle?: boolean
Use a virtual hosted-style endpoint. Defaults to false. When true and an endpoint is provided, the bucket option is ignored.
// Using virtual hosted style
const file = s3.file("my-file.txt", {
  virtualHostedStyle: true,
  endpoint: "https://my-bucket.s3.us-east-1.amazonaws.com"
});
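The credential and addressing options above (accessKeyId, secretAccessKey, region, endpoint, bucket) are typically passed together when targeting a specific account or an S3-compatible service. A minimal sketch, assuming Bun's S3Client constructor accepts these same options; all values are placeholders.
// Explicit credentials for an S3-compatible service (placeholder values)
import { S3Client } from "bun";
const client = new S3Client({
  accessKeyId: "my-access-key",       // instead of S3_ACCESS_KEY_ID / AWS_ACCESS_KEY_ID
  secretAccessKey: "my-secret-key",   // instead of S3_SECRET_ACCESS_KEY / AWS_SECRET_ACCESS_KEY
  region: "us-east-1",
  endpoint: "https://s3.example.com", // any S3-compatible endpoint
  bucket: "my-bucket",
});
const file = client.file("my-file.txt");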
interface NetworkSink
Fast incremental writer for files and pipes.
This uses the same interface as ArrayBufferSink, but writes to a file or pipe.
Flush the internal buffer, committing the data to the network.
@returns Number of bytes flushed, or a Promise resolving to the number of bytes
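As a sketch (assuming the method is exposed as flush(), matching the ArrayBufferSink-style interface noted above, and that writer came from file.writer()):
// Explicitly flushing buffered data
writer.write("first chunk, ");
writer.write("second chunk");
const flushed = await writer.flush(); // commits buffered bytes to the network
console.log(`Flushed ${flushed} bytes`);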
For FIFOs & pipes, this lets you decide whether Bun's process should remain alive until the pipe is closed.
By default, it is automatically managed. While the stream is open, the process remains alive; once the other end hangs up or the stream closes, the process exits.
If you previously called unref, you can call this again to re-enable automatic management.
Internally, it reference-counts the number of times this is called. By default, that count is 1.
If the file is not a FIFO or pipe, ref and unref do nothing. If the pipe is already closed, this does nothing.
Start the file sink with provided options.
@param options Configuration options for the file sink
For FIFOs & pipes, this lets you decide whether Bun's process should remain alive until the pipe is closed.
If you want to allow Bun's process to terminate while the stream is open, call this.
If the file is not a FIFO or pipe, ref and unref do nothing. If the pipe is already closed, this does nothing.
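A sketch of the two calls together, assuming they are exposed as ref() and unref() on the writer:
// Letting the process exit while the sink is open, then restoring the default
writer.unref(); // Bun's process may exit even though the stream is still open
// ... later
writer.ref();   // keep the process alive again until the stream closes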
Write a chunk of data to the network.
If the network is not writable yet, the data is buffered.
@param chunk The data to write
@returns Number of bytes written
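A short sketch of incremental writes (illustrative; writer is assumed to come from file.writer()):
// write() accepts strings or binary data and returns the number of bytes written
const bytesWritten = writer.write("hello ");     // buffered if the network is not writable yet
writer.write(new TextEncoder().encode("world")); // binary chunk
await writer.end();                              // flush remaining data and finish the upload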