
S3Client.write

write(path: string, data: string | ArrayBuffer | SharedArrayBuffer | Request | Response | Blob | File | BunFile | ArrayBufferView<ArrayBufferLike> | S3File, options?: S3Options): Promise<number>

Writes data directly to a path in the bucket. Supports strings, buffers, streams, and web API types.

@param path

The path to the file in the bucket

@param data

The data to write to the file

@param options

Additional S3 options to override defaults

@returns

The number of bytes written

    // Write string
    await bucket.write("hello.txt", "Hello World");

    // Write JSON with type
    await bucket.write(
      "data.json",
      JSON.stringify({ hello: "world" }),
      { type: "application/json" }
    );

    // Write from fetch
    const res = await fetch("https://example.com/data");
    await bucket.write("data.bin", res);

    // Write with ACL
    await bucket.write("public.html", html, {
      acl: "public-read",
      type: "text/html"
    });

Referenced types

class ArrayBuffer

Represents a raw buffer of binary data, which is used to store data for the different typed arrays. ArrayBuffers cannot be read from or written to directly, but can be passed to a typed array or DataView Object to interpret the raw buffer as needed.

  • readonly [Symbol.toStringTag]: string
  • readonly byteLength: number

    Read-only. The length of the ArrayBuffer (in bytes).

  • resize(newByteLength?: number): void

    Resizes the ArrayBuffer in place to the specified size (in bytes). The buffer must have been created as resizable (with a maxByteLength option).

    MDN Reference

  • slice(begin: number, end?: number): ArrayBuffer

    Returns a section of an ArrayBuffer.

  • transfer(newByteLength?: number): ArrayBuffer

    Creates a new ArrayBuffer with the same byte content as this buffer, then detaches this buffer.

    MDN Reference

  • transferToFixedLength(newByteLength?: number): ArrayBuffer

    Creates a new non-resizable ArrayBuffer with the same byte content as this buffer, then detaches this buffer.

    MDN Reference

interface SharedArrayBuffer

class Request

This Fetch API interface represents a resource request.

MDN Reference

  • readonly body: null | ReadableStream<Uint8Array<ArrayBufferLike>>
  • readonly bodyUsed: boolean
  • readonly cache: RequestCache

    Returns the cache mode associated with request, which is a string indicating how the request will interact with the browser's cache when fetching.

    MDN Reference

  • readonly credentials: RequestCredentials

    Returns the credentials mode associated with request, which is a string indicating whether credentials will be sent with the request always, never, or only when sent to a same-origin URL.

    MDN Reference

  • readonly destination: RequestDestination

    Returns the kind of resource requested by request, e.g., "document" or "script".

    MDN Reference

  • readonly headers: Headers

    Returns a Headers object consisting of the headers associated with request. Note that headers added in the network layer by the user agent will not be accounted for in this object, e.g., the "Host" header.

    MDN Reference

  • readonly integrity: string

    Returns request's subresource integrity metadata, which is a cryptographic hash of the resource being fetched. Its value consists of multiple hashes separated by whitespace. [SRI]

    MDN Reference

  • readonly keepalive: boolean

    Returns a boolean indicating whether or not request can outlive the global in which it was created.

    MDN Reference

  • readonly method: string

    Returns request's HTTP method, which is "GET" by default.

    MDN Reference

  • readonly mode: RequestMode

    Returns the mode associated with request, which is a string indicating whether the request will use CORS, or will be restricted to same-origin URLs.

    MDN Reference

  • readonly redirect: RequestRedirect

    Returns the redirect mode associated with request, which is a string indicating how redirects for the request will be handled during fetching. A request will follow redirects by default.

    MDN Reference

  • readonly referrer: string

    Returns the referrer of request. Its value can be a same-origin URL if explicitly set in init, the empty string to indicate no referrer, and "about:client" when defaulting to the global's default. This is used during fetching to determine the value of the Referer header of the request being made.

    MDN Reference

  • readonly referrerPolicy: ReferrerPolicy

    Returns the referrer policy associated with request. This is used during fetching to compute the value of the request's referrer.

    MDN Reference

  • readonly signal: AbortSignal

    Returns the signal associated with request, which is an AbortSignal object indicating whether or not request has been aborted, and its abort event handler.

    MDN Reference

  • readonly url: string
  • bytes(): Promise<Uint8Array<ArrayBufferLike>>
  • json(): Promise<any>
  • text(): Promise<string>
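Since write() accepts a Request directly, its body can be uploaded without being read first. The sketch below only constructs and inspects a Request locally; the bucket shown in the examples above would consume it the same way.

```typescript
// Build a Request and consume its body the way write() would.
const req = new Request("https://example.com/upload", {
  method: "POST",
  body: JSON.stringify({ hello: "world" }),
  headers: { "Content-Type": "application/json" },
});

console.log(req.method); // "POST"
const payload = await req.json(); // drains the body stream
console.log(req.bodyUsed); // true: the body can only be read once
```

Because the body is a one-shot stream, a Request whose bodyUsed is already true cannot be uploaded; pass an unread Request to write().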

class Response

This Fetch API interface represents the response to a request.

MDN Reference
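A common pattern is piping a fetch() Response straight into write(), as in the "Write from fetch" example at the top. Constructing a Response locally shows the same one-shot body semantics without any network access:

```typescript
// A Response body, like a Request body, can only be consumed once.
const res = new Response('{"ok":true}', {
  status: 200,
  headers: { "Content-Type": "application/json" },
});

console.log(res.status); // 200
const body = await res.json();
console.log(body.ok); // true
```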

class Blob

A file-like object of immutable, raw data. Blobs represent data that isn't necessarily in a JavaScript-native format. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system.

MDN Reference

  • readonly size: number
  • readonly type: string
  • arrayBuffer(): Promise<ArrayBuffer>

    Returns a promise that resolves to the contents of the blob as an ArrayBuffer

  • bytes(): Promise<Uint8Array<ArrayBufferLike>>

    Returns a promise that resolves to the contents of the blob as a Uint8Array (array of bytes); it's the same as new Uint8Array(await blob.arrayBuffer())

  • formData(): Promise<FormData>

    Read the data from the blob as a FormData object.

    This first decodes the data from UTF-8, then parses it as a multipart/form-data body or an application/x-www-form-urlencoded body.

    The type property of the blob is used to determine the format of the body.

    This is a non-standard addition to the Blob API, to make it conform more closely to the BodyMixin API.

  • json(): Promise<any>

    Read the data from the blob as a JSON object.

    This first decodes the data from UTF-8, then parses it as JSON.

  • slice(start?: number, end?: number, contentType?: string): Blob
  • stream(): ReadableStream<Uint8Array<ArrayBufferLike>>

    Returns a readable stream of the blob's contents

  • text(): Promise<string>

    Returns a promise that resolves to the contents of the blob as a string
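The Blob readers listed above can be exercised on an in-memory blob. Note that json() is a Bun addition rather than part of the web Blob standard, so this portable sketch uses JSON.parse over text() instead:

```typescript
// In-memory Blob exercising the standard readers.
const blob = new Blob(['{"hello":"world"}'], { type: "application/json" });

console.log(blob.size); // 17 bytes
console.log(blob.type); // "application/json"

const text = await blob.text();
const parsed = JSON.parse(text); // portable stand-in for blob.json()
const bytes = new Uint8Array(await blob.arrayBuffer()); // what bytes() returns
console.log(parsed.hello, bytes.length); // "world" 17
```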

class File

Provides information about files and allows JavaScript in a web page to access their content.

MDN Reference

  • readonly lastModified: number
  • readonly name: string
  • readonly size: number
  • readonly type: string
  • arrayBuffer(): Promise<ArrayBuffer>

    Returns a promise that resolves to the contents of the blob as an ArrayBuffer

  • bytes(): Promise<Uint8Array<ArrayBufferLike>>

    Returns a promise that resolves to the contents of the blob as a Uint8Array (array of bytes); it's the same as new Uint8Array(await blob.arrayBuffer())

  • formData(): Promise<FormData>

    Read the data from the blob as a FormData object.

    This first decodes the data from UTF-8, then parses it as a multipart/form-data body or an application/x-www-form-urlencoded body.

    The type property of the blob is used to determine the format of the body.

    This is a non-standard addition to the Blob API, to make it conform more closely to the BodyMixin API.

  • json(): Promise<any>

    Read the data from the blob as a JSON object.

    This first decodes the data from UTF-8, then parses it as JSON.

  • slice(start?: number, end?: number, contentType?: string): Blob
  • stream(): ReadableStream<Uint8Array<ArrayBufferLike>>

    Returns a readable stream of the blob's contents

  • text(): Promise<string>

    Returns a promise that resolves to the contents of the blob as a string

interface BunFile

Blob powered by the fastest system calls available for operating on files.

This Blob is lazy. That means it won't do any work until you read from it.

  • size will not be valid until the contents of the file are read at least once.
  • type is auto-set based on the file extension when possible

    const file = Bun.file("./hello.json");
    console.log(file.type); // "application/json"
    console.log(await file.text()); // '{"hello":"world"}'
  • lastModified: number

    A UNIX timestamp indicating when the file was last modified.

  • readonly name?: string

    The name or path of the file, as specified in the constructor.

  • readonly size: number
  • readonly type: string
  • arrayBuffer(): Promise<ArrayBuffer>

    Returns a promise that resolves to the contents of the blob as an ArrayBuffer

  • bytes(): Promise<Uint8Array<ArrayBufferLike>>

    Returns a promise that resolves to the contents of the blob as a Uint8Array (array of bytes); it's the same as new Uint8Array(await blob.arrayBuffer())

  • delete(): Promise<void>

    Deletes the file (same as unlink)

  • exists(): Promise<boolean>

    Does the file exist?

    This returns true for regular files and FIFOs. It returns false for directories. Note that a race condition can occur where the file is deleted or renamed after this is called but before you open it.

    This does a system call to check if the file exists, which can be slow.

    If using this in an HTTP server, it's faster to instead use return new Response(Bun.file(path)) and then an error handler to handle exceptions.

    Instead of checking for a file's existence and then performing the operation, it is faster to just perform the operation and handle the error.

    For an empty Blob, this always returns true.

  • formData(): Promise<FormData>

    Read the data from the blob as a FormData object.

    This first decodes the data from UTF-8, then parses it as a multipart/form-data body or an application/x-www-form-urlencoded body.

    The type property of the blob is used to determine the format of the body.

    This is a non-standard addition to the Blob API, to make it conform more closely to the BodyMixin API.

  • json(): Promise<any>

    Read the data from the blob as a JSON object.

    This first decodes the data from UTF-8, then parses it as JSON.

  • slice(begin?: number, end?: number, contentType?: string): BunFile

    Offset any operation on the file starting at begin and ending at end. end is relative to 0

    Similar to TypedArray.subarray. Does not copy the file, open the file, or modify the file.

    If begin > 0, Bun.write() will be slower on macOS

    @param begin

    start offset in bytes

    @param end

    absolute offset in bytes (relative to 0)

    @param contentType

    MIME type for the new BunFile

    slice(begin?: number, contentType?: string): BunFile

    Offset any operation on the file starting at begin

    Similar to TypedArray.subarray. Does not copy the file, open the file, or modify the file.

    If begin > 0, Bun.write() will be slower on macOS

    @param begin

    start offset in bytes

    @param contentType

    MIME type for the new BunFile

    slice(contentType?: string): BunFile

    Slice the file from the beginning to the end, optionally with a new MIME type.

    @param contentType

    MIME type for the new BunFile

  • stat(): Promise<Stats>

    Provides useful information about the file.

  • stream(): ReadableStream<Uint8Array<ArrayBufferLike>>

    Returns a readable stream of the blob's contents

  • text(): Promise<string>

    Returns a promise that resolves to the contents of the blob as a string

  • write(data: string | ArrayBuffer | SharedArrayBuffer | Request | Response | BunFile | ArrayBufferView<ArrayBufferLike>, options?: { highWaterMark: number }): Promise<number>

    Write data to the file. This is equivalent to using Bun.write with a BunFile.

    @param data

    The data to write.

    @param options

    The options to use for the write.

  • writer(options?: { highWaterMark: number }): FileSink

    Incremental writer for files and pipes.
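The exists() notes above recommend performing the operation and handling the error instead of checking first. A minimal sketch of that pattern, written against node:fs/promises (which Bun also implements) so it is runtime-agnostic:

```typescript
import { readFile } from "node:fs/promises";

// Attempt the read and treat ENOENT as the expected "missing file" case,
// avoiding both the extra syscall and the exists()-then-open race.
async function readIfPresent(path: string): Promise<string | null> {
  try {
    return await readFile(path, "utf8");
  } catch (err: any) {
    if (err.code === "ENOENT") return null; // file does not exist
    throw err; // permission errors etc. still propagate
  }
}

console.log(await readIfPresent("no-such-file-1f2e3d.txt")); // null
```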

type ArrayBufferView<TArrayBuffer extends ArrayBufferLike = ArrayBufferLike> = NodeJS.TypedArray<TArrayBuffer> | DataView<TArrayBuffer>
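Concretely, ArrayBufferView means "any typed array, or a DataView"; ArrayBuffer.isView() is the runtime counterpart of this type:

```typescript
// Every typed array and DataView is an ArrayBufferView; the raw buffer is not.
const backing = new ArrayBuffer(8);
const floats = new Float64Array(backing); // typed array view
const view = new DataView(backing, 0, 4); // DataView over the first 4 bytes

console.log(ArrayBuffer.isView(floats)); // true
console.log(ArrayBuffer.isView(view)); // true
console.log(ArrayBuffer.isView(backing)); // false
```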

interface S3File

Represents a file in an S3-compatible storage service. Extends the Blob interface for compatibility with web APIs.

  • readonly bucket?: string

    The bucket name containing the file.

    const file = s3.file("s3://my-bucket/file.txt");
    console.log(file.bucket); // "my-bucket"
    
  • readonly name?: string

    The name or path of the file in the bucket.

    const file = s3.file("folder/image.jpg");
    console.log(file.name); // "folder/image.jpg"
    
  • readonly readable: ReadableStream

    Gets a readable stream of the file's content. Useful for processing large files without loading them entirely into memory.

    // Basic streaming read
    const stream = file.stream();
    for await (const chunk of stream) {
      console.log("Received chunk:", chunk);
    }
    
  • readonly size: number
  • readonly type: string
  • arrayBuffer(): Promise<ArrayBuffer>

    Returns a promise that resolves to the contents of the blob as an ArrayBuffer

  • bytes(): Promise<Uint8Array<ArrayBufferLike>>

    Returns a promise that resolves to the contents of the blob as a Uint8Array (array of bytes); it's the same as new Uint8Array(await blob.arrayBuffer())

  • delete(): Promise<void>

    Deletes the file from S3.

    @returns

    Promise that resolves when deletion is complete

    // Basic deletion
        await file.delete();
    
  • exists(): Promise<boolean>

    Checks if the file exists in S3. Uses HTTP HEAD request to efficiently check existence without downloading.

    @returns

    Promise resolving to true if file exists, false otherwise

    // Basic existence check
       if (await file.exists()) {
         console.log("File exists in S3");
       }
    
  • formData(): Promise<FormData>

    Read the data from the blob as a FormData object.

    This first decodes the data from UTF-8, then parses it as a multipart/form-data body or an application/x-www-form-urlencoded body.

    The type property of the blob is used to determine the format of the body.

    This is a non-standard addition to the Blob API, to make it conform more closely to the BodyMixin API.

  • json(): Promise<any>

    Read the data from the blob as a JSON object.

    This first decodes the data from UTF-8, then parses it as JSON.

  • presign(options?: S3FilePresignOptions): string

    Generates a presigned URL for the file. Allows temporary access to the file without exposing credentials.

    @param options

    Configuration for the presigned URL

    @returns

    Presigned URL string

    // Basic download URL
        const url = file.presign({
          expiresIn: 3600 // 1 hour
        });
    
  • slice(begin?: number, end?: number, contentType?: string): S3File

    Creates a new S3File representing a slice of the original file. Uses HTTP Range headers for efficient partial downloads.

    @param begin

    Starting byte offset

    @param end

    Ending byte offset (exclusive)

    @param contentType

    Optional MIME type for the slice

    @returns

    A new S3File representing the specified range

    // Reading file header
        const header = file.slice(0, 1024);
        const headerText = await header.text();
    
    slice(begin?: number, contentType?: string): S3File
    slice(contentType?: string): S3File
  • stat(): Promise<S3Stats>

    Get the stat of a file in an S3-compatible storage service.

    @returns

    Promise resolving to S3Stats

  • text(): Promise<string>

    Returns a promise that resolves to the contents of the blob as a string

  • write(data: string | ArrayBuffer | SharedArrayBuffer | Request | Response | Blob | BunFile | ArrayBufferView<ArrayBufferLike> | S3File, options?: S3Options): Promise<number>

    Uploads data to S3. Supports various input types and automatically handles large files.

    @param data

    The data to upload

    @param options

    Upload configuration options

    @returns

    Promise resolving to number of bytes written

    // Writing string data
        await file.write("Hello World", {
          type: "text/plain"
        });
    
  • writer(options?: S3Options): NetworkSink

    Creates a writable stream for uploading data. Suitable for large files as it uses multipart upload.

    @param options

    Configuration for the upload

    @returns

    A NetworkSink for writing data

    // Basic streaming write
        const writer = file.writer({
          type: "application/json"
        });
        writer.write('{"hello": ');
        writer.write('"world"}');
        await writer.end();
    

interface S3Options

Configuration options for S3 operations

  • accessKeyId?: string

    The access key ID for authentication. Defaults to S3_ACCESS_KEY_ID or AWS_ACCESS_KEY_ID environment variables.

  • acl?: 'private' | 'public-read' | 'public-read-write' | 'aws-exec-read' | 'authenticated-read' | 'bucket-owner-read' | 'bucket-owner-full-control' | 'log-delivery-write'

    The Access Control List (ACL) policy for the file. Controls who can access the file and what permissions they have.

    // Setting public read access
        const file = s3.file("public-file.txt", {
          acl: "public-read",
          bucket: "my-bucket"
        });
    
  • bucket?: string

    The S3 bucket name. Defaults to S3_BUCKET or AWS_BUCKET environment variables.

    // Using explicit bucket
        const file = s3.file("my-file.txt", { bucket: "my-bucket" });
    
  • endings?: EndingType
  • endpoint?: string

    The S3-compatible service endpoint URL. Defaults to S3_ENDPOINT or AWS_ENDPOINT environment variables.

    // AWS S3
        const file = s3.file("my-file.txt", {
          endpoint: "https://s3.us-east-1.amazonaws.com"
        });
    
  • partSize?: number

    The size of each part in multipart uploads (in bytes).

    • Minimum: 5 MiB
    • Maximum: 5120 MiB
    • Default: 5 MiB

    // Configuring multipart uploads
        const file = s3.file("large-file.dat", {
          partSize: 10 * 1024 * 1024, // 10 MiB parts
          queueSize: 4  // Upload 4 parts in parallel
        });
    
        const writer = file.writer();
        // ... write large file in chunks
    
  • queueSize?: number

    Number of parts to upload in parallel for multipart uploads.

    • Default: 5
    • Maximum: 255

    Increasing this value can improve upload speeds for large files but will use more memory.

  • region?: string

    The AWS region. Defaults to S3_REGION or AWS_REGION environment variables.

    const file = s3.file("my-file.txt", {
          bucket: "my-bucket",
          region: "us-west-2"
        });
    
  • retry?: number

    Number of retry attempts for failed uploads.

    • Default: 3
    • Maximum: 255

    // Setting retry attempts
        const file = s3.file("my-file.txt", {
          retry: 5 // Retry failed uploads up to 5 times
        });
    
  • secretAccessKey?: string

    The secret access key for authentication. Defaults to S3_SECRET_ACCESS_KEY or AWS_SECRET_ACCESS_KEY environment variables.

  • sessionToken?: string

    Optional session token for temporary credentials. Defaults to S3_SESSION_TOKEN or AWS_SESSION_TOKEN environment variables.

    // Using temporary credentials
        const file = s3.file("my-file.txt", {
          accessKeyId: tempAccessKey,
          secretAccessKey: tempSecretKey,
          sessionToken: tempSessionToken
        });
    
  • storageClass?: 'STANDARD' | 'DEEP_ARCHIVE' | 'EXPRESS_ONEZONE' | 'GLACIER' | 'GLACIER_IR' | 'INTELLIGENT_TIERING' | 'ONEZONE_IA' | 'OUTPOSTS' | 'REDUCED_REDUNDANCY' | 'SNOW' | 'STANDARD_IA'

    By default, Amazon S3 uses the STANDARD Storage Class to store newly created objects.

    // Setting explicit Storage class
        const file = s3.file("my-file.json", {
          storageClass: "STANDARD_IA"
        });
    
  • type?: string

    The Content-Type of the file. Automatically set based on file extension when possible.

    // Setting explicit content type
        const file = s3.file("data.bin", {
          type: "application/octet-stream"
        });
    
  • virtualHostedStyle?: boolean

    Use a virtual hosted-style endpoint. Defaults to false. When true and an endpoint is provided, the bucket name is taken from the endpoint host, and the bucket option is ignored when building the request URL.

    // Using virtual hosted style
        const file = s3.file("my-file.txt", {
          virtualHostedStyle: true,
          endpoint: "https://my-bucket.s3.us-east-1.amazonaws.com"
        });