S3Client.file

file(path: string, options?: S3Options): S3File

Creates an S3File instance for the given path.

@param path

The path to the file in the bucket

@param options

Additional S3 options to override defaults

@returns

An S3File instance

const file = bucket.file("image.jpg");
    await file.write(imageData);

    const configFile = bucket.file("config.json", {
      type: "application/json",
      acl: "private"
    });

Referenced types

interface S3Options

Configuration options for S3 operations

  • accessKeyId?: string

    The access key ID for authentication. Defaults to S3_ACCESS_KEY_ID or AWS_ACCESS_KEY_ID environment variables.
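
    A minimal sketch of supplying credentials directly rather than via environment variables (the key values here are placeholders):

    // Passing credentials explicitly (placeholder values)
    const file = s3.file("my-file.txt", {
      accessKeyId: "AKIAEXAMPLE",
      secretAccessKey: "example-secret-key"
    });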

  • acl?: 'private' | 'public-read' | 'public-read-write' | 'aws-exec-read' | 'authenticated-read' | 'bucket-owner-read' | 'bucket-owner-full-control' | 'log-delivery-write'

    The Access Control List (ACL) policy for the file. Controls who can access the file and what permissions they have.

    // Setting public read access
    const file = s3.file("public-file.txt", {
      acl: "public-read",
      bucket: "my-bucket"
    });
    
  • bucket?: string

    The S3 bucket name. Defaults to S3_BUCKET or AWS_BUCKET environment variables.

    // Using explicit bucket
    const file = s3.file("my-file.txt", { bucket: "my-bucket" });
    
  • endings?: EndingType
  • endpoint?: string

    The S3-compatible service endpoint URL. Defaults to S3_ENDPOINT or AWS_ENDPOINT environment variables.

    // AWS S3
    const file = s3.file("my-file.txt", {
      endpoint: "https://s3.us-east-1.amazonaws.com"
    });
    
  • partSize?: number

    The size of each part in multipart uploads (in bytes).

    • Minimum: 5 MiB
    • Maximum: 5120 MiB
    • Default: 5 MiB

    // Configuring multipart uploads
    const file = s3.file("large-file.dat", {
      partSize: 10 * 1024 * 1024, // 10 MiB parts
      queueSize: 4  // Upload 4 parts in parallel
    });

    const writer = file.writer();
    // ... write large file in chunks
    
  • queueSize?: number

    Number of parts to upload in parallel for multipart uploads.

    • Default: 5
    • Maximum: 255

    Increasing this value can improve upload speeds for large files but will use more memory.
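
    A sketch of raising parallelism for a large upload; the numbers are illustrative, not recommendations:

    // More parts in flight: faster on high-bandwidth links, more memory
    const file = s3.file("backup.tar", {
      partSize: 8 * 1024 * 1024,
      queueSize: 10
    });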

  • region?: string

    The AWS region. Defaults to S3_REGION or AWS_REGION environment variables.

    const file = s3.file("my-file.txt", {
          bucket: "my-bucket",
          region: "us-west-2"
        });
    
  • retry?: number

    Number of retry attempts for failed uploads.

    • Default: 3
    • Maximum: 255

    // Setting retry attempts
    const file = s3.file("my-file.txt", {
      retry: 5 // Retry failed uploads up to 5 times
    });
    
  • secretAccessKey?: string

    The secret access key for authentication. Defaults to S3_SECRET_ACCESS_KEY or AWS_SECRET_ACCESS_KEY environment variables.

  • sessionToken?: string

    Optional session token for temporary credentials. Defaults to S3_SESSION_TOKEN or AWS_SESSION_TOKEN environment variables.

    // Using temporary credentials
    const file = s3.file("my-file.txt", {
      accessKeyId: tempAccessKey,
      secretAccessKey: tempSecretKey,
      sessionToken: tempSessionToken
    });
    
  • storageClass?: 'STANDARD' | 'DEEP_ARCHIVE' | 'EXPRESS_ONEZONE' | 'GLACIER' | 'GLACIER_IR' | 'INTELLIGENT_TIERING' | 'ONEZONE_IA' | 'OUTPOSTS' | 'REDUCED_REDUNDANCY' | 'SNOW' | 'STANDARD_IA'

    By default, Amazon S3 uses the STANDARD Storage Class to store newly created objects.

    // Setting explicit storage class
    const file = s3.file("my-file.json", {
      storageClass: "STANDARD_IA"
    });
    
  • type?: string

    The Content-Type of the file. Automatically set based on file extension when possible.

    // Setting explicit content type
    const file = s3.file("data.bin", {
      type: "application/octet-stream"
    });
    
  • virtualHostedStyle?: boolean

    Whether to use virtual-hosted-style requests. Defaults to false. When true and an explicit endpoint is provided, the bucket option is ignored, since the bucket name is already part of the endpoint hostname.

    // Using virtual hosted style
    const file = s3.file("my-file.txt", {
      virtualHostedStyle: true,
      endpoint: "https://my-bucket.s3.us-east-1.amazonaws.com"
    });
    

interface S3File

Represents a file in an S3-compatible storage service. Extends the Blob interface for compatibility with web APIs.
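
Because S3File extends Blob, it can be passed to APIs that accept one. A sketch using Bun.serve (the object name is illustrative); as of recent Bun versions, constructing a Response from an S3File responds with a redirect to a presigned URL rather than proxying the bytes:

    // Serving an S3 object over HTTP; Bun redirects to a presigned URL
    const s3file = bucket.file("report.pdf");
    Bun.serve({
      fetch() {
        return new Response(s3file);
      },
    });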

  • readonly bucket?: string

    The bucket name containing the file.

    const file = s3.file("s3://my-bucket/file.txt");
       console.log(file.bucket); // "my-bucket"
    
  • readonly name?: string

    The name or path of the file in the bucket.

    const file = s3.file("folder/image.jpg");
    console.log(file.name); // "folder/image.jpg"
    
  • readonly readable: ReadableStream

    Gets a readable stream of the file's content. Useful for processing large files without loading them entirely into memory.

    // Basic streaming read
    const stream = file.stream();
    for await (const chunk of stream) {
      console.log('Received chunk:', chunk);
    }
    
  • readonly size: number
  • readonly type: string
  • arrayBuffer(): Promise<ArrayBuffer>

    Returns a promise that resolves to the contents of the blob as an ArrayBuffer.

  • bytes(): Promise<Uint8Array<ArrayBufferLike>>

    Returns a promise that resolves to the contents of the blob as a Uint8Array (an array of bytes). It's equivalent to new Uint8Array(await blob.arrayBuffer()).
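
    For example, reading an object's raw bytes (the object name is illustrative):

    const file = s3.file("image.png");
    const bytes = await file.bytes();
    console.log(bytes.length); // total size in bytes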

  • delete(): Promise<void>

    Deletes the file from S3.

    @returns

    Promise that resolves when deletion is complete

    // Basic deletion
    await file.delete();
    
  • exists(): Promise<boolean>

    Checks if the file exists in S3. Uses HTTP HEAD request to efficiently check existence without downloading.

    @returns

    Promise resolving to true if file exists, false otherwise

    // Basic existence check
    if (await file.exists()) {
      console.log("File exists in S3");
    }
    
  • formData(): Promise<FormData>

    Read the data from the blob as a FormData object.

    This first decodes the data from UTF-8, then parses it as a multipart/form-data body or an application/x-www-form-urlencoded body.

    The type property of the blob is used to determine the format of the body.

    This is a non-standard addition to the Blob API, to make it conform more closely to the BodyMixin API.
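
    A minimal sketch, assuming the stored object holds URL-encoded form data (the object name and field are illustrative):

    const file = s3.file("submission.txt", {
      type: "application/x-www-form-urlencoded"
    });
    const form = await file.formData();
    console.log(form.get("email"));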

  • json(): Promise<any>

    Read the data from the blob as a JSON object.

    This first decodes the data from UTF-8, then parses it as JSON.
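
    For example, loading a JSON configuration object (the object name is illustrative):

    const config = await s3.file("config.json").json();
    console.log(config.version);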

  • presign(options?: S3FilePresignOptions): string

    Generates a presigned URL for the file. Allows temporary access to the file without exposing credentials.

    @param options

    Configuration for the presigned URL

    @returns

    Presigned URL string

    // Basic download URL
    const url = file.presign({
      expiresIn: 3600 // 1 hour
    });
    
  • slice(begin?: number, end?: number, contentType?: string): S3File

    Creates a new S3File representing a slice of the original file. Uses HTTP Range headers for efficient partial downloads.

    @param begin

    Starting byte offset

    @param end

    Ending byte offset (exclusive)

    @param contentType

    Optional MIME type for the slice

    @returns

    A new S3File representing the specified range

    // Reading file header
    const header = file.slice(0, 1024);
    const headerText = await header.text();

    Overloads:

    slice(begin?: number, contentType?: string): S3File
    slice(contentType?: string): S3File
  • stat(): Promise<S3Stats>

    Get the stat of a file in an S3-compatible storage service.

    @returns

    Promise resolving to S3Stats
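
    A short sketch, assuming S3Stats exposes fields such as size and lastModified:

    // Inspect object metadata without downloading the body
    const stat = await file.stat();
    console.log(stat.size, stat.lastModified);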

  • text(): Promise<string>

    Returns a promise that resolves to the contents of the blob as a string.
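
    For example (the object name is illustrative):

    const text = await s3.file("readme.txt").text();
    console.log(text);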

  • write(data: string | ArrayBuffer | SharedArrayBuffer | Request | Response | Blob | BunFile | ArrayBufferView<ArrayBufferLike> | S3File, options?: S3Options): Promise<number>

    Uploads data to S3. Supports various input types and automatically handles large files.

    @param data

    The data to upload

    @param options

    Upload configuration options

    @returns

    Promise resolving to number of bytes written

    // Writing string data
    await file.write("Hello World", {
      type: "text/plain"
    });
    
  • writer(options?: S3Options): NetworkSink

    Creates a writable stream for uploading data. Suitable for large files as it uses multipart upload.

    @param options

    Configuration for the upload

    @returns

    A NetworkSink for writing data

    // Basic streaming write
    const writer = file.writer({
      type: "application/json"
    });
    writer.write('{"hello": ');
    writer.write('"world"}');
    await writer.end();