Delete a file from the bucket. Alias for S3Client.unlink.
Symbol
S3Client.delete
Parameters
- path: string
The path to the file in the bucket
- options?: S3Options
Additional S3 options to override defaults
Returns
A promise that resolves when deletion is complete
// Simple delete
await bucket.delete("old-file.txt");
// With error handling
try {
await bucket.delete("file.dat");
console.log("File deleted");
} catch (err) {
console.error("Delete failed:", err);
}
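The second argument accepts the S3Options described under Referenced types; as a minimal sketch (the bucket name here is a placeholder), a per-call override looks like:
// Deleting from a different bucket for this call only
await bucket.delete("old-file.txt", { bucket: "archive-bucket" });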
Referenced types
interface S3Options
Configuration options for S3 operations
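A minimal sketch of supplying these options up front, assuming Bun's S3Client constructor (values shown are placeholders; unset fields fall back to the environment variables noted below):
// Creating a client with explicit options
import { S3Client } from "bun";

const s3 = new S3Client({
  bucket: "my-bucket",
  region: "us-east-1",
  endpoint: "https://s3.us-east-1.amazonaws.com"
});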
- accessKeyId?: string
The access key ID for authentication. Defaults to the S3_ACCESS_KEY_ID or AWS_ACCESS_KEY_ID environment variables.
- acl?: 'private' | 'public-read' | 'public-read-write' | 'aws-exec-read' | 'authenticated-read' | 'bucket-owner-read' | 'bucket-owner-full-control' | 'log-delivery-write'
The Access Control List (ACL) policy for the file. Controls who can access the file and what permissions they have.
// Setting public read access
const file = s3.file("public-file.txt", {
  acl: "public-read",
  bucket: "my-bucket"
});
- bucket?: string
The S3 bucket name. Defaults to the S3_BUCKET or AWS_BUCKET environment variables.
// Using explicit bucket
const file = s3.file("my-file.txt", {
  bucket: "my-bucket"
});
- endpoint?: string
The S3-compatible service endpoint URL. Defaults to the S3_ENDPOINT or AWS_ENDPOINT environment variables.
// AWS S3
const file = s3.file("my-file.txt", {
  endpoint: "https://s3.us-east-1.amazonaws.com"
});
- partSize?: number
The size of each part in multipart uploads (in bytes).
- Minimum: 5 MiB
- Maximum: 5120 MiB
- Default: 5 MiB
// Configuring multipart uploads
const file = s3.file("large-file.dat", {
  partSize: 10 * 1024 * 1024, // 10 MiB parts
  queueSize: 4 // Upload 4 parts in parallel
});
const writer = file.writer();
// ... write large file in chunks
- queueSize?: number
Number of parts to upload in parallel for multipart uploads.
- Default: 5
- Maximum: 255
Increasing this value can improve upload speeds for large files but will use more memory.
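Since each in-flight part is buffered, peak buffer memory grows roughly with partSize * queueSize; a sketch with hypothetical values:
// Up to 8 parts of 10 MiB in flight: roughly 80 MiB of upload buffers
const file = s3.file("huge-file.bin", {
  partSize: 10 * 1024 * 1024, // 10 MiB
  queueSize: 8
});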
- region?: string
The AWS region. Defaults to the S3_REGION or AWS_REGION environment variables.
const file = s3.file("my-file.txt", {
  bucket: "my-bucket",
  region: "us-west-2"
});
- retry?: number
Number of retry attempts for failed uploads.
- Default: 3
- Maximum: 255
// Setting retry attempts
const file = s3.file("my-file.txt", {
  retry: 5 // Retry failed uploads up to 5 times
});
- secretAccessKey?: string
The secret access key for authentication. Defaults to the S3_SECRET_ACCESS_KEY or AWS_SECRET_ACCESS_KEY environment variables.
- sessionToken?: string
Optional session token for temporary credentials. Defaults to the S3_SESSION_TOKEN or AWS_SESSION_TOKEN environment variables.
// Using temporary credentials
const file = s3.file("my-file.txt", {
  accessKeyId: tempAccessKey,
  secretAccessKey: tempSecretKey,
  sessionToken: tempSessionToken
});
- storageClass?: 'STANDARD' | 'DEEP_ARCHIVE' | 'EXPRESS_ONEZONE' | 'GLACIER' | 'GLACIER_IR' | 'INTELLIGENT_TIERING' | 'ONEZONE_IA' | 'OUTPOSTS' | 'REDUCED_REDUNDANCY' | 'SNOW' | 'STANDARD_IA'
By default, Amazon S3 uses the STANDARD Storage Class to store newly created objects.
// Setting explicit storage class
const file = s3.file("my-file.json", {
  storageClass: "STANDARD_IA"
});
- type?: string
The Content-Type of the file. Automatically set based on file extension when possible.
// Setting explicit content type
const file = s3.file("data.bin", {
  type: "application/octet-stream"
});
- virtualHostedStyle?: boolean
Use a virtual hosted-style endpoint. Defaults to false. When true and an endpoint is provided, the bucket option is ignored (the bucket name is taken from the endpoint).
// Using virtual hosted style
const file = s3.file("my-file.txt", {
  virtualHostedStyle: true,
  endpoint: "https://my-bucket.s3.us-east-1.amazonaws.com"
});