README

kelindar/s3

Slim AWS S3 client

A lightweight, high-performance AWS S3 client library for Go that implements the standard fs.FS interface, allowing you to work with S3 buckets as if they were local filesystems.

Attribution: This library is extracted from Sneller's lightweight S3 client. Most of the credit goes to the Sneller team for the original implementation and design.

Features

  • Standard fs.FS Interface: Compatible with any Go code that accepts fs.FS
  • Lightweight: Minimal dependencies, focused on performance
  • Range Reads: Efficient partial file reading with HTTP range requests
  • Multi-part Uploads: Support for large file uploads
  • Pattern Matching: Built-in glob pattern support for file listing
  • Context Support: Full context cancellation support
  • Lazy Loading: Optional HEAD-only requests until actual read
  • Multiple Auth Methods: Environment variables, IAM roles, manual keys

Use When:

  • ✅ Building applications that need to treat S3 as a filesystem (compatible with fs.FS)
  • ✅ Requiring lightweight, minimal-dependency S3 operations
  • ✅ Working with large files that benefit from range reads and multipart uploads

Not For:

  • ❌ Applications requiring the full AWS SDK feature set (SQS, DynamoDB, etc.)
  • ❌ Requiring advanced S3 features (bucket policies, lifecycle, object locking, versioning, etc.)
  • ❌ Projects that need official AWS support and enterprise features

Quick Start

package main

import (
    "context"
    "fmt"
    "io"

    "github.com/kelindar/s3"
    "github.com/kelindar/s3/aws"
)

func main() {
    // Create signing key from ambient credentials
    key, err := aws.AmbientKey("s3", s3.DeriveForBucket("my-bucket"))
    if err != nil {
        panic(err)
    }

    // Create Bucket instance
    bucket := s3.NewBucket(key, "my-bucket")
    
    // Upload a file
    etag, err := bucket.Write(context.Background(), "hello.txt", []byte("Hello, World!"))
    if err != nil {
        panic(err)
    }
    fmt.Printf("Uploaded with ETag: %s\n", etag)
    
    // Read the file back
    file, err := bucket.Open("hello.txt")
    if err != nil {
        panic(err)
    }
    defer file.Close()
    
    content, err := io.ReadAll(file)
    if err != nil {
        panic(err)
    }
    fmt.Printf("Content: %s\n", content)
}

Ambient Credentials

This is the recommended way to use the library: aws.AmbientKey automatically discovers credentials from the environment, IAM roles, and other sources. It supports the following sources:

  • Environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
  • IAM roles (EC2, ECS, Lambda)
  • AWS credentials file (~/.aws/credentials)
  • Web identity tokens

key, err := aws.AmbientKey("s3", s3.DeriveForBucket("my-bucket"))

Manual Credentials

If you prefer to manage credentials manually, you can derive a signing key directly:

key := aws.DeriveKey(
    "",                    // baseURI (empty for AWS S3)
    "your-access-key",     // AWS Access Key ID
    "your-secret-key",     // AWS Secret Key
    "us-east-1",          // AWS Region
    "s3",                 // Service
)

Bucket Options

You can customize the behavior of the bucket by setting options:

bucket := s3.NewBucket(key, "my-bucket")
bucket.Client = httpClient   // Optional: Custom HTTP client
bucket.Lazy = true           // Optional: Use HEAD instead of GET for Open()

File Operations

If you need to work with files, the library provides standard fs.FS operations. Here's an example of uploading, reading, and checking for file existence:

// Upload a file
etag, err := bucket.Write(context.Background(), "path/to/file.txt", []byte("content"))

// Read a file
file, err := bucket.Open("path/to/file.txt")
if err != nil {
    panic(err)
}
defer file.Close()
content, err := io.ReadAll(file)

// Check if a file exists
if _, err := bucket.Open("path/to/file.txt"); errors.Is(err, fs.ErrNotExist) {
    fmt.Println("File does not exist")
}

Directory Operations

If you need to work with directories, the library provides standard fs.ReadDirFS operations. Here's an example of listing directory contents and walking the directory tree:

// List directory contents
entries, err := fs.ReadDir(bucket, "path/to/directory")
if err != nil {
    panic(err)
}
for _, entry := range entries {
    fmt.Printf("%s (dir: %t)\n", entry.Name(), entry.IsDir())
}

// Walk directory tree
err = fs.WalkDir(bucket, ".", func(path string, d fs.DirEntry, err error) error {
    if err != nil {
        return err
    }
    fmt.Printf("Found: %s\n", path)
    return nil
})

Pattern Matching

The library supports pattern matching using the fsutil.WalkGlob function. Here's an example of finding all .txt files:

import (
    "github.com/kelindar/s3/fsutil"
)

// Find all .txt files
err := fsutil.WalkGlob(bucket, "", "*.txt", func(path string, f fs.File, err error) error {
    if err != nil {
        return err
    }
    defer f.Close()
    fmt.Printf("Text file: %s\n", path)
    return nil
})

Range Reads

If you need to read a specific range of bytes from a file, you can use the OpenRange function. In the following example, we read the first 1KB of a file:

// Read first 1KB of a file
reader, err := bucket.OpenRange("large-file.dat", "", 0, 1024)
if err != nil {
    panic(err)
}
defer reader.Close()

data, err := io.ReadAll(reader)

Multi-part Upload

For large files, you can use the WriteFrom method, which handles multipart uploads automatically. This is more convenient than managing upload parts manually:

// Open a large file
file, err := os.Open("large-file.dat")
if err != nil {
    panic(err)
}
defer file.Close()

// Get file size
stat, err := file.Stat()
if err != nil {
    panic(err)
}

// Upload using multipart upload (automatically used for files > 5MB)
err = bucket.WriteFrom(context.Background(), "large-file.dat", file, stat.Size())
if err != nil {
    panic(err)
}

The WriteFrom method automatically:

  • Determines optimal part size based on file size
  • Uploads parts in parallel for better performance
  • Handles multipart upload initialization and completion
  • Respects context cancellation for upload control

Working with Subdirectories

You can work with subdirectories by creating a sub-filesystem using the Sub method. In the following example, we create a sub-filesystem for the data/2023/ prefix and list all files within that prefix:

import "io/fs"

// Create a sub-filesystem for a specific prefix
subFS, err := bucket.Sub("data/2023/")
if err != nil {
    panic(err)
}

// Now work within that prefix
files, err := fs.ReadDir(subFS, ".")

Error Handling

The library uses standard Go fs package errors. You can check for specific errors using the errors.Is function:

import (
    "errors"
    "fmt"
    "io/fs"
)

file, err := bucket.Open("nonexistent.txt")
if errors.Is(err, fs.ErrNotExist) {
    fmt.Println("File not found")
} else if errors.Is(err, fs.ErrPermission) {
    fmt.Println("Access denied")
}

Testing

Set environment variables for integration tests:

export AWS_TEST_BUCKET=your-test-bucket
go test ./...

License

Licensed under the Apache License, Version 2.0. See LICENSE file for details.

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

Documentation

Overview

Package s3 implements a lightweight client of the AWS S3 API.

The Reader type can be used to view S3 objects as an io.Reader or io.ReaderAt.

Index

Constants

const (
	MinPartSize = 5 * 1024 * 1024
	MaxParts    = 10000 // AWS limit
)

Default upload configuration values

Variables

var (
	// ErrInvalidBucket is returned from calls that attempt
	// to use a bucket name that isn't valid according to
	// the S3 specification.
	ErrInvalidBucket = errors.New("invalid bucket name")
	// ErrETagChanged is returned from read operations where
	// the ETag of the underlying file has changed since
	// the file handle was constructed. (This package guarantees
	// that file read operations are always consistent with respect
	// to the ETag originally associated with the file handle.)
	ErrETagChanged = errors.New("file ETag changed")
)

var DefaultClient = http.Client{
	Transport: &http.Transport{
		ResponseHeaderTimeout: 60 * time.Second,

		MaxIdleConnsPerHost: 5,

		DisableCompression: true,

		DialContext: (&net.Dialer{
			Timeout: 2 * time.Second,
		}).DialContext,
	},
}

DefaultClient is the default HTTP client used for requests made from this package.

Functions

func BucketRegion

func BucketRegion(k *aws.SigningKey, bucket string) (string, error)

BucketRegion returns the region associated with the given bucket.
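
A minimal sketch of resolving a bucket's region, assuming ambient credentials are available (the bucket name is a placeholder):

key, err := aws.AmbientKey("s3", s3.DeriveForBucket("my-bucket"))
if err != nil {
	panic(err)
}
region, err := s3.BucketRegion(key, "my-bucket")
if err != nil {
	panic(err)
}
fmt.Println("bucket region:", region)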

func DeriveForBucket

func DeriveForBucket(bucket string) aws.DeriveFn

DeriveForBucket can be passed to aws.AmbientCreds as a DeriveFn that automatically re-derives keys so that they apply to the region in which the given bucket lives.

func URL

func URL(k *aws.SigningKey, bucket, object string) (string, error)

URL returns a signed URL for a bucket and object that can be used directly with http.Get.
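
A hedged sketch of fetching an object through a signed URL (bucket and object names are placeholders; requires net/http):

url, err := s3.URL(key, "my-bucket", "hello.txt")
if err != nil {
	panic(err)
}
resp, err := http.Get(url) // the URL is pre-signed, so no additional auth is needed
if err != nil {
	panic(err)
}
defer resp.Body.Close()
body, err := io.ReadAll(resp.Body)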

func ValidBucket

func ValidBucket(bucket string) bool

ValidBucket returns whether or not bucket is a valid bucket name.

See https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucketnamingrules.html

Note: ValidBucket does not allow '.' characters, since bucket names containing dots are not accessible over HTTPS. (AWS docs say "not recommended for uses other than static website hosting.")
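
For example:

s3.ValidBucket("my-bucket") // true
s3.ValidBucket("my.bucket") // false: dots are rejected, per the note above
s3.ValidBucket("My_Bucket") // false: uppercase letters and underscores are not allowed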

Types

type Bucket

type Bucket struct {
	Client *http.Client // HTTP client used for requests, if nil then DefaultClient is used
	Lazy   bool         // If true, causes the initial Open call to use a HEAD operation rather than a GET operation.
	// contains filtered or unexported fields
}

Bucket implements fs.FS, fs.ReadDirFS, and fs.SubFS.

func NewBucket

func NewBucket(key *aws.SigningKey, bucket string) *Bucket

NewBucket creates a new Bucket instance.

func (*Bucket) Delete added in v0.0.2

func (b *Bucket) Delete(ctx context.Context, fullpath string) error

Delete removes the object at fullpath.
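
A minimal sketch (the object path is a placeholder):

if err := bucket.Delete(context.Background(), "path/to/file.txt"); err != nil {
	panic(err)
}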

func (*Bucket) Open

func (b *Bucket) Open(name string) (fs.File, error)

Open implements fs.FS.Open

The returned fs.File will be either a *File or a *Prefix depending on whether name refers to an object or a common path prefix that leads to multiple objects. If name does not refer to an object or a path prefix, then Open returns an error matching fs.ErrNotExist.

func (*Bucket) OpenRange

func (b *Bucket) OpenRange(name, etag string, start, width int64) (io.ReadCloser, error)

OpenRange produces an io.ReadCloser that reads data from the file given by [name] with the etag given by [etag] starting at byte [start] and continuing for [width] bytes. If [etag] does not match the ETag of the object, then ErrETagChanged will be returned.

func (*Bucket) ReadDir

func (b *Bucket) ReadDir(name string) ([]fs.DirEntry, error)

ReadDir implements fs.ReadDirFS

func (*Bucket) Sub

func (b *Bucket) Sub(dir string) (fs.FS, error)

Sub implements fs.SubFS.Sub.

func (*Bucket) VisitDir

func (b *Bucket) VisitDir(name, seek, pattern string, walk fsutil.VisitDirFn) error

VisitDir implements fsutil.VisitDirFS

func (*Bucket) Write added in v0.0.2

func (b *Bucket) Write(ctx context.Context, key string, contents []byte) (string, error)

Write performs a PutObject operation at the object key 'key' and returns the ETag of the newly-created object.

func (*Bucket) WriteFrom added in v0.0.2

func (b *Bucket) WriteFrom(ctx context.Context, key string, r io.ReaderAt, size int64) error

WriteFrom performs a multipart upload of data from an io.ReaderAt to the specified key.

type File

type File struct {
	Reader // Reader is a reader that points to the associated s3 object.
	// contains filtered or unexported fields
}

File implements fs.File

func NewFile

func NewFile(k *aws.SigningKey, bucket, object, etag string, size int64) *File

NewFile constructs a File that points to the given bucket, object, etag, and file size. The caller is assumed to have correctly determined these attributes in advance; this call does not perform any I/O to verify that the provided object exists or has a matching ETag and size.
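
A hedged sketch, assuming etag and size were captured from an earlier listing or Stat call (all names are placeholders):

f := s3.NewFile(key, "my-bucket", "data/part-0001.bin", etag, size)
defer f.Close()

buf := make([]byte, 4096)
n, err := f.ReadAt(buf, 0) // the first actual I/O happens here, not in NewFile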

func Open

func Open(k *aws.SigningKey, bucket, object string, contents bool) (*File, error)

Open performs a GET on an S3 object and returns the associated File.

func (*File) Close

func (f *File) Close() error

Close implements fs.File.Close

func (*File) Info

func (f *File) Info() (fs.FileInfo, error)

Info implements fs.DirEntry.Info

Info returns exactly the same thing as f.Stat

func (*File) IsDir

func (f *File) IsDir() bool

IsDir implements fs.DirEntry.IsDir. IsDir always returns false.

func (*File) ModTime

func (f *File) ModTime() time.Time

ModTime implements fs.DirEntry.ModTime. This returns the same value as f.Reader.LastModified.

func (*File) Mode

func (f *File) Mode() fs.FileMode

Mode implements fs.FileInfo.Mode

func (*File) Name

func (f *File) Name() string

Name implements fs.FileInfo.Name

func (*File) Open

func (f *File) Open() (fs.File, error)

Open implements fsutil.Opener

func (*File) Path

func (f *File) Path() string

Path returns the full path to the S3 object within its bucket. See also blockfmt.NamedFile

func (*File) Read

func (f *File) Read(p []byte) (int, error)

Read implements fs.File.Read

Note: Read is not safe to call from multiple goroutines simultaneously. Use ReadAt for parallel reads.

Also note: the first call to Read performs an HTTP request to S3 to read the entire contents of the object starting at the current read offset (zero by default, or another offset set via Seek). If you need to read a sub-range of the object, consider using f.Reader.RangeReader
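
For instance, a sketch of reading only the first 4 KiB through the embedded Reader rather than issuing a full-object GET:

rc, err := f.Reader.RangeReader(0, 4096)
if err != nil {
	panic(err)
}
defer rc.Close()
head, err := io.ReadAll(rc)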

func (*File) Seek

func (f *File) Seek(offset int64, whence int) (int64, error)

Seek implements io.Seeker

Seek rejects offsets that are beyond the size of the underlying object.

func (*File) Size

func (f *File) Size() int64

func (*File) Stat

func (f *File) Stat() (fs.FileInfo, error)

Stat implements fs.File.Stat

func (*File) Sys

func (f *File) Sys() interface{}

Sys implements fs.FileInfo.Sys.

func (*File) Type

func (f *File) Type() fs.FileMode

Type implements fs.DirEntry.Type

Type returns exactly the same thing as f.Mode

type Prefix

type Prefix struct {
	Key    *aws.SigningKey `xml:"-"`      // Key is the signing key used to sign requests.
	Client *http.Client    `xml:"-"`      // Client is the HTTP client used to make requests. If it is nil, then DefaultClient will be used.
	Bucket string          `xml:"-"`      // Bucket is the bucket at the root of the "filesystem"
	Path   string          `xml:"Prefix"` // Path is the path of this prefix, should always be a valid path  (see fs.ValidPath) plus a trailing forward slash to indicate that this is a pseudo-directory prefix.
	// contains filtered or unexported fields
}

Prefix implements fs.File, fs.ReadDirFile, fs.DirEntry, and fs.FS.

func (*Prefix) Close

func (p *Prefix) Close() error

Close implements fs.File.Close

func (*Prefix) Info

func (p *Prefix) Info() (fs.FileInfo, error)

Info implements fs.DirEntry.Info

func (*Prefix) IsDir

func (p *Prefix) IsDir() bool

IsDir implements fs.FileInfo.IsDir

func (*Prefix) ModTime

func (p *Prefix) ModTime() time.Time

ModTime implements fs.FileInfo.ModTime

Note: currently ModTime returns the zero time.Time, as S3 prefixes don't have a meaningful modification time.

func (*Prefix) Mode

func (p *Prefix) Mode() fs.FileMode

Mode implements fs.FileInfo.Mode

func (*Prefix) Name

func (p *Prefix) Name() string

Name implements fs.DirEntry.Name

func (*Prefix) Open

func (p *Prefix) Open(file string) (fs.File, error)

Open opens the object or pseudo-directory at the provided path. The returned fs.File will be a *File if the combined Prefix and path lead to an object; if the combined prefix and path produce another complete object prefix, then a *Prefix will be returned. If the combined prefix and path do not produce a prefix that is present within the target bucket, then an error matching fs.ErrNotExist is returned.

func (*Prefix) Read

func (p *Prefix) Read(_ []byte) (int, error)

Read implements fs.File.Read.

Read always returns an error.

func (*Prefix) ReadDir

func (p *Prefix) ReadDir(n int) ([]fs.DirEntry, error)

ReadDir implements fs.ReadDirFile

Every returned fs.DirEntry will be either a Prefix or a File struct.
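
A sketch of paging through a large prefix, relying on the standard fs.ReadDirFile contract in which io.EOF signals the end of the listing (p is assumed to be a *Prefix returned from Open):

for {
	entries, err := p.ReadDir(100) // up to 100 entries per call
	for _, e := range entries {
		fmt.Println(e.Name())
	}
	if err == io.EOF {
		break
	}
	if err != nil {
		panic(err)
	}
}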

func (*Prefix) Size

func (p *Prefix) Size() int64

Size implements fs.FileInfo.Size

func (*Prefix) Stat

func (p *Prefix) Stat() (fs.FileInfo, error)

Stat implements fs.File.Stat

func (*Prefix) Sys

func (p *Prefix) Sys() interface{}

Sys implements fs.FileInfo.Sys

func (*Prefix) Type

func (p *Prefix) Type() fs.FileMode

Type implements fs.DirEntry.Type

func (*Prefix) VisitDir

func (p *Prefix) VisitDir(name, seek, pattern string, walk fsutil.VisitDirFn) error

VisitDir implements fsutil.VisitDirFS

type Reader

type Reader struct {
	// Key is the signing key that
	// Reader uses to make HTTP requests.
	// The key may have to be refreshed
	// every so often (see aws.SigningKey.Expired)
	Key *aws.SigningKey `xml:"-"`

	// Client is the HTTP client used to
	// make HTTP requests. By default it is
	// populated with DefaultClient, but
	// it may be set to any reasonable http client
	// implementation.
	Client *http.Client `xml:"-"`

	// ETag is the ETag of the object in S3
	// as returned by listing or a HEAD operation.
	ETag string `xml:"ETag"`
	// LastModified is the object's LastModified time
	// as returned by listing or a HEAD operation.
	LastModified time.Time `xml:"LastModified"`
	// Size is the object size in bytes.
	// It is populated on Open.
	Size int64 `xml:"Size"`
	// Bucket is the S3 bucket holding the object.
	Bucket string `xml:"-"`
	// Path is the S3 object key.
	Path string `xml:"Key"`
}

Reader presents a read-only view of an S3 object

func Stat

func Stat(k *aws.SigningKey, bucket, object string) (*Reader, error)

Stat performs a HEAD on an S3 object and returns an associated Reader.
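
A minimal sketch of inspecting object metadata without downloading the object (names are placeholders):

r, err := s3.Stat(key, "my-bucket", "large-file.dat")
if err != nil {
	panic(err)
}
fmt.Printf("%s: %d bytes, ETag %s, modified %s\n", r.Path, r.Size, r.ETag, r.LastModified)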

func (*Reader) RangeReader

func (r *Reader) RangeReader(off, width int64) (io.ReadCloser, error)

RangeReader produces an io.ReadCloser that reads bytes in the range from [off, off+width)

It is the caller's responsibility to call Close() on the returned io.ReadCloser.
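
For example, a sketch reading bytes [4096, 8192) of the object behind r:

rc, err := r.RangeReader(4096, 4096)
if err != nil {
	panic(err)
}
defer rc.Close()
chunk, err := io.ReadAll(rc)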

func (*Reader) ReadAt

func (r *Reader) ReadAt(dst []byte, off int64) (int, error)

ReadAt implements io.ReaderAt
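
Because ReadAt is safe for concurrent use (see the note on File.Read above), disjoint ranges can be fetched in parallel. A hedged sketch with an illustrative 1 MiB chunk size (requires sync and log):

const chunk = 1 << 20 // 1 MiB per goroutine; tune for your workload
buf := make([]byte, r.Size)

var wg sync.WaitGroup
for off := int64(0); off < r.Size; off += chunk {
	off := off // capture the loop variable
	wg.Add(1)
	go func() {
		defer wg.Done()
		end := off + chunk
		if end > r.Size {
			end = r.Size
		}
		if _, err := r.ReadAt(buf[off:end], off); err != nil && err != io.EOF {
			log.Println("range read failed:", err)
		}
	}()
}
wg.Wait()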

func (*Reader) WriteTo

func (r *Reader) WriteTo(w io.Writer) (int64, error)

WriteTo implements io.WriterTo
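
A sketch of streaming an object straight to a local file; calling WriteTo avoids buffering the whole object in memory (the destination filename is a placeholder):

dst, err := os.Create("local-copy.dat")
if err != nil {
	panic(err)
}
defer dst.Close()

n, err := r.WriteTo(dst)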

Directories

Path Synopsis
aws	Package aws is a lightweight implementation of the AWS API signature algorithms.
