S3 Backend
class keg_storage.backends.s3.S3Storage(bucket, aws_region, aws_access_key_id=None, aws_secret_access_key=None, aws_profile=None, name='s3')
class keg_storage.backends.s3.S3Reader(bucket, filename, client)
class keg_storage.backends.s3.S3Writer(bucket, filename, client, chunk_size=10485760)

Writes to S3 are quite a bit more complicated than reads. To support large files, we cannot write in a single operation, and since the API does not encourage streaming writes, we make use of the multipart upload API.
The process can be summarized as:

- Create a multipart upload and get an upload key to use with subsequent calls.
- Upload "parts" of the file using the upload key and get back an ID for each part.
- Combine the parts using the upload key and all the part IDs from the above steps.
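The three steps above map directly onto the S3 multipart upload calls (the boto3 client method names `create_multipart_upload`, `upload_part`, and `complete_multipart_upload`). The sketch below drives that sequence against a hypothetical in-memory stand-in client so it can run without AWS credentials; `FakeS3Client` is not part of keg_storage:

```python
class FakeS3Client:
    """Hypothetical in-memory stand-in for boto3's S3 client."""

    def __init__(self):
        self.parts = {}
        self.objects = {}

    def create_multipart_upload(self, Bucket, Key):
        # Step 1: start the upload and hand back an upload key.
        upload_id = "upload-1"
        self.parts[upload_id] = {}
        return {"UploadId": upload_id}

    def upload_part(self, Bucket, Key, UploadId, PartNumber, Body):
        # Step 2: store one part; the returned ETag identifies it.
        self.parts[UploadId][PartNumber] = Body
        return {"ETag": f"etag-{PartNumber}"}

    def complete_multipart_upload(self, Bucket, Key, UploadId, MultipartUpload):
        # Step 3: stitch the uploaded parts together into the final object.
        ordered = sorted(self.parts.pop(UploadId).items())
        self.objects[Key] = b"".join(body for _, body in ordered)


client = FakeS3Client()
resp = client.create_multipart_upload(Bucket="demo", Key="big.bin")
upload_id = resp["UploadId"]

etags = []
for number, chunk in enumerate([b"hello ", b"world"], start=1):
    part = client.upload_part(
        Bucket="demo", Key="big.bin", UploadId=upload_id,
        PartNumber=number, Body=chunk,
    )
    etags.append({"PartNumber": number, "ETag": part["ETag"]})

client.complete_multipart_upload(
    Bucket="demo", Key="big.bin", UploadId=upload_id,
    MultipartUpload={"Parts": etags},
)
```

With the real boto3 client the calls take the same shape, but `complete_multipart_upload` requires the part numbers and ETags exactly as returned by `upload_part`.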
The chunked nature of the uploads should be mostly invisible to the caller, since S3Writer maintains a local buffer.

Because creating a multipart upload itself has a real cost and there is no guarantee that anything will actually be written, we initialize the multipart upload lazily.
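The buffering and lazy initialization described above can be sketched as follows. This is a simplified illustration, not keg_storage's actual implementation; the `start_upload` and `upload_part` callables stand in for the real S3 client calls:

```python
class LazyBufferedWriter:
    """Sketch: buffer writes locally; start the multipart upload on first flush."""

    def __init__(self, start_upload, upload_part, chunk_size=10 * 1024 * 1024):
        self.start_upload = start_upload  # called once, lazily
        self.upload_part = upload_part    # called per flushed chunk
        self.chunk_size = chunk_size
        self.buffer = bytearray()
        self.upload_id = None             # None until the first flush
        self.part_number = 0

    def write(self, data: bytes):
        self.buffer.extend(data)
        # Flush full chunks; anything smaller stays buffered locally.
        while len(self.buffer) >= self.chunk_size:
            self._flush(self.chunk_size)

    def _flush(self, size):
        if self.upload_id is None:
            # Lazy: the multipart upload is only created once there is
            # actually data to send.
            self.upload_id = self.start_upload()
        chunk, self.buffer = bytes(self.buffer[:size]), self.buffer[size:]
        self.part_number += 1
        self.upload_part(self.upload_id, self.part_number, chunk)


calls = []
writer = LazyBufferedWriter(
    start_upload=lambda: calls.append("start") or "uid",
    upload_part=lambda uid, n, chunk: calls.append((n, chunk)),
    chunk_size=4,
)
writer.write(b"ab")    # below chunk_size: nothing sent, no upload created
assert calls == []
writer.write(b"cdef")  # crosses chunk_size: upload starts, one part flushed
```

Note that S3 imposes a 5 MiB minimum on all multipart parts except the last, which is one reason the default `chunk_size` is well above that floor.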
abort()

Use this if, for some reason, you want to discard all the data written and not create an S3 object.
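In S3 terms, discarding a started multipart upload means calling `abort_multipart_upload` (a real boto3/S3 operation) so that already-uploaded parts are not retained or billed. A hedged sketch of what an abort might look like, using a hypothetical recording client rather than keg_storage's actual code:

```python
class AbortableWriter:
    """Sketch of abort(): drop buffered data and any started upload."""

    def __init__(self, client, bucket, key):
        self.client, self.bucket, self.key = client, bucket, key
        self.upload_id = None
        self.buffer = bytearray()

    def abort(self):
        # Drop locally buffered, not-yet-uploaded data.
        self.buffer.clear()
        if self.upload_id is not None:
            # Tell S3 to discard any parts already uploaded server-side.
            self.client.abort_multipart_upload(
                Bucket=self.bucket, Key=self.key, UploadId=self.upload_id,
            )
            self.upload_id = None


class RecordingClient:
    """Hypothetical client that records abort calls for the demo."""

    def __init__(self):
        self.aborted = []

    def abort_multipart_upload(self, Bucket, Key, UploadId):
        self.aborted.append(UploadId)


client = RecordingClient()
writer = AbortableWriter(client, "demo", "big.bin")
writer.buffer.extend(b"data")
writer.upload_id = "uid"  # pretend a multipart upload was already started
writer.abort()
```

If the multipart upload was never lazily initialized, there is nothing to abort server-side and no S3 call is needed.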