S3 Backend
class keg_storage.backends.s3.S3Storage(bucket, aws_region, aws_access_key_id=None, aws_secret_access_key=None, aws_profile=None, name='s3')
download(path: str, file_obj: IO, *, progress_callback: Optional[Callable[[int], None]] = None)
Copies the remote file at path to the file-like object file_obj.
If desired, a progress callback can be supplied. The function should accept an int parameter, which will be the number of bytes downloaded so far.
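For example, progress can be tracked with a small callable. The sketch below assumes the callback receives the cumulative byte count, as described above; the storage instance and paths in the commented usage are hypothetical:

```python
class ProgressTracker:
    """Records the cumulative byte counts reported during a transfer."""

    def __init__(self) -> None:
        self.bytes_done = 0

    def __call__(self, bytes_so_far: int) -> None:
        # Per the docstring above, the backend passes the total number
        # of bytes transferred so far, not a per-chunk delta.
        self.bytes_done = bytes_so_far


tracker = ProgressTracker()
# Hypothetical usage -- 'storage' is an S3Storage instance:
# with open('report.csv', 'wb') as fp:
#     storage.download('reports/report.csv', fp, progress_callback=tracker)
```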
get(path: str, dest: str) → None
Copies the remote file at path to the dest path on the local filesystem.
link_to(path: str, operation: Union[keg_storage.backends.base.ShareLinkOperation, str], expire: Union[arrow.arrow.Arrow, datetime.datetime], output_path: Optional[str] = None, content_type: Optional[str] = None) → str
Returns a URL that allows the specified operations to be performed directly on the given path.
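A minimal sketch of building the expire argument: per the signature, either an arrow.Arrow or a datetime.datetime is accepted, and the operation may be given as a ShareLinkOperation or a string. The path, storage instance, and the string form of the operation in the commented call are assumptions for illustration:

```python
import datetime

# A link valid for one hour from now (timezone-aware UTC).
expire = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(hours=1)

# Hypothetical usage -- 'storage' is an S3Storage instance:
# url = storage.link_to('reports/report.csv', 'download', expire)
```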
list(path)
Returns a list of ListEntry objects representing the files available under the directory or prefix given in path.
open(path: str, mode: Union[keg_storage.backends.base.FileMode, str])
Returns an instance of RemoteFile for the given path that can be used for reading and/or writing, depending on the mode given.
put(path: str, dest: str) → None
Copies the local file at path to a remote file at dest.
upload(file_obj: IO, path: str, *, progress_callback: Optional[Callable[[int], None]] = None)
Copies the contents of the file-like object file_obj to a remote file at path.
If desired, a progress callback can be supplied. The function should accept an int parameter, which will be the number of bytes uploaded so far.
class keg_storage.backends.s3.S3Reader(bucket, filename, client)
class keg_storage.backends.s3.S3Writer(bucket, filename, client, chunk_size=10485760)
Writes to S3 are quite a bit more complicated than reads. To support large files, we cannot write in a single operation, and the API does not encourage streaming writes, so we make use of the multipart API methods.
The process can be summarized as:
1. Create a multipart upload and get an upload key to use with subsequent calls.
2. Upload "parts" of the file using the upload key and get back an ID for each part.
3. Combine the parts using the upload key and all the part IDs from the above steps.
The chunked nature of the uploads should be mostly invisible to the caller since S3Writer maintains a local buffer.
Because creating a multipart upload itself has an actual cost and there is no guarantee that anything will actually be written, we initialize the multipart upload lazily.
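The buffering behaviour described above can be illustrated with a standalone sketch. This is not the S3Writer implementation, only a model of the idea: writes accumulate in a local buffer, a "part" is emitted once chunk_size bytes are available, and any remainder is flushed on close.

```python
class BufferedPartWriter:
    """Toy model of chunked multipart writes: buffer locally, emit
    fixed-size parts, flush the remainder on close."""

    def __init__(self, chunk_size: int = 10 * 1024 * 1024) -> None:
        self.chunk_size = chunk_size
        self._buffer = b""
        self.parts: list[bytes] = []  # stands in for uploaded parts

    def write(self, data: bytes) -> None:
        self._buffer += data
        # Emit only full-sized parts; smaller writes stay buffered, so
        # the chunking is invisible to the caller.
        while len(self._buffer) >= self.chunk_size:
            self.parts.append(self._buffer[: self.chunk_size])
            self._buffer = self._buffer[self.chunk_size :]

    def close(self) -> None:
        if self._buffer:
            self.parts.append(self._buffer)
            self._buffer = b""


w = BufferedPartWriter(chunk_size=4)
w.write(b"abcdef")   # emits one full part (b"abcd"); b"ef" stays buffered
w.write(b"gh")       # buffer reaches 4 bytes: emits b"efgh"
w.close()            # nothing left to flush
```

A real implementation would also defer creating the multipart upload until the first part is ready, matching the lazy initialization noted above.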
abort()
Use this if, for some reason, you want to discard all the data written and not create an S3 object.