Azure Block Blob

class keg_storage.backends.azure.AzureStorage(account: Optional[str] = None, key: Optional[str] = None, bucket: Optional[str] = None, sas_container_url: Optional[str] = None, sas_blob_url: Optional[str] = None, chunk_size=5242880, name: str = 'azure')[source]

copy(path: str, new_path: str)

Copy the remote file specified by path to new_path.

create_download_url(path: str, expire: Union[arrow.arrow.Arrow, datetime.datetime])[source]

Create a SAS URL that can be used to download a blob without any additional authentication. This URL may be accessed directly to download the blob:

requests.get(url)

create_upload_url(path: str, expire: Union[arrow.arrow.Arrow, datetime.datetime])[source]

Create a SAS URL that can be used to upload a blob without any additional authentication. This URL can be used in the following way to authenticate a client and upload to the pre-specified path:

client = BlobClient.from_blob_url(url)
client.upload_blob(data)
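The expire argument accepts either an arrow.Arrow or a plain datetime.datetime. A minimal sketch of computing an expiration one hour out, using only the standard library (the variable name is illustrative):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical one-hour expiration; create_upload_url and
# create_download_url accept a datetime (or arrow.Arrow) for `expire`.
expire = datetime.now(timezone.utc) + timedelta(hours=1)
```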
delete(path: str)[source]

Delete the remote file specified by path.

download(path: str, file_obj: IO, *, progress_callback: Optional[Callable[[int], None]] = None)

Copies a remote file at path to a file-like object file_obj.

If desired, a progress callback can be supplied. The function should accept an int parameter, which will be the number of bytes downloaded so far.
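The callback contract can be sketched with a chunked copy over in-memory streams. The copy_with_progress helper below is hypothetical, not keg_storage's implementation; it only illustrates that the callback receives the cumulative byte count after each chunk:

```python
import io
from typing import IO, Callable, Optional

def copy_with_progress(src: IO, dest: IO,
                       progress_callback: Optional[Callable[[int], None]] = None,
                       chunk_size: int = 5242880) -> None:
    # Hypothetical helper: after each chunk is transferred, the callback
    # is invoked with the total number of bytes moved so far.
    total = 0
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            break
        dest.write(chunk)
        total += len(chunk)
        if progress_callback is not None:
            progress_callback(total)

seen = []
copy_with_progress(io.BytesIO(b"x" * 10), io.BytesIO(), seen.append, chunk_size=4)
# seen == [4, 8, 10]: cumulative totals reported after each chunk
```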

get(path: str, dest: str) → None

Copies a remote file at path to the dest path given on the local filesystem.

Returns a URL that allows the specified operations to be performed directly on the given path.

list(path: str) → List[keg_storage.backends.base.ListEntry][source]

Returns a list of `ListEntry`s representing files available under the directory or prefix given in `path`.

open(path: str, mode: Union[keg_storage.backends.base.FileMode, str]) → keg_storage.backends.azure.AzureFile[source]

Returns an instance of RemoteFile for the given path that can be used for reading and/or writing depending on the mode given.

put(path: str, dest: str) → None

Copies a local file at path to a remote file at dest.

upload(file_obj: IO, path: str, *, progress_callback: Optional[Callable[[int], None]] = None)

Copies the contents of a file-like object file_obj to a remote file at path.

If desired, a progress callback can be supplied. The function should accept an int parameter, which will be the number of bytes uploaded so far.

class keg_storage.backends.azure.AzureReader(mode: keg_storage.backends.base.FileMode, blob_client: azure.storage.blob._blob_client.BlobClient, chunk_size=5242880)[source]

The Azure reader uses byte ranged API calls to fill a local buffer to avoid lots of API overhead for small read sizes.

read(size: int) → bytes[source]

Read and return up to size bytes from the remote file. If the end of the file is reached this should return an empty bytes string.
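The buffering strategy can be sketched with a stand-in for the byte-ranged download call. Everything here is illustrative: _fetch_range replaces the real Azure ranged API call, and the class is not AzureReader itself:

```python
class BufferedRangeReader:
    """Illustrative sketch: fill a local buffer with large ranged reads
    so that many small read() calls trigger few remote API calls."""

    def __init__(self, data: bytes, chunk_size: int = 8):
        self._data = data          # stands in for the remote blob
        self._pos = 0              # next remote offset to fetch
        self._buffer = b""
        self._chunk_size = chunk_size
        self.remote_calls = 0      # count of simulated API calls

    def _fetch_range(self, start: int, length: int) -> bytes:
        # Stub for the byte-ranged download API call.
        self.remote_calls += 1
        return self._data[start:start + length]

    def read(self, size: int) -> bytes:
        # Refill the buffer in chunk_size units until `size` bytes are
        # available or the remote file is exhausted.
        while len(self._buffer) < size and self._pos < len(self._data):
            chunk = self._fetch_range(self._pos, self._chunk_size)
            self._pos += len(chunk)
            self._buffer += chunk
        out, self._buffer = self._buffer[:size], self._buffer[size:]
        return out  # empty bytes once the end of the file is reached

reader = BufferedRangeReader(b"abcdefghij", chunk_size=8)
reader.read(2); reader.read(2); reader.read(2)  # served by one remote call
```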

class keg_storage.backends.azure.AzureWriter(mode: keg_storage.backends.base.FileMode, blob_client: azure.storage.blob._blob_client.BlobClient, chunk_size=5242880)[source]

We are using Azure Block Blobs for all operations. The process for writing them is substantially similar to that of S3, with a couple of differences:

  1. We generate the IDs for the blocks.
  2. There is no separate call to initiate the upload. The first call to put_block will create the blob.
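The two differences above can be sketched with in-memory stand-ins for the put_block/put_block_list calls. This mirrors the Azure service contract (block IDs must be Base64-encoded, and all IDs within a blob must encode values of equal length), not keg_storage's exact internals; the helper names are illustrative:

```python
import base64

def make_block_id(index: int) -> str:
    # Azure block IDs must be Base64-encoded; a zero-padded counter keeps
    # every pre-encoding ID the same length, as the service requires.
    return base64.b64encode(f"{index:032d}".encode()).decode()

# In-memory stand-ins for the service: staged blocks, then a commit.
staged = {}

def put_block(block_id: str, data: bytes) -> None:
    # The first put_block implicitly creates the blob; there is no
    # separate "initiate upload" call as with S3 multipart uploads.
    staged[block_id] = data

def put_block_list(block_ids) -> bytes:
    # Committing the ordered block list finalizes the blob's contents.
    return b"".join(staged[b] for b in block_ids)

ids = [make_block_id(i) for i in range(3)]
for i, bid in enumerate(ids):
    put_block(bid, bytes([i]) * 4)
blob = put_block_list(ids)
# blob == b"\x00\x00\x00\x00\x01\x01\x01\x01\x02\x02\x02\x02"
```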
close()[source]

Cleanup and deallocate any held resources. This method may be called multiple times on a single instance. If the file was already closed, this method should do nothing.
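The "safe to call multiple times" contract can be sketched as follows (an illustrative class, not keg_storage's implementation):

```python
class ClosableResource:
    # Illustrative: close() is idempotent, so context-manager exits and
    # explicit calls can overlap without double-releasing anything.
    def __init__(self):
        self.closed = False
        self.release_count = 0

    def close(self) -> None:
        if self.closed:
            return                 # already closed: do nothing
        self.release_count += 1    # stands in for flushing/freeing resources
        self.closed = True

res = ClosableResource()
res.close()
res.close()  # second call is a no-op; release_count stays 1
```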

write(data: bytes) → None[source]

Write the data buffer to the remote file.

class keg_storage.backends.azure.AzureFile(mode: keg_storage.backends.base.FileMode, blob_client: azure.storage.blob._blob_client.BlobClient, chunk_size=5242880)[source]

Base class for the Azure file interface. Since read and write operations are very different, and integrating the two would introduce a lot of complexity, there are distinct subclasses for files opened for reading and for writing.