# Azure Storage Emulator
This is a short guide that will help you set up and run the Azure Storage Emulator on your local machine. The Azure Storage Emulator lets you develop and test applications that use Azure Storage services without needing an Azure subscription or an Internet connection.

Review the Documentation for more details on how to use the emulator and its features. This document covers a scenario where you want to run the emulator as close to the real Azure Storage service as possible, which means using three HTTPS endpoints and OAuth simulation.
## Installation
You can install and use the emulator in a few different ways, depending on your preferences and environment. The recommended way is to use a container runtime or Kubernetes, but you can also install it natively using Node.js and Caddy HTTP Server.
### Using a container runtime
To run the Azure Storage Emulator in a container, follow these steps:
- Ensure that a container runtime is installed. This repository supports both Docker and the Apple `container` command.
- Create a custom non-public certificate for the emulator. Use the provided `make-cert.sh` script to generate a self-signed CA certificate and a server certificate for the specified storage account name. The script uses the `CA_DIR` (default: `./storage`), `CA_NAME` (default: `Azurite Emulator CA`), and `STORAGE_ACCOUNT_NAME` (default: `azuritelocal`) environment variables to determine the storage location for the certificates, the name of the CA, and the storage account name for which the server certificate will be generated. For example:

  ```shell
  ./make-cert.sh
  ```

  or

  ```shell
  CA_DIR=./myca CA_NAME="My Custom CA" STORAGE_ACCOUNT_NAME=myaccount ./make-cert.sh
  ```

  You can run the script multiple times with different `STORAGE_ACCOUNT_NAME` values to generate certificates for multiple storage accounts if needed. Just make sure to use the same `CA_DIR` and `CA_NAME` for all of them (or use the defaults) to ensure they are signed by the same CA.
- Build the emulator image using the provided Dockerfile:

  ```shell
  ./build.sh
  ```

  or pull it from the Docker Hub registry using the image name `skoszewski/azurite:latest`.
- Run the emulator container:

  ```shell
  ./start-azurite
  ```
You can also use the included example `compose.yaml` file to run it with `docker compose` (or any other compose-compatible CLI).
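For orientation, a compose file for this setup might look like the sketch below. The service name, port mapping, and volume path here are assumptions for illustration; prefer the example `compose.yaml` shipped with the repository.

```yaml
# Hypothetical sketch only -- the real compose.yaml ships with the repository.
services:
  azurite:
    image: azurite:latest      # or skoszewski/azurite:latest from Docker Hub
    ports:
      - "443:443"              # assumed Caddy HTTPS front end
    volumes:
      - ./storage:/storage     # persists emulator data and certificates
```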
### Using Node.js and Caddy HTTP Server
To install the Azure Storage Emulator natively on your machine, ensure you have Node.js (with npm) and Caddy HTTP Server installed, and follow these steps:
- Clone the repository:

  ```shell
  git clone https://github.com/azure/azurite
  ```

- Navigate to the cloned directory:

  ```shell
  cd azurite
  ```

- Build the emulator package:

  ```shell
  npm ci
  npm run build
  npm pack
  ```

- Install the package globally using npm:

  ```shell
  npm install -g azurite-*.tgz
  ```

- Remove the cloned directory; it will not be needed anymore:

  ```shell
  cd ..
  rm -rf azurite
  ```

- Create an `accounts.env` file in the same directory as the `run-server.sh` script with the following content:

  ```
  AZURITE_ACCOUNTS=accountname:accountkey
  ```

  Replace `accountname` with the desired account name. Use OpenSSL to generate an account key:

  ```shell
  openssl rand -base64 32
  ```

  You can also generate a deterministic account key using any string as a seed:

  ```shell
  echo -n "your-seed-string" | base64
  ```

- Add the following line to your `/etc/hosts` file to map the custom domain names to localhost:

  ```
  127.0.0.1 <accountname>.blob.core.windows.net <accountname>.queue.core.windows.net <accountname>.table.core.windows.net
  ```

- Create a certificate for the specified account name using `make-cert.sh` as described in the container runtime installation steps.
- Run the server:

  ```shell
  ./run-server.sh
  ```
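The account-provisioning steps above can be sketched end to end in a few lines of shell. The account name `devaccount` and the decision to write `accounts.env` into the current directory are illustrative assumptions:

```shell
#!/bin/sh
# Sketch of the account setup steps above (account name is hypothetical).
ACCOUNT_NAME=devaccount

# Random key, as produced by `openssl rand -base64 32`:
ACCOUNT_KEY=$(openssl rand -base64 32)

# Write the accounts file next to run-server.sh:
printf 'AZURITE_ACCOUNTS=%s:%s\n' "$ACCOUNT_NAME" "$ACCOUNT_KEY" > accounts.env

# Build the /etc/hosts line (append it yourself, e.g. with sudo tee -a /etc/hosts):
HOSTS_LINE="127.0.0.1 ${ACCOUNT_NAME}.blob.core.windows.net ${ACCOUNT_NAME}.queue.core.windows.net ${ACCOUNT_NAME}.table.core.windows.net"
echo "$HOSTS_LINE"
```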
## Accessing the blob storage
### RClone
RClone is a command-line program to manage files on cloud storage. You can use it to interact with the Azure Storage Emulator the same way you would with the real Azure Storage service. Edit the `rclone.conf` file and add the following configuration:

```
[azurite]
type = azureblob
account = accountname
key = accountkey
```
or, if you want to use simulated OAuth authentication:

```
[azurite]
type = azureblob
account = accountname
env_auth = true
```
Now, you can use rclone commands to interact with the emulator. For example, to list all objects in the blob service:

```shell
rclone ls azurite:
```
Note: On modern Linux distributions and macOS systems the `rclone.conf` file is typically located at `~/.config/rclone/rclone.conf`.
### Terraform
Use the following Terraform `azurerm` backend configuration to use the Azure Storage Emulator as the backend for storing Terraform state:

```hcl
terraform {
  backend "azurerm" {
    storage_account_name = "accountname"
    container_name       = "tfstate"
    key                  = "terraform.tfstate"
  }
}
```
and initialize the module:

```shell
terraform init -backend-config=access_key=__base64_encoded_account_key__
```
Note: Be aware that AI agents may generate or suggest using the `endpoint` parameter, which will not work. You have to create fake account FQDNs in your `/etc/hosts` file as described in the installation steps.
You can use OAuth simulation with Terraform by adding the `use_azuread_auth` parameter to the backend configuration:

```hcl
terraform {
  backend "azurerm" {
    storage_account_name = "accountname"
    container_name       = "tfstate"
    key                  = "terraform.tfstate"
    use_azuread_auth     = true
  }
}
```
## Command Reference
### build.sh
The script builds a container image for the Azure Storage Emulator using the provided Dockerfile. The image includes the Azurite server and the Caddy HTTP server, configured to run the emulator with three HTTPS endpoints and optional OAuth simulation. It does not require Azurite or Caddy to be installed on the host machine, as they are included in the container image.
Accepted flags:

- `--arch`: Specifies the target architecture for the container image. Supported values are `amd64` and `arm64`. If not provided, the script will build for the architecture of the host machine. It can be specified twice to build for both architectures.
- `--version`: Specifies the version tag for the built container image. The version value must correspond to a valid Azurite GitHub tag.
- `--latest`: Uses the latest released version of Azurite from GitHub as the base for the container image. This flag cannot be used together with `--version`.
- `--registry`: Specifies the container registry to which the built image will be pushed. If not provided, the image will only be built locally and not tagged with a registry prefix.
Used environment variables:

- `AZURITE_IMAGE`: Overrides the default image name (`azurite:latest`) for the built container image. This can be useful if you want to use a different naming convention or push to a specific registry. The `--registry` flag will replace the registry part of the image name, but it will not override the entire name, so you can still use a custom image name with the registry prefix if needed.
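The interaction between `AZURITE_IMAGE` and `--registry` can be illustrated with a small sketch; the exact composition logic inside `build.sh` is an assumption here, as are the example registry and user names:

```shell
# Hypothetical illustration of how --registry prefixes the image name.
unset AZURITE_IMAGE                          # fall back to the default name
IMAGE="${AZURITE_IMAGE:-azurite:latest}"     # default used by build.sh
REGISTRY="docker.io/myuser"                  # assumed --registry value
FULL_IMAGE="${REGISTRY}/${IMAGE}"            # registry prefix + image name
echo "$FULL_IMAGE"                           # docker.io/myuser/azurite:latest
```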
### start-azurite
The script runs the Azure Storage Emulator using a supported container runtime (Docker or the Apple `container` command). It enables OAuth simulation and starts Caddy by default.
Remember: Make backups of the storage directory when the container is not running.
Accepted flags:

- `--no-oauth`: Disables OAuth simulation in the emulator. When this flag is set, you have to use the account key for authentication.
- `--no-caddy`: Disables the Caddy server and runs Azurite with its built-in HTTP server. This will result in the emulator being accessible over three HTTPS endpoints (ports 10000, 10001, and 10002). Use this flag if you want to run the emulator and proxy the endpoints yourself. Note that you have to configure your proxy to accept the certificate supplied for the emulator; Caddy uses the `tls_trust_pool file <pem_cert_path>` directive.
Used environment variables:

- `AZURITE_DIR`: Specifies the directory on the host machine where the emulator will store its data. This directory will be mounted as `/storage` in the container, allowing the emulator to persist data across container restarts.
- `AZURITE_IMAGE`: Specifies the name of the container image to use when running the emulator. If not set, it defaults to `azurite:latest`, which is the default tag used by the `build.sh` script.
### run-server.sh
The script is the entry point for starting the Azure Storage Emulator natively. It discovers the account name and key from the `accounts.env` file, checks for the necessary SSL certificates, configures Caddy for HTTPS endpoints, and starts the Azurite server with the appropriate settings.

The script assumes that both Azurite and Caddy are installed and available in the system's `PATH`. It also assumes that the `accounts.env` file is properly configured with at least one account name and key, and that the `/etc/hosts` file contains the necessary entries mapped to `127.0.0.1` for the custom domain names, e.g. `accountname.blob.core.windows.net`, `accountname.queue.core.windows.net`, and `accountname.table.core.windows.net`.
The storage location is determined by the `AZURITE_DIR` environment variable. Data files are stored in the `storage` subdirectory. The directory structure pointed to by `AZURITE_DIR` will be created if it does not exist.
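Assuming that layout, preparing a data directory by hand might look like the following; the default path under the current directory is an illustrative assumption:

```shell
# Pre-create the layout run-server.sh expects (default path is illustrative).
unset AZURITE_DIR
AZURITE_DIR="${AZURITE_DIR:-$PWD/azurite-data}"
mkdir -p "$AZURITE_DIR/storage"     # data files live in the storage subdirectory
ls -d "$AZURITE_DIR/storage"
```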
The script will use the certificate for the listed endpoints. Caddy will be configured to use that certificate for all HTTPS endpoints; therefore, the certificate must have all the required SANs (Subject Alternative Names) for the endpoints and must be trusted by the system. The emulator will be accessible at the following endpoints:

- Blob service: `https://accountname.blob.core.windows.net`
- Queue service: `https://accountname.queue.core.windows.net`
- Table service: `https://accountname.table.core.windows.net`
For Debian-based systems, you can use the following commands to add the certificate to the trusted store:

```shell
sudo cp storage/ca_cert.pem /usr/local/share/ca-certificates/azurite_ca_cert.crt
sudo update-ca-certificates
```
For macOS, you can use the Keychain Access application to import the certificate and mark it as trusted. Windows users can use the Certificate Manager to import the certificate into the "Trusted Root Certification Authorities" store.