Author Archives: Adam Zolo

Setting up a free Minecraft Server on Oracle Cloud

Update 2 months later

Somewhat unsurprisingly, given Oracle's reputation, they disabled my supposedly always-free resources two months later. It turns out they disable resources created while you were on trial. However, you can still make a clone of your boot volume, terminate your previous instance, and recreate the instance from the cloned volume. It has worked so far.


In this tutorial, we’ll set up a free Minecraft server using the Oracle Cloud free tier.

Setting up an Oracle Free Tier

Sign up for an Oracle free account. Since this is Oracle, you probably want to check the billing from time to time to make sure they didn’t start charging you for what they promised to be free.

Create a new compute instance. For max compatibility, we’ll use Ubuntu 18.04. You can try your luck with a newer version.

For the shape, select Ampere with 4 OCPUs and 6 GB of RAM. This should still qualify for the free tier and is plenty for a Minecraft server.

Add your public SSH keys in the “Add SSH keys” section.

Create the instance.

After it’s created, go to the instance details, find “Primary VNIC” section, and open the subnet link (or create a new one).

Open the Default Security List (or create a new one if one doesn’t exist yet).

Add ingress rules to open ports 19132 (Bedrock) and/or 25565 (Java edition) over both TCP and UDP. For each rule, use CIDR for Source Type, 0.0.0.0/0 for Source CIDR, and the port (e.g., 19132) for the Destination Port Range. Add one rule for UDP and one for TCP, and repeat for 25565 if you plan to use the Java edition.

SSH to your server

Upgrade all packages:

sudo apt-get update
sudo apt-get upgrade

Let’s reset the firewall rules and open the ssh and Minecraft ports:

sudo iptables -P INPUT ACCEPT
sudo iptables -P FORWARD ACCEPT
sudo iptables -P OUTPUT ACCEPT
sudo iptables -F
sudo iptables-save
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp
sudo ufw allow 19132/udp
sudo ufw allow 19132/tcp
sudo ufw allow 25565/udp
sudo ufw allow 25565/tcp
sudo ufw enable
sudo ufw status
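
Later, once the Minecraft server is running, you can do a quick check from your local machine that the TCP port is reachable (your_server_ip is a placeholder; UDP can't be reliably checked this way):

nc -vz your_server_ip 25565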

Optionally install zsh and vim:

sudo apt install zsh
sudo apt install vim

Bedrock

SSH to your server: ssh ubuntu@your_server_ip_address

We’ll use this script to set up the server: https://github.com/TheRemote/MinecraftBedrockServer

curl https://raw.githubusercontent.com/TheRemote/MinecraftBedrockServer/master/SetupMinecraft.sh | bash

Use the following commands to start/restart the service (you specified the service name when you ran the installation script):

sudo systemctl stop minecraft_service
sudo systemctl start minecraft_service
sudo systemctl restart minecraft_service

screen -R
# To disconnect (do not do ctrl-c or it may kill the minecraft service):
ctrl-A-D

Java Edition

This section is only for Java edition. Don’t use it if you need the Bedrock version.

We’ll use this script to install the Java edition Paper server: https://github.com/TheRemote/RaspberryPiMinecraft.

SSH to your server and run curl https://raw.githubusercontent.com/TheRemote/RaspberryPiMinecraft/master/SetupMinecraft.sh | bash. All default settings should be fine.

If you want Bedrock users to be able to join your server, we can install the Geyser plugin.

We’ll download the latest Geyser-Spigot build from https://ci.opencollab.dev/job/GeyserMC/job/Geyser/job/master/.

On the server run:

cd minecraft/plugins/
curl -O https://ci.opencollab.dev/job/GeyserMC/job/Geyser/job/master/lastSuccessfulBuild/artifact/bootstrap/spigot/target/Geyser-Spigot.jar

If you also want Bedrock users to be able to log in with their Microsoft account without requiring a separate Java account, we can also install the Floodgate plugin:

curl -O https://ci.opencollab.dev/job/GeyserMC/job/Floodgate/job/master/lastSuccessfulBuild/artifact/spigot/build/libs/floodgate-spigot.jar

Restart the Minecraft server with sudo systemctl restart minecraft.service
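
To confirm the server came back up after the restart, you can check the service status:

sudo systemctl status minecraft.service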

DNS Records with Cloudflare

Add an A record for your server IP. For example, if you own example.com and want to connect to your server using mct.example.com, then add an A record for mct pointing to your server IP.

Add two SRV records (one UDP, one TCP) for each port (19132 and 25565):

Use mct for Name, _minecraft for Service, UDP for Protocol, 0 for both Priority and Weight, the port (19132 or 25565) for Port, and mct.example.com for Target. mct is just the subdomain; you can change it to whatever you want. Repeat for TCP, and then for the other port.
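
Once the records have propagated, you can verify them with dig (the names below assume the mct.example.com example above):

dig +short _minecraft._udp.mct.example.com SRV
dig +short _minecraft._tcp.mct.example.com SRV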

The Cost of PostgreSQL Foreign Keys

Acknowledgments: shout out to Steven Jones at Syncro for helping me better understand how Foreign Keys work in PostgreSQL.

Foreign keys are great for maintaining data integrity. However, they are not free.

As mentioned in this blog post by Shaun Thomas:

In PostgreSQL, every foreign key is maintained with an invisible system-level trigger added to the source table in the reference. At least one trigger must go here, as operations that modify the source data must be checked that they do not violate the constraint.

https://bonesmoses.org/2014/05/14/foreign-keys-are-not-free/

What this means is that when you modify the parent table, even if you don’t touch any of the referenced keys, these triggers still fire.
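
These triggers are internal, but they are visible in the system catalog. For example, after creating the tables from the scripts below, a query like this (run through psql; not part of the benchmark itself) lists the RI triggers attached to test_fk:

psql -c "SELECT tgname, tgisinternal FROM pg_trigger WHERE tgrelid = 'test_fk'::regclass;"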

In the original post from 2014, the overhead made the updates up to 95% slower with 20 foreign keys. Let’s see if things changed since then.

These are the slightly updated scripts we’ll use for testing:

CREATE OR REPLACE FUNCTION fnc_create_check_fk_overhead(key_count INT)
RETURNS VOID AS
$$
DECLARE
  i INT;
BEGIN
  CREATE TABLE test_fk
  (
    id   BIGINT PRIMARY KEY,
    junk VARCHAR
  );

  INSERT INTO test_fk
  SELECT generate_series(1, 100000), repeat(' ', 20);

  CLUSTER test_fk_pkey ON test_fk;

  FOR i IN 1..key_count LOOP
    EXECUTE 'CREATE TABLE test_fk_ref_' || i ||
            ' (test_fk_id BIGINT REFERENCES test_fk (id))';
  END LOOP;

END;
$$ LANGUAGE plpgsql VOLATILE;


CREATE OR REPLACE FUNCTION fnc_check_fk_overhead(key_count INT)
RETURNS VOID AS
$$
DECLARE
  i INT;
BEGIN
  FOR i IN 1..100000 LOOP
    UPDATE test_fk SET junk = '    blah                '
     WHERE id = i;
  END LOOP;

END;
$$ LANGUAGE plpgsql VOLATILE;


CREATE OR REPLACE FUNCTION clean_up_overhead(key_count INT)
RETURNS VOID AS
$$
DECLARE
  i INT;
BEGIN
  DROP TABLE test_fk CASCADE;

  FOR i IN 1..key_count LOOP
    EXECUTE 'DROP TABLE test_fk_ref_' || i;
  END LOOP;
END;
$$ LANGUAGE plpgsql VOLATILE;

To validate that the overhead comes strictly from the presence of the foreign keys, and not from the cost of looking up the child records, after the first benchmark we’ll modify the first function to add an index on each foreign key:

CREATE OR REPLACE FUNCTION fnc_create_check_fk_overhead(key_count INT)
RETURNS VOID AS
$$
DECLARE
  i INT;
BEGIN
  CREATE TABLE test_fk
  (
    id   BIGINT PRIMARY KEY,
    junk VARCHAR
  );

  INSERT INTO test_fk
  SELECT generate_series(1, 100000), repeat(' ', 20);

  CLUSTER test_fk_pkey ON test_fk;

  FOR i IN 1..key_count LOOP
    EXECUTE 'CREATE TABLE test_fk_ref_' || i ||
            ' (test_fk_id BIGINT REFERENCES test_fk (id))';

    EXECUTE 'CREATE INDEX test_fk_ref_index_' || i ||
            ' ON test_fk_ref_' || i || ' (test_fk_id)';
  END LOOP;

END;
$$ LANGUAGE plpgsql VOLATILE;

We’ll run this on a Mac i9, 2.3 GHz 8-core, 64 GB RAM.

PostgreSQL version 12

-- without an index

select fnc_create_check_fk_overhead(0);
SELECT fnc_check_fk_overhead(0); -- 2.6-2.829 seconds
select clean_up_overhead(0);


select fnc_create_check_fk_overhead(20);
SELECT fnc_check_fk_overhead(20); -- 3.186-3.5 seconds, a ~20% drop
select clean_up_overhead(20);


-- after updating our initial function to add an index for each foreign key:
select fnc_create_check_fk_overhead(0);
SELECT fnc_check_fk_overhead(0); -- 2.6-2.8 seconds
select clean_up_overhead(0);

select fnc_create_check_fk_overhead(20);
SELECT fnc_check_fk_overhead(20); -- 3.1 seconds, the same ~20% drop
select clean_up_overhead(20);


As we see from the benchmark, the drop in update performance on a parent table is about 20% after adding 20 tables with a foreign key to the parent. It’s not quite as bad as 95% in the original post, but the overhead is still clearly there.

Docker MySQL with a Custom SQL Script for Development

The setup is similar to setting up MariaDB.

Start with a standard docker-compose file. If you’re using a custom SQL mode, specify the necessary options in the command:

version: "3.7"
services:
    mysql:
        build:
            context: .
            dockerfile: dev.dockerfile
        restart: always
        command: --sql_mode="STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE"
        environment:
            MYSQL_ROOT_PASSWORD: root_password
            MYSQL_DATABASE: dev
            MYSQL_USER: dev_user
            MYSQL_PASSWORD: dev_password
        ports:
            - 3306:3306

Add dev.dockerfile:

FROM mysql:8.0.17

ADD init.sql /docker-entrypoint-initdb.d/ddl.sql


Finally, add your init.sql file. Let’s give all privileges to our dev_user and switch the default caching_sha2_password to mysql_native_password (don’t do it unless you rely on older packages that require the less secure mysql_native_password authentication method):

GRANT ALL PRIVILEGES ON *.* TO 'dev_user'@'%';
ALTER USER 'dev_user'@'%' IDENTIFIED WITH mysql_native_password BY 'dev_password';
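
To sanity-check that the init script ran and the authentication plugin was switched, you can query the user table from the running container (this assumes the mysql service name and credentials from the compose file above):

docker-compose exec mysql mysql -u dev_user -pdev_password -e "SELECT user, host, plugin FROM mysql.user WHERE user = 'dev_user';"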

If you want to access the database container from other containers that you run separately, you can specify host.docker.internal as the host address of your database.

Dockerizing MariaDB with a Custom SQL Script in Development

Start with a standard docker-compose file:

version: "3.7"
services:
    mariadb:
        build:
            context: .
            dockerfile: dev.dockerfile
        restart: always
        environment:
            MYSQL_ROOT_PASSWORD: password
            MYSQL_DATABASE: db_name
            MYSQL_USER: sql_user
            MYSQL_PASSWORD: password
        ports:
            - 3306:3306

Add dev.dockerfile:

FROM mariadb:latest

ADD init.sql /docker-entrypoint-initdb.d/ddl.sql

Finally, add your init.sql file. Let’s give all privileges to our sql_user:

GRANT ALL PRIVILEGES ON *.* TO 'sql_user'@'%';

Now, run docker-compose build, then docker-compose up.

Access from another container

If you want to access the database container from other containers, while running them separately, you can specify host.docker.internal as the address of your database.

If you’re on Linux, you need Docker Engine >= 20.10 (the release that added the host-gateway value), and you need to add this to your docker-compose file:

  my_app:
    extra_hosts:
      - "host.docker.internal:host-gateway"

If you’re on a Mac, the above will break your setup unless you’re on at least Docker Desktop for Mac 3.3.0. See Support host.docker.internal DNS name to host · Issue #264 · docker/for-linux (github.com) for details.
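
A quick way to verify the connection from another container is something like the following (it assumes the my_app service from the snippet above and that its image has a MySQL client installed):

docker-compose run my_app mysql -h host.docker.internal -u sql_user -ppassword -e "SELECT 1;"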

docker-compose build and deployment for Angular

In this tutorial, we’ll make docker-compose files for Angular and write a simple deploy script to build and deploy the images from your local machine.

Development

Let’s start with the dev environment. First, add a .dockerignore file in the root of your project:

.git
.gitignore
.vscode
docker-compose*.yml
Dockerfile
node_modules

Create a .docker directory in the root of your project. Add dev.dockerfile:

FROM node:10

RUN mkdir /home/node/app && chown node:node /home/node/app
RUN mkdir /home/node/app/node_modules && chown node:node /home/node/app/node_modules
WORKDIR  /home/node/app
USER node
COPY --chown=node:node package.json package-lock.json ./
RUN npm ci --quiet
COPY --chown=node:node . .

We are using the node 10 image and a less privileged node user. npm ci “is similar to npm install, except it’s meant to be used in automated environments such as test platforms, continuous integration, and deployment — or any situation where you want to make sure you’re doing a clean install of your dependencies.” – npm-ci | npm Docs (npmjs.com)

Create docker-compose.yml file in the root of your project:

# docker-compose
version: '3.7'
services:
  app:
    container_name: 'your-container-name'
    build:
      context: .
      dockerfile: .docker/dev.dockerfile
    command: sh -c "npm start"
    ports:
      - 4200:4200
    working_dir: /home/node/app
    volumes:
      - ./:/home/node/app
      - node_modules:/home/node/app/node_modules
volumes:
  node_modules:


With this setup, node_modules lives in a named volume, so it won’t be replaced when we build a new container. Basically, this means you may have to run docker-compose run app npm install when you need to update your packages. Rebuilding the image is not going to do it for you.

For alternative setups, check out this stackoverflow answer.

In your package.json you should have the definition of the npm start command:

"scripts": {
    "ng": "ng",
    "start": "ng serve --host 0.0.0.0",
    "build": "ng build"
  },

Run docker-compose build and docker-compose up.
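
Once the container is up, the dev server should answer on port 4200; a quick check from the host:

curl -s -o /dev/null -w "%{http_code}\n" http://localhost:4200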

Deployment

Docker Setup

Let’s add production.dockerfile to the .docker directory:

# Stage 1
FROM node:10 as node

RUN mkdir /home/node/app && chown node:node /home/node/app
RUN mkdir /home/node/app/node_modules && chown node:node /home/node/app/node_modules
WORKDIR  /home/node/app
USER node
COPY --chown=node:node package.json package-lock.json ./
RUN npm ci --quiet
COPY --chown=node:node . .

# max_old_space_size is optional but can help when you have a lot of modules
RUN node --max_old_space_size=4096 node_modules/.bin/ng build --prod

# Stage 2
# Using a light-weight nginx image
FROM nginx:alpine

COPY --from=node /home/node/app/dist /usr/share/nginx/html
COPY --from=node /home/node/app/.docker/nginx.conf /etc/nginx/conf.d/default.conf

Add a docker-compose.production.yml file:

version: '3.7'
services:
  app:
    build:
      context: .
      dockerfile: .docker/production.dockerfile
    image: production-image
    container_name: production-container
    ports:
      - 80:80

Deploy script

We are going to SSH into our destination server and copy the updated image directly. Using an image registry has a lot of advantages over this approach, but if you need something simple, this will work:

#!/bin/sh

# Build the image locally, upload to your production box and start the new container based on the latest image

{
    echo "Create an image"
    docker-compose -f docker-compose.yml -f docker-compose.production.yml build

    echo "Upload the latest image"
    echo $(date +"%T")
    docker save production-image:latest | ssh -C user@your_production_server_ip docker load

    echo "Stop and restart containers"
    ssh -C user@your_production_server_ip "echo Stopping container at $(date +'%T'); \
        docker stop production-container || true; \
        docker rm production-container || true; \
        docker container run -d --restart unless-stopped -p 80:80 --name production-container production-image:latest; \
        echo Restarted container at $(date +'%T'); \
        docker image prune -f || true"

    echo "Finished"
    echo $(date +"%T")
} || {
    # catch
    echo "Something went wrong"
}

We are starting a new container based on the latest uploaded image on our destination host and mapping the host port 80 to the container port 80.
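
Save the script as something like deploy.sh (the name is up to you), make it executable, and run it from the project root:

chmod +x deploy.sh
./deploy.sh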


Proxy Sentry JS requests to the self-hosted server behind a firewall

Tech: Rails

Problem: you have a self-hosted Sentry server behind a firewall and you want to report your frontend errors.

One way to accomplish this is to modify the Sentry DSN so that frontend errors are sent to your backend, and then proxy them to the Sentry server.

First, let’s set up a new route:

post 'frontend_errors/api/:project_id/store', to: 'frontend_errors#create'

It has to follow a specific pattern to work with the Sentry frontend library. The only thing you can change in the above is frontend_errors – pick whatever name you want. The code above will expect you to have a FrontendErrorsController.

Now, the FrontendErrorsController needs to forward the request to your actual Sentry server in the format that Sentry expects. Let’s create a new class to handle it:

class SentryProxy
  # This could be different based on your Sentry version.
  # Look into the sentry-raven / sentry-ruby gem codebase if this doesn't work
  # Look for http_transport.rb files - https://github.com/getsentry/sentry-ruby/blob/f6625bd12fa5ef86e4ce6a1515e8a8171cea9ece/sentry-ruby/lib/sentry/transport/http_transport.rb
  PROTOCOL_VERSION = '5'
  USER_AGENT = "raven-ruby/#{Raven::VERSION}"

  def initialize(body:, sentry_dsn:)
    @body = body
    @sentry_dsn = sentry_dsn
  end

  def post_to_sentry
    return if @sentry_dsn.blank?

    sentry_connection.post do |faraday|
      faraday.body = @body
    end
  end

  private

  def sentry_connection
    Faraday.new(url: sentry_post_url) do |faraday|
      faraday.headers['X-Sentry-Auth'] = generate_auth_header
      faraday.headers[:user_agent] = "sentry-ruby/#{Raven::VERSION}"
      faraday.adapter(Faraday.default_adapter)
    end
  end

  def sentry_post_url
    key, url = @sentry_dsn.split('@')
    path, project_id = url.split('/')
    http_prefix, _keys = key.split('//')

    "#{http_prefix}//#{path}/api/#{project_id}/store/"
  end

  def generate_auth_header
    now = Time.now.to_i.to_s
    public_key, secret_key = @sentry_dsn.split('//').second.split('@').first.split(':')

    fields = {
      'sentry_version' => PROTOCOL_VERSION,
      'sentry_client' => USER_AGENT,
      'sentry_timestamp' => now,
      'sentry_key' => public_key,
      'sentry_secret' => secret_key
    }
    'Sentry ' + fields.map { |key, value| "#{key}=#{value}" }.join(', ')
  end
end

Now in your controller you can call it like this (assumes you can get your sentry_dsn on the backend):

def create
  SentryProxy.new(body: request.body.read, sentry_dsn: sentry_dsn).post_to_sentry

  head(:no_content)
end

And to make sure your frontend is properly configured, first import Sentry frontend libraries, then initialize them using:

 Sentry.init({
    dsn: `${window.location.protocol}//public_key@${window.location.host}/frontend_errors/0`});

public_key is supposed to be… your public key. You have to supply it in the DSN even if you’re getting the DSN key on the backend; otherwise, the Sentry frontend library will throw errors. 0 is the project id – the same idea: you have to supply it for the Sentry frontend to parse the DSN properly. It doesn’t have to be real, as we’re reconstructing the Sentry URL on the backend, where you can get the proper keys/project id.

This should do it. Now you can configure the Sentry frontend library to capture all errors, or to capture specific exceptions or messages.

Using the same redis instance for Rails cache and non-cache entries

Redis docs: https://redis.io/topics/lru-cache

OS: Ubuntu 18.04 LTS

When you need to use Redis for cache and non-cache entries (e.g., ActionCable, Sidekiq…), the recommended approach is to create a separate Redis instance. However, if you want a simpler setup, or just can’t get another instance for some reason, you can use the same Redis instance for multiple purposes.

We need to make sure that Redis will not evict our important data (e.g., Sidekiq), while at the same time evicting old cache entries. We could use any of the volatile eviction policies:

  • volatile-lru – removes the least recently used keys among those with an expiry set
  • volatile-random – removes random keys among those with an expiry set
  • volatile-ttl – removes keys with an expiry set, preferring those with a shorter time to live (TTL)
  • volatile-lfu (starting with Redis 4.0) – removes keys with an expiry set using an approximated LFU

To set up the eviction policy on your Redis instance, edit your Redis config (typically /etc/redis/redis.conf on Ubuntu) and set these parameters:

maxmemory 100mb
maxmemory-policy volatile-lfu
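
After restarting Redis, you can confirm the settings took effect with redis-cli:

sudo systemctl restart redis-server   # the service name may differ on your setup
redis-cli config get maxmemory
redis-cli config get maxmemory-policy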

Then, in your Rails config, update your store to use the Redis cache store if you’re not already using it:

  config.cache_store = :redis_cache_store, {
    url: ENV.fetch('REDIS_URL', 'redis://localhost:6379'),
    expires_in: 24.hours
  }
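
Since only keys with an expiry are eligible for eviction under the volatile-* policies, you can spot-check TTLs: cache entries should have one, while, for example, Sidekiq keys should not (the key names below are just examples):

redis-cli ttl "some-rails-cache-key"   # a cache entry should return a positive TTL
redis-cli ttl "queue:default"          # a Sidekiq queue should return -1 (no expiry), so it won't be evicted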

GPG Key Encryption in Ruby/Rails

To import the public key in ruby:

require 'open3'

EncryptionError = Class.new(StandardError)

result, stderr_data, status = Open3.capture3("gpg --import #{@key_path}")
raise EncryptionError.new(stderr_data) unless status.success?

To encrypt data with a public key for a given recipient:

pgp_encrypt_command = "gpg -ear #{recipient} --always-trust --trust-model always --local-user #{recipient} --default-key #{recipient}"

encrypted_data, stderr_data, status = Open3.capture3(pgp_encrypt_command, stdin_data: data)
raise EncryptionError.new(stderr_data) unless status.success?
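
For reference, this is roughly what the wrapped command does; you can round-trip a string from a shell to confirm the keys work (recipient@example.com is a placeholder for your recipient's key id):

echo "hello" | gpg -ear recipient@example.com --always-trust | gpg --decrypt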

Using Azurite with Active Storage

Install Azurite in your preferred way: npm install azurite

Install Microsoft Azure Storage Explorer

Create a directory to run azurite from, e.g., ~/azurite.

Add storage.yml configuration for azurite (using the default dev account and key):

azurite_emulator:
  service: AzureStorage
  storage_account_name: 'devstoreaccount1'
  storage_access_key: 'Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=='
  container: 'container-name'
  storage_blob_host: 'http://127.0.0.1:10000/devstoreaccount1'

Update development.rb to use azurite_emulator:

config.active_storage.service = :azurite_emulator

Start azurite from the directory you created for azurite: azurite --location ~/azurite --debug ~/azurite/debug.log

Start Azure Storage Explorer, connect to the local emulator, and create a container-name blob container – the same container name you specified in the storage.yml file.

Start uploading to Azurite.
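
If uploads fail, first check that azurite is actually listening on the blob endpoint (any HTTP response is fine here; connection refused means azurite isn't running):

curl -sI http://127.0.0.1:10000/devstoreaccount1 | head -n 1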

Note for Rails 5.2

Some changes have not been backported as of this post, and you have to monkey-patch the ActiveStorage service file as described here – http://www.garytaylor.blog/index.php/2019/01/30/rails-active-storage-and-azure-beyond-config/ – this allows us to work with azurite locally.

If you want to use the newer azure-storage-blob instead of the deprecated azure-storage and you’re on Rails 5.2, you have to do a bit more monkey-patching – otherwise, you’ll start getting “No such file to load — azure/storage.rb”:

Add two empty files: lib/azure/storage/core/auth/shared_access_signature.rb, and lib/azure/storage.rb

Add this to config/initializers/active_storage_6_patch.rb (this is the current master version of the ActiveStorage module):

require "azure/storage/blob"
require 'active_storage/service/azure_storage_service'
module ActiveStorage
  # Wraps the Microsoft Azure Storage Blob Service as an Active Storage service.
  # See ActiveStorage::Service for the generic API documentation that applies to all services.
  class Service::AzureStorageService < Service
    attr_reader :client, :container, :signer

    def initialize(storage_account_name:, storage_access_key:, container:, public: false, **options)
      @client = Azure::Storage::Blob::BlobService.create(storage_account_name: storage_account_name, storage_access_key: storage_access_key, **options)
      @signer = Azure::Storage::Common::Core::Auth::SharedAccessSignature.new(storage_account_name, storage_access_key)
      @container = container
      @public = public
    end

    def upload(key, io, checksum: nil, filename: nil, content_type: nil, disposition: nil, **)
      instrument :upload, key: key, checksum: checksum do
        handle_errors do
          content_disposition = content_disposition_with(filename: filename, type: disposition) if disposition && filename

          client.create_block_blob(container, key, IO.try_convert(io) || io, content_md5: checksum, content_type: content_type, content_disposition: content_disposition)
        end
      end
    end

    def download(key, &block)
      if block_given?
        instrument :streaming_download, key: key do
          stream(key, &block)
        end
      else
        instrument :download, key: key do
          handle_errors do
            _, io = client.get_blob(container, key)
            io.force_encoding(Encoding::BINARY)
          end
        end
      end
    end

    def download_chunk(key, range)
      instrument :download_chunk, key: key, range: range do
        handle_errors do
          _, io = client.get_blob(container, key, start_range: range.begin, end_range: range.exclude_end? ? range.end - 1 : range.end)
          io.force_encoding(Encoding::BINARY)
        end
      end
    end

    def delete(key)
      instrument :delete, key: key do
        client.delete_blob(container, key)
      rescue Azure::Core::Http::HTTPError => e
        raise unless e.type == "BlobNotFound"
        # Ignore files already deleted
      end
    end

    def delete_prefixed(prefix)
      instrument :delete_prefixed, prefix: prefix do
        marker = nil

        loop do
          results = client.list_blobs(container, prefix: prefix, marker: marker)

          results.each do |blob|
            client.delete_blob(container, blob.name)
          end

          break unless marker = results.continuation_token.presence
        end
      end
    end

    def exist?(key)
      instrument :exist, key: key do |payload|
        answer = blob_for(key).present?
        payload[:exist] = answer
        answer
      end
    end

    def url_for_direct_upload(key, expires_in:, content_type:, content_length:, checksum:)
      instrument :url, key: key do |payload|
        generated_url = signer.signed_uri(
          uri_for(key), false,
          service: "b",
          permissions: "rw",
          expiry: format_expiry(expires_in)
        ).to_s

        payload[:url] = generated_url

        generated_url
      end
    end

    def headers_for_direct_upload(key, content_type:, checksum:, filename: nil, disposition: nil, **)
      content_disposition = content_disposition_with(type: disposition, filename: filename) if filename

      { "Content-Type" => content_type, "Content-MD5" => checksum, "x-ms-blob-content-disposition" => content_disposition, "x-ms-blob-type" => "BlockBlob" }
    end

    private
      def private_url(key, expires_in:, filename:, disposition:, content_type:, **)
        signer.signed_uri(
          uri_for(key), false,
          service: "b",
          permissions: "r",
          expiry: format_expiry(expires_in),
          content_disposition: content_disposition_with(type: disposition, filename: filename),
          content_type: content_type
        ).to_s
      end

      def public_url(key, **)
        uri_for(key).to_s
      end


      def uri_for(key)
        client.generate_uri("#{container}/#{key}")
      end

      def blob_for(key)
        client.get_blob_properties(container, key)
      rescue Azure::Core::Http::HTTPError
        false
      end

      def format_expiry(expires_in)
        expires_in ? Time.now.utc.advance(seconds: expires_in).iso8601 : nil
      end

      # Reads the object for the given key in chunks, yielding each to the block.
      def stream(key)
        blob = blob_for(key)

        chunk_size = 5.megabytes
        offset = 0

        raise ActiveStorage::FileNotFoundError unless blob.present?

        while offset < blob.properties[:content_length]
          _, chunk = client.get_blob(container, key, start_range: offset, end_range: offset + chunk_size - 1)
          yield chunk.force_encoding(Encoding::BINARY)
          offset += chunk_size
        end
      end

      def handle_errors
        yield
      rescue Azure::Core::Http::HTTPError => e
        case e.type
        when "BlobNotFound"
          raise ActiveStorage::FileNotFoundError
        when "Md5Mismatch"
          raise ActiveStorage::IntegrityError
        else
          raise
        end
      end
  end
end