Deployment

Deploy Morty on your infrastructure

This page covers all the technical requirements and details needed to successfully set up a ready-to-use Morty instance. Morty is very lightweight, but it has some minimum requirements, as outlined below.

⚠️

This guide is intended for production-like environments. A separate guide specifically for development environments will be made available in the near future.

Prerequisites

To install Morty, you need access to a virtual machine or a bare-metal server running a Linux-based distribution (for example, Debian 11+) on the x86_64 architecture. Also, please ensure that your GLIBC version is >= 2.29.
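
You can quickly check both requirements on the target machine: uname reports the architecture, and the first line of ldd --version shows the GLIBC version.

# Should print x86_64
uname -m
 
# The first line of the output shows the GLIBC version (must be >= 2.29)
ldd --version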

Remote server

You need to have Docker and Firecracker installed on the host. You can install those tools by following their official documentation, or by using the following commands:

Install Docker:

curl -fsSL https://get.docker.com | sudo sh
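
If the installation succeeded, the Docker daemon should be up; a quick sanity check:

# Should print the client and server versions without errors
sudo docker version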

Install Firecracker:

mkdir -p firecracker
curl -sL -o firecracker.tgz https://github.com/firecracker-microvm/firecracker/releases/download/v1.3.1/firecracker-v1.3.1-x86_64.tgz
tar -xvf firecracker.tgz -C firecracker
sudo mv ./firecracker/release-v1.3.1-x86_64/firecracker-v1.3.1-x86_64 /usr/bin/firecracker
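
You can verify that the binary is in place and executable:

# Should print the installed Firecracker version
firecracker --version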

Local machine

The Morty CLI is available for all architectures.

Install Morty CLI:

curl -fsSL https://morty-faas.github.io/install-cli.sh | sudo sh
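
To confirm that the CLI landed on your PATH (this only checks that the binary is resolvable, without assuming any particular subcommand):

# Should print the path of the installed morty binary
command -v morty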

Install RIK

The first step is to deploy a RIK cluster on your server. RIK is the orchestrator used by Morty to schedule Firecracker microVMs.

Download binaries

You can download the latest RIK binaries from the releases page of the RIK repository. Here, we will install version v1.1.0-rc2 of RIK.

# Download the release artifacts
curl -sL "https://github.com/rik-org/rik/releases/download/v1.1.0-rc2/rik-v1.1.0-rc2-x86_64.tar.gz" --output rik.tar.gz
mkdir -p rik
tar -xvf rik.tar.gz -C rik
 
# Remove files that will not be useful here
rm -rf rik.tar.gz rik/LICENSE rik/README.md rik/rikctl

Now, if you run tree in your current directory, you should see something like this in the rik folder:

tree
 
.
└── rik
    ├── controller
    ├── riklet
    └── scheduler
 
1 directory, 3 files

Move those binaries into /usr/bin:

sudo mv ./rik/* /usr/bin

Install components

Now that you have gathered all the RIK components, you need to install them on your system. Here, we will run the stack with systemd services. To make installing the different components easier, we will use a service.tpl template file rendered with envsubst (provided by the gettext-base package on Debian-based systems). Create this file in your current working directory and place the following content inside:

# service.tpl
[Unit]
AssertPathExists=${BIN}
 
[Service]
WorkingDirectory=~
EnvironmentFile=${ENV_FILE}
ExecStart=${BIN} ${ARGS}
Restart=always
NoNewPrivileges=true
RestartSec=3
 
[Install]
Alias=${NAME}
WantedBy=default.target
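
For reference, here is roughly what a rendered unit looks like once envsubst has substituted the variables, using the scheduler installed in the next step as an example (ARGS and ENV_FILE are empty in that case):

# /etc/systemd/system/rik-scheduler.service (generated)
[Unit]
AssertPathExists=/usr/bin/scheduler
 
[Service]
WorkingDirectory=~
EnvironmentFile=
ExecStart=/usr/bin/scheduler
Restart=always
NoNewPrivileges=true
RestartSec=3
 
[Install]
Alias=rik-scheduler
WantedBy=default.target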

RIK Scheduler

To install the scheduler, use the following commands:

# Generate the service file based on the template
NAME="rik-scheduler" ARGS="" BIN="/usr/bin/scheduler" envsubst < service.tpl | sudo tee /etc/systemd/system/rik-scheduler.service
 
# Enable and start your service
sudo systemctl enable rik-scheduler
sudo systemctl start rik-scheduler
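
If systemd does not pick up the newly created unit file, or warns that it changed on disk, reload its configuration before enabling the service:

sudo systemctl daemon-reload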

RIK Controller

To install the controller, use the following commands:

# Generate the service file based on the template
NAME="rik-controller" ARGS="" BIN="/usr/bin/controller" envsubst < service.tpl | sudo tee /etc/systemd/system/rik-controller.service
 
# Enable and start your service
sudo systemctl enable rik-controller
sudo systemctl start rik-controller

Riklet

⚠️

The riklet will create some iptables rules at boot time. We strongly recommend backing up your current iptables rules in case you need to restore them later:

sudo iptables-save > iptables.rules.old
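
If you ever need to roll back to the saved rules later, you can restore them with:

sudo iptables-restore < iptables.rules.old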

To install the riklet, we first need to determine the name and the IP address of the network interface that is connected to the internet. Run the following commands to retrieve them:

export IFACE=$(ip route get 8.8.8.8 | head -n1 | awk '{print $5}')
export IFACE_IP=$(ip -f inet addr show "${IFACE}" | awk '/inet / {print $2}' | cut -d "/" -f 1)
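
Before going further, make sure both variables contain the expected values (an interface name such as eth0, and the IPv4 address assigned to it):

echo "IFACE=${IFACE} IFACE_IP=${IFACE_IP}"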

Next, you need to download a custom Linux kernel, which is required to run the Firecracker microVMs:

sudo mkdir -p /etc/riklet/
sudo curl -sL "https://morty-faas.github.io/morty-kernel.bin" --output /etc/riklet/morty-kernel.bin

Finally, run the following commands to generate the service file and start the riklet:

# Generate the service file based on the template
NAME=riklet BIN=/usr/bin/riklet ARGS="--master-ip localhost:4995 --kernel-path /etc/riklet/morty-kernel.bin --iface ${IFACE}" envsubst < service.tpl | sudo tee /etc/systemd/system/riklet.service
 
# Enable and start your service
sudo systemctl enable riklet
sudo systemctl start riklet

Verify the installation

Run the following commands to verify that all the services are running:

systemctl is-active rik-scheduler
systemctl is-active rik-controller
systemctl is-active riklet
 
# You should have the following output:
active
active
active
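
If any of them reports inactive or failed, inspect its logs to diagnose the issue:

# Replace riklet with the name of the failing unit
sudo journalctl -u riklet --no-pager -n 50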

Your RIK cluster API is now up and running on http://localhost:5000. Next, you need to install Morty.
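
Before moving on to the Morty installation, you can check that the RIK API port answers over HTTP; the exact response depends on the RIK API, so we only look at the HTTP status code here:

curl -s -o /dev/null -w "%{http_code}\n" http://localhost:5000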

Install Morty

Now that the orchestrator is successfully installed, we can install and configure Morty.

Download binaries

You can download the latest Morty binaries (controller and registry) from the releases page of the Morty repository on GitHub. Here, we will install version v1.1.0-RC2 of those components.

# Download the release artifacts
curl -sL "https://github.com/morty-faas/morty/releases/download/v1.1.0-RC2/morty-controller-v1.1.0-RC2-linux-amd64.tar.gz" --output controller.tar.gz
curl -sL "https://github.com/morty-faas/morty/releases/download/v1.1.0-RC2/morty-registry-v1.1.0-RC2-linux-amd64.tar.gz" --output registry.tar.gz
 
mkdir -p morty
tar -xvf controller.tar.gz -C morty
tar -xvf registry.tar.gz -C morty

Now, if you run tree in your current directory, you should see something like this in the morty folder:

tree
 
.
└── morty
    ├── morty-controller
    └── morty-registry
 
1 directory, 2 files

Move those binaries into /usr/bin:

sudo mv ./morty/* /usr/bin

Install components

Dependencies

First of all, we need to deploy a MinIO instance and a Redis instance to store function images and function state. To keep this guide as simple as possible, we will use Docker Compose.

⚠️

For production environments, consider securing these dependencies with authentication, encryption, etc.

Create a docker-compose.yml file and place the following content inside:

# docker-compose.yml
 
version: "3.8"
services:
  redis:
    container_name: morty.redis
    image: redis:7.0.10-bullseye
    restart: always
    ports:
      - "6379:6379"
    command: redis-server --save 20 1 --loglevel warning
    volumes:
      - redis:/data
 
  minio:
    image: bitnami/minio:2023.4.20
    container_name: morty.minio
    restart: "unless-stopped"
    volumes:
      - minio-data:/data
    environment:
      # Do not use those credentials in production
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: minioadmin
      MINIO_DEFAULT_BUCKETS: "functions"
    ports:
      - "9000:9000"
      - "9001:9001"
    healthcheck:
      test: curl -f "http://localhost:9000/minio/health/live"
      interval: 30s
      timeout: 20s
      retries: 3
 
volumes:
  minio-data:
  redis:

Now, you can start Redis and MinIO using the following command:

docker compose up -d

Check that your containers are running:

docker ps | grep morty
 
# You should have a similar output
e42670c92ace   redis:7.0.10-bullseye        "docker-entrypoint.s…"   3 minutes ago   Up 3 minutes               0.0.0.0:6379->6379/tcp, :::6379->6379/tcp                       morty.redis
4fd84e9e2923   bitnami/minio:2023.4.20      "/opt/bitnami/script…"   3 minutes ago   Up 3 minutes (healthy)     0.0.0.0:9000-9001->9000-9001/tcp, :::9000-9001->9000-9001/tcp   morty.minio
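
You can also hit the MinIO health endpoint directly (the same one used by the Compose healthcheck above); it should return an HTTP 200 response:

curl -i http://localhost:9000/minio/health/live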

Controller

To install the Morty controller, you first need to create a configuration file at /etc/morty/controller.yaml:

sudo mkdir -p /etc/morty
cat <<EOF | sudo tee /etc/morty/controller.yaml
port: 8080
metricsPort: 8090
 
orchestrator:
  rik:
    cluster: http://${IFACE_IP}:5000
 
state:
  redis:
    addr: localhost:6379
EOF
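
Note that the heredoc above expands ${IFACE_IP}, which was exported earlier during the riklet installation. If you are running this step from a new shell session, re-export it first:

export IFACE=$(ip route get 8.8.8.8 | head -n1 | awk '{print $5}')
export IFACE_IP=$(ip -f inet addr show "${IFACE}" | awk '/inet / {print $2}' | cut -d "/" -f 1)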

Now, you can generate the service file for the Morty controller:

# Generate the service file based on the template
NAME="morty-controller" BIN="/usr/bin/morty-controller" ARGS="" envsubst < service.tpl | sudo tee /etc/systemd/system/morty-controller.service
 
# Enable and start your service
sudo systemctl enable morty-controller
sudo systemctl start morty-controller

Registry

As with the controller, you need to create a configuration file to install the registry. Create the file /etc/morty-registry/registry.yaml:

sudo mkdir -p /etc/morty-registry
cat <<EOF | sudo tee /etc/morty-registry/registry.yaml
port: 8081
storage:
  s3:
    endpoint: http://localhost:9000
EOF

Also, create an environment file at /etc/morty-registry/env.sh for the process; the credentials must match the MinIO root user and password defined in docker-compose.yml above. This step is temporary and will be removed in future versions.

cat <<EOF | sudo tee /etc/morty-registry/env.sh
AWS_ACCESS_KEY_ID=minioadmin
AWS_SECRET_ACCESS_KEY=minioadmin
EOF

Now, you can generate and start your service:

# Generate the service file based on the template
NAME="morty-registry" ARGS="" BIN="/usr/bin/morty-registry" ENV_FILE="/etc/morty-registry/env.sh" envsubst < service.tpl | sudo tee /etc/systemd/system/morty-registry.service
 
# Enable and start your service
sudo systemctl enable morty-registry
sudo systemctl start morty-registry

Verify the installation

Run the following commands to verify that all the services are running:

systemctl is-active morty-controller
systemctl is-active morty-registry
 
# You should have the following output:
active
active
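
As a final check, you can verify that the controller (port 8080) and the registry (port 8081) answer over HTTP before configuring the CLI; the exact response body depends on the component, so only the status codes are printed:

curl -s -o /dev/null -w "controller: %{http_code}\n" http://localhost:8080
curl -s -o /dev/null -w "registry: %{http_code}\n" http://localhost:8081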

Check that your Morty instance is up and running

Congratulations, if you are reading this line, it probably means that you have successfully deployed Morty! Now, we need to test the instance by creating a simple function using the Morty CLI.

On your local machine, configure a new context to be able to communicate with your fresh instance. Don't forget to replace $IP with the IP address of your server:

morty config add-context my-morty --controller http://$IP:8080 --registry http://$IP:8081

Now, you can create a very basic function using one of the available runtimes:

# Create the function based on Go runtime
morty fn init --runtime go-1.19 my-first-function
 
# Build the function before invoking it
morty fn build my-first-function

Finally, you can invoke your function:

morty fn invoke my-first-function

If everything is OK, you should see the following output:

{ "message": "Hello, World!" }

Conclusion

Congratulations! You have successfully installed Morty and run your first function!