Install HA Services

HA Cluster setup:

Summary

Set up a Mamori deployment with robust fault management so that the failure of a single node does not bring down the entire system. This requires multiple application servers and a fully redundant central Postgres cluster. Setting up the Postgres cluster itself is beyond the scope of this document. If deploying with a managed cloud provider it may be possible to deploy against a managed Postgres instance, in which case the cloud provider will take care of ensuring the uptime of the database. The same applies to the load balancer: this document provides instructions for deploying an instance of HAProxy and nginx to provide load balancing, but other options are worth investigating if they are available to you.

The following is a rough sketch of the final setup:

  User facing   |   Application servers         |  Shared services - potentially multiple machines
                |        +-----------+          |         +-----------------+
                |        |           |          |         |   Postgres      |
                |        |  Node 1   +--------------------+    mosquitto    |
+------------+  |   +----+           |          |         |    influxdb     |
|            +------+    +-----------+          |         +-+---------------+
|            |  |                               |           |
|   Load     +----------   ...                  |           |
|  Balancer  |  |          ...                  |           |
|            +----------                        |           |
|            |  |        +-----------+          |           |
|            +--------+  |           |          |           |
+------------+  |     |  |  Node N   +----------------------+
                |     +--+           |          |
                |        +-----------+          |
                |                               |

Assumptions:

  1. The nodes in the Mamori HA cluster are on a private subnet not visible to end clients, or have firewalls preventing end clients from accessing them.
  2. The load balancer or gateway machine has access to all nodes in the cluster and can be accessed from end client machines, either via firewall rules or multiple network adapters.
  3. The load balancer runs nginx to handle HTTPS traffic and forward requests to the cluster nodes.
  4. The load balancer runs haproxy to handle database proxy connections.

If you are running in an environment that offers other forms of load balancing, such as AWS, those may be a better option for your setup, but we have not investigated this possibility.

Procedure:

  1. Create a VM with a postgres 14 (or higher) instance or cluster. We can provide instructions on how to build a postgres cluster for HA and fault tolerance.

    This database instance must accept network connections using the MD5 method of authentication.
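    For a native postgres install this typically means enabling TCP connections and adding an md5 host rule. A minimal sketch, assuming postgres 14 on Debian/Ubuntu (the paths and subnet are examples; adjust for your environment):

# /etc/postgresql/14/main/postgresql.conf - accept network connections
listen_addresses = '*'

# /etc/postgresql/14/main/pg_hba.conf - allow md5 logins from the cluster subnet
host    all    all    192.168.238.0/24    md5

sudo systemctl restart postgresql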

    A pre-packaged docker based install of postgres 14 is available from Mamori along with instructions on how to install it, however a native postgres instance is preferred.

  • If you need to set the postgres password you can use something like the following:
sudo -u postgres psql -c "ALTER USER postgres PASSWORD 'Mamori2023';"
  2. Once you have a postgres instance (or cluster), create three databases: mamorisys, audit and xcs
sudo -u postgres psql -c "create database mamorisys;"
sudo -u postgres psql -c "create database audit;"
sudo -u postgres psql -c "create database xcs;"
  3. Verify you can connect to the mamorisys database from another machine using something like the following command:
PGPASSWORD=Mamori2023 psql --host 192.168.238.138 --port 5432 mamorisys postgres -c "select version()"
  4. Obtain the cluster docker image - the URL below is an example for development purposes and should not be used for production instances. The production image will be made available on request.
wget https://mamori-io.sgp1.digitaloceanspaces.com/install-media/dev/mamori_cluster_docker.tgz
  5. Create a new VM for the initial Mamori instance (node1) and install docker
curl https://get.docker.com/ | sh
  6. Create the mamori docker container on node1 (optionally update the timezone as required in the docker create command)
docker load < mamori_cluster_docker.tgz

docker create \
        --network host \
        --restart always \
        --privileged \
        --log-opt max-size=10m --log-opt max-file=10 \
        -v /var/run/docker.sock:/var/run/docker.sock \
        -v mamori-var:/opt/mamori/var \
        -v mamori-nginx-conf:/etc/nginx \
        -v /proc:/host/proc:ro \
        -e TZ=Etc/UTC \
        --name mamori mamori /sbin/my_init
  7. Configure the mamori container on node1 to connect to the shared database. This is done by starting a temporary docker container using the same volumes as the mamori container. The temporary container will be destroyed once the shell session finishes.
docker run --rm -it --volumes-from mamori mamori bash
/opt/mamori/mamori/bin/join_cluster 192.168.238.138 5432 postgres Mamori2023
exit
  8. Start the mamori instance on node1
docker start mamori
  9. The mamori instance will boot and create all the necessary database objects in the shared database. It should take less than a minute to boot and the progress can be followed using the following command:
docker exec -it mamori tail -F /opt/mamori/var/log/mamori_fqod.log
  10. Once the instance has booted, verify that it is up and working by connecting to it via a web browser. NOTE: the instance will be listening on port 80 as SSL will be handled by the load balancer, so you will need to access it using the http URL scheme rather than https. You should be able to log in to the mamori hub as the root user using the default password of Mamori2022.
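As a quick check before using a browser, you can confirm the HTTP listener is responding with curl (node1 here stands for the address of the instance):

curl -sI http://node1/ | head -1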

  11. Now set up a load balancer of your choice to perform SSL termination. Below is an example of how to do this using nginx in a docker container provided by Mamori. Again, a native install of nginx is generally better if available.

mkdir -p /opt/mamori/nginx/conf/mamori
mkdir -p /opt/mamori/nginx/ssl
openssl dhparam -out /opt/mamori/nginx/ssl/dhparam.pem 4096
wget https://mamori-io.sgp1.digitaloceanspaces.com/docker-images/nginx.tgz
docker load < nginx.tgz
docker create --restart always --network host --log-opt max-size=10m --log-opt max-file=5 -v /opt/mamori/nginx/conf:/etc/nginx/conf.d -v /opt/mamori/nginx/ssl:/etc/nginx/ssl --name nginx nginx:latest
  • Place your SSL cert and key in /opt/mamori/nginx/ssl/nginx.crt and /opt/mamori/nginx/ssl/nginx.key, or generate a self-signed cert as below:
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /opt/mamori/nginx/ssl/nginx.key -out /opt/mamori/nginx/ssl/nginx.crt
  • Place the following config into /opt/mamori/nginx/conf/load-balancer.conf, updating the server entry in the hub upstream to contain the address of node1:
upstream hub {
    ip_hash;
    server node1:80;
}

server {
       server_name sandbox.mamori.io;

       listen 80 default_server;
       listen [::]:80 default_server;

       root /var/www/html;

       index some-file-that-does-not-exist;
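       # the deliberately missing index file above forces a 403/404 for
       # every request, which the error_page rules below redirect to HTTPS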
       error_page 403 @gotohttps;
       error_page 404 @gotohttps;

       location / {
                try_files $uri $uri/ =404;
       }

       location @gotohttps {
                rewrite ^ https://$host$request_uri permanent;
       }
}

map $http_upgrade $connection_upgrade {
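    # pass WebSocket upgrade requests through to the hub; used by the
    # proxy_set_header Upgrade/Connection lines in the HTTPS server below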
    default upgrade;
    ''      close;
}


server {
  server_name sandbox.mamori.io;

  # SSL configuration
  #
  listen [::]:443 default_server ssl http2;
  listen      443 default_server ssl http2;

  client_max_body_size 100M;


  # Certificate(s) and private key
  ssl_certificate /etc/nginx/ssl/nginx.crt;
  ssl_certificate_key /etc/nginx/ssl/nginx.key;

  # generate random dhparam
  # openssl dhparam -out /etc/nginx/ssl/dhparam.pem 4096
  ssl_dhparam /etc/nginx/ssl/dhparam.pem;

  ssl_protocols TLSv1.3 TLSv1.2 TLSv1.1 TLSv1;
  ssl_prefer_server_ciphers on;
  ssl_ciphers EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA512:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:ECDH+AESGCM:ECDH+AES256:DH+AESGCM:DH+AES256:RSA+AESGCM:!aNULL:!eNULL:!LOW:!RC4:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS;

  ssl_session_cache shared:TLS:2m;
  ssl_buffer_size 4k;

  # OCSP stapling
  ssl_stapling on;
  ssl_stapling_verify on;
  resolver 1.1.1.1 1.0.0.1 [2606:4700:4700::1111] [2606:4700:4700::1001]; # Cloudflare

  # Set HSTS to 365 days
  add_header Strict-Transport-Security 'max-age=31536000; includeSubDomains; preload' always;

  location / {
    proxy_pass http://hub;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_set_header Host      $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Force-HTTPS true;
    proxy_set_header x-forwarded-proto https;
  }

  include /etc/nginx/conf.d/mamori/*.conf;
}
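With the certificate and configuration in place, start the nginx container; after any subsequent configuration change a reload is sufficient (commands assume the container name used above):

docker start nginx
# after later config changes:
docker exec nginx nginx -s reload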

NOTE: if not using nginx, the load balancer/SSL terminator must set the X-Real-IP header to the IP address of the client machine, otherwise the Mamori hub will report the load balancer as the source IP of any web console connections.

  12. Once the load balancer is up and running it should be possible to visit the load balancer via https and see the Mamori login page.

NOTE: this is still only a single node install at this stage. To upgrade to a full HA cluster it is necessary to install an MQTT server.

  13. Create a new VM (or use the database VM in a test environment) and install mosquitto. Below is an example of how to do this using the official Eclipse Mosquitto docker image
mkdir -p /opt/mamori/mosquitto/data
mkdir -p /opt/mamori/mosquitto/log
  • Place the following into /opt/mamori/mosquitto/mosquitto.conf
persistence true
persistence_location /mosquitto/data/
log_dest file /mosquitto/log/mosquitto.log
bind_address 0.0.0.0
allow_anonymous true
  • Create and start the mosquitto container
docker create --name mosquitto --restart always --network host --log-opt max-size=10m --log-opt max-file=5 -v /opt/mamori/mosquitto/mosquitto.conf:/mosquitto/config/mosquitto.conf -v /opt/mamori/mosquitto/data:/mosquitto/data -v /opt/mamori/mosquitto/log:/mosquitto/log eclipse-mosquitto
docker start mosquitto
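Optionally, sanity check the broker from another machine using the mosquitto command line clients (requires the mosquitto-clients package; the IP is an example):

mosquitto_sub -h 192.168.238.138 -t 'test/#' -C 1 &
mosquitto_pub -h 192.168.238.138 -t test/ping -m hello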
  14. Configure the cluster to know about the mosquitto instance by running the following commands on the instance node (node1), adjusting the IP address to that of the machine running mosquitto:
docker exec -it mamori msql "call SET_SERVER_PROPERTY('mqtt_server', 'tcp://192.168.238.138:1883')"
docker exec -it mamori msql "sv restart mamori_fqod"
  15. On the load balancer machine install HAProxy and configure it to pass database proxy connections down to the cluster.
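A minimal install sketch, assuming a Debian/Ubuntu load balancer; the sample configuration below belongs in /etc/haproxy/haproxy.cfg:

apt install haproxy
# after editing /etc/haproxy/haproxy.cfg:
systemctl reload haproxy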

The following is a sample HAProxy configuration that assumes the Oracle health check scripts have been installed to /opt/mamori/health_check

global
        log /dev/log    local0
        log /dev/log    local1 notice
        external-check
        insecure-fork-wanted
        stats socket /run/haproxy/admin.sock mode 660 level admin
        stats timeout 30s
        user haproxy
        group haproxy
        daemon
        # Default SSL material locations
        ca-base /etc/ssl/certs
        crt-base /etc/ssl/private

        # See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
        ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
        ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
        ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets

defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        timeout connect 5000
        timeout client  50000
        timeout server  50000

        timeout tunnel 1h
        timeout client-fin 30s

        errorfile 400 /etc/haproxy/errors/400.http
        errorfile 403 /etc/haproxy/errors/403.http
        errorfile 408 /etc/haproxy/errors/408.http
        errorfile 500 /etc/haproxy/errors/500.http
        errorfile 502 /etc/haproxy/errors/502.http
        errorfile 503 /etc/haproxy/errors/503.http
        errorfile 504 /etc/haproxy/errors/504.http

listen  stats
        bind            127.0.0.1:1936
        mode            http
        no log
        maxconn 10

        timeout queue   100s

        stats enable
        stats hide-version
        stats refresh 30s
        stats show-node
        # stats auth admin:password
        stats uri  /haproxy?stats

listen  mamori-hub
        bind    *:1527
        mode    tcp
        option  tcplog
        option  logasap
        server  mamorihub1      node1:1527 no-check send-proxy-v2
#        server  mamorihub2      node2:1527 no-check send-proxy-v2

listen  oracle-proxy
        bind    *:1521
        mode    tcp
        option  tcplog
        option  logasap
        option  external-check
        external-check path "/usr/bin:/bin"
        external-check command /opt/mamori/health_check/test_oracle.sh
        server  mamorihub1      node1:1521 send-proxy-v2 check inter 5s
#        server  mamorihub2      node2:1521 send-proxy-v2 check inter 5s

listen  postgres-proxy
        bind    *:5432
        mode    tcp
        option  tcplog
        option  logasap
        server  mamorihub1      node1:5432 no-check send-proxy-v2
#        server  mamorihub2      node2:5432 no-check send-proxy-v2

listen  legacy-mysql-proxy-5
        bind    *:3305
        mode    tcp
        option  tcplog
        option  logasap
        server  mamorihub1      node1:4305 no-check send-proxy-v2
#        server  mamorihub2      node2:4305 no-check send-proxy-v2

listen  legacy-mysql-proxy-8
        bind    *:3306
        mode    tcp
        option  tcplog
        option  logasap
        server  mamorihub1      node1:4306 no-check send-proxy-v2
#        server  mamorihub2      node2:4306 no-check send-proxy-v2

listen  sqlserver-proxy
        bind    *:1433
        mode    tcp
        option  tcplog
        option  logasap
        server  mamorihub1      node1:1433 no-check send-proxy-v2
#        server  mamorihub2      node2:1433 no-check send-proxy-v2

listen  ssh-proxy
        bind    *:22
        mode    tcp
        option  tcplog
        option  logasap
        server  mamorihub1      node1:1122 no-check send-proxy-v2
#        server  mamorihub2      node2:1122 no-check send-proxy-v2

listen  mongo-proxy
        bind    *:27017
        mode    tcp
        option  tcplog
        option  logasap
        server  mamorihub1      node1:28017 no-check send-proxy-v2
#        server  mamorihub2      node2:28017 no-check send-proxy-v2

listen  http-proxy
        bind    *:8089
        mode    tcp
        option  tcplog
        option  logasap
        server  mamorihub1      node1:8089 no-check send-proxy-v2
#        server  mamorihub2      node2:8089 no-check send-proxy-v2
  16. On node1, configure the instance to expect HAProxy connections for the database proxies:
docker exec -it mamori msql "call SET_SERVER_PROPERTY('haproxy', 'true')"
docker exec -it mamori msql "sv restart mamori_fqod"
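Once HAProxy has been reloaded, a simple reachability check of the proxy ports can be run from a client machine with nc (the load balancer address is a placeholder to replace):

LB=load-balancer-address
for p in 1527 1521 5432 3306 1433 27017; do nc -vz -w 2 $LB $p; done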

WireGuard support (optional)

  1. Install ipvsadm
apt install ipvsadm
  2. Run the following bash script, updated to reflect your desired WireGuard config
#!/bin/bash

VIP=10.240.0.36
PORT=51871
NODE1=10.240.0.11

# clear the rules
ipvsadm -C
# create the virtual service (UDP, source-hash scheduling)
ipvsadm -A -u $VIP:$PORT -s sh
# add a backend node (NAT/masquerade mode)
ipvsadm -a -u $VIP:$PORT -r $NODE1 -m
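You can confirm the virtual service and its backend with:

ipvsadm -L -n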

Finally

To add new nodes to the cluster either clone the instance node (node1) or repeat steps 5 through 8, then add the node to the load balancer config and restart or reload the load balancer. A sketch of the config changes is shown below.
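Assuming a second node with hostname node2, add it to the hub upstream in the nginx config and uncomment the corresponding mamorihub2 lines in haproxy.cfg, then reload both:

# /opt/mamori/nginx/conf/load-balancer.conf
upstream hub {
    ip_hash;
    server node1:80;
    server node2:80;
}

# reload both load balancing layers
docker exec nginx nginx -s reload
systemctl reload haproxy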
