
Deploying a Resilio Management Console HA Cluster in OCI

This guide provides instructions for deploying a Resilio Management Console High Availability (HA) cluster on Linux virtual machines running either RHEL-based (Rocky Linux, AlmaLinux, CentOS, etc.) or Debian-based (Ubuntu, Debian) distributions. The guide covers OCI-specific configurations.


Prerequisites

  • Two Linux VMs.
  • Root or sudo privileges on each VM.
  • A shared storage solution (OCI Block Volume for HA setups).
  • Network connectivity between the cluster nodes.

Step 1: Configure Block Volume (OCI-Specific)

Create a block volume in OCI with your preferred settings to provide shared storage between the VMs for the Management Console.

Attach Block Storage (OCI-Specific)

  • Attach block storage to both VMs during or after creation.
  • You can use the storage OCID if the volume is not visible in the drop-down list.
  • Select the attachment type: choose Custom, then ISCSI, and enable "Use Oracle Cloud Agent to connect to iSCSI-attached volumes automatically".
  • Select the device path to ensure consistency.
  • Set access to Read/write - shareable.

[Screenshot: attaching the block volume to a VM]
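
The attachment can also be scripted. A minimal sketch using the OCI CLI, assuming the instance and volume OCIDs are at hand and that your CLI version supports the --is-shareable flag; run once per VM:

oci compute volume-attachment attach \
  --instance-id <instance-ocid> \
  --volume-id <volume-ocid> \
  --type iscsi \
  --is-shareable true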

Step 2: Install Dependencies

Ensure all necessary packages are installed.

RHEL-Based Distributions:

sudo dnf update -y  
sudo dnf install -y epel-release  
sudo dnf install -y wget curl tar unzip nano firewalld

Debian-Based Distributions:

sudo apt update -y
sudo apt install -y wget curl tar unzip nano ufw

Step 3: Configure Firewall Rules

Allow traffic on necessary ports:

Resilio Management Console Ports:

sudo firewall-cmd --add-port=8443/tcp --permanent # Admin Console
sudo firewall-cmd --add-port=8444/tcp --permanent # Resilio Agent control traffic
sudo firewall-cmd --add-port=8445/tcp --permanent # Resilio Agent events and logs
sudo firewall-cmd --add-port=8446/tcp --permanent # API gateway
sudo firewall-cmd --add-port=3000/tcp --permanent # Tracker service (TCP)
sudo firewall-cmd --add-port=3000/udp --permanent # Tracker service (UDP)
sudo firewall-cmd --add-port=1080/tcp --permanent # Connection to Resilio Proxy

Pacemaker Cluster Ports:

sudo firewall-cmd --add-port=2224/tcp --permanent # Corosync and Pacemaker  
sudo firewall-cmd --add-port=3121/tcp --permanent # pacemaker_remoted  
sudo firewall-cmd --add-port=5405/udp --permanent # Cluster communication  
sudo firewall-cmd --reload
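
To confirm the rules took effect, list the open ports:

sudo firewall-cmd --list-ports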

For Debian-based distributions, use ufw:

sudo ufw allow 8443/tcp  
sudo ufw allow 8444/tcp  
sudo ufw allow 8445/tcp  
sudo ufw allow 8446/tcp  
sudo ufw allow 3000/tcp  
sudo ufw allow 3000/udp  
sudo ufw allow 1080/tcp  
sudo ufw allow 2224/tcp  
sudo ufw allow 3121/tcp  
sudo ufw allow 5405/udp  
sudo ufw reload

[Screenshot: OCI ingress rules]

Step 4: Load Balancer

Create a Network Load Balancer.

Choose the same virtual cloud network (VCN) as the VMs.

[Screenshot: creating the Network Load Balancer]

Configure the listener for all required TCP/UDP ports. UDP is required if the built-in tracker will be used.

[Screenshot: load balancer listener configuration]

Select both VMs as backends.

[Screenshot: load balancer backend selection]

Add a health check policy that makes GET requests to the MC. An MC under high load requires a higher timeout.

[Screenshot: load balancer health check policy]
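
You can preview what the health check will see by issuing a GET request manually from one of the VMs; a quick sketch, assuming the default self-signed certificate (hence -k):

curl -k -s -o /dev/null -w "%{http_code}\n" https://<VM-IP>:8443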

Step 5: Storage Configuration

Before storage can be used, it must be formatted.

Check that the storage device is visible on both VMs:

sudo blkid /dev/oracleoci/oraclevdm

On one of the VMs, format the storage device.

sudo mkfs.xfs /dev/oracleoci/oraclevdm

Then, on one of the VMs, create the mountpoint and the data directory:

sudo mkdir -p /mnt/resilio-mc-storage
sudo mount /dev/oracleoci/oraclevdm /mnt/resilio-mc-storage
sudo mkdir -p /mnt/resilio-mc-storage/var
sudo chown ubuntu /mnt/resilio-mc-storage/var # adjust the user to match your distribution
sudo umount /mnt/resilio-mc-storage
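
To confirm the shared volume is usable from the second VM as well, mount it there briefly and check that the var directory created above is visible:

sudo mkdir -p /mnt/resilio-mc-storage
sudo mount /dev/oracleoci/oraclevdm /mnt/resilio-mc-storage
ls /mnt/resilio-mc-storage
sudo umount /mnt/resilio-mc-storage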

Step 6: Install Management Console
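
The service unit in Step 7 assumes the console is installed under /home/ubuntu/resilio-connect-server on both VMs. A minimal sketch, assuming a tarball download link obtained from Resilio (the exact URL comes with your license):

cd /home/ubuntu
wget <resilio-connect-server-tarball-url>
tar -xzf resilio-connect-server*.tar.gz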

Step 7: Configure Resilio Management Console Service

Edit the systemd service file:

sudo nano /lib/systemd/system/resilio-connect-server.service

In the ExecStart line, change the --appdata argument, which by default points to

--appdata /var/opt/resilio-connect-management-console

so that it points to the shared mountpoint instead. Then add iSCSI and remote filesystem dependencies to the [Unit] section so the service starts only after the shared volume is available:

After=network.target iscsi.service remote-fs.target
Requires=iscsi.service remote-fs.target

The resulting unit file:


[Unit]  
Description=Resilio Connect Management Console service  
Documentation=https://connect.resilio.com  
After=network.target iscsi.service remote-fs.target
Requires=iscsi.service remote-fs.target

[Service]  
Type=simple  
User=root  
Group=root  
UMask=0002  
Restart=on-failure  
TimeoutSec=600  
ExecStart=/home/ubuntu/resilio-connect-server/srvctrl run --appdata /mnt/resilio-mc-storage/var  
ExecStop=/bin/kill -s SIGTERM $MAINPID

[Install]  
WantedBy=multi-user.target

Save and exit, then reload systemd:

sudo systemctl daemon-reload   
sudo systemctl enable --now resilio-connect-server.service
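
Check the service state. Note that the service can only run on the node where the shared volume is mounted; until Pacemaker takes over mounting in Step 8, it may fail to start elsewhere:

sudo systemctl status resilio-connect-server.service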

Step 8: Cluster Configuration

Install Pacemaker and Corosync

sudo apt install pacemaker corosync pcs resource-agents-base resource-agents-extra
sudo systemctl enable --now corosync pacemaker pcsd

Configure Cluster Nodes

Edit /etc/corosync/corosync.conf on the primary VM.

Replace bindnetaddr with the network address of the private interface.

For example, if the local interface is 192.168.5.92 with netmask 255.255.255.0, set bindnetaddr to 192.168.5.0. If the local interface is 192.168.5.92 with netmask 255.255.255.192, set bindnetaddr to 192.168.5.64, and so forth.

If you have multiple interfaces, use the interface you would like corosync to communicate over.
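
If you would rather not compute the network address by hand, the ipcalc utility can derive it; a sketch for the 255.255.255.192 example above (Debian's ipcalc shown; RHEL's variant uses ipcalc -n):

ipcalc 192.168.5.92/26 # the "Network:" line, 192.168.5.64, is the bindnetaddr value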

Set the private IPs of both nodes in the nodelist section.

totem {
    version: 2
    cluster_name: resilio_mc_cluster
    crypto_cipher: none
    crypto_hash: none
    transport: udpu
    bindnetaddr: 10.0.0.0
}

logging {
    fileline: off
    to_stderr: yes
    to_logfile: yes
    logfile: /var/log/corosync/corosync.log
    to_syslog: yes
    debug: off
    logger_subsys {
        subsys: QUORUM
        debug: off
    }
}

quorum {
    provider: corosync_votequorum
    two_node: 1
}

nodelist {
    node {
        name: Node001
        nodeid: 1
        ring0_addr: <primary-vm-private-ip>
    }
    node {
        name: Node002
        nodeid: 2
        ring0_addr: <secondary-vm-private-ip>
    }
}

Restart services

sudo systemctl restart corosync  
sudo systemctl restart pacemaker
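
After the restart, you can verify that Corosync has formed the ring and sees both nodes:

sudo corosync-cfgtool -s
sudo corosync-cmapctl | grep members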

Make sure the cluster user hacluster exists. It is created automatically when the cluster packages are installed.

id hacluster

Set a password on both nodes

sudo passwd hacluster

Authenticate the cluster and enter the password when requested on both nodes.

sudo pcs host auth <primary-vm-private-ip> <secondary-vm-private-ip> -u hacluster

Create cluster

sudo pcs cluster setup resilio_mc_cluster <primary-vm-private-ip> <secondary-vm-private-ip>

Start and enable it

sudo pcs cluster start --all   
sudo pcs cluster enable --all

Make sure the cluster is running and both nodes are online when this is executed on both VMs

sudo pcs status

Configure cluster resources (block storage and service)

sudo pcs resource create resilio-mc-storage ocf:heartbeat:Filesystem device="/dev/oracleoci/oraclevdm" directory="/mnt/resilio-mc-storage" fstype="xfs" op monitor interval=10s on-fail=restart


sudo pcs resource create resilio-mc-app systemd:resilio-connect-server op monitor interval=10s on-fail=restart

Set resource dependencies. Some commands may report noncritical errors.

sudo pcs resource group add resilio-mc resilio-mc-storage resilio-mc-app  
sudo pcs constraint colocation add resilio-mc-app with resilio-mc-storage  
sudo pcs constraint order start resilio-mc-storage then resilio-mc-app

Set failover policy

sudo pcs property set stonith-enabled=false  
sudo pcs property set no-quorum-policy=ignore

Use this command to see the cluster status

sudo pcs status

And this one for cluster logs

journalctl -xe -u pacemaker

Step 9: Test and Validate Cluster Setup

  • Access the Resilio Management Console via https://<VM-IP>:8443 or through the load balancer.
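
To validate failover, put the active node into standby, confirm that the resilio-mc resource group moves to the other node, then bring the node back; a sketch using pcs:

sudo pcs node standby <active-node-name>
sudo pcs status # the resilio-mc group should now be running on the other node
sudo pcs node unstandby <active-node-name>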