# Load Balancing

The load balancer receives requests from clients and forwards each one to the most suitable server. This ensures that Enginsight performs well even under a large number of incoming requests.

<figure><img src="https://97980696-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LTMe1v0eboWCAUTQHbT-887967055%2Fuploads%2FWkvEpmRA0YBHJzwOON2Y%2FLoadBalancing.webp?alt=media&#x26;token=a95b9e5b-425a-4abd-ac06-e459cc64caa9" alt=""><figcaption><p>By means of a 2nd app server and the outsourcing of services to a separate VM, requests are now distributed evenly and performance is increased.</p></figcaption></figure>

{% hint style="info" %}
We recommend using the load balancer once you manage 500 or more hosts.
{% endhint %}

To use load balancing, 3 additional virtual machines must be provisioned:

1. VM for the load balancer
2. VM for the services
3. VM for the 2nd app server

{% hint style="info" %}
For smooth operation, we recommend using Nginx as a reverse proxy.
{% endhint %}

## Preparation of the Virtual Machines

### Load balancer VM

In the first step, prepare the VM for the load balancer. This is used for certificate handling and forwarding of your requests.

{% hint style="warning" %}
It is important to note that Docker is **not** installed on the VM running Nginx. This is the only way to ensure smooth operation.
{% endhint %}

1. Provide a VM for the load balancer.
2. Install Nginx with `sudo apt install nginx`.
3. Copy the configuration below and adjust it to your environment.\
   Open the configuration with the following command:\
   `sudo nano /etc/nginx/sites-available/ngs.conf`

   If you are not using our ISO, you can instead adjust the default configuration under\
   `sudo nano /etc/nginx/sites-available/default`

```
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}
 
upstream apiServers {
    # API (server-m2) on the two app-server VMs
    server <IP_API_SERVER1>:8080;
    server <IP_API_SERVER2>:8080;
}

upstream appServers {
    # UI (ui-m1) on the same two app-server VMs
    server <IP_API_SERVER1>:80;
    server <IP_API_SERVER2>:80;
}
 
 
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
 
    ssl_stapling on;
    ssl_stapling_verify on;
    server_name ...;
 
    ssl_protocols TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers "ECDHE-RSA-AES256-GCM-SHA384";
    ssl_ecdh_curve secp384r1;
    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets off;
    resolver 8.8.8.8 8.8.4.4 valid=300s;
    resolver_timeout 5s;
 
    ssl_dhparam /etc/nginx/dhparam.pem;
    ssl_certificate /etc/letsencrypt/live/.../fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/.../privkey.pem;
 
    client_max_body_size 200m;
 
    location / {
        proxy_pass http://apiServers;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto "https";
        proxy_set_header X-Forwarded-Ssl "on";
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}
 
 
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
 
    ssl_stapling on;
    ssl_stapling_verify on;
    server_name ...;
 
    ssl_protocols TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers "ECDHE-RSA-AES256-GCM-SHA384";
    ssl_ecdh_curve secp384r1;
    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets off;
    resolver 8.8.8.8 8.8.4.4 valid=300s;
    resolver_timeout 5s;
 
    ssl_dhparam /etc/nginx/dhparam.pem;
    ssl_certificate /etc/letsencrypt/live/.../fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/.../privkey.pem;
 
    client_max_body_size 200m;
 
    location / {
        proxy_pass http://appServers;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto "https";
        proxy_set_header X-Forwarded-Ssl "on";
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}
```

4. Now add the certificates and adjust the paths in the configuration accordingly. Example:\
   Location of the certificates: `/etc/nginx/ssl`\
   \
   Adjustment in the configuration:\
   `ssl_certificate /etc/nginx/ssl/fullchain.pem;`\
   and\
   `ssl_certificate_key /etc/nginx/ssl/privkey.pem;`
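Note that the configuration above references `/etc/nginx/dhparam.pem`, which does not exist by default. A minimal sketch for generating it (the helper function and the 2048-bit size are illustrative; generation can take several minutes):

```shell
# Generate the Diffie-Hellman parameters referenced by ssl_dhparam.
# gen_dhparam <output-path> <bits>
gen_dhparam() {
  openssl dhparam -out "$1" "$2"
}
# Example (run with root privileges): gen_dhparam /etc/nginx/dhparam.pem 2048
```

Afterwards, `sudo nginx -t` validates the configuration before you reload Nginx.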

{% hint style="warning" %}
If you have issued your certificates for IP addresses, make sure that the respective certificates are issued for the correct IP addresses.
{% endhint %}
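Nginx distributes requests across the upstream servers round-robin by default. Optionally, the `upstream` blocks can be extended with passive health-check parameters so that a failed app server is temporarily taken out of rotation. A sketch (`max_fails` and `fail_timeout` are standard Nginx upstream parameters; the thresholds below are example values to adjust for your environment):

```
upstream apiServers {
    # Take a server out of rotation for 30s after 3 failed attempts (example values)
    server <IP_API_SERVER1>:8080 max_fails=3 fail_timeout=30s;
    server <IP_API_SERVER2>:8080 max_fails=3 fail_timeout=30s;
}
```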

### Services VM

This VM is to be provisioned exclusively for the execution of the following services:

<table><thead><tr><th>Service</th><th>Function</th><th data-hidden></th></tr></thead><tbody><tr><td>sentinel-m3</td><td>Controls alerts and manages assigned notifications.</td><td></td></tr><tr><td>reporter-m4</td><td>Provides the Enginsight platform with up-to-date vulnerability data (CVEs) and distributes it to the modules.</td><td></td></tr><tr><td>profiler-m22</td><td>Calculates the normal curve of the machine learning metrics.</td><td></td></tr><tr><td>anomalies-m28</td><td>Compares the normal curve of the machine learning metrics with measured data to detect anomalies.</td><td></td></tr><tr><td>scheduler-m29</td><td>Triggers scheduled, automated actions, for example plugins or audits.</td><td></td></tr><tr><td>updater-m34</td><td>Manages and updates configuration checklists.</td><td></td></tr><tr><td>generator-m35</td><td>Generates PDF reports, e.g. for hosts, endpoints and penetration tests.</td><td></td></tr><tr><td>historian-m38</td><td>Aggregates measured data to display it over time.</td><td></td></tr><tr><td>themis-m43</td><td>Acts as an integrity manager and checks data for correctness and up-to-dateness.</td><td></td></tr></tbody></table>

{% hint style="warning" %}
Keep in mind that these services now run on a separate VM, and take this into account when configuring your app servers.
{% endhint %}

1. [Install Docker](https://docs.docker.com/engine/install/) and download the Enginsight Repo to your Services VM ([Installation Appserver](https://docs.enginsight.com/docs/master/on-premises/manual-installation#application-server-install-enginsight) Steps 1-4).
2. Now customize the `docker-compose.yml` under `/opt/enginsight/enterprise` as shown below.

```
version: "3"
services:
  mongodb-cves:
    image: mongo:4
    networks:
    - mongodb-cves
    restart: always
    volumes:
    - mongodb-cves-volume:/data/db

  sentinel-m3:
    image: registry.enginsight.com/enginsight/sentinel-m3:2.22.37
    restart: always
    volumes:
    - "./conf/services/config.json.production:/etc/enginsight/sentinel-m3/config.json"

  reporter-m4:
    image: registry.enginsight.com/enginsight/reporter-m4:2.4.47
    networks:
    - mongodb-cves
    depends_on:
    - mongodb-cves
    restart: always
    volumes:
    - "./conf/services/config.json.production:/etc/enginsight/reporter-m4/config.json"

  profiler-m22:
    image: registry.enginsight.com/enginsight/profiler-m22:2.2.9
    restart: always
    volumes:
    - "./conf/services/config.json.production:/etc/enginsight/profiler-m22/config.json"

  anomalies-m28:
    image: registry.enginsight.com/enginsight/anomalies-m28:2.2.2
    restart: always
    volumes:
    - "./conf/services/config.json.production:/etc/enginsight/anomalies-m28/config.json"

  scheduler-m29:
    image: registry.enginsight.com/enginsight/scheduler-m29:1.8.76
    restart: always
    volumes:
    - "./conf/services/config.json.production:/etc/enginsight/scheduler-m29/config.json"

  updater-m34:
    image: registry.enginsight.com/enginsight/updater-m34:2.0.4
    restart: always
    volumes:
    - "./conf/services/config.json.production:/etc/enginsight/updater-m34/config.json"

  generator-m35:
    image: registry.enginsight.com/enginsight/generator-m35:1.14.2
    restart: always
    volumes:
    - "./conf/services/config.json.production:/etc/enginsight/generator-m35/config.json"

  historian-m38:
    image: registry.enginsight.com/enginsight/historian-m38:2.1.58
    restart: always
    volumes:
    - "./conf/services/config.json.production:/etc/enginsight/historian-m38/config.json"

  themis-m43:
    image: registry.enginsight.com/enginsight/themis-m43:1.18.20
    restart: always
    volumes:
    - "./conf/services/config.json.production:/etc/enginsight/themis-m43/config.json"

networks:
  mongodb-cves:

volumes:
  mongodb-cves-volume:
```

{% hint style="warning" %}
Make sure that you always use the latest version tags.
{% endhint %}
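To spot outdated tags quickly, the image references can be extracted from the compose file with standard tools; a sketch (the helper name is illustrative):

```shell
# Print every image reference (and thus its version tag) from a compose file.
# list_image_tags <path-to-docker-compose.yml>
list_image_tags() {
  grep -E '^[[:space:]]*image:' "$1" | awk '{print $2}'
}
# Example: list_image_tags /opt/enginsight/enterprise/docker-compose.yml
```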

3. Store a [mail server configuration](https://docs.enginsight.com/docs/master/on-premises/configuration/mail-server) on the Services VM and ensure that the mail server configuration is removed from the app servers.

### Database VM

1. Secure your database with `iptables`.

{% hint style="danger" %}
Adjust the `iptables` rules to block all connections from outside to the database. This ensures that only the application can access MongoDB and prevents unauthorized access.
{% endhint %}

2. Add new rules for the 2nd app server and the server running the services. To do this, open the iptables rules file:

```
sudo nano /etc/iptables/rules.v4 
```

3. Replace `<APP_IP>` and `<APP2_IP>` with the IPs of the two app servers as reachable from the database, and `<SERVICES_IP>` with the IP of the Services VM.\
   Replace `<DB_IP>` with the database server IP reachable from the application and add rules for Redis (port 6379) as shown below.

```
 -A INPUT -p tcp -m tcp --dport 27017 -s 127.0.0.1 -j ACCEPT
 -A INPUT -p tcp -m tcp --dport 27017 -s <APP_IP> -j ACCEPT
 -A INPUT -p tcp -m tcp --dport 6379 -s <APP_IP> -j ACCEPT
 -A INPUT -p tcp -m tcp --dport 27017 -s <APP2_IP> -j ACCEPT
 -A INPUT -p tcp -m tcp --dport 6379 -s <APP2_IP> -j ACCEPT
 -A INPUT -p tcp -m tcp --dport 27017 -s <SERVICES_IP> -j ACCEPT
 -A INPUT -p tcp -m tcp --dport 6379 -s <SERVICES_IP> -j ACCEPT
 -A INPUT -p tcp -m tcp --dport 27017 -s <DB_IP> -j ACCEPT
 -A INPUT -p tcp -m tcp --dport 27017 -j DROP
 -A INPUT -p tcp -m tcp --dport 6379 -j DROP
```
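If you prefer to generate the rule set instead of editing it by hand, it can be produced from a small script. This sketch is slightly more general than the listing above (it also opens the Redis port for localhost and omits the separate `<DB_IP>` rule); the IPs are documentation placeholders:

```shell
# Emit ACCEPT rules for MongoDB (27017) and Redis (6379) for each allowed
# source, followed by the catch-all DROP rules. Placeholder IPs for illustration.
APP_IP="192.0.2.11"; APP2_IP="192.0.2.12"; SERVICES_IP="192.0.2.13"

generate_rules() {
  for src in 127.0.0.1 "$APP_IP" "$APP2_IP" "$SERVICES_IP"; do
    echo "-A INPUT -p tcp -m tcp --dport 27017 -s $src -j ACCEPT"
    echo "-A INPUT -p tcp -m tcp --dport 6379 -s $src -j ACCEPT"
  done
  echo "-A INPUT -p tcp -m tcp --dport 27017 -j DROP"
  echo "-A INPUT -p tcp -m tcp --dport 6379 -j DROP"
}

generate_rules
```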

4. Once all changes are made, apply the rules and make them persistent.

```
sudo iptables-restore < /etc/iptables/rules.v4
sudo apt-get install -y iptables-persistent
```

5. Now install the Redis server.

```
sudo apt install redis-server
```

6. Adjust the Redis configuration.

```
sudo nano /etc/redis/redis.conf
```

Change the `bind` directive so that Redis accepts connections from the app and services VMs:

```
bind 0.0.0.0
```

7. Save the file and restart Redis afterwards.

```
sudo service redis restart
```

{% hint style="info" %}
**Recommendation for larger installations**\
For systems with **more than 1,000 assets**, we recommend **using a MongoDB cluster**.\
Replication and load balancing within the cluster significantly improve database performance and reliability.
{% endhint %}

### App-Server VMs

Prepare two virtual machines for the app servers. These VMs ensure that the user interface remains accessible via both servers and that requests can be processed in parallel.

{% hint style="info" %}
If you have previously used Enginsight without a load balancer, you can use the existing app server as one of the two required app server VMs!

This greatly simplifies the load balancer setup and allows you to quickly proceed with the implementation.
{% endhint %}

1. [Install Docker](https://docs.docker.com/engine/install/) and download the Enginsight Repo to each app server VM ([Installation Appserver](https://docs.enginsight.com/docs/master/on-premises/manual-installation#application-server-install-enginsight) Steps 1-4).

{% hint style="info" %}
To save time and effort, we recommend that you either use our ISO file or clone the first app server to create the second VM for the app server.
{% endhint %}

2. Modify the [docker-compose.yml](https://docs.enginsight.com/docs/master/configuration/https#define-internal-ports), as indicated below.

```
version: "3"
services:
  ui-m1:
    image: registry.enginsight.com/enginsight/ui-m1:3.5.10
    ports:
    - "80:80"
    restart: always
    volumes:
    - "./conf/ui-m1/environment.js.production:/opt/enginsight/ui-m1/config/environment.js"

  server-m2:
    image: registry.enginsight.com/enginsight/server-m2:3.5.426
    ports:
    - "8080:8080"
    restart: always
    volumes:
    - "./conf/services/config.json.production:/etc/enginsight/server-m2/config.json"

```

{% hint style="warning" %}
Note here as well that the image tags must reference the current versions. To do this, either adjust the `.yml` and delete all entries that are not required, or update the versions yourself.
{% endhint %}

3. If not already available, enter the [mail server configuration](https://docs.enginsight.com/docs/master/configuration/mail-server#set-up-mail-server) and ensure that it is identical to that on the Services VM.
4. If you have cloned the app server, disable Nginx on both app servers with the following commands:

   ```
   sudo systemctl stop nginx
   ```

   and

   ```
   sudo systemctl disable nginx
   ```
5. Copy the contents of the `DEFAULT_JWT_SECRET.conf` file under `/opt/enginsight/enterprise/conf` on the first app server and paste it into the same file on the 2nd app server, so that the secret is identical on both servers.
6. Now check the connection to Redis. To do this, log into the container and establish a connection:
   * Check Redis Connection
     1. `sudo docker ps`
     2. `sudo docker exec -it <Id of server-m2> /bin/sh`
     3. `apk add redis`
     4. `redis-cli -h <IPDB>`
7. Now check the Docker logs of server-m2 to verify that the connection to the database and to Redis has been established.
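If `redis-cli` is not available, a quick TCP reachability check can also be run directly on the app server; a sketch using bash's `/dev/tcp` pseudo-device (the helper name is illustrative):

```shell
# check_port <host> <port>: prints "open" if a TCP connection can be
# established, otherwise "closed" (uses bash's /dev/tcp pseudo-device).
check_port() {
  if (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null; then echo open; else echo closed; fi
}
# Example: check_port <IPDB> 6379
```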

## Start load balancing

1. Change the DNS entry.

{% hint style="warning" %}
Note that the URLs of the app and the API must now point to the load balancer and no longer to the app server.
{% endhint %}

2. Once you have prepared all VMs, you can run `sudo setup.sh`.

{% hint style="warning" %}
Note that the Redis URL must be changed to `redis://:6379`
{% endhint %}

Now check whether your application continues to work without problems if a single server fails.

1. To do this, run `sudo docker-compose down` on APP Server 1 and verify that APP Server 2 is still receiving data and all hosts are still active.
2. Restart all Docker containers with `sudo docker-compose up -d`
3. Repeat steps 1 and 2 for App Server 2.

{% hint style="warning" %}
Note that the update script must now always be run on all three servers to ensure that all servers are up to date and no incompatibilities occur.
{% endhint %}
