Load Balancing
Use the load balancer to distribute your traffic across multiple servers and optimize performance.
The load balancer receives incoming requests from clients and forwards each one to the most suitable server. This ensures that Enginsight runs optimally even under a large number of incoming requests.
We recommend using the load balancer once you manage 500 or more hosts.
To use load balancing, three additional virtual machines must be provisioned:
VM for the load balancer
VM for the services
VM for the 2nd app server
For smooth operation, we recommend using Nginx as a reverse proxy.
In the first step, prepare the VM for the load balancer. It handles certificate management and forwards your requests.
Note that Docker must not be installed on the VM running Nginx. This is the only way to ensure smooth operation.
Provide a VM for the load balancer.
Install Nginx with sudo apt install nginx.
Configure Nginx as a reverse proxy. To do this, take the configuration and adjust it to your environment.
Now add the certificates and adjust the paths in the configuration accordingly. Example:
Location of the certificates: /etc/nginx/ssl
Adjustment in the configuration:
ssl_certificate /etc/nginx/ssl/fullchain.pem;
and
ssl_certificate_key /etc/nginx/ssl/privkey.pem;
If you have issued your certificates for IP addresses, make sure that each certificate is issued for the correct IP address.
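The steps above can be sketched as a minimal Nginx configuration. This is an illustrative sketch, not the official Enginsight template: the upstream name, the placeholder addresses `<APP1 IP>`/`<APP2 IP>`, and the app port `<APP PORT>` are assumptions you must replace with your own values.

```nginx
# Hypothetical minimal reverse-proxy sketch for the load balancer VM.
# <APP1 IP>, <APP2 IP> and <APP PORT> are placeholders.
upstream enginsight_app {
    server <APP1 IP>:<APP PORT>;
    server <APP2 IP>:<APP PORT>;
}

server {
    listen 443 ssl;
    server_name _;

    # Certificate paths as configured above
    ssl_certificate     /etc/nginx/ssl/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/privkey.pem;

    location / {
        proxy_pass http://enginsight_app;
        # Preserve client information for the app servers
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

With two entries in the upstream block, Nginx distributes requests round-robin by default; other balancing methods can be configured if needed.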
Provision this VM exclusively for running the following services (see the service table at the end of this page):
Note that the services now run on a separate VM, and take this into account when configuring your app servers.
Install Docker on your Services VM.
Now customize the docker-compose.yml under /opt/enginsight/enterprise according to the instructions. Make sure to always use the latest version tags.
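As an illustration only: the actual image names, registry, and tags come from your existing Enginsight docker-compose.yml; the values below are placeholders. On the services VM, the compose file would keep only the service entries and their current version tags, for example:

```yaml
# Hypothetical docker-compose.yml sketch for the services VM.
# <REGISTRY> and <LATEST TAG> are placeholders; keep the entries from
# your existing docker-compose.yml and remove only what is not needed here.
version: "3"
services:
  sentinel-m3:
    image: <REGISTRY>/sentinel-m3:<LATEST TAG>
    restart: always
  reporter-m4:
    image: <REGISTRY>/reporter-m4:<LATEST TAG>
    restart: always
  # ... one entry per service from the table at the end of this page
```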
Store a mail server configuration here, and make sure the mail server configuration is removed from the app servers.
Secure your database with iptables.
Adjust the iptables rules to block all connections to the database from outside. This ensures that only the application can access MongoDB and prevents unauthorized access.
Add new rules for the 2nd app server and for the server running the services. To do this, invoke iptables.
Replace <APP IP> with the application server IP reachable from the database.
Replace <DB IP> with the database server IP reachable from the application, and add corresponding rules for Redis.
Once all changes have been made, save your settings persistently.
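As a sketch of what such rules might look like, run on the database server. Assumptions: MongoDB on its default port 27017, Redis on its default port 6379, and `<APP1 IP>`, `<APP2 IP>`, `<SERVICES IP>` as placeholders for the servers that may reach the database; the persistence step assumes a Debian/Ubuntu system with iptables-persistent.

```shell
# Hypothetical iptables sketch (run on the database server).
# Allow MongoDB (27017) only from the app and services servers, drop the rest.
iptables -A INPUT -p tcp -s <APP1 IP> --dport 27017 -j ACCEPT
iptables -A INPUT -p tcp -s <APP2 IP> --dport 27017 -j ACCEPT
iptables -A INPUT -p tcp -s <SERVICES IP> --dport 27017 -j ACCEPT
iptables -A INPUT -p tcp --dport 27017 -j DROP

# Same pattern for Redis (6379).
iptables -A INPUT -p tcp -s <APP1 IP> --dport 6379 -j ACCEPT
iptables -A INPUT -p tcp -s <APP2 IP> --dport 6379 -j ACCEPT
iptables -A INPUT -p tcp -s <SERVICES IP> --dport 6379 -j ACCEPT
iptables -A INPUT -p tcp --dport 6379 -j DROP

# Persist the rules across reboots (with iptables-persistent installed)
iptables-save > /etc/iptables/rules.v4
```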
Now add Redis by installing the Redis server.
Adjust the configuration.
Save and restart the application afterwards.
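The configuration adjustment typically needed here, as a sketch: Redis must accept connections from the app and services VMs rather than only from localhost. The `bind` and `port` directives are standard redis.conf options; the IP is a placeholder, and the restart command assumes a systemd-based Debian/Ubuntu install.

```shell
# Hypothetical adjustment in /etc/redis/redis.conf (<DB IP> is a placeholder):
# let Redis listen on the interface that the app and services VMs can reach.
#   bind 127.0.0.1 <DB IP>
#   port 6379
# Then apply the change:
sudo systemctl restart redis-server
```

Since Redis is now reachable over the network, make sure the database is firewalled via iptables as described in this guide.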
Prepare two virtual machines for the app servers. These VMs ensure that the user interface is reachable via all servers and that the services can run in parallel.
If you have previously used Enginsight without a load balancer, you can use your existing app server as one of the two required app server VMs!
This greatly simplifies the load balancer setup and lets you proceed quickly with the implementation.
Install Docker on the two app servers.
To save time and effort, we recommend that you either use our ISO file or clone the first app server to create the second VM for the app server.
Modify the docker-compose.yml as required.
Here, too, make sure the latest version tags are used. To do this, either adjust the .yml and delete all entries that are not required, or adjust the versions yourself.
If you have cloned the app server, disable Nginx on both app servers with the commands systemctl stop nginx and systemctl disable nginx.
Copy the contents of the DEFAULT_JWT_SECRET.conf file under /opt/enginsight/enterprise/conf and paste it into the same file on the 2nd app server, so that the file is identical on both servers.
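Instead of pasting manually, the file can also be copied over with scp. A sketch, assuming SSH access between the app servers; `<APP2 IP>` is a placeholder for the second app server:

```shell
# Copy the JWT secret from app server 1 to the same path on app server 2
scp /opt/enginsight/enterprise/conf/DEFAULT_JWT_SECRET.conf \
    root@<APP2 IP>:/opt/enginsight/enterprise/conf/DEFAULT_JWT_SECRET.conf
```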
Now check the connection to Redis. To do this, log into the container and establish a connection:

Check Redis Connection

```shell
docker ps
docker exec -it <Id of server-m2> /bin/sh
redis-cli -h <IPDB>
```
Now check the Docker logs of server-m2 to see whether the connection to Redis on the database server is established.
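The log check, sketched as commands; the container ID is taken from the docker ps output as in the step above:

```shell
# Find the server-m2 container and follow its logs
docker ps
docker logs -f <Id of server-m2>
```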
Change the DNS entries.
Note that the URLs of the APP and the API must now point to the load balancer and no longer to the app server.
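As an illustration with hypothetical hostnames (app.example.com and api.example.com are placeholders, as is the load balancer IP), the A records would change like this:

```
; Hypothetical DNS records pointing the APP and API URLs at the load balancer
app.example.com.  IN  A  <LOAD BALANCER IP>
api.example.com.  IN  A  <LOAD BALANCER IP>
```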
Once you have prepared all VMs, you can run setup.sh.
Note that the Redis URL must be changed to redis://:6379
Now check whether your application continues to work without problems if a single server fails.
To do this, run docker-compose down on APP Server 1 and verify that APP Server 2 is still receiving data and all hosts are still active.
Then restart all Docker containers with docker-compose up -d
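The failover test above, sketched as commands. The reachability check via curl is an assumption for illustration; `<LOAD BALANCER IP>` is a placeholder:

```shell
# On APP Server 1: stop the stack
docker-compose down

# From any client: verify the platform is still reachable through the
# load balancer (served by APP Server 2); expect an HTTP status code.
curl -k -o /dev/null -w '%{http_code}\n' https://<LOAD BALANCER IP>/

# Back on APP Server 1: bring everything up again
docker-compose up -d
```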
Perform subsections 1 and 2 again for App Server 2.
Note that the update script must now always be run on all three servers to ensure that all servers are up to date and no incompatibilities occur.
| Service | Function |
|---|---|
| sentinel-m3 | Controls alerts and manages assigned notifications. |
| reporter-m4 | Provides the Enginsight platform with up-to-date vulnerability data (CVEs) and distributes it to the modules. |
| profiler-m22 | Calculates the normal curve of the machine learning metrics. |
| anomalies-m28 | Compares the normal curve of the machine learning metrics with measured data to detect anomalies. |
| scheduler-m29 | Triggers scheduled, automated actions, for example plugins or audits. |
| updater-m34 | Manages and updates configuration checklists. |
| generator-m35 | Generates PDF reports, e.g. for hosts, endpoints and penetration tests. |
| historian-m38 | Summarizes measured data to display it over time. |
| themis-m43 | Acts as an integrity manager and checks data for correctness and currency. |