Install AquilaX

How to prepare and install AquilaX on-prem, in the cloud, or on your own VMs

The AquilaX solution consists of 4 key components, each responsible for specific tasks. These components work together seamlessly to deliver comprehensive security scanning and intelligence.

  1. AquilaX Server: Manages the API and User Interface (UI) of the service, acting as the central control point for all operations.

  2. AquilaX Worker: Responsible for executing the actual security scans, performing the analysis and reporting vulnerabilities.

  3. AquilaX AI: Specialized AI-powered models designed to assist with decision-making and emulate human logic in engineering tasks. These models enhance the solution’s ability to reason, automate processes, and build intelligent responses.

  4. On-Prem Models: Lightweight open-source models, not developed by AquilaX itself, that are small enough to run on CPUs or GPUs with limited resources.

Below is a diagram illustrating the relationship between these components. Following the diagram, you’ll find instructions on how to set up the solution in a dedicated environment.

(Diagram: relationship between the AquilaX components)

Prerequisites

  1. To prepare for the installation of the various AquilaX components, structure the deployment using three (3) dedicated Virtual Machines

  2. Install Docker and Docker Compose on all of them: https://docs.docker.com/engine/install/

  3. Run docker login registry.gitlab.com and follow the command's instructions

The dedicated VMs must have at least the following capacity to work correctly:

| VM | CPU/GPU (min) | RAM (min) | Storage | Network |
| --- | --- | --- | --- | --- |
| AquilaX Server | 8 vCPU | 16 GB | 80 GB SSD | SSH / HTTPS |
| AquilaX Worker | 12 vCPU | 32 GB | 50 GB SSD | SSH |
| AquilaX AI | 32 vCPU or 4 GPU | 32 GB | 120 GB SSD | SSH |

In addition, all the VMs must be within the same network and must be able to communicate with each other on the required ports.

To install Docker on an Ubuntu/Debian machine, you can follow the instructions below.

Step 1 - Add Docker Repo
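
On Ubuntu, for example, the Docker apt repository can be added with the commands from the official Docker documentation:

```bash
# Add Docker's official GPG key
sudo apt-get update
sudo apt-get install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Register the Docker apt repository for the current Ubuntu release
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
```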

Step 2 - Installation
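
Then install the Docker engine together with the Compose plugin:

```bash
# Install Docker Engine, CLI, containerd and the Compose plugin
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
```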

Step 3 - Docker User Access

After executing the command below, you have to sign out and sign in again (for the new group membership to be loaded into your session).
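
The command adds your user to the docker group so Docker can be used without sudo:

```bash
# Allow the current user to run docker without sudo (takes effect after re-login)
sudo usermod -aG docker $USER
```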

Step 4 - Test

Before testing, log out and log back in, then run the following commands:
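
```bash
# Verify that Docker and the Compose plugin work for your user
docker run hello-world
docker compose version
```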

Setup AquilaX Server

Sign in to one of the machines dedicated to the Server, create a folder, and inside it create a new file named docker-compose.yml with the following content:
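
The actual service definitions (image paths, tags, ports, and volumes) are supplied by AquilaX together with your license, so the snippet below is only a placeholder sketch of the expected shape of the file; every value in angle brackets is an assumption to be replaced:

```bash
# Placeholder sketch only -- replace image paths, tags and ports with the
# values provided by AquilaX for your installation.
mkdir -p ~/aquilax-server && cd ~/aquilax-server
cat > docker-compose.yml <<'EOF'
services:
  aquilax-server:
    image: registry.gitlab.com/<aquilax-project>/server:<tag>   # placeholder image
    restart: unless-stopped
    env_file: .env
    ports:
      - "8000:8000"                            # placeholder internal API/UI port
  haproxy:
    image: haproxy:2.9
    restart: unless-stopped
    ports:
      - "443:443"                              # HTTPS entry point
    volumes:
      - ./haproxy:/usr/local/etc/haproxy:ro    # haproxy.cfg and certificate (see below)
EOF
```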

In the same folder, create a new file named .env, where the configuration needed for the application to start properly will be stored, and paste the content below:
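
The concrete variable names and values are provided by AquilaX; the following is only a hypothetical illustration of the file's format:

```bash
# Hypothetical example values -- use the variable names and values supplied by AquilaX.
cat > .env <<'EOF'
AQUILAX_HOSTNAME=aquilax.example.local
AQUILAX_AI_URL=http://<ai-vm-ip>:8081
EOF
```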

Setup HTTPS interface

Now, create a self-signed certificate for the HAProxy instance that exposes the service over HTTPS. You can of course use your own certificate signed by your CA instead of a self-signed one; for demo purposes, here is how to generate the needed files and folders for HAProxy.
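
As an illustration, the commands below create the haproxy folder, generate a self-signed certificate with openssl, and concatenate key and certificate into the single PEM file that HAProxy expects (the file and folder names are assumptions used throughout this example):

```bash
# Generate a self-signed certificate and bundle it for HAProxy
mkdir -p haproxy/certs
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
  -keyout haproxy/certs/aquilax.key -out haproxy/certs/aquilax.crt \
  -subj "/CN=aquilax.local"

# HAProxy expects the private key and certificate in a single PEM file
cat haproxy/certs/aquilax.crt haproxy/certs/aquilax.key > haproxy/certs/aquilax.pem
```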

Within the newly created haproxy folder, create a file named haproxy.cfg and paste the content below:
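
A minimal TLS-terminating configuration, assuming the AquilaX Server container is reachable as aquilax-server on port 8000 inside the Compose network (both assumptions carried over from the sketch above), could look like this:

```bash
cat > haproxy/haproxy.cfg <<'EOF'
# Minimal illustrative HAProxy configuration -- adjust the backend host/port
# to match the docker-compose definition provided by AquilaX.
global
    maxconn 2048

defaults
    mode http
    timeout connect 5s
    timeout client  60s
    timeout server  60s

frontend https_in
    bind *:443 ssl crt /usr/local/etc/haproxy/certs/aquilax.pem
    default_backend aquilax_server

backend aquilax_server
    server server1 aquilax-server:8000 check
EOF
```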

Start the service

Finally, when everything is in place, start the service.
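
With Docker Compose installed as above, the stack can be brought up in the background from the folder containing docker-compose.yml:

```bash
# Bring the server stack up in detached mode
docker compose up -d
```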


Now you can sign in to the service by navigating to https://<your own ip>

(Screenshot: AquilaX On-Prem landing page)

License Application

Log in to the new application (use the magic-link) and create a new organization for your usage, and also set a group name; don't worry too much about the names, you can always change them later. Once you are inside the application, create a personal access token, which you will need for the worker. In addition, go into the settings of your new organization and apply a license key; the license key will be provided to you by the AquilaX team.

Setup AquilaX Worker

Now that the server is up and running, we can enable the AquilaX Workers, which are the main engines that consume scan requests and execute the scans.

Sign in to the AquilaX Worker machine and create a folder where you want the configuration files to live. Inside the folder, create a new file named docker-compose.yml with the content below:
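
As on the server, the real compose definition comes from AquilaX; a placeholder sketch could look like this:

```bash
# Placeholder sketch only -- use the compose definition provided by AquilaX.
mkdir -p ~/aquilax-worker && cd ~/aquilax-worker
cat > docker-compose.yml <<'EOF'
services:
  aquilax-worker:
    image: registry.gitlab.com/<aquilax-project>/worker:<tag>   # placeholder image
    restart: unless-stopped
    env_file: .env
EOF
```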

Within the same folder, create a new file named .env and paste the environment variables below:
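
Again, the variable names below are hypothetical; the worker needs to know at least where the AquilaX Server is and the personal access token created earlier:

```bash
# Hypothetical example values -- use the variable names supplied by AquilaX.
cat > .env <<'EOF'
AQUILAX_SERVER_URL=https://<server-vm-ip>
AQUILAX_ACCESS_TOKEN=<personal-access-token-created-in-the-ui>
EOF
```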

(Repeat the exact same process on the other AquilaX Worker VMs if needed.)

Start the service

Finally, when everything is in place, start the worker.
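
As with the server, bring the worker up from its folder:

```bash
# Bring the worker up in detached mode
docker compose up -d
```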

Setup AI Models

Everything should now work, but you still have to enable the AI Models used by the AquilaX Server. To do that, follow these steps:

  1. SSH into the server dedicated to running the AI Models and install llama.cpp: https://github.com/ggml-org/llama.cpp/blob/master/docs/build.md

  2. Start 8 servers with /build/bin/llama-server -hf Qwen/Qwen3-4B-GGUF --host 0.0.0.0 (see the sketch after this list for running them on separate ports)

  3. Make sure the server is referenced in the .env variable file with the correct hostname and port
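
Each llama-server instance needs its own port, so one way to launch the 8 servers (the port range 8081-8088 is an assumption; align it with the hostname and port configured in the .env file) is:

```bash
# Launch 8 llama-server instances on ports 8081-8088 (port range is an assumption).
# Adjust the binary path to wherever you built llama.cpp.
for port in $(seq 8081 8088); do
  nohup ./build/bin/llama-server -hf Qwen/Qwen3-4B-GGUF --host 0.0.0.0 --port "$port" \
    > "llama-server-$port.log" 2>&1 &
done
```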

Auto-Train (optional)

Optionally, you can run the training of the review model locally so that it learns from the usage of the system.

Speak with AquilaX about setting up this environment.

Contact a member of AquilaX if you need help with the installation or configuration.
