Install AquilaX
How to prepare and install AquilaX on-premises, in the cloud, or on your own VMs
The AquilaX solution consists of 4 key components, each responsible for specific tasks. These components work together seamlessly to deliver comprehensive security scanning and intelligence.
AquilaX Server: Manages the API and User Interface (UI) of the service, acting as the central control point for all operations.
AquilaX Worker: Responsible for executing the actual security scans, performing the analysis and reporting vulnerabilities.
AquilaX AI: Specialized AI-powered models designed to assist with decision-making and emulate human logic in engineering tasks. These models enhance the solution's ability to reason, automate processes, and build intelligent responses.
On-Prem Models: These are not developed by AquilaX itself; they are lightweight open-source models small enough to run on CPUs or GPUs with limited resources.
Below is a diagram illustrating the relationship between these components. Following the diagram, you’ll find instructions on how to set up the solution in a dedicated environment.
Prerequisites
To prepare for the installation of the various AquilaX components, structure the deployment using three (3) dedicated Virtual Machines.
On all of them, install Docker and Docker Compose: https://docs.docker.com/engine/install/
Then execute the command below and follow its instructions:
docker login registry.gitlab.com
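If you prefer a non-interactive login (for example when provisioning the VMs with a script), a minimal sketch using a registry token is shown below; the <registry-username> and <registry-token> placeholders are assumptions and must be replaced with the credentials provided to you.
# Non-interactive login to the AquilaX registry (sketch; replace the placeholders)
echo "<registry-token>" | docker login registry.gitlab.com -u "<registry-username>" --password-stdin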
Each dedicated VM must have at least the following capacity to work correctly:
AquilaX Server: 8 vCPU, 16 GB RAM, 80 GB SSD, SSH / HTTPS access
AquilaX Worker: 12 vCPU, 32 GB RAM, 50 GB SSD, SSH access
AquilaX AI: 32 vCPU or 4 GPU, 32 GB RAM, 120 GB SSD, SSH access
In addition, all the VMs must be on the same network and must be able to reach each other on the required ports.
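As a quick sanity check once the VMs are provisioned, you can verify that they reach each other on the ports used later in this guide (443 on the Server VM for HTTPS, 8080 on the AI VM for the llama.cpp servers); the IP addresses below are placeholders.
# From the Worker VM: check that the Server is reachable over HTTPS
nc -zv <server-vm-ip> 443
# From the Server VM: check that the AI VM answers on the llama.cpp port
nc -zv <ai-vm-ip> 8080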
To install Docker on an Ubuntu/Debian machine, you can follow the instructions below.
Step 1 - Add Docker Repo
# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
# Add the repository to Apt sources:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
Step 2 - Installation
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Step 3 - Docker User Access
After executing the command below, you have to sign out and sign in again (for the group change to be applied to your account).
sudo usermod -aG docker $USER
Step 4 - Test
Before you test, log out and log in again, then run this command:
docker run hello-world
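Since the Compose plugin is also required, it is worth confirming it is available as well:
# Confirm the Docker Compose plugin is installed
docker compose version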
Setup AquilaX Server
Sign in to the machine dedicated to the Server, create a folder, and inside it create a new file named docker-compose.yml, then paste the following:
services:
  haproxy:
    image: haproxy:lts-alpine3.21
    restart: always
    depends_on:
      - aquilax-server
    ports:
      - 443:443
    volumes:
      - ./haproxy/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg
      - ./haproxy/certs:/certs/:ro
  aquilax-server-go:
    image: registry.gitlab.com/aquila-x/aquilax-server-go
    restart: always
    env_file:
      - .env
    deploy:
      mode: replicated
      replicas: 2
      endpoint_mode: dnsrr
  aquilax-ui:
    image: registry.gitlab.com/aquila-x/frontend
    restart: always
    environment:
      - REACT_APP_REDIRECT_URI=https://aquilax.ai/app/callback
      - REACT_APP_API_BASE_URL=https://aquilax.ai/api/v1
      - REACT_APP_API_URL=https://aquilax.ai/api/v1
      - REACT_APP_API_BASE=https://aquilax.ai/app
    deploy:
      mode: replicated
      replicas: 2
      endpoint_mode: dnsrr
    entrypoint: ["npm","run","serve"]
  aquilax-server:
    image: registry.gitlab.com/aquila-x/aquilax-server
    restart: always
    deploy:
      mode: replicated
      replicas: 2
      endpoint_mode: dnsrr
    depends_on:
      - mongo
    env_file:
      - .env
    entrypoint: ["python3","app.py"]
    volumes:
      - ./media:/app/media
  aquilax-jobs-scans:
    image: registry.gitlab.com/aquila-x/aquilax-jobs:scans
    restart: always
    depends_on:
      - mongo
    env_file:
      - .env
    entrypoint: ["python3","app.py"]
  aquilax-ai:
    image: registry.gitlab.com/aquila-x/ai-api
    restart: always
    ports:
      - 10000:10000
    env_file:
      - .env
    entrypoint: ["python3","main.py"]
  mongo:
    image: mongo:8.0.0
    ports:
      - 27017:27017
    volumes:
      - ./data8:/data/db
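Optionally, once both this file and the .env file from the next step are in place, you can check that the compose file parses correctly before starting anything:
# Validate the compose file and print the resolved configuration
docker compose config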
In the same folder, create a new file named .env where we are going to store the configuration needed for the application to start properly, and paste the content below:
MONGODB_URI=mongodb://mongo:27017/
AI_ENDPOINTS="http://10.8.0.10:8080,http://172.17.0.1:8080"
MONGO_URL="mongodb://mongo:27017/"
KL_SERVER=https://auth.aquilax.io/
KL_CLIENT_ID=aquilax
KL_REALM=aquilax
KC_CLIENT_SECRET=
=https://<your-own-ip>/callback
=********
MAIL_USERNAME=
MAIL_PASSWORD=
MAIL_FROM=
MAIL_PORT=
MAIL_SERVER=
MAIL_FROM_NAME='AquilaX AI'
=*********
AI_API="http://aquilax-ai:10000"
SCHEDULER_ENDPOINT=http://aquilax-jobs:8001
GENAI_AX_KEY=
AQUILAX_SERVER=http://aquilax-server:8000/api/v1
HEARBEAT_CODE=
DEPLOY=ONPREM
=*******
GITHUB_APP_ID=none
GITHUB_CLIENTID=none
GITHUB_SECRET=none
GITHUB_PEM=none
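Several of the values above are intentionally left empty (mail settings, secrets, license/heartbeat codes) and must be filled with the values provided to you. A quick way to list any variables that are still empty before starting the stack:
# List .env entries that still have no value assigned
grep -n '=$' .env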
Setup HTTPS interface
Now create a self-signed certificate for HAProxy, which is used to expose the service over HTTPS. You can of course use your own certificate signed by your CA instead of a self-signed one; for demo purposes, here are the commands to generate the files and folders HAProxy needs.
mkdir -p haproxy
mkdir -p postgres_data
mkdir -p haproxy/certs
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
-keyout haproxy/certs/haproxy.key -out haproxy/certs/haproxy.crt \
-subj "/C=US/ST=State/L=City/O=Organization/OU=Unit/CN=yourdomain.com" && \
cat haproxy/certs/haproxy.key haproxy/certs/haproxy.crt > haproxy/certs/haproxy.pem && chmod 600 haproxy/certs/haproxy.pem
chmod a+r haproxy/certs/haproxy.pem
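You can confirm the generated certificate looks sane (subject and validity window) with a standard openssl check:
# Inspect the self-signed certificate that HAProxy will serve
openssl x509 -in haproxy/certs/haproxy.crt -noout -subject -dates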
Within the newly created haproxy folder, create a file named haproxy.cfg and paste the content below:
global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    pidfile /tmp/haproxy.pid
    maxconn 4096
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    retries 3
    option redispatch
    maxconn 2000
    timeout connect 12000
    timeout client 120000
    timeout server 120000

frontend balancer
    bind *:443 ssl crt /certs/haproxy.pem
    mode http
    # User back end server
    acl is_ui path_beg /app/
    use_backend aquilax-ui if is_ui
    acl is_v2 path_beg /api/v2
    use_backend aquilax-server-go if is_v2
    # ACLs to match requests starting with /api/v3/
    acl is_api_v3_1 path /api/v3/openapi
    acl is_api_v3_2 path /api/v3/health
    # Route requests matching /api/v3/ to backend_api_v3
    use_backend backend_api_v3 if is_api_v3_1 || is_api_v3_2
    default_backend aquilax-server

backend aquilax-server
    option forwardfor
    balance roundrobin
    server aquilax-1 aquilax-server:8000
    server aquilax-2 aquilax-server:8000

backend backend_api_v3
    server aquilax-ai-server aquilax-ai:10000

backend aquilax-server-go
    option forwardfor
    balance roundrobin
    server aquilax-go-1 aquilax-server-go:4000
    server aquilax-go-2 aquilax-server-go:4000

backend aquilax-ui
    option forwardfor
    balance roundrobin
    server aquilax-ui-1 aquilax-ui:3000
    server aquilax-ui-2 aquilax-ui:3000
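Before wiring it into the stack, you can optionally syntax-check the configuration with the same HAProxy image used in the compose file; note that the backend hostnames only resolve inside the compose network, so the check may report unresolvable server addresses even when the syntax is fine.
# Syntax-check the HAProxy config using the same image as the stack
docker run --rm \
  -v "$PWD/haproxy/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro" \
  -v "$PWD/haproxy/certs:/certs:ro" \
  haproxy:lts-alpine3.21 haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg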
Start the service
Finally, when everything is in place, start it all up by executing:
docker compose pull
docker compose up -d
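To confirm everything came up, you can list the containers and hit the health endpoint that the HAProxy configuration above routes to the AI API (this assumes the AI API answers on that path, as the routing rule suggests; -k is needed because of the self-signed certificate):
# Check container status
docker compose ps
# Hit the health endpoint exposed through HAProxy
curl -k https://<your-own-ip>/api/v3/health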
Now you can sign in to the service by navigating to https://<your own ip>

License Application
Log in to the new application (use the magic-link) and create a new organization for your usage, and also set up a group name; don't worry too much about the names, you can always change them later. Once you are inside the application, create a personal access token; you will need this for the worker. In addition, go into the settings of your new organization and apply a license key, which will be provided to you by the AquilaX team.

Setup AquilaX Worker
Now that the server is up and running, we enable the AquilaX workers, which are the main engines that consume scan requests and execute the scans.
Sign in to the AquilaX Worker machine and create a folder where you want the configuration files to be hosted. Inside the folder, create a new file named docker-compose.yml with the content below:
services:
  aquilax-worker:
    image: registry.gitlab.com/aquila-x/aquilax-worker
    deploy:
      mode: replicated
      replicas: 6
    restart: always
    env_file:
      - .env
    entrypoint: ["python3","app.py"]
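The replicas value controls how many parallel scan engines run on this VM, so adjust it to match the VM's capacity. As a sketch, the count can also be overridden at start time with the Compose --scale flag (the value 12 below is only an example):
# Run 12 worker replicas instead of the 6 declared in the file
docker compose up -d --scale aquilax-worker=12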
Within the same folder, create a new file named .env and paste the environment variables below:
=https://<your-aquilax-server-ip>
=
(repeat the exact same process on the other AquilaX Worker VMs if needed)
Start the service
Finally, when everything is in place, start it all up by executing:
docker compose pull
docker compose up -d
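You can follow the worker logs to confirm the workers reach the server and start picking up scan jobs (the exact log lines depend on the worker image):
# Follow the worker logs
docker compose logs -f aquilax-worker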
Setup AI Models
Everything should now work; however, you have to enable the AI Models to be used by the AquilaX Server. To do that, follow these steps:
1. SSH into the server dedicated to running the AI Models and install llama.cpp: https://github.com/ggml-org/llama.cpp/blob/master/docs/build.md
2. Start 8 servers with:
/build/bin/llama-server -hf Qwen/Qwen3-4B-GGUF --host 0.0.0.0
3. Make sure the AI servers are referenced in the AquilaX Server's .env file (the AI_ENDPOINTS variable) with the correct hostnames and ports.
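A single llama-server instance listens on one port (8080 by default), so a minimal sketch for running the 8 instances is to give each one its own port and then list all of them in AI_ENDPOINTS; the port range 8080-8087 and the binary path below are assumptions, adjust them to your build and setup.
# Sketch: start 8 llama-server instances on ports 8080-8087
for port in $(seq 8080 8087); do
  /build/bin/llama-server -hf Qwen/Qwen3-4B-GGUF --host 0.0.0.0 --port "$port" &
done
# Then, on the Server VM, set for example:
# AI_ENDPOINTS="http://<ai-vm-ip>:8080,http://<ai-vm-ip>:8081,...,http://<ai-vm-ip>:8087"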
Auto-Train (optional)
Optionally, you can run the training of the review model locally so that it learns from the usage of the system.
Speak with AquilaX about setting up the environment.