# Install AquilaX

The **AquilaX** solution consists of 4 key components, each responsible for specific tasks. These components work together seamlessly to deliver comprehensive security scanning and intelligence.

1. **AquilaX Server**: Manages the API and User Interface (UI) of the service, acting as the central control point for all operations.
2. **AquilaX Worker**: Responsible for executing the actual security scans, performing the analysis and reporting vulnerabilities.
3. **AquilaX AI**: Specialized AI-powered models designed to assist with decision-making and emulate human logic in engineering tasks. These models enhance the solution’s ability to reason, automate processes, and build intelligent responses.
4. **On-Prem Models**: Open-source models, not developed by AquilaX itself, that are lightweight enough to run on CPUs or GPUs with limited resources.

Below is a diagram illustrating the relationship between these components. Following the diagram, you’ll find instructions on how to set up the solution in a dedicated environment.

<img src="https://53914109-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FjAmSnvnfbHl4EDK56iDo%2Fuploads%2Fgit-blob-3c1f0850307751666819833c26578b2c8142400b%2Ffile.excalidraw.svg?alt=media" alt="" class="gitbook-drawing">

## Prerequisites

1. To prepare for the installation of the various AquilaX components, structure the deployment using three **(3)** dedicated Virtual Machines
2. Install `Docker` and `Docker Compose` on all of them: <https://docs.docker.com/engine/install/>
3. Execute and follow the instructions of the command `docker login registry.gitlab.com`

The dedicated VMs must have at least the following capacities to work correctly:

<table><thead><tr><th width="140.359375">VM</th><th>CPU/GPU (min)</th><th width="123.40234375">RAM (min)</th><th>Storage</th><th>Network</th></tr></thead><tbody><tr><td>AquilaX Server</td><td>8 vCPU</td><td>16 GB</td><td>80 GB SSD</td><td>SSH / HTTPs</td></tr><tr><td>AquilaX Worker</td><td>12 vCPU</td><td>32 GB</td><td>50 GB SSD</td><td>SSH</td></tr><tr><td>AquilaX AI</td><td>32 vCPU or 4 GPU</td><td>32 GB</td><td>120 GB SSD</td><td>SSH</td></tr></tbody></table>

In addition, all the VMs must be on the same network and must be able to communicate with each other on several ports.
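As a quick sanity check, you can verify that the VMs can reach each other on the required ports. The host addresses and ports below are placeholders; replace them with your actual VM addresses and service ports:

```shell
# Hypothetical example: check that the required ports are reachable from this VM.
# Replace the host:port pairs with your own VM addresses and service ports.
for pair in 10.0.0.1:443 10.0.0.2:22 10.0.0.3:8080; do
  host="${pair%%:*}"
  port="${pair##*:}"
  if nc -z -w 3 "$host" "$port" 2>/dev/null; then
    echo "OK   $host:$port"
  else
    echo "FAIL $host:$port"
  fi
done
```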

To install Docker on an Ubuntu/Debian machine, you can follow the instructions below.

#### Step 1 - Add Docker Repo

```bash
# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
```

#### Step 2 - Installation

```bash
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
```

#### Step 3 - Docker User Access

After executing the command below, you have to sign out and sign in again (for the configuration to be loaded into your account):

```bash
sudo usermod -aG docker $USER
```

#### Step 4 - Test

Before you test, log out and log in again, then run this command:

```bash
docker run hello-world
```

## Setup AquilaX Server

Sign in to the machine dedicated to the Server, create a folder, and inside it create a new file named `docker-compose.yml` with the following content:

{% code title="docker-compose.yml" lineNumbers="true" %}

```yaml
services:
  haproxy:
    image: haproxy:lts-alpine3.21
    restart: always
    depends_on:
      - aquilax-server
    ports:
      - 443:443
    volumes:
      - ./haproxy/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg
      - ./haproxy/certs:/certs/:ro

  aquilax-server-go:
    image: registry.gitlab.com/aquila-x/aquilax-server-go
    restart: always
    env_file:
      - .env
    deploy:
      mode: replicated
      replicas: 2
      endpoint_mode: dnsrr

  aquilax-ui:
    image: registry.gitlab.com/aquila-x/frontend
    restart: always
    environment:
      - REACT_APP_REDIRECT_URI=https://aquilax.ai/app/callback
      - REACT_APP_API_BASE_URL=https://aquilax.ai/api/v1
      - REACT_APP_API_URL=https://aquilax.ai/api/v1
      - REACT_APP_API_BASE=https://aquilax.ai/app
    deploy:
      mode: replicated
      replicas: 2
      endpoint_mode: dnsrr
    entrypoint: ["npm","run","serve"]
    
  aquilax-server:
    image: registry.gitlab.com/aquila-x/aquilax-server
    restart: always
    deploy:
      mode: replicated
      replicas: 2
      endpoint_mode: dnsrr
    depends_on:
      - mongo
    env_file:
      - .env
    entrypoint: ["python3","app.py"]
    volumes:
     - ./media:/app/media
     
  aquilax-jobs-scans:
    image: registry.gitlab.com/aquila-x/aquilax-jobs:scans
    restart: always
    depends_on:
      - mongo
    env_file:
      - .env
    entrypoint: ["python3","app.py"]

  aquilax-ai:
    image: registry.gitlab.com/aquila-x/ai-api
    restart: always
    ports:
      - 10000:10000
    env_file:
      - .env
    entrypoint: ["python3","main.py"]

  mongo:
    image: mongo:8.0.0
    ports:
      - 27017:27017
    volumes:
      - ./data8:/data/db
      
```

{% endcode %}

In the same folder, create a new file named `.env`, where we will store the configuration needed for the application to start properly. Paste the content below:

<pre data-title=".env" data-line-numbers><code>MONGODB_URI=mongodb://mongo:27017/
AI_ENDPOINTS="http://10.8.0.10:8080,http://172.17.0.1:8080"
MONGO_URL="mongodb://mongo:27017/"
KL_SERVER=https://auth.aquilax.io/
KL_CLIENT_ID=aquilax
KL_REALM=aquilax
KC_CLIENT_SECRET=
<a data-footnote-ref href="#user-content-fn-1">CALLBACK_URI</a>=https://&#x3C;your-own-ip>/callback

<a data-footnote-ref href="#user-content-fn-2">X_AQUILAX_WORKER</a>=********

MAIL_USERNAME=
MAIL_PASSWORD=
MAIL_FROM=
MAIL_PORT=
MAIL_SERVER=
MAIL_FROM_NAME='AquilaX AI'

<a data-footnote-ref href="#user-content-fn-3">JWT_SIGING_TOKEN</a>=*********

AI_API="http://aquilax-ai:10000"

SCHEDULER_ENDPOINT=http://aquilax-jobs:8001

GENAI_AX_KEY=

AQUILAX_SERVER=http://aquilax-server:8000/api/v1
HEARBEAT_CODE=

DEPLOY=ONPREM
<a data-footnote-ref href="#user-content-fn-4">RUNNING_KEY</a>=*******

GITHUB_APP_ID=none
GITHUB_CLIENTID=none
GITHUB_SECRET=none
GITHUB_PEM=none
</code></pre>
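As the footnotes note, `X_AQUILAX_WORKER` and `JWT_SIGING_TOKEN` must be random secrets. One way to generate suitable values, assuming `openssl` is available on your machine:

```shell
# Random token for X_AQUILAX_WORKER (shared secret with the workers)
openssl rand -hex 32
# 64-character secret for JWT_SIGING_TOKEN (do not change it after creation)
openssl rand -hex 32
```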

### Setup HTTPS interface

Now, create a self-signed certificate for the HAProxy that is used to expose the service over HTTPS. You can of course use a certificate signed by your own CA instead of a self-signed one; for demo purposes, here are the commands to generate the needed files and folders for HAProxy.

```bash
mkdir -p haproxy
mkdir -p postgres_data
mkdir -p haproxy/certs
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout haproxy/certs/haproxy.key -out haproxy/certs/haproxy.crt \
    -subj "/C=US/ST=State/L=City/O=Organization/OU=Unit/CN=yourdomain.com" && \
cat haproxy/certs/haproxy.key haproxy/certs/haproxy.crt > haproxy/certs/haproxy.pem && chmod 600 haproxy/certs/haproxy.pem
chmod a+r haproxy/certs/haproxy.pem
```
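You can sanity-check the generated certificate before wiring it into HAProxy:

```shell
# Print the subject and validity window of the self-signed certificate
openssl x509 -in haproxy/certs/haproxy.crt -noout -subject -dates
# The combined PEM should contain both the private key and the certificate
grep -c "BEGIN" haproxy/certs/haproxy.pem
```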

Within the newly created `haproxy` folder, create a file named `haproxy.cfg` and paste the content below:

{% code title="" lineNumbers="true" %}

```
global
  log 127.0.0.1	local0
  log 127.0.0.1	local1 notice
  pidfile /tmp/haproxy.pid
  maxconn 4096
  daemon

defaults
  log	global
  mode	http
  option	httplog
  option	dontlognull
  retries	3
  option redispatch
  maxconn	2000
  timeout connect	12000
  timeout client	120000
  timeout server	120000

frontend balancer

  bind *:443 ssl crt /certs/haproxy.pem
  mode http

  # User back end server
  acl is_ui path_beg /app/
  use_backend aquilax-ui if is_ui

  acl is_v2 path_beg /api/v2
  use_backend aquilax-server-go if is_v2

  # ACL to match requests starting with /api/v3/
  acl is_api_v3_1 path /api/v3/openapi
  acl is_api_v3_2 path /api/v3/health

  # Route requests matching /api/v3/ to backend_api_v3
  use_backend backend_api_v3 if is_api_v3_1 || is_api_v3_2
  
  default_backend aquilax-server

backend aquilax-server
  option forwardfor
  balance roundrobin
  server aquilax-1 aquilax-server:8000 
  server aquilax-2 aquilax-server:8000

backend backend_api_v3
  server aquilax-ai-server aquilax-ai:10000

backend aquilax-server-go
  option forwardfor
  balance roundrobin
  server aquilax-go-1 aquilax-server-go:4000
  server aquilax-go-2 aquilax-server-go:4000

backend aquilax-ui
  option forwardfor
  balance roundrobin
  server aquilax-ui-1 aquilax-ui:3000
  server aquilax-ui-2 aquilax-ui:3000

```

{% endcode %}

### Start the service

Finally, when you have everything done, just start everything up by executing:

```bash
docker compose pull
docker compose up -d
```
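Once the containers are up, you can confirm the stack responds. The `/api/v3/health` route is defined in the HAProxy configuration above, and `-k` tells `curl` to accept the self-signed certificate:

```shell
# List the containers and their state
docker compose ps
# Hit the health route exposed through HAProxy on port 443
curl -ks https://localhost/api/v3/health
```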

### Go

Now you can sign in to the service by navigating to `https://<your own ip>`.

<figure><img src="https://53914109-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FjAmSnvnfbHl4EDK56iDo%2Fuploads%2Fgit-blob-888d3150bc10f57e51960b096bed80b4a68c6148%2Fimage.png?alt=media" alt=""><figcaption><p>Landing Page AquilaX On-Prem</p></figcaption></figure>

### License Application

Log in to the new application (use the magic-link) and create a new organization for your usage, and also set up a group name; don't worry too much about the names, you can always change them later.\
\
Once you are inside the application, create a personal access token; you will need this for the worker. In addition, go into the settings of your new organization and apply a license key, which will be provided to you by the AquilaX team.

<figure><img src="https://53914109-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FjAmSnvnfbHl4EDK56iDo%2Fuploads%2Fgit-blob-586fa75b3857bee63cdcdfc1d7ed3e9e2cbeb6cb%2FScreenshot%202025-01-19%20at%2022.50.33.png?alt=media" alt="" width="375"><figcaption></figcaption></figure>

## Setup AquilaX Worker

Now that the server is up and running, we can enable the AquilaX workers, which are the main engines that consume the requests and execute the scans.

Sign in to the AquilaX worker machine and create a folder where you want the configuration files to be hosted. Inside the folder, create a new file named `docker-compose.yml` with the content below:

{% code title="docker-compose.yml" lineNumbers="true" %}

```yaml
services:
  aquilax-worker:
    image: registry.gitlab.com/aquila-x/aquilax-worker
    deploy:
      mode: replicated
      replicas: 6
    restart: always
    env_file:
      - .env
    entrypoint: ["python3","app.py"]
```

{% endcode %}

Within the same folder, create a new file named `.env` and paste the environment variables below:

<pre data-title=".env" data-line-numbers><code><a data-footnote-ref href="#user-content-fn-5">AQUILAX_SERVER</a>=https://&#x3C;your-aquilax-server-ip>
<a data-footnote-ref href="#user-content-fn-6">X_AQUILAX_WORKER</a>=
</code></pre>
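Before starting the worker, you can check from the worker VM that the server is reachable. The `/api/v3/health` route is defined in the server's HAProxy configuration; replace the placeholder with your actual server address, and note that `-k` accepts the self-signed certificate:

```shell
# Replace with the AQUILAX_SERVER value from your .env
SERVER="https://<your-aquilax-server-ip>"
# Query the health route exposed through the server's HAProxy
curl -ks "$SERVER/api/v3/health"
```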

(Repeat the same process on the other AquilaX worker VMs if needed.)

### Start the service

Finally, when you have everything done, just start everything up by executing:

```bash
docker compose pull
docker compose up -d
```

## Setup AI Models

Everything should work well; however, you still have to enable the AI Models used by the AquilaX Server. To do that, follow these steps:

1. SSH into the server dedicated to running the AI Models and install llama.cpp: <https://github.com/ggml-org/llama.cpp/blob/master/docs/build.md>
2. Start 8 servers with `./build/bin/llama-server -hf Qwen/Qwen3-4B-GGUF --host 0.0.0.0`
3. Make sure each server is referenced in the `.env` file (the `AI_ENDPOINTS` variable) with the correct hostname and port
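Since each `llama-server` instance needs its own port, the eight servers can be launched with a loop like the sketch below. Ports 8080-8087 are an assumption; align them with the `AI_ENDPOINTS` values in your `.env`:

```shell
# Sketch: run 8 llama-server instances on consecutive ports (assumed 8080-8087)
for i in $(seq 0 7); do
  port=$((8080 + i))
  nohup ./build/bin/llama-server -hf Qwen/Qwen3-4B-GGUF \
    --host 0.0.0.0 --port "$port" > "llama-$port.log" 2>&1 &
done
```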

## Auto-Train (optional)

Optionally, you can run the training of the review model locally so that it learns from the usage of the system.

Speak with AquilaX about setting up this environment.

{% hint style="info" %}
Contact a member of AquilaX if you need help for the installation or configuration
{% endhint %}

[^1]: Keycloak callback URL

[^2]: Generate a random secure token. Used to secure connections with the workers

[^3]: Generate a 64-character secret token (used to sign the JWT for users). Do not change this after creation; otherwise, users will lose access.

[^4]: Speak with AquilaX for this. It is used to unlock the AquilaX binary.

[^5]: This is the endpoint of the AquilaX server.

[^6]: This is the same token you created for the server.
