
Standard Private Cloud Single Node Deployment Guide

Step-by-step guide for provisioning and deploying the iMBrace Platform in Single Node mode on AWS Private Cloud

By: iMBrace Limited
Version: 0.1 (2025-10-14)
Author: Kong Lee
Status: Draft

Converted from the original “Standard Private Cloud Single Node Deployment Guide.docx”.
This MDX preserves the original structure and instructions, formatted for technical readers and deployment engineers.


🗂️ Document History

| Version | Date | Description | Author |
| --- | --- | --- | --- |
| 0.1 | 2025-10-14 | Draft | Kong Lee |

🧩 Prerequisites

Local Machine Environment

Get the Credentials

  • Encrypted zip files will be sent to authorized users via email. They contain:
    • GitLab full access user credentials
    • iMBrace IAM user programmatic keys
    • SSH private key imbrace-ig
  • The decryption password will be sent in a separate email.
  • Download and save all credentials to a secure and appropriate location on the local machine.
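  • A minimal extraction sketch, assuming a password-protected zip and illustrative file names (the exact unzip invocation depends on how the archive was encrypted):
    # Extract the archive; supply the decryption password from the second email when prompted
    unzip imbrace-credentials.zip -d ~/imbrace-credentials

    # Move the SSH private key into place and restrict its permissions (ssh rejects keys readable by others)
    mkdir -p ~/.ssh
    mv ~/imbrace-credentials/imbrace-ig ~/.ssh/imbrace-ig
    chmod 600 ~/.ssh/imbrace-ig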

Install and Configure Tools (latest versions)

Install and configure the following tools on your local machine (all are used in later steps):

  • AWS CLI
  • Git
  • Ansible
  • SOPS

Configure an AWS profile:

aws configure --profile imbrace
AWS Access Key ID [None]: AKIAI44QH8DHBEXAMPLE
AWS Secret Access Key [None]: je7MtGbClwBF/2Zp9Utk/EXAMPLEKEY
Default region name [None]: ap-east-1
Default output format [None]: text

Reference: https://medium.com/ivymobility-developers/configure-named-aws-profile-usage-in-applications-aws-cli-60ea7f6f7b40
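
To confirm the profile works before continuing, run a quick identity check (a sanity check only; the values returned will be your own IAM user's, not the example keys above):

aws sts get-caller-identity --profile imbrace
# Expect output showing the iMBrace IAM user's UserId, Account, and Arn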

[At Local]: Clone the Deployment Repository

  • Run the following command from the directory where you keep your local repositories:
    git clone https://gitlab.com/imbraceco/partners/private-cloud-single.git
    
    # It will ask for credentials
    # Username: imbrace-partner
    # Password: (from credentials zip)
  • Check out the develop branch and pull the latest commits:
    git fetch --all
    git checkout develop
    git branch
    git pull
  • These are the essential directories under the repo directory:
    ├── ai-service               # ai-service application
    ├── aiv2                     # aiv2 application
    ├── ansible                  # ansible deployment directory
    ├── app-gateway              # app-gateway application
    ├── backend                  # backend application
    ├── chat-widget              # chat-widget application
    ├── dashboard                # dashboard application
    ├── ips                      # ips application
    ├── kafka                    # system component: kafka
    ├── marketplace              # marketplace application
    ├── nginx                    # nginx proxy container
    ├── wfconnectorservice       # wfconnectorservice application
    └── workflow-engine          # workflow-engine application
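  • To confirm you are on the expected branch before continuing, you can run:
    git branch --show-current   # should print: develop
    git log -1 --oneline        # latest commit pulled from origin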

[At Local]: Install and Configure the Server System

  • Navigate to ansible directory:
    cd ansible
  • Modify the Ansible hosts file so that it targets your onserver host (replace ip_address_onserver with the server's IP address):
    [onserver]
    ip_address_onserver ansible_python_interpreter=/usr/bin/python3 ansible_user=ec2-user ansible_ssh_private_key_file=~/.ssh/imbrace-ig
  • Run initialization playbook:
    ansible-playbook -v sysinit-alux.yml -e hosts_group=onserver -i hosts
  • Set credentials for AWS and GitLab:
    ansible-playbook -v set-credentials.yml -e hosts_group=onserver -i hosts --ask-vault-pass
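  • Optionally verify that Ansible can reach the host over SSH; a quick ping-module check run from the ansible directory looks like this:
    ansible onserver -i hosts -m ping
    # Expect a SUCCESS response containing "ping": "pong"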

[At Local]: Clone Repo on Target Server

  • Run the following command to clone the repo on the target server:
    ansible-playbook -v clone-repo.yml -e hosts_group=onserver -i hosts

[At Local]: Post Initialization and Infra Deployment

  • Run the following to deploy infrastructure components:
    ansible-playbook site-deploy-remote.yml -e "hosts_group=onserver" -i hosts
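  • To check that the infrastructure containers came up, a Docker listing can be run over Ansible ad hoc; exactly which containers appear depends on what site-deploy-remote.yml provisions (e.g. the kafka component from the repository layout above):
    ansible onserver -i hosts -a "docker ps"
    # Add --become if the remote user needs elevated rights to talk to Docker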

[At Local]: Update Application Config Files

  1. Export AWS profile for SOPS authentication:
    export AWS_PROFILE=imbrace
  2. Navigate to the repository root (the per-application paths below, e.g. ai-service, are relative to it):
    cd private-cloud-single
  3. Decrypt secrets:
    sops -d ai-service/secrets.enc.env > ai-service/secrets.env
  4. Edit configuration values:
    MONGODB_URI=<mongodb endpoint>
    MONGODB_OPENAI_URI=<mongodb endpoint>
    MONGODB_BACKEND_URI=<mongodb endpoint>
    WORKFLOW_URL=https://api.solara.io:9981
    AWS_ACCESS_KEY_ID=<imbrace-app-user key>
    AWS_SECRET_ACCESS_KEY=<imbrace-app-user secret>
    AWS_S3_BUCKET=imbrace-data-solara
    AWS_S3_URL=<S3 HTTPS endpoint>
  5. Encrypt secrets:
    sops -e ai-service/secrets.env > ai-service/secrets.enc.env

Repeat the decrypt, edit, and encrypt steps for each of the following applications (a loop sketch follows the list):

aichat
ai-service
aiv2
app-gateway
backend
chat-widget
dashboard
ips
marketplace
wfconnectorservice
workflow-engine
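
As a rough sketch, the decrypt and re-encrypt steps can be looped over all of the application directories from the repository root. This assumes each listed directory keeps its secrets at <app>/secrets.enc.env and that AWS_PROFILE=imbrace is still exported for SOPS:

APPS="aichat ai-service aiv2 app-gateway backend chat-widget dashboard ips marketplace wfconnectorservice workflow-engine"

# Decrypt every application's secrets for editing
for app in $APPS; do
  sops -d "$app/secrets.enc.env" > "$app/secrets.env"
done

# ...edit each <app>/secrets.env as in step 4 above, then re-encrypt:
for app in $APPS; do
  sops -e "$app/secrets.env" > "$app/secrets.enc.env"
done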

[At Local]: Nginx Deployment (TBC)

  • To be completed in next version.

Push Latest Commit to Repository

git add .
git commit -m "First Application deployment commit 20250926"
git push

Allow Target Servers to Pull Latest Commits

ansible-playbook -v clone-repo.yml -e hosts_group=onserver -i hosts

[At Local]: Deploy Nginx Proxy

  • Run playbook to deploy nginx proxy container:
    ansible-playbook -v deploy-nginx-proxy.yml -e hosts_group=onserver -i hosts --ask-vault-pass

[At Local]: Deploy Applications

  • Run playbook to deploy applications:
    ansible-playbook -v deploy-apps.yml -e hosts_group=onserver -i hosts

[At Remote]: Deploy Applications

  • SSH to All Servers
    • Open multiple terminals and SSH into each target server.
  • Sync Files
    • At each server:
    cd /home/imbrace/
    sh /opt/imbrace/repos/private-cloud-single/ansible/files/sync-files.sh
    
    ls -ltr
    # Expect:
    # -rw-r--r--. 1 imbrace imbrace  237 Sep 25 11:09 app_mapping.txt
    # -rw-r--r--. 1 imbrace imbrace 2458 Sep 25 17:35 deploy-apps.sh
  • On the onserver host, revise app_mapping.txt:
    # Format: <folder> <service>
    app-gateway app-gateway
    backend backend
    chat-widget chat-widget
    dashboard dashboard
    ips ips
    marketplace marketplace
    wfconnectorservice wfconnectorsvc
  • Run deployment:
    sh deploy-apps.sh
  • On the engine host, revise app_mapping.txt:
    # Format: <folder> <service>
    ai-service ai-service
    aiv2 ai-v2
    workflow-engine workflow
  • Run deployment:
    sh deploy-apps.sh
  • Restart nginx:
    docker restart nginx-proxy
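  • Optionally confirm the result on each server with a container listing; the names shown should correspond to the services in that server's app_mapping.txt, plus nginx-proxy on the onserver host:
    docker ps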

Appendix: Nginx Deployment

Background

This nginx setup uses template files with environment variables to support multiple customers. Instead of maintaining separate config files per customer, .template files contain placeholders like $DOMAIN which are substituted at container startup.
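
For illustration, the substitution step works roughly like the snippet below. This is a minimal sketch using envsubst; the actual entrypoint script and template contents in the repository may differ:

# At container startup, each *.template is rendered into a .conf with the
# DOMAIN / LAN_DOMAIN environment variables substituted in, e.g.:
export DOMAIN=neuralcore.tech
export LAN_DOMAIN=neuralcore.lan
envsubst '$DOMAIN $LAN_DOMAIN' < conf.d/gateway.conf.template > conf.d/gateway.conf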

Step-by-Step Deployment

Step 1: Configure Customer Domain

Edit docker-compose.yml:

environment:
- DOMAIN=<customer-domain>      # e.g., neuralcore.tech
- LAN_DOMAIN=<customer-lan>     # e.g., neuralcore.lan

Customer examples:

| Customer | DOMAIN | LAN_DOMAIN |
| --- | --- | --- |
| Neuralcore | neuralcore.tech | neuralcore.lan |
| Meridian Capital | meridiancapital.com | meridiancapital.lan |
| Solara | solara.io | solara.lan |

Step 2: Review Template Files

Template files to check:

nginx.conf.template
conf.d/gateway.conf.template
conf.d/backend.conf.template
conf.d/workflow.conf.template

Note: Edit only .template files. Never modify generated .conf files.

How to Switch Customers

To switch customers, update docker-compose.yml:

    environment:
    - DOMAIN=<new-customer-domain>
    - LAN_DOMAIN=<new-customer-lan>    # e.g., neuralcore.lan

Then restart nginx:

docker-compose restart nginx-proxy
  • The container regenerates all .conf files with new values automatically.
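  • To confirm the new values took effect, inspect the environment and a generated config inside the container (this assumes grep is available in the image and that gateway.conf declares a server_name, which is typical for an nginx server block):
    docker exec nginx-proxy printenv DOMAIN LAN_DOMAIN
    docker exec nginx-proxy grep server_name /etc/nginx/conf.d/gateway.conf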

Making Configuration Changes

Edit .template files locally, test, commit, and push. Then, on the server, pull the changes and restart the proxy:

git pull
docker-compose restart nginx-proxy

Quick Reference

File Types:

| Type | Description |
| --- | --- |
| .template | Editable config templates |
| .conf | Auto-generated; do not edit |
| .backup | Original reference configs |

Common Commands:

# Restart nginx
docker-compose restart nginx-proxy

# View logs
docker logs -f nginx-proxy

# Test config
docker exec nginx-proxy nginx -t

# Check generated config
docker exec nginx-proxy cat /etc/nginx/conf.d/gateway.conf

Troubleshooting

| Problem | Solution |
| --- | --- |
| Changes not reflected | Ensure .template files were edited, not .conf. |
| Nginx fails to start | Check logs: docker logs nginx-proxy and verify syntax. |
| Domain substitution fails | Verify docker-compose.yml variables and restart. |