Job Title: AIVA Prime Systems Engineer
Role Overview
We are hiring a Lead Systems Engineer to architect, build, and scale AIVA Prime, the core provisioning engine that manufactures and deploys all client AIVA instances.
AIVA Prime is not a client-facing product. It is the factory layer of the ecosystem, responsible for creating fully isolated, production-ready AIVA environments for every client. This role sits at the heart of our infrastructure and directly impacts the scalability, security, and reliability of the entire AIVA product line.
You will design and own a zero-trust, fully automated deployment system that ensures every AIVA instance is securely provisioned, consistently configured, and traceable from inception.
Key Responsibilities
- AIVA Prime Architecture & Provisioning Engine
  - Design and build the end-to-end provisioning system for AIVA instances
  - Develop automated VM orchestration, ensuring each client receives a fresh, isolated environment
  - Maintain local hardware deployment infrastructure (non-cloud-dependent execution layer)
  - Automate credential generation and secure vault storage (e.g., Dashlane integration)
  - Engineer the automated build pipeline, including:
    - Secure VM initialization
    - Manus Desktop deployment
    - Full AIVA environment bootstrapping
- Zero-Trust Data Pipeline Engineering
  - Architect and maintain a Zero-Trust Email Detonation Chamber to mitigate prompt-injection risks
  - Build secure ingestion pipelines using:
    - AWS SES (email interception)
    - AWS Lambda (event-driven processing)
    - Python-based sanitization (HTML stripping, parsing, regex filtering)
  - Enforce strict data isolation (Silo Model):
    - Dedicated Amazon Aurora PostgreSQL databases per client
  - Ensure all inbound data is sanitized, neutralized, and safe for AI processing
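To illustrate the kind of work involved, here is a minimal sketch of the Python sanitization step in such a pipeline, as it might run inside a Lambda handler. The function name, the redaction patterns, and the `[REDACTED]` marker are illustrative assumptions, not AIVA Prime's actual implementation; a production detonation chamber would use a far richer ruleset.

```python
import re
from html.parser import HTMLParser

class _TextExtractor(HTMLParser):
    """Collects only the text content of an HTML body, discarding all tags."""
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        self.parts.append(data)

# Hypothetical patterns for neutralizing common prompt-injection phrasing.
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.IGNORECASE),
    re.compile(r"system prompt", re.IGNORECASE),
]

def sanitize_email_body(raw_html: str) -> str:
    """Strip HTML, normalize whitespace, and redact injection-style phrases."""
    parser = _TextExtractor()
    parser.feed(raw_html)
    text = re.sub(r"\s+", " ", " ".join(parser.parts)).strip()
    for pattern in SUSPICIOUS_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text
```

The key design point is that the model only ever sees plain text that has already passed through this neutralization layer, never raw inbound HTML.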
- AI Stack Integration & Orchestration
  - Integrate and optimize LLM systems, including:
    - Anthropic Claude (intent classification, structured outputs)
    - OpenAI models (operational workflows)
  - Implement structured tool-calling pipelines and JSON outputs
  - Build and maintain API connectors across:
    - Google Workspace (Drive, Gmail)
    - Notion
    - Xero
    - GoHighLevel
  - Support real-time AI workflows, including:
    - Live session transcription (Fireflies/Otter)
    - Automated task generation into Notion
  - Workflow Orchestration & Automation
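As a rough sketch of the structured tool-calling pattern above: a tool is declared with a JSON schema, the model emits a tool-use block, and a dispatcher routes it to a connector. The tool name `create_notion_task` and the handler behavior are hypothetical examples, not AIVA's real connectors, and the network call to the model is omitted.

```python
import json

# Hypothetical tool definition in the JSON-schema style used by LLM
# tool-calling APIs such as Anthropic's and OpenAI's.
CREATE_NOTION_TASK = {
    "name": "create_notion_task",
    "description": "Create a task in a client's Notion workspace.",
    "input_schema": {
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "priority": {"type": "string", "enum": ["low", "medium", "high"]},
        },
        "required": ["title"],
    },
}

def handle_tool_call(tool_name: str, tool_input: dict) -> dict:
    """Route a model-emitted tool call to the matching connector stub."""
    handlers = {
        "create_notion_task": lambda args: {
            "status": "created",
            "title": args["title"],
            "priority": args.get("priority", "medium"),
        },
    }
    if tool_name not in handlers:
        return {"status": "error", "reason": f"unknown tool: {tool_name}"}
    return handlers[tool_name](tool_input)

# A tool-use block as it might arrive from the model, parsed from JSON.
raw = '{"name": "create_notion_task", "input": {"title": "Send Xero invoice"}}'
call = json.loads(raw)
result = handle_tool_call(call["name"], call["input"])
```

Keeping every tool call behind a schema and a single dispatcher is what makes the pipeline's outputs structured and auditable rather than free-form text.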
- Workflow Orchestration & Automation
  - Design durable workflows using:
    - LangGraph (core orchestration)
    - CrewAI (domain-specific agents)
    - n8n (automation and integrations)
  - Implement human-in-the-loop checkpoints for critical processes
  - Ensure all workflows are auditable, traceable, and fault-tolerant
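The human-in-the-loop checkpoint pattern above can be sketched in plain Python (the real system would express this as a LangGraph interrupt; the class names and audit-log format here are assumptions for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class Checkpoint:
    """A pending step that must be approved before the workflow continues."""
    step: str
    payload: dict
    approved: bool = False

@dataclass
class Workflow:
    """Minimal human-in-the-loop gate: critical steps block on approval,
    and every transition is recorded for auditability."""
    audit_log: list = field(default_factory=list)

    def run_step(self, step: str, payload: dict, critical: bool = False):
        if critical:
            checkpoint = Checkpoint(step, payload)
            self.audit_log.append(("awaiting_approval", step))
            return checkpoint  # caller resumes later via `approve`
        self.audit_log.append(("executed", step))
        return payload

    def approve(self, checkpoint: Checkpoint):
        checkpoint.approved = True
        self.audit_log.append(("executed", checkpoint.step))
        return checkpoint.payload
```

The point is that a critical step never executes implicitly: it parks in a named, logged checkpoint until a human releases it, which is what makes the workflow both fault-tolerant and traceable.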
- System Security, Isolation & Reliability
  - Enforce zero-trust architecture principles across all systems
  - Design hermetically isolated environments per client
  - Protect against:
    - Prompt injection
    - Data leakage
    - Cross-client contamination
  - Ensure high system reliability and observability
Technology Stack Requirements
Infrastructure & Data
- Supabase (PostgreSQL, real-time systems)
- Amazon Aurora PostgreSQL (client data isolation)
- Pinecone (vector database for RAG)
AI & LLM Layer
- Anthropic Claude (advanced reasoning & structured outputs)
- OpenAI models (operational execution)
- LLM gateways (Maxim AI / Portkey)
- Mem0 (agent memory systems)
Automation & Orchestration
- LangGraph
- CrewAI
- n8n (self-hosted)
Tooling & Integrations
- Retool (dashboard/control room)
- GoHighLevel (CRM)
- Xero (finance)
- Notion (knowledge & task management)
- Google Workspace APIs
- Voice tools (e.g., JustCall)
Required Skills & Experience
- Advanced experience in systems architecture & distributed systems
- Deep expertise in VM provisioning, sandboxing, and infrastructure automation
- Strong hands-on experience with AWS (Lambda, SES, Aurora)
- Advanced Python engineering (data parsing, sanitization, regex)
- Proven experience integrating LLMs and AI workflows
- Strong understanding of zero-trust security models
- Extensive experience with API integrations and automation pipelines
- Ability to design complex, scalable, multi-layered systems
Compensation (commission-based)
This role is fully commission-based, designed to directly reward the scale, impact, and longevity of the system you build. You are not building a one-off solution—you are building AIVA Prime, the core infrastructure that will continuously deploy AIVA instances across all clients.
Build Once, Earn Continuously
- Your primary responsibility is to design and build the AIVA infrastructure once. From that point forward, AIVA Prime will autonomously provision and deploy AIVA instances to every new client.
- This creates a high-leverage model:
- One system built → unlimited deployments → continuous earnings
2% Royalty on Every AIVA Sale
You will receive a 2% royalty on every AIVA sold, equivalent to:
- £40 per AIVA deployed and sold (2% of a £2,000 sale price)
Direct Link to System Scale
Because AIVA Prime is the manufacturing engine behind every deployment, your earnings increase as:
- More clients are onboarded
- More AIVA instances are created
- The ecosystem expands across organizations
Compounding Income Potential
Each AIVA deployed contributes to a growing base of recurring revenue, giving you:
- Long-term earning potential
- Scalable, compounding income
- Direct participation in the platform's growth
Why This Is Great
This is not a traditional salaried engineering role.
It is a high-upside, ownership-driven opportunity where:
- Your output is leveraged across the entire business
- Your compensation scales with adoption
- You benefit directly from every future AIVA deployment
Other Qualifications
- Own working laptop
- Wired internet connection with a minimum speed of 25 Mbps, plus a backup connection
Work Setup