Running OpenClaw on ProxMox: The Complete Guide to Self-Hosting Your AI Assistant

Running an AI assistant 24/7 doesn’t have to mean expensive cloud bills or privacy compromises. With ProxMox and OpenClaw, you can host your own intelligent assistant on your own hardware — whether that’s a spare PC, a home server, or a mini-PC in your office.

This guide walks you through everything: from creating the VM to hardening security to keeping your AI token costs under control.

Why ProxMox + OpenClaw?

ProxMox VE is a powerful, open-source virtualization platform that turns any x86 machine into a hypervisor. It’s free, enterprise-grade, and perfect for running multiple VMs and containers on a single box.

OpenClaw is an open-source AI agent platform that connects to Claude, GPT-4, Gemini, and dozens of other AI models. It integrates with Telegram, WhatsApp, Discord, email, calendars, and can automate practically anything. Think of it as your own self-hosted AI assistant that you actually control.

Why this combo works:

  • Privacy: Your conversations and data stay on your hardware
  • Cost control: Pay for compute once, not per-message
  • Flexibility: Run other services alongside OpenClaw (databases, n8n, ComfyUI, etc.)
  • Reliability: No dependency on cloud provider uptime
  • Learning: Full visibility into how everything works

Prerequisites

Before you begin, you’ll need:

Hardware:

  • A machine running ProxMox VE 7.x or 8.x
  • At least 4GB RAM available for the VM (8GB+ recommended)
  • 20GB+ free disk space (SSD/NVMe strongly preferred)
  • Stable network connection

Access:

  • ProxMox web UI credentials
  • SSH access to your ProxMox host (optional but helpful)

External:

  • An AI API key (OpenAI, Anthropic, OpenRouter, etc.)
  • (Optional) Telegram/WhatsApp/Discord account for messaging integration

Knowledge:

  • Basic Linux command line familiarity
  • Understanding of VM concepts

Part 1: Creating the ProxMox VM

Step 1: Download Ubuntu Server

Head to ubuntu.com/download/server and grab the latest LTS ISO (22.04 or 24.04). Upload it to ProxMox:

  1. In ProxMox web UI, go to your storage node (e.g., local)
  2. Click ISO Images → Upload
  3. Select your Ubuntu Server ISO

Step 2: Create the VM

Click Create VM in the top-right and configure:

General:

  • VM ID: Pick something memorable (e.g., 100)
  • Name: openclaw-vm (or whatever you prefer)

OS:

  • ISO image: Select your uploaded Ubuntu ISO
  • Guest OS: Linux, kernel version 6.x

System:

  • Machine: q35
  • BIOS: OVMF (UEFI) for modern systems, or SeaBIOS for compatibility
  • SCSI Controller: VirtIO SCSI single
  • Qemu Agent: ✅ Enable (you’ll install this in the guest later)

Disks:

  • Bus/Device: SCSI0
  • Storage: Your SSD/NVMe pool (e.g., local-lvm)
  • Size: 40GB minimum, 60GB+ recommended if you plan to add skills/services
  • Cache: Write back (for performance)
  • Discard: ✅ Enable (for SSD TRIM support)
  • IO thread: ✅ Enable

CPU:

  • Cores: 2-4 (OpenClaw benefits from multiple cores)
  • Type: host (pass through host CPU features)

Memory:

  • RAM: 4096MB minimum, 8192MB recommended
  • Ballooning: ✅ Enable (allows dynamic memory management)

Network:

  • Bridge: vmbr0 (your default bridge)
  • Model: VirtIO (paravirtualized) for best performance
  • Firewall: ✅ Enable at VM level

Click Finish to create the VM.
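If you prefer the shell, the same VM can be created from the ProxMox host with qm. A config sketch mirroring the settings above (the VM ID, storage name, and ISO filename are placeholders for your environment):

```shell
# Run as root on the ProxMox host. VM ID 100, 'local-lvm' storage,
# and the Ubuntu ISO filename are assumptions; adjust to match your setup.
qm create 100 \
  --name openclaw-vm \
  --ostype l26 \
  --machine q35 \
  --bios ovmf \
  --efidisk0 local-lvm:1,efitype=4m \
  --scsihw virtio-scsi-single \
  --scsi0 local-lvm:40,discard=on,iothread=1 \
  --cores 4 --cpu host \
  --memory 8192 --balloon 4096 \
  --net0 virtio,bridge=vmbr0,firewall=1 \
  --agent enabled=1 \
  --cdrom local:iso/ubuntu-24.04-live-server-amd64.iso
```

The GUI and qm produce the same VM configuration; pick whichever fits your workflow.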

Step 3: Install Ubuntu Server

  1. Start the VM and open the console
  2. Follow Ubuntu installer prompts:
    • Language: English
    • Keyboard: Your layout
    • Network: DHCP (or assign static IP if you prefer)
    • Storage: Use entire disk (LVM recommended for flexibility)
    • Profile: Create a user account (e.g., admin or your name)
    • SSH: ✅ Install OpenSSH server
    • Featured snaps: Skip for now
  3. Let it install and reboot
  4. Log in via console or SSH

Step 4: Post-Install VM Tweaks

SSH into your new VM and run:

# Update packages
sudo apt update && sudo apt upgrade -y

# Install Qemu Guest Agent (allows ProxMox to communicate with VM)
sudo apt install qemu-guest-agent -y
sudo systemctl enable qemu-guest-agent
sudo systemctl start qemu-guest-agent

# Install basic utilities
sudo apt install curl wget git htop vim -y

Back in ProxMox, verify the agent is working:

  • Go to VM → Summary
  • You should now see the VM’s IP address displayed

Part 2: Installing OpenClaw

Step 1: Install Node.js 22+

OpenClaw requires Node.js 22 or later. The easiest way:

# Install via official installer script
curl -fsSL https://openclaw.ai/install.sh | bash

This downloads the OpenClaw CLI, installs Node if needed, and launches onboarding.

Alternative (manual Node install):

# Install Node via NodeSource
curl -fsSL https://deb.nodesource.com/setup_22.x | sudo -E bash -
sudo apt install -y nodejs

# Verify
node -v   # Should show v22.x or higher
npm -v    # Should show 10.x or higher

# Install OpenClaw globally
npm install -g openclaw@latest

Step 2: Run the Onboarding Wizard

openclaw onboard --install-daemon

The wizard will walk you through:

  1. Security prompt: Select Yes (enables security guardrails)
  2. Configuration mode: Choose QuickStart (you can customize later)
  3. Model selection:
    • Select your preferred provider (e.g., Anthropic for Claude, OpenAI for GPT)
    • Paste your API key when prompted
    • Pick a default model (e.g., claude-sonnet-4, gpt-4o, etc.)
  4. Channels (optional but recommended):
    • Add Telegram for easy mobile access
    • Or WhatsApp, Discord, etc. (use openclaw channels add later to expand)
  5. Skills: Skip for now (you can add later)
  6. Interface: Choose Web UI or Terminal (TUI)

The setup typically takes 10-15 minutes.

Step 3: Verify Installation

# Check daemon status
openclaw status

# View logs
openclaw daemon logs

# Test the TUI (terminal interface)
openclaw tui

# Or open the web dashboard
openclaw dashboard

If the dashboard opens successfully in your browser, you’re golden! 🎉

Part 3: Security Hardening

Running an AI assistant means giving it access to APIs, files, and potentially sensitive data. Lock it down properly.

1. Firewall Configuration

ProxMox firewall (host-level):

In ProxMox UI, go to Datacenter → Firewall:

  • Enable the firewall at the datacenter level
  • Create a Security Group for OpenClaw VMs
  • Allow only the necessary ports:
    • 22 (SSH) — restrict to your local network only
    • 18789 (OpenClaw web UI) — restrict to LAN or VPN
    • 8006 (ProxMox web UI) — if accessing from the VM network

Guest VM firewall (via ufw):

# Install and enable UFW
sudo apt install ufw -y

# Default policies: deny incoming, allow outgoing
sudo ufw default deny incoming
sudo ufw default allow outgoing

# Allow SSH (restrict to your local network)
sudo ufw allow from 192.168.1.0/24 to any port 22

# Allow OpenClaw web UI (LAN only)
sudo ufw allow from 192.168.1.0/24 to any port 18789

# Enable firewall
sudo ufw enable

# Check status
sudo ufw status verbose

2. SSH Hardening

Edit /etc/ssh/sshd_config:

sudo vim /etc/ssh/sshd_config

Set these values:

PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
PermitEmptyPasswords no
X11Forwarding no
MaxAuthTries 3
ClientAliveInterval 300
ClientAliveCountMax 2

Add your public key to ~/.ssh/authorized_keys, then validate the config before restarting SSH (a syntax error in sshd_config can lock you out):

sudo sshd -t && sudo systemctl restart sshd

3. OpenClaw Gateway Authentication

Edit ~/.openclaw/openclaw.json:

{
  "gateway": {
    "auth": {
      "mode": "password",
      "password": "your-strong-password-here"
    },
    "bind": "loopback"
  }
}

If you need LAN access, you can bind to all interfaces, but always set authentication:

{
  "gateway": {
    "auth": {
      "mode": "password",
      "password": "very-secure-passphrase-here"
    },
    "bind": "lan"
  }
}

Restart the gateway:

openclaw daemon restart

4. Regular Updates

Enable unattended security updates:

sudo apt install unattended-upgrades -y
sudo dpkg-reconfigure -plow unattended-upgrades

Set a weekly cron for system updates:

sudo crontab -e

Add:

0 3 * * 0 apt update && apt upgrade -y && apt autoremove -y

5. Backup Strategy

ProxMox built-in backups:

  • Go to Datacenter → Backup
  • Schedule daily backups of your OpenClaw VM
  • Store on separate storage (NAS, external drive, or remote)

OpenClaw workspace backups:

# Manual backup
tar -czf ~/openclaw-backup-$(date +%F).tar.gz ~/.openclaw ~/.local/share/openclaw ~/clawd

# Automated daily backup (add to crontab)
0 2 * * * tar -czf ~/backups/openclaw-$(date +\%F).tar.gz ~/.openclaw ~/.local/share/openclaw ~/clawd
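Daily archives accumulate fast. A small rotation helper, sketched here around the openclaw-*.tar.gz naming used above, keeps only the newest N archives:

```shell
# Keep only the newest $2 archives in directory $1; delete the rest.
# Assumes the openclaw-YYYY-MM-DD.tar.gz naming from the backup commands above.
prune_backups() {
  local dir=$1 keep=$2
  ls -1t "$dir"/openclaw-*.tar.gz 2>/dev/null | tail -n +"$((keep + 1))" | xargs -r rm --
}

# Example: trim ~/backups down to the 14 newest archives
# prune_backups "$HOME/backups" 14
```

Run it right after the nightly tar job, or add it as a second cron entry.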

Part 4: Cost Optimization & Token Management

Running your own AI assistant doesn’t mean unlimited free API calls. Here’s how to keep costs under control.

1. Choose the Right Model for the Job

Not all tasks need GPT-4 or Claude Opus. OpenClaw supports model switching per-session:

  • Quick tasks (scheduling, simple Q&A): Use cheaper models like gpt-4o-mini, claude-haiku
  • Complex reasoning (code review, analysis): Use claude-sonnet-4, gpt-4o
  • Heavy lifting (research, writing): Use claude-opus, o1
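To sanity-check which tier fits your budget, a back-of-envelope estimate helps. A sketch (the volumes and per-token prices below are illustrative assumptions, not current provider rates):

```shell
# Back-of-envelope monthly spend: messages/day x tokens/message x $/1M tokens.
# All numbers here are illustrative; check your provider's pricing page.
estimate_monthly_cost() {
  local msgs_per_day=$1 tokens_per_msg=$2 usd_per_mtok=$3
  awk -v m="$msgs_per_day" -v t="$tokens_per_msg" -v p="$usd_per_mtok" \
    'BEGIN { printf "$%.2f/month\n", m * t * 30 * p / 1000000 }'
}

# e.g. 200 messages/day at ~1,500 tokens each on a $3/M-token model:
estimate_monthly_cost 200 1500 3   # -> $27.00/month
```

Even rough numbers like these make it obvious when a cheaper model alias is the right default.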

Configure model aliases in your config file (~/.openclaw/openclaw.json):

{
  "agents": {
    "defaults": {
      "models": {
        "openai/gpt-4o-mini": {
          "alias": "cheap"
        },
        "anthropic/claude-sonnet-4": {
          "alias": "balanced"
        },
        "anthropic/claude-opus-4": {
          "alias": "powerful"
        }
      }
    }
  }
}

Then switch on the fly: /model cheap

2. Enable Response Caching

OpenClaw supports caching for repeated queries through provider-level configuration. This can save 15-30% on redundant API calls.

Note: Caching is primarily handled at the LLM provider level (e.g., Anthropic’s prompt caching). For application-level caching strategies, consider implementing response caching in your skills or using external caching layers.

3. Set Max Token Limits

Prevent runaway responses by setting sensible output limits in your config:

{
  "agents": {
    "defaults": {
      "maxTokens": 2048
    }
  }
}

You can also control this per-model through provider configuration or by using model-specific parameters when making requests.

4. Use Self-Hosted Models (Advanced)

If you have spare GPU power or want to eliminate API costs entirely, run local LLMs via Ollama:

# Install Ollama on the VM or another machine
curl -fsSL https://ollama.com/install.sh | sh

# Pull a model
ollama pull llama3.3:70b  # Or mistral, deepseek, etc.

# Configure OpenClaw to use it
openclaw models add ollama http://localhost:11434

Trade API costs for electricity and hardware — great for high-volume, non-critical tasks.

5. Monitor Usage

Track your spend:

# View session usage stats
openclaw status

# Check recent API costs
openclaw usage --last 7d

Set budget alerts in your AI provider dashboard (OpenAI, Anthropic, etc.).

6. Optimize Prompts

Shorter, more precise prompts = fewer input tokens = lower costs. Use:

  • Structured formats: JSON, bullet points (not long paragraphs)
  • Clear instructions: “Summarize in 3 sentences” vs. “Give me a summary”
  • Reusable system prompts: Store in skills/config rather than repeating
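To gauge a prompt's size before sending it, the common rule of thumb is roughly four characters per token for English text. A quick sketch (a heuristic only, not a real tokenizer):

```shell
# Approximate token count from character count (~4 chars/token heuristic).
# For exact counts, use your provider's tokenizer; this is only a sanity check.
estimate_tokens() {
  printf '%s' "$1" | wc -c | awk '{ print int(($1 + 3) / 4) }'
}

estimate_tokens "Summarize in 3 sentences"   # -> 6
```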

Part 5: Pro Tips & Optimizations

🚀 Performance Tuning for Low-Power VMs

If your VM feels sluggish (common on ARM or budget hardware):

# Create the compile cache directory once, then enable it on every login
mkdir -p /var/tmp/openclaw-compile-cache
echo 'export NODE_COMPILE_CACHE=/var/tmp/openclaw-compile-cache' >> ~/.bashrc
echo 'export OPENCLAW_NO_RESPAWN=1' >> ~/.bashrc
source ~/.bashrc

This can significantly speed up CLI command startup times on constrained hardware.

🔒 Use Tailscale for Secure Remote Access

Want to access your OpenClaw VM from anywhere without exposing ports?

# Install Tailscale
curl -fsSL https://tailscale.com/install.sh | sh

# Authenticate
sudo tailscale up

# Serve the OpenClaw dashboard over Tailscale HTTPS
# (newer CLI syntax; older releases used: tailscale serve https / http://127.0.0.1:18789)
sudo tailscale serve --bg 18789

Now access your dashboard securely from any device on your Tailscale network.

📊 Add Monitoring

Keep an eye on resource usage:

# Install Prometheus Node Exporter
sudo apt install prometheus-node-exporter -y

# Or use simple logging
openclaw daemon logs --follow | tee -a ~/openclaw.log

Pair with Grafana for dashboards, or just check htop periodically.
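If full Prometheus feels heavy, a zero-dependency check you could call from cron or a heartbeat task works too (the 90% threshold is an arbitrary default):

```shell
# Warn when root filesystem usage crosses a threshold (default 90%).
check_disk() {
  local threshold=${1:-90}
  local used
  used=$(df --output=pcent / | tail -1 | tr -dc '0-9')
  if [ "$used" -ge "$threshold" ]; then
    echo "WARN: root filesystem at ${used}%"
  else
    echo "OK: root filesystem at ${used}%"
  fi
}

check_disk 90
```

Grep the output for WARN lines and route them to whatever alerting you already have.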

🧠 Add Skills Gradually

OpenClaw’s power comes from skills (modular capabilities). Start simple:

# List available skills
openclaw skills list

# Add useful ones
openclaw skills add weather
openclaw skills add gog  # Google Workspace integration
openclaw skills add gemini  # Google AI

Gotcha: Avoid installing untrusted third-party skills. Stick to official or well-reviewed ones.

🎨 Pair with ComfyUI for Image Generation

If you have a GPU (in the VM or on the ProxMox host), run ComfyUI for local image generation:

# Install ComfyUI (requires Python 3.10+, CUDA)
git clone https://github.com/comfyanonymous/ComfyUI.git
cd ComfyUI
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt

# Start server
python main.py --listen 0.0.0.0

# Connect from OpenClaw via skill or API calls

No more DALL-E API costs!

⚡ Storage Optimization

OpenClaw stores sessions, logs, and memory files. Keep them tidy:

# Clean old logs (keep last 30 days)
find ~/.local/share/openclaw/logs -type f -mtime +30 -delete

# Archive old memory files
tar -czf ~/memory-archive-$(date +%F).tar.gz ~/.local/share/openclaw/memory
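If you intend to delete originals after archiving, verify the tarball first. A cautious sketch:

```shell
# Archive a directory, then confirm the tarball lists cleanly before trusting it.
archive_verified() {
  local src=$1 dest=$2
  tar -czf "$dest" -C "$(dirname "$src")" "$(basename "$src")" || return 1
  tar -tzf "$dest" > /dev/null || return 1
  echo "verified: $dest"
}

# Example: archive_verified ~/.local/share/openclaw/memory ~/memory-archive-$(date +%F).tar.gz
```

Only remove the source files once the function reports the archive as verified.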

🔄 Automate Daily Tasks

Use OpenClaw’s heartbeat feature to run periodic checks:

Edit ~/clawd/HEARTBEAT.md:

# HEARTBEAT.md

## Daily Checks (8:00 AM)
- Check email for urgent messages
- Review calendar for today's meetings
- Fetch weather forecast

## Every 4 hours
- Monitor server health (CPU, memory, disk)
- Check for OpenClaw updates

Configure heartbeat settings through the CLI (openclaw heartbeat config) or by editing ~/.openclaw/openclaw.json.

🐳 Run in a Container (Advanced Alternative)

If you prefer Docker/Podman over a full VM:

# Pull official image
docker pull openclaw/openclaw:latest

# Run with persistent storage
docker run -d \
  --name openclaw \
  -v openclaw-data:/root/.config/openclaw \
  -v openclaw-workspace:/root/clawd \
  -p 18789:18789 \
  openclaw/openclaw:latest

ProxMox can run LXC containers natively — lighter than VMs!
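For the LXC route, a config sketch from the ProxMox host shell (the container ID, storage names, and template filename are placeholders; list what's available with pveam available):

```shell
# Download an Ubuntu template, then create an unprivileged container.
# IDs, storage names, and the template filename below are assumptions.
pveam update
pveam download local ubuntu-24.04-standard_24.04-2_amd64.tar.zst
pct create 200 local:vztmpl/ubuntu-24.04-standard_24.04-2_amd64.tar.zst \
  --hostname openclaw-lxc \
  --memory 4096 --cores 2 \
  --rootfs local-lvm:20 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --unprivileged 1 \
  --features nesting=1
```

The nesting=1 feature is needed if you plan to run Docker inside the container.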

Common Gotchas & Troubleshooting

“Command not found: openclaw”

Your npm global bin path isn’t in $PATH. Fix:

echo 'export PATH="$(npm prefix -g)/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc

Gateway won’t start

Check logs:

openclaw daemon logs

Common causes:

  • Port 18789 already in use (change it in the config, or find the culprit with ss -ltnp)
  • Missing API key (run openclaw onboard again)
  • File permission issues (check ~/.openclaw ownership)

High RAM usage

OpenClaw caches conversations in memory. Limit session count or restart periodically:

# Restart daemon weekly
sudo crontab -e
# Add: 0 3 * * 0 openclaw daemon restart

Or allocate more RAM to the VM.

Firewall blocking connections

Temporarily disable to test:

sudo ufw disable
# Test connection
sudo ufw enable

If that fixes it, adjust your rules.

Slow API responses

  • Check your internet connection
  • Try a different AI provider (some have latency spikes)
  • Use a geographically closer API endpoint if available

Conclusion

Running OpenClaw on ProxMox gives you the best of both worlds: the power and flexibility of virtualization plus the privacy and control of self-hosting your AI assistant.

What you’ve built:

  • A dedicated, isolated environment for your AI assistant
  • Hardened security with firewalls, SSH keys, and authentication
  • Optimized costs through smart model selection and caching
  • A foundation you can expand (add more services, skills, integrations)

Next steps:

  • Explore OpenClaw skills (Google Workspace, Telegram bots, automation)
  • Integrate with your existing tools (calendars, email, project management)
  • Experiment with local LLMs via Ollama
  • Join the OpenClaw community on Discord for tips and updates

Your AI assistant is now online, private, and under your control. Welcome to the future of self-hosted AI.


Resources:

  • OpenClaw Docs: docs.openclaw.ai
  • ProxMox Wiki: pve.proxmox.com/wiki
  • OpenClaw Discord: discord.com/invite/clawd
  • GitHub: github.com/openclaw/openclaw


Need help setting this up for your team? Uptown4 specializes in AI automation and self-hosted solutions. Get in touch →
