Install IronClad AI Agent
Secure-by-Design Autonomous AI Agent — built in Rust. One command to install, full control over your environment.
⚡ Quick Install
Run the automated installer for your platform. It handles system dependencies, Rust, IronClad, and configuration — all in one command.
Linux / macOS:

curl -fsSL https://overcrash66.github.io/IronClad-AI-Agent/install.sh | bash

What this does: Detects your package manager (apt/dnf/brew/pacman), installs build
dependencies, Rust, and IronClad from crates.io. Generates a default settings.toml and
optionally installs Ollama.

Windows (PowerShell):

iwr -useb https://overcrash66.github.io/IronClad-AI-Agent/install.ps1 | iex

Prerequisite: Visual Studio Build Tools with the "Desktop development with C++" workload must be installed (see Prerequisites below).

Docker:

curl -fsSL https://overcrash66.github.io/IronClad-AI-Agent/install-docker.sh | bash

No Rust needed. Builds a multi-stage Docker image and runs IronClad in a container. Ollama must be running on the host.
📋 Prerequisites
If you prefer to install manually, ensure these are in place first.
System Dependencies
Ubuntu / Debian:
sudo apt update
sudo apt install -y build-essential pkg-config libssl-dev curl git

Fedora:
sudo dnf install -y gcc pkgconf-pkg-config openssl-devel curl git

Arch Linux:
sudo pacman -Syu --noconfirm base-devel openssl curl git

macOS:
xcode-select --install
If you use Homebrew: brew install openssl pkg-config
Windows:
Install Visual Studio Build Tools with the "Desktop development with C++" workload.
During installation, make sure "MSVC v143 - VS 2022 C++ x64/x86 build tools" and "Windows SDK" are checked.
Rust Toolchain
Install Rust from rustup.rs:
Linux / macOS:
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

Windows:
Download and run rustup-init.exe from rustup.rs.
LLM Provider (choose one)
| Provider | Type | Setup |
|---|---|---|
| Ollama (default) | Local | Install from ollama.com, then ollama pull llama3 |
| LM Studio | Local | Download from lmstudio.ai, start local server |
| OpenAI | Cloud | Set default_provider = "openai" + IRONCLAD_OPENAI_KEY |
| Anthropic | Cloud | Set default_provider = "anthropic" + IRONCLAD_ANTHROPIC_KEY |
| Google Gemini | Cloud | Set default_provider = "gemini" + API key |
| NVIDIA NIM | Cloud | Set default_provider = "nvidia" + API key |
| Any OpenAI-compatible | Local/Cloud | Configure as openai provider with custom base_url |
🔧 Manual Install (Step-by-Step)
For users who prefer full control. Follow these steps in order.
Install system dependencies
See Prerequisites above for your platform-specific commands.
Install Rust
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source "$HOME/.cargo/env"
Install IronClad via cargo
cargo install ironclad-ai-agent
This compiles from source and installs the binary to ~/.cargo/bin/. Takes 3–8 minutes
depending on your system.
Create settings.toml
Download the default settings.toml from the project site, or copy the minimal configuration below:
# IronClad Minimal Configuration
[general]
workspace_dir = "./scratchpad"
log_level = "info"
[sandbox]
backend = "local" # "docker" | "wsl" | "local"
[llm]
default_provider = "ollama"
agentic_mode = true
max_tool_calls = 30
[llm.ollama]
base_url = "http://127.0.0.1:11434"
model = "llama3"
keep_alive = "10m"
[dashboard]
enabled = true
port = 8080
[integrations.telegram]
enabled = false
# telegram_key = "" # Or set via: IRONCLAD_TELEGRAM_KEY
[tools]
python_venv_enforced = true
allow_system_commands = false
Place this file in the same directory as the binary, or at
~/.config/ironclad/settings.toml.
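If you go with the per-user location, a minimal sketch to create the directory and seed a config file (path taken from the line above; the file contents here are just a stub):

```shell
# Create the per-user config directory and write a minimal settings.toml there
mkdir -p "$HOME/.config/ironclad"
cat > "$HOME/.config/ironclad/settings.toml" <<'EOF'
[llm]
default_provider = "ollama"
EOF
```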
Set environment variables
Linux / macOS (bash):

# Required for local tool execution
export IRONCLAD_ALLOW_LOCAL_EXEC=1
# API keys (set only the providers you use)
export IRONCLAD_OPENAI_KEY="sk-..."
export IRONCLAD_ANTHROPIC_KEY="sk-ant-..."
export IRONCLAD_GITHUB_KEY="ghp_..."
export IRONCLAD_TELEGRAM_KEY="123456:ABC-..."
Windows (PowerShell):

# Required for local tool execution
$env:IRONCLAD_ALLOW_LOCAL_EXEC = "1"
# API keys (set only the providers you use)
$env:IRONCLAD_OPENAI_KEY = "sk-..."
$env:IRONCLAD_ANTHROPIC_KEY = "sk-ant-..."
$env:IRONCLAD_GITHUB_KEY = "ghp_..."
$env:IRONCLAD_TELEGRAM_KEY = "123456:ABC-..."
To persist across PowerShell sessions, use:
[System.Environment]::SetEnvironmentVariable("IRONCLAD_OPENAI_KEY", "sk-...", "User")
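On Linux/macOS the equivalent persistence step is appending the export to your shell profile. A sketch assuming bash (zsh users would target ~/.zshrc instead):

```shell
# Append the export to ~/.bashrc once; grep guards against duplicate lines
profile="$HOME/.bashrc"
line='export IRONCLAD_ALLOW_LOCAL_EXEC=1'
grep -qxF "$line" "$profile" 2>/dev/null || echo "$line" >> "$profile"
```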
Install an LLM provider
curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3
No cloud API key needed. IronClad works entirely offline with Ollama. For Windows, download from ollama.com/download/windows.
Run IronClad
Linux / macOS:
ironclad-ai-agent

Windows:
ironclad-ai-agent.exe
🐳 Docker
Run IronClad without installing Rust on your host. The Docker install script builds a multi-stage image automatically.
curl -fsSL https://overcrash66.github.io/IronClad-AI-Agent/install-docker.sh | bash
Manual Docker run
docker run -it --rm \
  -v ~/.config/ironclad/settings.toml:/workspace/settings.toml \
  -v ~/ironclad-workspace:/workspace/scratchpad \
  -p 8080:8080 -p 3000:3000 \
  -e IRONCLAD_OPENAI_KEY=$IRONCLAD_OPENAI_KEY \
  -e IRONCLAD_TELEGRAM_KEY=$IRONCLAD_TELEGRAM_KEY \
  ironclad-ai-agent
Ollama access: The generated settings.toml uses
host.docker.internal:11434 to reach Ollama running on your host machine.
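The corresponding override in the container's settings.toml looks like this (a sketch based on the note above; the generated file should already contain it):

```toml
[llm.ollama]
# Reach the host's Ollama daemon from inside the container
base_url = "http://host.docker.internal:11434"
```

On Linux hosts, docker run may additionally need --add-host=host.docker.internal:host-gateway for that hostname to resolve.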
⚙️ Configuration
All options live in settings.toml. Here are the most important ones:
[sandbox]
backend = "local" # "wsl" | "docker" | "local"
[llm]
default_provider = "ollama" # "ollama" | "openai" | "anthropic" | "gemini" | "nvidia"
turbo_mode = false # true = skip orchestrator + QA (fastest)
agentic_mode = true # ReAct loop (recommended)
max_tool_calls = 30 # tool budget per task
[llm.ollama]
base_url = "http://127.0.0.1:11434"
model = "llama3"
keep_alive = "10m"
# LM Studio or any OpenAI-compatible API
[llm.openai]
base_url = "http://127.0.0.1:1234/v1" # LM Studio default
model = "llama-3.1-8b-instruct"
[integrations.telegram]
enabled = false
# telegram_key = "" # Or set via env: IRONCLAD_TELEGRAM_KEY
allowed_chat_ids = []
trusted_chat_ids = []
[security]
autonomous_mode = false # true = skip TUI approval dialogs
[dashboard]
enabled = true
port = 8080
Web Dashboard: Once running, open http://127.0.0.1:8080 — the ⚙
Settings tab lets you configure everything visually without editing TOML files!
Environment variables can override any nested config value using the pattern
IRONCLAD__<SECTION>__<KEY>. For example:
export IRONCLAD__LLM__DEFAULT_PROVIDER=openai
export IRONCLAD__SECURITY__AUTONOMOUS_MODE=true
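The naming rule is mechanical: join the section and key with double underscores and upper-case the result. A POSIX-shell sketch of the convention (the helper function is this sketch's own, not part of IronClad):

```shell
# Build IRONCLAD__<SECTION>__<KEY> from a settings.toml section and key
upper() { echo "$1" | tr '[:lower:]' '[:upper:]'; }
section=llm
key=default_provider
var="IRONCLAD__$(upper "$section")__$(upper "$key")"
echo "$var"   # IRONCLAD__LLM__DEFAULT_PROVIDER
```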
✅ Verify Installation
# Check the binary is available
ironclad-ai-agent --version
# Start IronClad
ironclad-ai-agent
# Open the Web Dashboard
# → http://127.0.0.1:8080
You should see the interactive TUI:
╔═══════════════════════════════════╗
║ IronClad — Secure AI Agent ║
╚═══════════════════════════════════╝
Persona: default │ Provider: ollama │ Model: llama3
Type a task and press Enter. Type /help for commands.
>
Try these tasks to confirm everything works:
> List the files in the current directory
> What is the current system time?
> Search the web for "Rust programming language"
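If the TUI never appears, a quick sanity check from the shell (binary name from the install step; the fallback message is this sketch's own wording):

```shell
# Confirm the binary is on PATH before debugging anything deeper
if command -v ironclad-ai-agent >/dev/null 2>&1; then
  ironclad-ai-agent --version
else
  echo "ironclad-ai-agent not on PATH - add ~/.cargo/bin to PATH and retry"
fi
```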
🔍 Troubleshooting
❌ "connection refused" connecting to Ollama
Make sure Ollama is running:
ollama serve
On Windows with WSL2, use 127.0.0.1 (not localhost) in settings.toml
to avoid ~300ms DNS delays.
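Before digging further, a quick reachability probe against the base_url from settings.toml can rule out network issues (a running Ollama daemon answers HTTP requests on port 11434):

```shell
# Probe the Ollama port; -f fails on HTTP errors, --max-time bounds the wait
if curl -fsS --max-time 2 http://127.0.0.1:11434/ >/dev/null 2>&1; then
  echo "Ollama reachable on 127.0.0.1:11434"
else
  echo "Ollama NOT reachable - start it with: ollama serve"
fi
```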
❌ WSL2 command failures
Verify your WSL distro name matches settings.toml:
wsl --list
⏱️ Model responds slowly
Enable turbo mode and keep-alive:
[llm]
turbo_mode = true
[llm.ollama]
keep_alive = "10m"
Or switch backend = "local" to eliminate sandbox overhead.
🚫 "Blocked" actions
IronClad's Traffic Light policy blocks certain operations by design. To allow Yellow/Red actions without manual approval:
[security]
autonomous_mode = true
Caution: This skips TUI confirmation dialogs. Some operations (force-push, path traversal) remain hard-blocked regardless.
🔗 Linking errors during compilation
If cargo install fails with linker errors, ensure system dependencies are installed:
sudo apt install -y build-essential pkg-config libssl-dev
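Most OpenSSL linker failures come down to pkg-config not finding openssl.pc. A quick check (package names as in the Prerequisites section):

```shell
# Ask pkg-config whether it can locate OpenSSL's build metadata
if pkg-config --exists openssl 2>/dev/null; then
  echo "OpenSSL found: $(pkg-config --modversion openssl)"
else
  echo "openssl.pc missing - install libssl-dev (Debian/Ubuntu) or openssl-devel (Fedora)"
fi
```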
🪟 WSL2 Tips
For best performance on Windows with Ollama and WSL2, add these settings to %UserProfile%\.wslconfig:

[wsl2]
networkingMode=mirrored
dnsTunneling=true
autoMemoryReclaim=gradual
Use 127.0.0.1 instead of localhost in settings.toml to avoid ~300ms
DNS resolution delays in WSL2 mirrored networking mode.