Sunday, 10 August 2025

Run AI Agents Locally with Ollama + n8n on Docker (Step-by-Step)

🚀 Want to run powerful AI right on your own computer — no cloud, no API keys, no extra costs?

Here’s how you can do it with Ollama, n8n, and Docker — all fully local!


💡 Why this combo rocks
Ollama → run and manage open models like GPT-OSS locally
n8n → automate AI workflows inside Docker
Docker → keeps everything clean, isolated, and persistent

⚡ Quick Setup Steps
1️⃣ Install Docker Desktop
Create a Docker volume for n8n data:
`docker volume create n8n_data`
Run n8n:
`docker run -it --rm --name n8n -p 5678:5678 -v n8n_data:/home/node/.n8n n8nio/n8n`
Open 👉 http://localhost:5678 to start building workflows.
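Before moving on, you can quickly confirm the container is actually up. A minimal check, assuming the default port mapping above (recent n8n versions expose a `/healthz` endpoint):

```shell
# Should return a small JSON status payload once n8n has finished starting
curl http://localhost:5678/healthz
```

If the request fails, check `docker ps` to confirm the n8n container is running.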

2️⃣ Install Ollama
Download from 👉 https://ollama.com/search
The installer gives you a simple CLI for downloading and managing local AI models.
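A quick sanity check that the install worked and the CLI is on your PATH:

```shell
# Prints the installed Ollama version
ollama --version
```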

3️⃣ Download LLMs & Run (Example: GPT-OSS)
In a terminal, run:
`ollama run gpt-oss:latest`
Common commands:
`ollama ps` → list running models
`ollama run <model>` → download (if needed) and chat with a model
`ollama stop <model>` → unload a running model
Local endpoint: 👉 http://localhost:11434

Test the model fully offline — no internet needed.
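One easy way to test is a direct HTTP call to Ollama's REST API — a minimal sketch, assuming the server is on the default port and `gpt-oss:latest` has already been pulled:

```shell
# Ask the local model a question via Ollama's generate endpoint.
# "stream": false returns one complete JSON response instead of chunks.
curl http://localhost:11434/api/generate -d '{
  "model": "gpt-oss:latest",
  "prompt": "Say hello in one sentence.",
  "stream": false
}'
```

The reply arrives as JSON with the generated text in the `response` field.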
4️⃣ Start Ollama Server & Connect to n8n
Run: `ollama serve` (if the Ollama desktop app is already running, the server is already listening on port 11434, so you can skip this step).
In n8n, create an Ollama Chat Model credential with base URL:
👉 http://host.docker.internal:11434
💡 Note: Inside a Docker container, localhost refers to the container itself, not your machine. Use http://host.docker.internal as the base URL so n8n (inside Docker) can reach Ollama (on your host).
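host.docker.internal resolves automatically on Docker Desktop (macOS/Windows), but on plain Linux you usually have to map it yourself. A sketch of the same n8n run command from step 1 with that mapping added:

```shell
# --add-host maps host.docker.internal to the host's gateway IP (Docker 20.10+)
docker run -it --rm --name n8n -p 5678:5678 \
  --add-host=host.docker.internal:host-gateway \
  -v n8n_data:/home/node/.n8n n8nio/n8n
```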

✅ Why this matters
Full privacy → all data stays local
No cloud costs → GPT-OSS is free to run
Automation power → build workflows with n8n
Simple AI model management → thanks to Ollama
