How I Keep My Dev Machine Clean with Docker: A Full AI-Powered Setup for .NET, Python & React | Somvilla
Every dev tool in Docker, nothing on the host. A complete setup for .NET, Python, React, local AI via Ollama, and three databases on CachyOS.
As a developer who also games, the last thing I want is my machine cluttered with SDKs, runtimes, database servers, and AI tooling all fighting for resources. After a fresh install of CachyOS, I decided to do things properly — every dev tool lives in Docker, my host stays lean, and Claude Code runs inside isolated dev containers per project.
This is the exact setup I landed on. It supports .NET 9, Python, React, local AI models via Ollama (with AMD GPU acceleration), and three databases. Let’s build it from scratch.
What We’re Building
- One always-on infrastructure stack — databases, Ollama, Open WebUI, Portainer
- Per-project dev containers — each project gets its own isolated environment
- A shared Docker network — so every container can talk to every other container by name
- A clean host — no SDKs, no runtimes, no database installs on your actual machine
Prerequisites
You’ll need Docker, Docker Compose, and the Buildx plugin installed. VS Code’s Dev Containers extension requires Buildx — it won’t work without it.
sudo pacman -S docker docker-compose docker-buildx
sudo systemctl enable --now docker
sudo usermod -aG docker $USER
# Log out and back in for the group change to take effect
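Once you’ve logged back in, a quick sanity check confirms the group change actually took effect (`id -nG` lists your currently active groups):

```shell
# "docker" in your active groups means you can talk to the daemon without sudo
if id -nG | tr ' ' '\n' | grep -qx docker; then
  echo "docker group active"
else
  echo "not yet - log out and back in (or reboot)"
fi
```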
You’ll also need a global git config and an SSH folder. On a fresh install these might not exist yet, and Dev Containers will fail trying to mount them:
git config --global user.name "Your Name"
git config --global user.email "your@email.com"
mkdir -p ~/.ssh
chmod 700 ~/.ssh
Step 1: AMD GPU Setup (Skip if Using NVIDIA or CPU)
If you have an AMD GPU, the ollama/ollama:rocm Docker image bundles all the ROCm libraries inside the container — you don’t need to install anything ROCm-related on your host. CachyOS already includes the amdgpu kernel module, which handles everything at the driver level.
The only thing you need on the host is to add your user to the right groups so Docker containers can access the GPU devices:
sudo usermod -aG render,video $USER
# Log out and back in for the group change to take effect
Then verify the GPU devices exist:
ls /dev/kfd /dev/dri
If /dev/kfd is present and you’re in the render and video groups, you’re done. The Docker image handles the rest.
Step 2: Create Your Folder Structure
Keep everything organised from the start. This layout separates your infrastructure config from your actual project code:
~/docker/
  infra/
    docker-compose.yml
    .env
  volumes/
    ollama/
    postgres/
    mssql/
    mongodb/
~/projects/
  my-dotnet-app/
    .devcontainer/
      devcontainer.json
  my-python-project/
    .devcontainer/
      devcontainer.json
  my-react-app/
    .devcontainer/
      devcontainer.json
Create the directories:
mkdir -p ~/docker/infra
mkdir -p ~/docker/volumes/{ollama,postgres,mssql,mongodb}
mkdir -p ~/projects
SQL Server requires one extra step. It runs as a non-root user internally (UID 10001, hardcoded by Microsoft for security). The mssql volume folder needs to be owned by that UID, otherwise SQL Server can’t write to it and will crash-loop on startup:
sudo chown -R 10001:10001 ~/docker/volumes/mssql
The other databases (Postgres, Mongo, Redis) handle their own permissions at startup — only SQL Server needs this.
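To verify the ownership took before the first start, `stat` can print the numeric owner and group of the folder:

```shell
# Should print 10001:10001 after the chown above
stat -c '%u:%g' ~/docker/volumes/mssql 2>/dev/null \
  || echo "directory missing - run the mkdir step first"
```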
Step 3: Create Your Environment File
Before writing the Compose file, set up your secrets. Never hardcode passwords directly in docker-compose.yml.
You’ll also need the numeric group IDs for render and video. Docker containers don’t share your host’s group names — they need the actual IDs. Get them:
getent group render
getent group video
You’ll see output like render:x:989:somvilla — the number is the GID you need.
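If you’d rather not copy the numbers by hand, you can cut the GID out of the `getent` output directly — field 3 of the colon-separated record — and use these lines to generate the `.env` entries:

```shell
# getent output looks like "render:x:989:somvilla"; field 3 is the GID
echo "RENDER_GID=$(getent group render | cut -d: -f3)"
echo "VIDEO_GID=$(getent group video | cut -d: -f3)"
```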
Create ~/docker/infra/.env:
POSTGRES_PASSWORD=changeme
MSSQL_SA_PASSWORD=YourStr0ng@Password!
MONGO_PASSWORD=changeme
# Replace these with the numbers from getent above
RENDER_GID=989
VIDEO_GID=985
Note: SQL Server enforces its default password policy on the SA password: at least 8 characters, drawing on at least three of the four character sets (uppercase letters, lowercase letters, digits, symbols). The container will refuse to start if this isn’t met.
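If you want to vet a candidate password before the container rejects it, here’s a rough helper of my own (not part of the SQL Server image) that checks the length plus all four character classes — stricter than the bare minimum, but always safe:

```shell
# check_sa_password: length >= 8, plus upper, lower, digit, and symbol
check_sa_password() {
  pw="$1"
  [ "${#pw}" -ge 8 ] || return 1
  case "$pw" in *[A-Z]*) ;; *) return 1 ;; esac
  case "$pw" in *[a-z]*) ;; *) return 1 ;; esac
  case "$pw" in *[0-9]*) ;; *) return 1 ;; esac
  case "$pw" in *[!A-Za-z0-9]*) ;; *) return 1 ;; esac
}
check_sa_password 'YourStr0ng@Password!' && echo "looks valid" || echo "too weak"
```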
Step 4: The Infrastructure Docker Compose File
This is the heart of the setup. Create ~/docker/infra/docker-compose.yml:
name: dev-infra

networks:
  devnet:
    name: devnet
    driver: bridge

services:
  # ── Local AI ─────────────────────────────────────────
  ollama:
    image: ollama/ollama:rocm
    container_name: ollama
    restart: unless-stopped
    ports:
      - "11434:11434"
    devices:
      - /dev/kfd
      - /dev/dri
    group_add:
      - "${RENDER_GID}"
      - "${VIDEO_GID}"
    volumes:
      - ~/docker/volumes/ollama:/root/.ollama
    networks:
      - devnet

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    restart: unless-stopped
    ports:
      - "3000:8080"
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    depends_on:
      - ollama
    networks:
      - devnet

  # ── Databases ─────────────────────────────────────────
  postgres:
    image: postgres:16
    container_name: postgres
    restart: unless-stopped
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_USER: dev
    volumes:
      - ~/docker/volumes/postgres:/var/lib/postgresql/data
    networks:
      - devnet

  mssql:
    image: mcr.microsoft.com/mssql/server:2022-latest
    container_name: mssql
    restart: unless-stopped
    ports:
      - "1433:1433"
    environment:
      ACCEPT_EULA: "Y"
      MSSQL_SA_PASSWORD: ${MSSQL_SA_PASSWORD}
    volumes:
      - ~/docker/volumes/mssql:/var/opt/mssql
    networks:
      - devnet

  mongodb:
    image: mongo:7
    container_name: mongodb
    restart: unless-stopped
    ports:
      - "27017:27017"
    environment:
      MONGO_INITDB_ROOT_USERNAME: dev
      MONGO_INITDB_ROOT_PASSWORD: ${MONGO_PASSWORD}
    volumes:
      - ~/docker/volumes/mongodb:/data/db
    networks:
      - devnet

  redis:
    image: redis:7-alpine
    container_name: redis
    restart: unless-stopped
    ports:
      - "6379:6379"
    networks:
      - devnet

  # ── Management ────────────────────────────────────────
  portainer:
    image: portainer/portainer-ce:latest
    container_name: portainer
    restart: unless-stopped
    ports:
      - "9000:9000"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - devnet
What each service does
| Service | Purpose | URL |
|---|---|---|
| ollama | Runs local AI models with AMD GPU | localhost:11434 |
| open-webui | ChatGPT-like UI for your local models | localhost:3000 |
| postgres | PostgreSQL 16 | localhost:5432 |
| mssql | SQL Server 2022 | localhost:1433 |
| mongodb | MongoDB 7 | localhost:27017 |
| redis | Redis 7 (caching, queues) | localhost:6379 |
| portainer | Docker management web UI | localhost:9000 |
Start the stack:
cd ~/docker/infra
docker compose up -d
Step 5: Pull Your First Local Model
Once Ollama is running, pull a model. A good starting point for code assistance:
docker exec -it ollama ollama pull qwen2.5-coder:7b
Then open localhost:3000 in your browser — Open WebUI will be waiting with your model ready to chat.
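You can also talk to the model directly over Ollama’s HTTP API, which is handy for scripting. A minimal sketch, assuming the qwen2.5-coder:7b model pulled above:

```shell
# Build the request once, then POST it; "stream": false returns one JSON reply
payload='{"model": "qwen2.5-coder:7b", "prompt": "Write a one-line bash command that counts files in a directory.", "stream": false}'
curl -s http://localhost:11434/api/generate -d "$payload" \
  || echo "no response - is the ollama container running?"
```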
Step 6: Per-Project Dev Containers
This is where the real magic happens. Each project gets a .devcontainer/devcontainer.json file. When you open the project in VS Code or Rider, it automatically builds and connects you to an isolated container — with your databases and Ollama reachable by name over devnet.
.NET Project
Create .devcontainer/devcontainer.json inside your .NET project:
{
  "name": "dotnet-app",
  "image": "mcr.microsoft.com/devcontainers/dotnet:9.0",
  "runArgs": ["--network=devnet"],
  "features": {
    "ghcr.io/devcontainers/features/node:1": {},
    "ghcr.io/devcontainers/features/git:1": {}
  },
  "mounts": [
    "source=${localEnv:HOME}/.gitconfig,target=/home/vscode/.gitconfig,type=bind,readonly",
    "source=${localEnv:HOME}/.ssh,target=/home/vscode/.ssh,type=bind,readonly"
  ],
  "remoteEnv": {
    "OLLAMA_HOST": "http://ollama:11434",
    "ConnectionStrings__DefaultConnection": "Server=mssql,1433;Database=mydb;User=sa;Password=YourStr0ng@Password!;TrustServerCertificate=True",
    "ConnectionStrings__Postgres": "Host=postgres;Port=5432;Database=mydb;Username=dev;Password=changeme"
  },
  "postCreateCommand": "dotnet restore",
  "customizations": {
    "vscode": {
      "extensions": ["ms-dotnettools.csdevkit"]
    }
  }
}
Astro Project (with pnpm + Cloudflare)
Note the correct image is javascript-node, not node — devcontainers/node does not exist. The postCreateCommand installs pnpm and Claude Code globally, then installs project dependencies:
{
  "name": "astro-app",
  "image": "mcr.microsoft.com/devcontainers/javascript-node:22",
  "runArgs": ["--network=devnet"],
  "forwardPorts": [4321],
  "mounts": [
    "source=${localEnv:HOME}/.gitconfig,target=/home/node/.gitconfig,type=bind,readonly",
    "source=${localEnv:HOME}/.ssh,target=/home/node/.ssh,type=bind,readonly"
  ],
  "postCreateCommand": "npm install -g pnpm @anthropic-ai/claude-code && pnpm config set store-dir /home/node/.pnpm-store && pnpm install",
  "customizations": {
    "vscode": {
      "extensions": [
        "astro-build.astro-vscode",
        "dbaeumer.vscode-eslint",
        "esbenp.prettier-vscode"
      ]
    }
  }
}
Cloudflare adapter + dev server fix: If your Astro project uses the Cloudflare adapter (output: 'server'), the dev server binds to localhost inside the container by default and VS Code can’t forward it through. Add this to your astro.config.mjs:
export default defineConfig({
  site: 'https://yoursite.com',
  server: {
    host: '0.0.0.0',
    port: 4321
  },
  output: 'server',
  adapter: cloudflare(),
  // ... rest of config
});
This only affects astro dev — it has no effect on your Cloudflare build or deployment.
pnpm store location: By default pnpm creates a .pnpm-store folder in your project directory inside the container, which shows up as thousands of untracked git changes. The pnpm config set store-dir command in postCreateCommand above fixes this by moving the store outside your workspace. If you’ve already hit this issue, add .pnpm-store to your .gitignore:
echo ".pnpm-store" >> .gitignore
React / Node Project
{
  "name": "react-app",
  "image": "mcr.microsoft.com/devcontainers/javascript-node:22",
  "runArgs": ["--network=devnet"],
  "forwardPorts": [5173],
  "mounts": [
    "source=${localEnv:HOME}/.gitconfig,target=/home/node/.gitconfig,type=bind,readonly",
    "source=${localEnv:HOME}/.ssh,target=/home/node/.ssh,type=bind,readonly"
  ],
  "postCreateCommand": "npm install -g @anthropic-ai/claude-code && npm install"
}
Python Project
One caveat: the npm install in postCreateCommand needs Node, which the Python base image doesn’t necessarily ship — adding the node feature guarantees it’s there:
{
  "name": "python-app",
  "image": "mcr.microsoft.com/devcontainers/python:3.12",
  "runArgs": ["--network=devnet"],
  "features": {
    "ghcr.io/devcontainers/features/node:1": {}
  },
  "mounts": [
    "source=${localEnv:HOME}/.gitconfig,target=/home/vscode/.gitconfig,type=bind,readonly",
    "source=${localEnv:HOME}/.ssh,target=/home/vscode/.ssh,type=bind,readonly"
  ],
  "remoteEnv": {
    "OLLAMA_HOST": "http://ollama:11434",
    "MONGO_URI": "mongodb://dev:changeme@mongodb:27017"
  },
  "postCreateCommand": "npm install -g @anthropic-ai/claude-code && pip install -r requirements.txt"
}
Opening in VS Code
Install the Dev Containers extension, then open your project folder and hit Ctrl+Shift+P → Dev Containers: Reopen in Container. VS Code will build the container and drop you straight in.
Opening in Rider
Install the Dev Containers plugin from JetBrains Marketplace. Rider will detect the .devcontainer folder and offer to open the project inside the container automatically.
Step 7: Claude Code in Your Dev Containers
Claude Code is included in the postCreateCommand of each devcontainer above, so it installs automatically when the container first builds. Once inside a container, just run:
claude
It will ask you to authenticate with your Anthropic account on first run. After that it’s ready to use — and because your container is on devnet, Claude Code can reach your databases and Ollama directly. You can point it at your local models for code tasks if you want to keep things fully offline.
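To confirm the wiring from a terminal inside any dev container, every service should answer on its container name over devnet (this assumes the infrastructure stack from Step 4 is up):

```shell
# Ollama answers on its service name, not localhost
curl -s http://ollama:11434/api/tags || echo "ollama not reachable - is the infra stack up?"

# TCP-level reachability for each database (uses bash's /dev/tcp; no extra tools)
for svc in postgres:5432 mssql:1433 mongodb:27017 redis:6379; do
  host=${svc%:*} port=${svc#*:}
  (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null \
    && echo "$svc reachable" || echo "$svc unreachable"
done
```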
Step 8: Gaming-Friendly Resource Limits
To make sure your AI containers don’t eat your RAM mid-game, add resource limits to the Ollama service in your Compose file:
ollama:
  # ... existing config ...
  deploy:
    resources:
      limits:
        memory: 12G
You can also stop the infra stack before a gaming session and bring it back up after:
# Before gaming
docker compose -f ~/docker/infra/docker-compose.yml stop ollama open-webui
# After gaming
docker compose -f ~/docker/infra/docker-compose.yml start ollama open-webui
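If you do this often, a small wrapper in your ~/.bashrc saves the typing — a convenience helper of my own, not a Docker feature:

```shell
# infra <compose-command> - run any compose command against the infra stack
infra() {
  docker compose -f "$HOME/docker/infra/docker-compose.yml" "$@"
}

# Usage:
#   infra stop ollama open-webui    # before gaming
#   infra start ollama open-webui   # after gaming
#   infra ps                        # what's running?
```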
The Result
Your CachyOS host has nothing on it except Docker. Every language runtime, every database, every AI tool is containerised. Your projects are fully isolated from each other. You connect via Dev Containers in Rider or VS Code and it feels completely native — git works, SSH works, your connection strings are pre-wired.
And when it’s time to game, you’re one command away from freeing up those resources.
Quick Reference
| What | Command |
|---|---|
| Start everything | cd ~/docker/infra && docker compose up -d |
| Stop everything | cd ~/docker/infra && docker compose down |
| Pull a new model | docker exec -it ollama ollama pull <model> |
| View all containers | docker ps or open Portainer at localhost:9000 |
| Rebuild a dev container | VS Code: Dev Containers: Rebuild Container |
Found this useful? I write about developer tooling, .NET, and Linux on somvilla.com.