How to install Ollama + Open-WebUI on Fedora/RHEL using Podman Quadlets

May 28, 2025

Ollama is a powerful tool for running large language models locally. This guide walks you through setting up Ollama and Open-WebUI on Fedora/RHEL using Podman Quadlets, ensuring they run as systemd services with persistent storage and GPU support.

Requirements

Make sure you have full GPU support for podman containers:

# This must return something like
# > GPU 0: NVIDIA GeForce RTX 3070 (UUID: GPU-...)
podman run --rm --security-opt=label=disable \
  --device nvidia.com/gpu=all \
  ubi9 \
    nvidia-smi -L

Check podman GPU support

If you have an NVIDIA GPU and the command above didn't work, check the following post to fix it:

How to use NVIDIA GPU on podman (RHEL 9 / Fedora 42)
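If the check fails, the usual fix (detailed in the post above) is to install the NVIDIA Container Toolkit and generate a CDI specification for your GPU. Roughly, and assuming the nvidia-container-toolkit package is already installed:

# Generate the CDI spec that podman reads
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml

# List the GPU devices podman can now address
nvidia-ctk cdi list

Generate the CDI spec (a sketch; see the post above for the full steps)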

Configuration File

Quadlet picks up container definitions from ~/.config/containers/systemd/, so create that directory first:

mkdir -p ~/.config/containers/systemd/

Create the systemd container config dir

[Unit]
Description=Ollama container
After=local-fs.target

[Service]
# Create the main directories
ExecStartPre=/usr/bin/mkdir -p %h/containers/ollama
# Always pull the latest version on boot
ExecStartPre=podman pull docker.io/ollama/ollama:latest
# Create a dedicated network
ExecStartPre=-podman network create ollama
Restart=always

[Container]
ContainerName=ollama
HostName=ollama
Image=docker.io/ollama/ollama:latest
Volume=%h/containers/ollama:/root/.ollama:z
PublishPort=11434:11434
PodmanArgs=--security-opt=label=disable --device nvidia.com/gpu=all
Network=podman
Network=ollama

[Install]
# Start by default on boot
WantedBy=multi-user.target default.target

$HOME/.config/containers/systemd/ollama.container
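Quadlet turns this file into a regular systemd unit during daemon-reload. If you want to sanity-check the generated unit before reloading, you can run the generator in dry-run mode (the path below is the usual one on Fedora; it may differ on other distributions):

/usr/lib/systemd/system-generators/podman-system-generator --user --dryrun

Optional: preview the generated unit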

[Unit]
Description=OpenWebUI container
After=local-fs.target

[Service]
# Create the main directories
ExecStartPre=/usr/bin/mkdir -p %h/containers/openwebui
# Always pull the latest version on boot
ExecStartPre=podman pull ghcr.io/open-webui/open-webui:main
Restart=always

[Container]
ContainerName=openwebui
HostName=openwebui
Image=ghcr.io/open-webui/open-webui:main
Volume=%h/containers/openwebui:/app/backend/data:z
PublishPort=3000:8080
Network=podman
Network=ollama
Environment=OLLAMA_BASE_URL=http://ollama:11434

[Install]
# Start by default on boot
WantedBy=multi-user.target default.target

$HOME/.config/containers/systemd/openwebui.container
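A side note: instead of re-pulling the image on every start with ExecStartPre, newer Podman releases let a Quadlet container opt into podman's auto-update mechanism. A minimal sketch, assuming your Podman version supports the AutoUpdate key:

# In the [Container] section of the quadlet file:
AutoUpdate=registry

# Then enable the periodic update check:
systemctl --user enable --now podman-auto-update.timer

Optional: auto-update instead of pulling on every boot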

# DO NOT RUN AS ROOT

# Reload configs so Quadlet generates the systemd units
systemctl --user daemon-reload

# Quadlet units are enabled by their [Install] section,
# so there is nothing to `systemctl enable`. Just start them:
systemctl --user start ollama.service
systemctl --user start openwebui.service

# Check if things are working
systemctl --user status ollama.service
systemctl --user status openwebui.service

Reload systemd and start the services
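One caveat with rootless services: user units normally run only while you are logged in. If you want the containers to come up at boot without an active session, enable lingering for your user:

loginctl enable-linger $USER

Enable lingering so the services start at boot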

Install the ollama CLI

I like to use Homebrew to install and manage all my CLIs, but feel free to follow any other method if you prefer.

# Install homebrew if you don't have it installed
# https://brew.sh
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

# Make sure to complete the post-installation steps. Read the installer output carefully.

Install brew.sh

# Install ollama
brew install ollama

# Check if it is available
ollama help

Install ollama CLI
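The brew-installed binary is only a client here; the server is the container we started earlier. By default the CLI talks to 127.0.0.1:11434, which matches the published port, so no extra configuration should be needed. If you ever bind the server elsewhere, point the client at it with the OLLAMA_HOST environment variable:

# Only needed when the server is not on the default address
export OLLAMA_HOST=127.0.0.1:11434
ollama list

Point the CLI at the containerized server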

Now, the final step! Let's check if it works!

# Final test!
ollama run gemma3:1b

# If it works, the model will be pulled and you will get
# an interactive prompt to type into!

Let's run a model
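You can also poke the HTTP API directly, which is handy for telling client problems apart from server problems:

# The root endpoint answers with "Ollama is running"
curl http://localhost:11434

# Ask the server for its version
curl http://localhost:11434/api/version

Talk to the Ollama API directly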

Check the Open-WebUI

Everything should already be up and running. Open http://localhost:3000 in your browser to confirm the web interface is available:
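If you prefer the terminal, a quick probe works too:

# Any HTTP response here means the server is up
curl -I http://localhost:3000

Probe the Open-WebUI port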

Conclusion

There are many ways to get the Ollama server up and running. Here, I covered how to do it using Podman Quadlets.

The biggest advantage is the power of systemd to manage your server application: you can extend the configuration files to fit your needs. On top of that, you get the security controls from podman and systemd out of the box, even though you need to disable SELinux labeling to allow GPU access.

Now, your system is ready to start working with Ollama and Open-WebUI.


Luiz Costa

I am a senior software engineer at Red Hat / Ansible. I love automation tools, games, and coffee. I am also an active contributor to open-source projects.