<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Ai on Luiz Felipe F M Costa</title><link>https://thenets.org/tags/ai/</link><description>Recent content in Ai on Luiz Felipe F M Costa</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Wed, 28 May 2025 08:02:31 +0000</lastBuildDate><atom:link href="https://thenets.org/tags/ai/index.xml" rel="self" type="application/rss+xml"/><item><title>How to install Ollama + Open-WebUI on Fedora/RHEL using podman Quadlets</title><link>https://thenets.org/posts/how-to-install-ollama-open-webui-on-fedora-using-podman-quadlets/</link><pubDate>Wed, 28 May 2025 08:02:31 +0000</pubDate><guid>https://thenets.org/posts/how-to-install-ollama-open-webui-on-fedora-using-podman-quadlets/</guid><description>&lt;p&gt;Ollama is a powerful tool for running large language models locally. This guide walks you through setting up Ollama on Fedora/RHEL using Podman Quadlets, so that it runs as a systemd service with persistent storage and GPU support.&lt;/p&gt;
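&lt;p&gt;As a rough sketch of where this guide ends up, a Quadlet is just a unit file that podman translates into a systemd service. A minimal &lt;code&gt;ollama.container&lt;/code&gt; unit under &lt;code&gt;~/.config/containers/systemd/&lt;/code&gt; might look like the following (the image tag, volume name, and port are illustrative assumptions, not the exact values used later):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-ini"&gt;# ~/.config/containers/systemd/ollama.container (illustrative sketch)
[Container]
ContainerName=ollama
Image=docker.io/ollama/ollama:latest
# Named volume keeps downloaded models across restarts
Volume=ollama:/root/.ollama
PublishPort=11434:11434
# Pass the GPU through via CDI, same as the check below
AddDevice=nvidia.com/gpu=all
SecurityLabelDisable=true

[Service]
Restart=always

[Install]
WantedBy=default.target
&lt;/code&gt;&lt;/pre&gt;
&lt;figcaption class="code-caption"&gt;Example Quadlet unit (illustrative)&lt;/figcaption&gt;
&lt;p&gt;After a &lt;code&gt;systemctl --user daemon-reload&lt;/code&gt;, the unit can be started like any other service with &lt;code&gt;systemctl --user start ollama&lt;/code&gt;.&lt;/p&gt;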
&lt;h2 id="requirements"&gt;Requirements&lt;/h2&gt;
&lt;p&gt;Make sure you have full GPU support for podman containers:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-bash"&gt;# This should print something like
# &amp;gt; GPU 0: NVIDIA GeForce RTX 3070 (UUID: GPU-...)
podman run --rm --security-opt=label=disable \
 --device nvidia.com/gpu=all \
 ubi9 \
 nvidia-smi -L
&lt;/code&gt;&lt;/pre&gt;
&lt;figcaption class="code-caption"&gt;Check podman GPU support&lt;/figcaption&gt;
&lt;p&gt;If you have an NVIDIA GPU and the command above didn&amp;rsquo;t work, the following post explains how to fix it:&lt;/p&gt;</description></item></channel></rss>