Host Your Own Private GPT — Docker + Ollama + OpenWebUI (No Subscription Needed)

In this video, we walk you through how to host a private GPT-style AI on your own machine using Docker Desktop, Ollama, and Open WebUI. No cloud, no subscription fees, and no external restrictions: just a fully local, private AI setup that you control.

Why Use a Local AI Setup?

  • Privacy & control: Your data stays on your machine; nothing is sent to external servers.
  • No subscription, no pay-per-use: Once everything is installed, you can run models as often as you like without ongoing costs. 
  • Flexibility: Run different open-source models (small or large), choose CPU or GPU mode depending on hardware, and customize as needed. 
  • Offline capability: Once your models are downloaded, you don’t need an internet connection to use your AI.

What Components Make It Work

  • Ollama: Acts as the “engine” that runs large language models (LLMs) locally. It handles model download, loading, and inference. 
  • Open WebUI: A browser-based interface (a “ChatGPT-like UI”) that connects to Ollama, letting you chat with the models, switch models, manage settings — no terminal needed. 
  • Docker Desktop / Docker Compose: Containerization makes setup consistent and portable: Ollama and Open WebUI run in isolated containers, with persistent storage for models/data.

This combination gives you a powerful, self-hosted AI stack all on your terms.
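The components above can also be wired together declaratively with Docker Compose instead of starting each container by hand. A minimal sketch of a `docker-compose.yml` (the service names, volume names, and port mappings here are assumptions — adjust them to your setup):

```yaml
# Sketch only: Ollama serves models on 11434; Open WebUI serves the browser UI on 8080.
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama:/root/.ollama        # persistent storage for downloaded models

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "8080:8080"
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434   # point the UI at the Ollama container
    volumes:
      - open-webui:/app/backend/data # persistent storage for chats and settings
    depends_on:
      - ollama

volumes:
  ollama:
  open-webui:
```

With a file like this in place, `docker compose up -d` brings up both containers together, which mirrors the manual steps below.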

Step-by-Step Setup Guide [Watch the video walkthrough]

Here’s a simplified version; you can adapt based on your OS / hardware:


    1. Install Docker Desktop (https://docs.docker.com/desktop/setup/install/windows-install/); this ensures your system can run containers.
    2. Pull the container images for Ollama and Open WebUI.
      1. Ollama can be pulled directly using Docker Desktop’s search function: pull the image ollama/ollama.
      2. For Open WebUI, use the Docker Desktop terminal and run:
        1. docker pull ghcr.io/open-webui/open-webui:main
    3. Start both containers by clicking Run.
      1. Provide the Container Name and Host port as shown in the video.
    4. Access Open WebUI: open your browser at http://localhost:8080. You’ll see a UI where you can manage models.
    5. Create an Admin user account by providing Name, Email, and Password.
    6. Visit https://ollama.com/search to select a model that fits your machine.
        1. Smaller models for lower-spec machines.
        2. Larger models (if you have enough RAM/GPU) for better output quality.
        3. Copy the exact model name.
    7. Pull and run models from Ollama using the Open WebUI Admin Panel.
      • Click your user account icon and navigate to Admin Panel.
      • Select Connections under the Settings tab.
      • Under Manage Ollama API Connections, click the Download (Pull) icon.
      • Enter your model name under the option "Pull a model from Ollama.com".
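If you prefer scripting to the Admin Panel, Ollama also exposes a local REST endpoint for pulling models. A minimal Python sketch using only the standard library (the port assumes the default 11434 mapping, and "llama3.2" is a placeholder model name):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # default Ollama port (assumption: default mapping)

def pull_model(name: str) -> urllib.request.Request:
    """Build the request that asks Ollama to pull a model (POST /api/pull)."""
    body = json.dumps({"name": name, "stream": False}).encode()
    return urllib.request.Request(
        f"{OLLAMA_URL}/api/pull",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Once the Ollama container is running, send the request with:
# urllib.request.urlopen(pull_model("llama3.2"))
```

This does the same thing as the "Pull a model from Ollama.com" option in the UI, just from a script.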

    Create a new chat and use your AI: test prompts, build applications, or integrate it with other local tools.
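For integration with other local tools, the same Ollama server can be queried programmatically. A hedged sketch using only the Python standard library (the port and the "llama3.2" model name are assumptions):

```python
import json
import urllib.request

def build_generate_request(prompt: str, model: str = "llama3.2") -> urllib.request.Request:
    """Build a one-shot completion request for Ollama (POST /api/generate)."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def ask(prompt: str, model: str = "llama3.2") -> str:
    """Send the prompt to the local model and return its text response."""
    with urllib.request.urlopen(build_generate_request(prompt, model)) as resp:
        return json.loads(resp.read())["response"]

# Requires the stack to be running and the model already pulled:
# print(ask("Summarize Docker in one sentence."))
```

Because everything listens on localhost, any script or tool on your machine can use the model this way without touching the internet.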
