Creating an AI Discord Bot with Ollama

Published: December 14, 2025 at 03:09 AM EST
4 min read
Source: Dev.to

Overview

In this tutorial you’ll learn how to create an AI‑powered Discord bot from scratch. By the end you’ll have a bot that can:

  • Respond to messages using AI
  • Have conversations with users

No prior experience is required, though basic knowledge of Python is helpful.

What you will learn

  • How to create a Discord bot application
  • How to install and use Ollama (fully local)
  • How to write Python code to connect them
  • How to make your bot talk like a real person
  • How to run the bot 24/7

What you will need

  • Python 3.8+ – programming language
  • Discord account – to create the bot
  • Ollama – free local AI
  • Text editor / IDE – of your choice

Part 1: Setting up your Discord bot

  1. Go to the Discord Developer Portal.
  2. Click New Application, give it a name, and create it.
  3. Navigate to the Bot tab and enable the following privileged gateway intents:
    • Presence Intent
    • Server Members Intent
    • Message Content Intent
  4. Click Reset Token, copy the token, and store it securely.
  5. Invite the bot to your server:
    • Go to OAuth2 → URL Generator
    • Scopes: bot, applications.commands
    • Permissions:
      • Send Messages
      • Send Messages in Threads
      • Read Message History
      • View Channels
      • Use Slash Commands
      • Add Reactions
    • Copy the generated URL, select your server, and authorize the bot.
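
For reference, the URL produced by the generator has roughly this shape (the client ID and permissions value below are placeholders; use the exact URL Discord generates for you):

https://discord.com/oauth2/authorize?client_id=YOUR_CLIENT_ID&permissions=PERMISSIONS_INTEGER&scope=bot+applications.commands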

The bot will appear offline in your server until you run it.

Part 2: Installing Ollama

Ollama runs AI models locally, offering free, private, and fast inference.

macOS

brew install ollama

Linux

curl -fsSL https://ollama.ai/install.sh | sh

Windows

Download the installer from the Ollama website and run it.

Start Ollama:

ollama serve

Keep this terminal open; the bot will communicate with Ollama through it.

Download a model

ollama pull llama3.1

The llama3.1 model is free, open‑source, and well‑suited for conversations.
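
Once the download finishes, you can confirm the model is available locally:

ollama list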

Verify the installation

ollama run llama3.1

You should see a prompt where you can type and receive responses.
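
You can also query the HTTP API directly, which is exactly how the bot will talk to Ollama later. Assuming the server is on its default port (11434) and you pulled llama3.1:

curl http://localhost:11434/api/generate -d '{"model": "llama3.1", "prompt": "Say hello in one short sentence.", "stream": false}'

The reply is a JSON object whose response field contains the generated text.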

Part 3: Installing Python dependencies

Create and activate a virtual environment:

python3 -m venv env
source env/bin/activate   # On Windows use `env\Scripts\activate`

Install the required packages:

pip install py-cord requests python-dotenv

  • py‑cord – Discord bot library
  • requests – HTTP client for talking to Ollama
  • python‑dotenv – Loads environment variables (e.g., the Discord token)
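
If you prefer tracking dependencies in a file, a minimal requirements.txt could simply list the three packages (pin versions yourself if you want reproducible installs):

py-cord
requests
python-dotenv

Then install them with pip install -r requirements.txt.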

Part 4: Preparing the project

  1. Create a .env file to store your Discord token:

    vim .env
  2. Add the following line (replace your_token_here with the token you copied earlier):

    DISCORD_TOKEN="your_token_here"
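
The token grants full control of your bot, so keep .env out of version control. If the project lives in a Git repository, a minimal .gitignore along these lines helps:

.env
env/
__pycache__/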

Part 5: The complete bot code

Create a file named bot.py and paste the code below.

# ============================================
# IMPORTS
# ============================================
import discord
from discord.ext import commands
import requests
import os
from dotenv import load_dotenv

# Load environment variables
load_dotenv()

# ============================================
# CONFIGURATION
# ============================================
TOKEN = os.getenv("DISCORD_TOKEN")
OLLAMA_URL = "http://localhost:11434/api/generate"
OLLAMA_MODEL = "llama3.1"
PREFIX = "!"

# ============================================
# BOT SETUP
# ============================================
intents = discord.Intents.all()
bot = commands.Bot(command_prefix=PREFIX, intents=intents)

# ============================================
# HELPER: Talk to Ollama AI
# ============================================
def ask_ai(prompt, temperature=0.9):
    """
    Send a prompt to Ollama and get the AI response.
    """
    try:
        payload = {
            "model": OLLAMA_MODEL,
            "prompt": prompt,
            "stream": False,
            "options": {
                "temperature": temperature,
                "num_predict": 200
            }
        }
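        # Note: requests is synchronous, so this call briefly blocks the
        # bot's event loop while Ollama generates; fine for a small bot.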
        response = requests.post(OLLAMA_URL, json=payload, timeout=30)

        if response.status_code == 200:
            data = response.json()
            return data.get("response", "").strip()
        else:
            return "Sorry, I had trouble thinking of a response!"
    except requests.exceptions.ConnectionError:
        return "Error: Ollama is not running! Start it with 'ollama serve'"
    except Exception as e:
        return f"Error: {str(e)}"

# ============================================
# EVENTS
# ============================================
@bot.event
async def on_ready():
    """Called when the bot successfully connects to Discord."""
    print(f"✅ Bot is online as {bot.user.name}")
    print(f"✅ Connected to {len(bot.guilds)} server(s)")
    print(f"✅ Prefix: {PREFIX}")
    print("Ready to chat!")

@bot.event
async def on_message(message):
    """Called whenever a message is sent in a channel the bot can see."""
    # Ignore messages from the bot itself or other bots
    if message.author == bot.user or message.author.bot:
        return

    # Respond when the bot is mentioned
    if bot.user.mentioned_in(message):
        # Strip the mention itself so only the user's text goes to the model
        user_message = message.content.replace(f'<@{bot.user.id}>', '').replace(f'<@!{bot.user.id}>', '').strip()
        if not user_message:
            return

        # Show typing indicator while processing
        async with message.channel.typing():
            reply = ask_ai(user_message)
            await message.reply(reply)

    # Ensure commands still work
    await bot.process_commands(message)

# ============================================
# RUN THE BOT
# ============================================
if __name__ == "__main__":
    bot.run(TOKEN)

Running the bot

  1. Ensure Ollama is running (ollama serve).

  2. Activate your virtual environment.

  3. Execute:

    python bot.py

The bot should come online, and you can mention it in Discord to start a conversation.
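
If everything is wired up, the console shows the startup messages printed by on_ready, along these lines (the bot name and server count will reflect your own setup):

✅ Bot is online as YourBotName
✅ Connected to 1 server(s)
✅ Prefix: !
Ready to chat!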

You now have a fully functional AI Discord bot powered by a local Ollama model. Enjoy building and customizing it further!
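
One natural next step (not covered by the code above; the helper name build_prompt and the history size are purely illustrative) is to give the bot short-term memory by keeping a small per-channel history and prepending it to each prompt:

from collections import defaultdict, deque

# Keep the last few exchanges per channel (the size is arbitrary)
history = defaultdict(lambda: deque(maxlen=6))

def build_prompt(channel_id, user_message):
    """Prepend recent messages so the model sees some context."""
    lines = list(history[channel_id]) + [f"User: {user_message}", "Bot:"]
    return "\n".join(lines)

# Inside on_message, replace the ask_ai(user_message) call with:
#     prompt = build_prompt(message.channel.id, user_message)
#     reply = ask_ai(prompt)
#     history[message.channel.id].append(f"User: {user_message}")
#     history[message.channel.id].append(f"Bot: {reply}")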
