I Built a Mini ChatGPT in Just 10 Lines Using LangChain (Part 1)

Published: December 3, 2025 at 12:36 AM EST
2 min read
Source: Dev.to

Everyone wants to build an AI assistant—a chatbot, personal agent, support bot, or micro‑GPT.
Beginners often think they need complex architectures, fine‑tuned models, heavy GPUs, RAG pipelines, vector databases, or advanced prompt engineering before they can start. The truth is you can build a functioning conversational AI—a mini ChatGPT—in less than 10 lines of Python using LangChain. It remembers context, responds smoothly, and serves as a solid foundation for any real AI application.

What We’re Building (Mini ChatGPT)

The mini chatbot supports:

  • Conversational responses
  • Automatic memory
  • Context retention
  • Continuous interaction
  • Clean and expandable architecture
  • Runs entirely from a single Python file

# pip install langchain openai
# Note: this uses the classic LangChain API; newer releases move the
# OpenAI class into the langchain-openai package.
from langchain.llms import OpenAI
from langchain.chains import ConversationChain

llm = OpenAI(openai_api_key="YOUR_API_KEY")  # the language model
chat = ConversationChain(llm=llm)            # wraps the LLM with memory

while True:
    message = input("You: ")
    print("Bot:", chat.run(message))         # input -> LLM -> memory -> output

Example Interaction

You: hey there
Bot: Hello! How can I help you today?

You: remember my name is Ashish
Bot: Got it! Nice to meet you, Ashish.

You: what's my name?
Bot: You just told me your name is Ashish.

The bot understands context and stores memory without any explicit state‑machine code.

Components

Component           Role
-----------------   -----------------------------------------
OpenAI()            The language model generating responses
ConversationChain   Handles dialog flow and memory
while loop          Keeps the interaction alive
chat.run()          Passes input → LLM → memory → output

No database, embeddings, vector store, or fine‑tuning is required—just clean conversational AI.
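Under the hood, the chain's behavior can be pictured as a loop that appends each exchange to a transcript and feeds the whole transcript back to the model on the next turn. A minimal plain-Python sketch of that idea (not LangChain's actual internals; `fake_llm` is a stub standing in for the real model call):

```python
history = []  # the chain's "memory": every prior exchange

def fake_llm(prompt):
    # Stand-in for the real model; a real LLM would generate a reply here.
    return f"(reply to: {prompt.splitlines()[-1]})"

def run(message):
    # Build the prompt from the full history plus the new message...
    prompt = "\n".join(history + [f"Human: {message}"])
    reply = fake_llm(prompt)
    # ...then store both sides of the exchange for the next turn.
    history.append(f"Human: {message}")
    history.append(f"AI: {reply}")
    return reply

run("my name is Ashish")
print(run("what's my name?"))  # the prompt now contains the earlier turn
```

Because the entire transcript rides along in every prompt, the model can "remember" your name without any explicit state-machine code.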

Extending the Mini ChatGPT

Voice

  • Speech‑to‑text: Whisper
  • Text‑to‑speech: gTTS, ElevenLabs
  • Front‑ends: Streamlit, FastAPI, or a React UI

Agents

  • Frameworks: LangGraph, custom Tools
  • Enable multi‑step reasoning and tool use

Custom Personality

  • Prompt templates
  • System messages
  • LoRA fine‑tuning for style adjustments

These extensions let the 10‑line foundation evolve into a full AI product.

Scaling Up

Once the basic chatbot works, you can add:

  • Memory backends: ConversationBufferMemory, Redis, SQLite, etc.
  • Retrieval: embeddings with FAISS or ChromaDB, and a RetrievalQA chain

Complexity should be added after functionality is verified.
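To preview what the retrieval step adds, here is a toy sketch of the retrieve-then-answer pattern that a RetrievalQA chain automates. There are no embeddings or vector store here, just word overlap; a real system would compare embedding vectors with FAISS or ChromaDB instead. The document snippets are invented for illustration.

```python
DOCS = [
    "LangChain chains compose LLM calls with memory and tools.",
    "FAISS is a library for fast vector similarity search.",
    "Redis can serve as a persistent chat-memory backend.",
]

def retrieve(question, docs):
    # Score each doc by word overlap with the question; real retrieval
    # would rank by embedding similarity instead.
    q = set(question.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

context = retrieve("which library does vector similarity search?", DOCS)
print(context)  # the snippet handed to the LLM as grounding for its answer
```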

Next Steps (Part 2)

The upcoming part will show how to turn this Mini ChatGPT into a PDF Q&A bot using Retrieval‑Augmented Generation (RAG).

If you’d like that tutorial, comment “PDF BOT”. Let me know if you want versions that:

  • Work on WhatsApp or Telegram
  • Store memory in a database
  • Use local open‑source LLMs
  • Have a web UI
  • Become voice‑enabled

I’ll write the next version for you.
