I pip-installed LangChain and Accidentally Triggered EU AI Act Compliance
Source: Dev.to
A Surprise from the EU AI Act
Last month I was reviewing my startup’s requirements.txt before a deploy.
Standard stuff: FastAPI, SQLAlchemy, LangChain, some utility packages.
Then I read EU AI Act Article 6 and realized that the line langchain==0.2.14 makes my app an “AI system” under EU law—not a theoretical maybe, but a concrete legal classification with obligations attached.
If your Python app imports OpenAI, HuggingFace Transformers, LangChain, or any of a dozen AI frameworks—and your users include anyone in the EU—you’re probably in the same boat. The EU AI Act doesn’t care whether you think you’re building AI; it cares about what your code does. Article 3 defines an AI system as any machine‑based system that generates outputs like predictions, recommendations, or decisions.
My discovery
I spent a Friday afternoon grepping through my project and found three frameworks I’d forgotten about:
# main.py — the obvious one
from langchain_openai import ChatOpenAI
# utils/embeddings.py — forgot this existed
from sentence_transformers import SentenceTransformer
# scripts/analyze.py — "temporary" script from 4 months ago
import openai
Three files. Three separate compliance obligations I didn’t know I had.
Why the deadline matters
Most EU AI Act provisions take full effect in August 2026. After that, non‑compliance penalties can reach €35 million or 7 % of global annual turnover, whichever is higher.
- Startup with €500 K ARR → 7 % of turnover works out to €35 K
- Series A company with €5 M ARR → 7 % works out to €350 K
Those turnover-based figures are only one arm of the formula, though: because the cap is “whichever is higher,” the €35 million ceiling applies even to small companies. The regulation scales with you, and “I didn’t know” isn’t a defense.
Quick self‑assessment checklist
Dependency scan
Open your requirements.txt or pyproject.toml and look for any of these direct AI framework dependencies:
openai
anthropic
transformers # HuggingFace
torch / torchvision # PyTorch
tensorflow
langchain / langchain-core / langchain-openai
google-generativeai # Gemini
mistralai
cohere
llama-index
replicate
groq
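If you want to automate that pass, a few lines of Python will flag these packages in a requirements.txt. This is a rough sketch of my own (the scan_requirements helper and the package set are hypothetical, mirroring the checklist above), not a full dependency audit:

```python
# Hypothetical sketch: flag known AI frameworks in requirements.txt content.
# The package set mirrors the checklist above; extend it for your stack.
import re

AI_PACKAGES = {
    "openai", "anthropic", "transformers", "torch", "torchvision",
    "tensorflow", "langchain", "langchain-core", "langchain-openai",
    "google-generativeai", "mistralai", "cohere", "llama-index",
    "replicate", "groq", "sentence-transformers",
}

def scan_requirements(text: str) -> list[str]:
    """Return AI framework names found in requirements.txt content."""
    hits = []
    for line in text.splitlines():
        line = line.split("#")[0].strip()   # drop inline comments
        if not line:
            continue
        # Package name ends at the first version/extras/marker specifier.
        name = re.split(r"[=<>!~\[;]", line, maxsplit=1)[0].strip().lower()
        if name in AI_PACKAGES:
            hits.append(name)
    return hits

print(scan_requirements("fastapi\nlangchain==0.2.14\nopenai>=1.0  # llm\n"))
# → ['langchain', 'openai']
```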
Source‑code grep
grep -rn "from openai import\|from langchain\|from transformers import\|import torch\|from anthropic import" --include="*.py" .
If anything matches, your project uses an AI framework. That doesn’t automatically make you high‑risk, but you need to assess your risk category under the Act.
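Note that the grep above misses aliased forms like import openai as oa. For something sturdier, Python’s ast module can walk actual import statements instead of matching text; this is a hypothetical sketch (ai_imports and the module set are mine, not part of any library):

```python
# Hypothetical sketch: find AI imports via the AST instead of text matching,
# so aliases like "import openai as oa" are still caught.
import ast

AI_MODULES = {"openai", "anthropic", "transformers", "torch", "tensorflow",
              "langchain", "langchain_core", "langchain_openai",
              "sentence_transformers"}

def ai_imports(source: str) -> set[str]:
    """Return AI module roots imported anywhere in a Python source string."""
    found = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            names = [node.module]
        else:
            continue
        for name in names:
            root = name.split(".")[0]       # e.g. torch.nn -> torch
            if root in AI_MODULES:
                found.add(root)
    return found

print(ai_imports("from langchain_openai import ChatOpenAI\nimport openai as oa"))
# → {'langchain_openai', 'openai'} (set order may vary)
```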
Going beyond manual grepping
I built a scanner that checks both dependency files and source code for 16 AI frameworks. It catches things manual grep often misses:
- Transitive dependencies – e.g., your app doesn’t import openai directly, but LangChain does.
- Multiple entry points – a Jupyter notebook in notebooks/ that imports transformers.
- Cloud provider SDKs – boto3 with Bedrock calls, azure-ai-openai, google-cloud-aiplatform; these count as AI framework usage even though the package name isn’t obvious.
Running it on my project took about 30 seconds and found the three frameworks I mentioned, plus a fourth one (sentence‑transformers via a transitive dependency) I’d completely missed.
Practical compliance sequence
- List what you found – Which frameworks, which files, what they do.
- Classify your risk level – Most startup use cases (chatbots, content generation, search) fall under limited risk or minimal risk. High‑risk is specific: credit scoring, hiring, medical devices, law enforcement.
- Document it – Even minimal‑risk systems need basic transparency. If your app generates AI content, users should know.
- Set a calendar reminder – August 2026. Aim to have compliance sorted before then, not the week of.
The actual compliance work for most startups is a few days of documentation, not a rewrite. The hard part is knowing you need to do it at all.
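Steps 1 and 3 can be as lightweight as a small inventory file committed next to requirements.txt. A hypothetical sketch (inventory_md and the example findings are mine, illustrating the shape of the record, not a legal template):

```python
# Hypothetical sketch: turn scan findings into a markdown inventory
# you can commit alongside requirements.txt (steps 1 and 3 above).
findings = [
    {"framework": "langchain-openai", "file": "main.py",
     "purpose": "customer support chatbot"},
    {"framework": "sentence-transformers", "file": "utils/embeddings.py",
     "purpose": "semantic search over docs"},
]

def inventory_md(findings: list[dict]) -> str:
    """Render findings as a small markdown table."""
    lines = ["# AI framework inventory", "",
             "| Framework | File | Purpose |",
             "|---|---|---|"]
    for f in findings:
        lines.append(f"| {f['framework']} | {f['file']} | {f['purpose']} |")
    return "\n".join(lines)

print(inventory_md(findings))
```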
Open‑source scanner
I open‑sourced the scanner as an MCP server you can run locally. Point it at your project directory, and it will:
- Scan your dependencies and source code.
- Report which frameworks it found.
- Suggest a risk category.
No signup, no API key for the free tier. It takes about 5 minutes to set up and run.
Call for stories
I’m building compliance tooling for the EU AI Act because I needed it myself first. If you’ve gone through a similar “wait, this applies to me?” moment, I’d genuinely like to hear about it in the comments.