I Built an Open-Source YouTube Scraper for Python, No API Key Needed
Source: Dev.to
Every time I needed YouTube data for a project (search results, channel videos, transcripts), I ended up installing three different libraries, none of which played well together or supported async.
So I built tubescrape: a single Python package that handles search, channels, transcripts, and playlists by talking directly to YouTube’s internal InnerTube API. No API key, no OAuth, no quotas. Just pip install tubescrape.
What It Does
from tubescrape import YouTube

with YouTube() as yt:
    # Search with filters
    results = yt.search('python tutorial', max_results=5, type='video', duration='long')

    # Browse a channel
    videos = yt.get_channel_videos('@lexfridman', max_results=10)
    shorts = yt.get_channel_shorts('@mkbhd')

    # Get a transcript and save it as subtitles
    transcript = yt.get_transcript('dQw4w9WgXcQ')
    transcript.save('subtitles.srt')

    # Translate a transcript
    transcript = yt.get_transcript('dQw4w9WgXcQ', translate_to='es')

    # Scrape a playlist
    playlist = yt.get_playlist('PLrAXtmErZgOeiKm4sgNOknGvNjby9efdf')
Every result is a frozen dataclass with .to_dict() for instant JSON serialization.
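The pattern behind that is plain stdlib: a frozen dataclass whose `to_dict()` delegates to `dataclasses.asdict`. A minimal sketch, where the `Video` type and its fields are illustrative rather than tubescrape's actual schema:

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class Video:
    """Illustrative result type: immutable, hashable, JSON-ready."""
    video_id: str
    title: str
    duration_seconds: int

    def to_dict(self) -> dict:
        # asdict() recurses into nested dataclasses, so the result
        # can be passed straight to json.dumps().
        return asdict(self)

video = Video(video_id='dQw4w9WgXcQ', title='Example', duration_seconds=212)
print(video.to_dict())
```

Freezing the dataclass keeps results hashable and safe to share across tasks, which matters once the async variants start fanning out requests.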
Three Interfaces
Python SDK
import json
from tubescrape import YouTube
yt = YouTube()
search = yt.search('python programming', max_results=10)
print(json.dumps(search.to_dict(), indent=4))
CLI
pip install "tubescrape[cli]"
tubescrape search "python" -n 5
tubescrape transcript dQw4w9WgXcQ --format srt --save output.srt
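For context on the `--format srt` output: SRT is just numbered blocks of `HH:MM:SS,mmm --> HH:MM:SS,mmm` timestamps followed by text. A rough sketch of that formatting, assuming transcript segments arrive as `(start, duration, text)` tuples (illustrative only, not tubescrape's internal writer):

```python
def srt_timestamp(seconds: float) -> str:
    """Format a time in seconds as an SRT timestamp, HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f'{h:02d}:{m:02d}:{s:02d},{ms:03d}'

def to_srt(segments) -> str:
    """Render (start, duration, text) segments as an SRT document."""
    blocks = []
    for i, (start, duration, text) in enumerate(segments, start=1):
        blocks.append(
            f'{i}\n{srt_timestamp(start)} --> {srt_timestamp(start + duration)}\n{text}'
        )
    return '\n\n'.join(blocks) + '\n'

print(to_srt([(0.0, 2.5, 'Never gonna give you up')]))
```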
REST API
pip install "tubescrape[api]"
tubescrape serve --port 8000
# Swagger docs at http://localhost:8000/docs
Async Support
import asyncio

from tubescrape import YouTube
async def main():
    async with YouTube() as yt:
        r1, r2, r3 = await asyncio.gather(
            yt.asearch('python'),
            yt.asearch('javascript'),
            yt.asearch('rust'),
        )

asyncio.run(main())
All methods have async variants, making it easy to run multiple requests concurrently in FastAPI, Discord bots, or any async application.
How It Works
tubescrape uses YouTube’s InnerTube API—the same internal API that the YouTube website and mobile apps use. It avoids HTML scraping, Selenium, or headless browsers, relying instead on structured HTTP requests and JSON responses. This approach is more reliable because the API response format is far more stable than the DOM.
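To give a feel for the general shape of an InnerTube call (a hedged sketch, not tubescrape's actual code: the endpoint path is real, but the client version string drifts over time and is illustrative here), a search boils down to one JSON POST:

```python
import json

def innertube_search_payload(query: str) -> dict:
    """Build the JSON body for an InnerTube search request.

    InnerTube requests carry a 'context' object identifying the
    client; the version value below is illustrative.
    """
    return {
        'context': {
            'client': {
                'clientName': 'WEB',
                'clientVersion': '2.20240101.00.00',
            }
        },
        'query': query,
    }

# The body would be POSTed to
#   https://www.youtube.com/youtubei/v1/search
# and the structured JSON response parsed into result dataclasses.
body = json.dumps(innertube_search_payload('python tutorial'))
print(body)
```

Because the response is already structured JSON, parsing means walking dictionaries rather than querying a DOM, which is what makes the approach hold up better against site redesigns.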
The Stack
The core dependency is httpx (an async‑capable HTTP client). Optional CLI dependencies include click and rich; the REST API adds FastAPI and uvicorn. The package is built with hatchling, includes 146 tests covering unit, integration, and parser logic, and runs CI on GitHub Actions across Ubuntu, Windows, and macOS with Python 3.10–3.13.
Links
- GitHub:
- PyPI:
- Docs: Full Usage Guide
MIT licensed. Feedback and contributions are welcome. If you find it useful, a star on GitHub would mean a lot.