Scraping ZoomInfo with One Universal Script Using SeleniumBase

Published: December 10, 2025 at 09:47 AM EST
3 min read
Source: Dev.to

Before You Start

ZoomInfo can be scraped in a simple, stable way: the site embeds its key data in <script type="application/json"> blocks, so you can extract search results, profiles, and company details without relying on fragile CSS selectors.
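
If you want to see these blocks in isolation first, here is a minimal sketch that parses them out of an already-saved HTML file. It assumes you have the page source on disk (page.html is a hypothetical filename) and uses BeautifulSoup, which the main script below does not require:

from bs4 import BeautifulSoup  # pip install beautifulsoup4
import json

# Assumes "page.html" holds a previously saved ZoomInfo page.
with open("page.html", encoding="utf-8") as f:
    soup = BeautifulSoup(f.read(), "html.parser")

# Each <script type="application/json"> tag wraps a self-contained JSON payload.
for script in soup.find_all("script", type="application/json"):
    try:
        payload = json.loads(script.string or "")
    except json.JSONDecodeError:
        continue  # not every block is guaranteed to parse
    print(type(payload).__name__, str(payload)[:80])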

Prerequisites and Setup

ZoomInfo shows a press‑and‑hold captcha when it suspects automation. Plain request libraries and stock headless Selenium/Playwright won't get through. You need tooling that patches browser fingerprints and headers and hides headless mode, such as SeleniumBase, Playwright Stealth, or Patchright.

We’ll use SeleniumBase in UC mode (built on undetected-chromedriver). Install it with:

pip install seleniumbase
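
Before running the full scraper, a quick sanity check helps confirm that UC mode launches and can reach the site. This is just a smoke test, not part of the scraper itself:

from seleniumbase import SB

# Launch Chrome in UC mode and open the ZoomInfo homepage.
with SB(uc=True, test=True) as sb:
    sb.uc_open_with_reconnect("https://www.zoominfo.com/", 4)
    print(sb.get_title())  # a real page title suggests you weren't blocked outright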

Available Data Points

| Page Type | Key Data Points | Notes |
| --- | --- | --- |
| Search Pages | name, title, profile link, company link | First 5 pages only. Emails, phones, and images usually missing; some fields empty or null. |
| Person Profiles | full name, title, photo, bio, masked email/phone, work address, social links, work & education history, employer info, colleagues, similar profiles, web mentions, AI signals | Most data complete; contact info partially hidden. |
| Company Pages | legal name, size, employees, tech stack, finances, competitors, executives, address, social links, news, acquisitions, org charts, email patterns, hiring trends, intent signals, awards, comparables | Some contact info, historic financials, email samples, and intent data may be partial or masked. |

Universal Scraping Script

The script works for search pages, person profiles, and company pages. It extracts JSON data from <script> tags, removes unnecessary keys, and saves the result to a file.

from seleniumbase import SB
import time, json

# Base URL for the page (search, person, or company)
base_url = "https://www.zoominfo.com/people-search/"  # or a person/company URL
pages = 5  # for search pages; set to 1 for a single profile/company
all_data = []

with SB(uc=True, test=True) as sb:
    for page in range(1, pages + 1):
        url = f"{base_url}?pageNum={page}" if pages > 1 else base_url
        sb.uc_open_with_reconnect(url, 4)
        time.sleep(1)  # wait for the JSON <script> blocks to render
        try:
            scripts = sb.find_elements('script[type="application/json"]')
            for el in scripts:
                content = el.get_attribute("innerHTML")
                try:
                    data = json.loads(content)
                except json.JSONDecodeError:
                    continue  # skip script blocks that aren't valid JSON
                if isinstance(data, dict):
                    # Drop framework noise that isn't useful payload
                    data.pop("__nghData__", None)
                    data.pop("cta_config", None)
                all_data.append(data)
        except Exception as e:
            print(f"Page {page} error:", e)

# Save the collected data
with open("zoominfo_data.json", "w", encoding="utf-8") as f:
    json.dump(all_data, f, ensure_ascii=False, indent=2)

The resulting JSON contains almost all available data:

  • Search pages: names, titles, profile links, company links. Emails, phones, and images are mostly hidden.
  • Person pages: full personal info, masked contacts, work/education history, colleagues, AI signals.
  • Company pages: legal name, employees, tech stack, finances, executives, news, acquisitions, hiring trends, awards, etc. Some fields are partially masked.
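
As a starting point for post-processing, the rough sketch below scans the saved file for person-like records. The key names it checks (personName, jobTitle) are illustrative guesses, not confirmed field names; inspect zoominfo_data.json to find the actual structure for your page type:

import json

with open("zoominfo_data.json", encoding="utf-8") as f:
    blobs = json.load(f)

def walk(node):
    """Yield every dict nested anywhere inside the loaded JSON."""
    if isinstance(node, dict):
        yield node
        for value in node.values():
            yield from walk(value)
    elif isinstance(node, list):
        for item in node:
            yield from walk(item)

# Hypothetical key names: replace with the ones you actually see in the file.
for record in walk(blobs):
    if "personName" in record and "jobTitle" in record:
        print(record["personName"], "-", record["jobTitle"])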

Anti‑Scraping Measures

ZoomInfo employs strong anti‑bot protections. Check HTTP status codes before parsing data and handle them accordingly.

| Status Code | Meaning | Recovery |
| --- | --- | --- |
| 200 | Success | Continue |
| 429 | Rate limited | Wait 30‑60 s, then retry |
| 403 | Forbidden (IP blocked) | Switch IP/proxy, retry next day |
| 503 | Service unavailable | Retry after 5 min |
| 200 (empty) | Honeypot | Switch IP |
| Redirect to /error | Detected scraper | Add delay, rotate proxy |

Mitigate errors, captchas, and bans by masking the bot, rotating proxies, adding randomized pauses, and backing off with longer delays when errors repeat.
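
Since this scraper drives a real browser rather than raw HTTP, you typically detect these states from the page itself (a /error redirect, a missing JSON block, a captcha) rather than reading status codes directly. The sketch below is one way to wire that into the main loop; is_blocked() is a hypothetical check you would tune to the failures you actually observe:

import random, time

def is_blocked(sb):
    # Hypothetical detection: adjust these checks to what ZoomInfo serves you.
    if "/error" in sb.get_current_url():
        return True
    return not sb.find_elements('script[type="application/json"]')

def open_with_retries(sb, url, attempts=3):
    # Open a URL, backing off with a longer randomized delay on each failure.
    for attempt in range(1, attempts + 1):
        sb.uc_open_with_reconnect(url, 4)
        time.sleep(1)
        if not is_blocked(sb):
            return True
        time.sleep(random.uniform(30, 60) * attempt)  # 30-60 s band, growing per attempt
    return False

You can call open_with_retries(sb, url) in place of the direct uc_open_with_reconnect() call inside the with SB(...) block above.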

Notes

For a complete walkthrough with step‑by‑step explanations, visuals, and tips, see the full article on our blog: Read the Full Article

All examples are available in the GitHub repo.
