How I Built an Instagram Profile Scraper in Go and Shipped It to Apify
Published: March 18, 2026 at 05:19 PM EDT
4 min read
Source: Dev.to

I recently built a small Instagram profile scraper in Go, packaged it as an Apify Actor, and published it so other people can use it without maintaining the infrastructure themselves.
The goal was simple: fetch public Instagram profile data by username and return clean, automation‑friendly JSON. I did **not** want browser automation, heavy dependencies, or deeply nested output that becomes painful to use in datasets, exports, or pipelines.
---
## The problem
A lot of scraping projects work, but they are hard to operationalize. They rely on full browser stacks, break on minor changes, or return raw payloads that still need another transformation layer before they become useful. For profile lookups, I wanted something much lighter:
- **input:** one or more Instagram usernames
- **output:** structured profile data
- **deployment:** packaged for Apify
- **operations:** proxy‑ready and resilient to partial failures
---
## The approach
I built the Actor in pure Go with no external dependencies beyond the standard library.
Instead of browser automation, the scraper makes a direct request to Instagram’s web profile endpoint and sends the headers that Instagram expects for that request. That keeps the runtime small and fast—perfect for an Apify Actor.
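Request construction looks roughly like the sketch below. The endpoint path and the `X-IG-App-ID` header value are assumptions based on how Instagram's own web client behaves, not the Actor's confirmed source:

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
)

// buildProfileRequest builds a GET request against Instagram's web profile
// endpoint. The path and the X-IG-App-ID value are illustrative guesses at
// what a browser-like client sends; the Actor's exact values may differ.
func buildProfileRequest(username string) (*http.Request, error) {
	u := fmt.Sprintf(
		"https://www.instagram.com/api/v1/users/web_profile_info/?username=%s",
		url.QueryEscape(username),
	)
	req, err := http.NewRequest(http.MethodGet, u, nil)
	if err != nil {
		return nil, err
	}
	// Headers Instagram expects from a web client; values are placeholders.
	req.Header.Set("User-Agent", "Mozilla/5.0")
	req.Header.Set("X-IG-App-ID", "936619743392459") // assumed web app ID
	req.Header.Set("Accept", "application/json")
	return req, nil
}
```

Because this is a single `net/http` request, there is no browser binary, no driver, and nothing to keep patched when a headless stack updates.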
The Actor accepts either a legacy `username` field or a `usernames` array, normalizes the input, strips `@`, and removes duplicates. This makes it easier to use both manually and from automations.
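The normalization step is small enough to sketch in full. This is my reconstruction rather than the Actor's exact code; lowercasing is an assumption (Instagram usernames are case-insensitive), while the `@`-stripping and de-duplication come straight from the behavior described above:

```go
package main

import "strings"

// normalizeUsernames trims whitespace, strips a leading "@", lowercases
// (assumed, since Instagram usernames are case-insensitive), and removes
// duplicates while preserving input order.
func normalizeUsernames(raw []string) []string {
	seen := make(map[string]bool)
	var out []string
	for _, u := range raw {
		u = strings.TrimSpace(u)
		u = strings.TrimPrefix(u, "@")
		u = strings.ToLower(u)
		if u == "" || seen[u] {
			continue
		}
		seen[u] = true
		out = append(out, u)
	}
	return out
}
```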
---
## What the scraper returns

The Actor extracts and normalizes the most useful profile fields, including:
- username and internal Instagram ID
- full name and biography
- follower, following, and post counts
- profile picture URLs
- private, verified, business, and professional flags
- related profiles
- latest posts
The `latestPosts` section received extra attention. Each post includes:
- caption text
- parsed hashtags and mentions
- like and comment counts
- dimensions
- image URLs
- tagged users
- child posts for carousel content
- normalized timestamps
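The hashtag and mention parsing can be sketched with two regexes. These character classes are my approximation; Instagram's actual rules for valid hashtags and usernames are more nuanced:

```go
package main

import "regexp"

var (
	// Approximate patterns; Instagram's real validation rules differ slightly.
	hashtagRe = regexp.MustCompile(`#([\p{L}\p{N}_]+)`)
	mentionRe = regexp.MustCompile(`@([A-Za-z0-9_.]+)`)
)

// extractTags pulls hashtags and mentions out of a post caption, returning
// them without the leading "#" or "@".
func extractTags(caption string) (hashtags, mentions []string) {
	for _, m := range hashtagRe.FindAllStringSubmatch(caption, -1) {
		hashtags = append(hashtags, m[1])
	}
	for _, m := range mentionRe.FindAllStringSubmatch(caption, -1) {
		mentions = append(mentions, m[1])
	}
	return hashtags, mentions
}
```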
This makes the output ready for lead generation, competitor monitoring, influencer research, and internal dashboards.
---
## Making it practical for Apify

Building the scraper itself was only half the task. The other half was productizing it.
I added:
- an Apify input schema for usernames
- a dataset schema for cleaner output browsing
- a Docker build so the Actor runs consistently
- dataset‑push logic so each profile is saved directly to the Apify dataset
- proxy support for more reliable requests at scale
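The dataset-push step can be sketched against Apify's v2 HTTP API, which accepts `POST /v2/datasets/{id}/items` with a JSON body. The dataset ID and token typically come from the Actor's environment; treat the details below as a sketch of that convention, not the Actor's exact code:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// buildPushRequest builds a request that appends one item to an Apify
// dataset via the v2 API. Passing the token as a Bearer header is one of
// the auth styles Apify accepts; this is a sketch, not the Actor's code.
func buildPushRequest(datasetID, token string, item any) (*http.Request, error) {
	body, err := json.Marshal(item)
	if err != nil {
		return nil, err
	}
	u := fmt.Sprintf("https://api.apify.com/v2/datasets/%s/items", datasetID)
	req, err := http.NewRequest(http.MethodPost, u, bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "Bearer "+token)
	return req, nil
}
```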
**Failure handling** is a key implementation detail: if one username is invalid or unavailable, the run continues processing the rest. The Actor only fails on genuine technical errors (e.g., network or dataset write failures). This behavior is crucial in production.
---
## What I learned
1. **Data shape matters** – a flat, predictable output is more valuable than a massive raw JSON blob.
2. **Operational details matter early** – timeouts, proxy support, and partial‑failure handling aren’t “later” concerns if you want a usable product.
3. **Packaging changes mindset** – publishing on Apify shifted my thinking from a one‑off script to maintaining a small API product.
---
## Final result

The result is a lightweight Instagram Profile Scraper Actor in Go that can fetch one or many public profiles and return structured output ready for datasets and automations.
If you want to try it without building your own pipeline, you can check it out here:
[Instagram Profile Scraper Actor on Apify](https://apify.com/alwaysprimedev/instagram)
If you are building scraping tools yourself, my main advice is this: **optimize for usable output, not just successful requests**. That is usually what makes the difference between a side script and a product.