How My Website Ranks #1 on ChatGPT Search (And Yours Can Too)
Source: Dev.to
I didn’t set out to rank on ChatGPT. I just built a portfolio, pushed it to GitHub, and moved on.
Then someone messaged me:
“Dude, your template shows up first when I ask ChatGPT for Next.js portfolio templates.”
I thought they were trolling—until I tried it myself.
My portfolio template was the #1 result for “best Next.js portfolio template GitHub” on ChatGPT search. No ads, no backlink campaigns, no SEO agency.
So I did what any curious dev would do: I reverse‑engineered why.
Turns out I accidentally nailed something called AEO (Answer Engine Optimization) and GEO (Generative Engine Optimization).
Below is everything I learned.
TL;DR (If You Want the Playbook)
- Add JSON‑LD schema so AI can understand “who/what” your site is.
- Set metadata so crawlers can read + reuse your content (within reason).
- Write specific, verifiable content (entities + numbers + links).
- Keep the site crawlable (sitemap + robots + clean structure).
The Game Changed (And Most Devs Missed It)
Google isn’t the only search engine anymore.
ChatGPT, Perplexity, Claude… they’re not just chatbots; they’re search engines.
The important part isn’t the traffic numbers; it’s the behavior shift: people ask AI questions instead of typing keywords.
How AI Search Works
| Traditional SEO | AEO / GEO |
|---|---|
| Ranks links based on backlinks, domain authority, keywords. | Retrieves and cites information. Needs to understand your content, not just index it. |
| Question: “How do I rank higher?” | Question: “How do I become the answer?” |
Big difference.
The Secret Sauce: JSON‑LD Schema
The #1 thing that made my portfolio “visible” to AI is structured data, specifically JSON‑LD markup.
Example Schema (added to app/(root)/page.tsx)
```tsx
// app/(root)/page.tsx
const personSchema = {
  "@context": "https://schema.org",
  "@type": "Person",
  name: "Naman Barkiya",
  url: "https://nbarkiya.xyz",
  image: "https://res.cloudinary.com/.../og-image.png",
  jobTitle: "Applied AI Engineer",
  sameAs: [
    "https://github.com/namanbarkiya",
    "https://x.com/namanbarkiya",
  ],
};

const softwareSchema = {
  "@context": "https://schema.org",
  "@type": "SoftwareApplication",
  name: "Next.js Portfolio Template",
  applicationCategory: "DeveloperApplication",
  operatingSystem: "Web",
  offers: {
    "@type": "Offer",
    price: "0",
    priceCurrency: "USD",
  },
  author: {
    "@type": "Person",
    name: "Naman Barkiya",
    url: "https://nbarkiya.xyz",
  },
};
```
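Defining the objects is only half the step; they also need to end up in the page’s HTML. Here’s a minimal sketch of the standard Next.js pattern for that, injecting each schema as an `application/ld+json` script tag (the exact wiring in the template may differ):

```tsx
// app/(root)/page.tsx — embedding the schemas in the rendered HTML
export default function Page() {
  return (
    <>
      {/* Serialize the schema objects defined above into JSON-LD script tags */}
      <script
        type="application/ld+json"
        dangerouslySetInnerHTML={{ __html: JSON.stringify(personSchema) }}
      />
      <script
        type="application/ld+json"
        dangerouslySetInnerHTML={{ __html: JSON.stringify(softwareSchema) }}
      />
      {/* ...rest of the page */}
    </>
  );
}
```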
Why This Works
When a crawler hits the page, it sees structured data that explicitly says:
- “This is a Person named Naman Barkiya”
- “His job is Applied AI Engineer”
- “He is the same as these GitHub and X profiles”
- “He authored this software application”
That’s how AI builds its knowledge graph and becomes citable.
Schema Types You Should Know
| Schema Type | When to Use |
|---|---|
| Person | Personal portfolios, about pages |
| Organization | Company websites |
| SoftwareApplication | Dev tools, apps, templates |
| Article | Blog posts, tutorials |
| FAQPage | Q&A sections |
| HowTo | Step‑by‑step guides |
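As an example of another type from the table, here’s a sketch of a `FAQPage` schema for a Q&A section (the question and answer are placeholders, not from my site):

```tsx
// Sketch: FAQPage schema — question/answer content is placeholder text
const faqSchema = {
  "@context": "https://schema.org",
  "@type": "FAQPage",
  mainEntity: [
    {
      "@type": "Question",
      name: "Is the template free to use?",
      acceptedAnswer: {
        "@type": "Answer",
        text: "Yes, the template is open source and free to use.",
      },
    },
  ],
};
```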
The One Line That Changed Everything
Most websites are invisible to AI search—not because the content is bad, but because crawlers often only ingest snippets.
The line that fixes it lives in your app/layout.tsx metadata:
```tsx
// app/layout.tsx
export const metadata = {
  // ... other metadata
  robots: {
    index: true,
    follow: true,
    googleBot: {
      index: true,
      follow: true,
      "max-image-preview": "large",
      "max-snippet": -1, // ← This is the magic line
    },
  },
};
```
`max-snippet: -1` tells crawlers: “You can use as much of my content as you want.”

The directive comes from Google’s robots meta spec (hence the `googleBot` key) and is respected by several other crawlers. With `-1` there is no snippet length limit, so an AI engine can quote your entire page instead of a truncated preview. Small change, massive impact.
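For reference, Next.js renders that config into robots meta tags in the page `<head>`, roughly like this:

```html
<meta name="robots" content="index, follow" />
<meta
  name="googlebot"
  content="index, follow, max-image-preview:large, max-snippet:-1"
/>
```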
Metadata That Actually Matters
Below is my full metadata setup (Next.js 13+ style):
```tsx
export const metadata = {
  metadataBase: new URL("https://nbarkiya.xyz"),
  title: {
    default: "Naman Barkiya - Applied AI Engineer",
    template: "%s | Naman Barkiya - Applied AI Engineer",
  },
  description:
    "Naman Barkiya - Applied AI Engineer working at the intersection of AI, data, and scalable software systems.",
  keywords: [
    "Naman Barkiya",
    "Applied AI Engineer",
    "Next.js Developer",
    "XYZ Inc",
    "Databricks",
    // …more keywords
  ],
  authors: [
    {
      name: "Naman Barkiya",
      url: "https://nbarkiya.xyz",
    },
  ],
  alternates: {
    canonical: "https://nbarkiya.xyz",
  },
  openGraph: {
    type: "website",
    locale: "en_US",
    url: "https://nbarkiya.xyz",
    title: "Naman Barkiya - Applied AI Engineer",
    description: "Applied AI Engineer working at...",
    siteName: "Naman Barkiya - Applied AI Engineer",
    images: [
      {
        url: "https://res.cloudinary.com/.../og-image.png",
        width: 1200,
        height: 630,
        alt: "Naman Barkiya - Applied AI Engineer",
      },
    ],
  },
  robots: {
    index: true,
    follow: true,
    googleBot: {
      "max-snippet": -1,
      "max-image-preview": "large",
    },
  },
};
```
Notice the pattern?
I repeat “Naman Barkiya” + “Applied AI Engineer” everywhere:
- `title.default`
- `description`
- `keywords`
- `authors`
- `openGraph.title`
- `openGraph.siteName`
This isn’t keyword stuffing. It’s entity reinforcement.
AI needs to see the same entity described consistently across multiple signals to build confidence in what it “knows” about you.
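One way to enforce that consistency (a suggestion of mine, not something the template requires) is to define the entity string once and reuse it in every metadata field:

```tsx
// Sketch: a single source of truth for the entity string
const ENTITY = "Naman Barkiya - Applied AI Engineer";

export const metadata = {
  title: { default: ENTITY, template: `%s | ${ENTITY}` },
  description: `${ENTITY} working at the intersection of AI, data, and scalable software systems.`,
  openGraph: { title: ENTITY, siteName: ENTITY },
};
```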
Write Content That AI Can Actually Use
Here’s something most people miss:
AI doesn’t trust vague claims.
This won’t get you cited
❌ “Worked on various web development projects”
❌ “Experienced software engineer”
❌ “Built many applications”
This will
✅ “Built client dashboard at XYZ serving global traders”
✅ “Reduced API load time by 30%”
✅ “Scaled platform to 3,000+ daily users”
AI models are trained to identify:
- Named entities (XYZ, Databricks, Next.js)
- Quantified results (30%, 3,000 users, first month)
- Verifiable links (company URLs, GitHub repos)
How I structure my experience data
```ts
// config/experience.ts
{
  id: "xyz",
  position: "Software Development Engineer",
  company: "XYZ",
  location: "Mumbai, India",
  startDate: new Date("2024-08-01"),
  endDate: "Present",
  achievements: [
    "Shipped production features within the first month for a trader‑facing P&L dashboard",
    "Won XYZ AI Venture Challenge by building data transformation pipelines",
    "Led a 12‑member team in an internal hackathon",
  ],
  companyUrl: "https://www.xyz.com",
  skills: ["TypeScript", "React", "Databricks", "Python"],
}
```
Every claim is:
- Specific (not vague)
- Quantified (where possible)
- Verifiable (company URL included)
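Roughly, the type behind those entries looks like this (reconstructed from the fields above; the actual repo may declare it differently):

```ts
// config/experience.ts — reconstructed shape of an experience entry
interface ExperienceEntry {
  id: string;
  position: string;
  company: string;
  location: string;
  startDate: Date;
  endDate: Date | "Present"; // "Present" for ongoing roles
  achievements: string[]; // specific, quantified, verifiable claims
  companyUrl: string; // external verification link
  skills: string[];
}
```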
The Technical Foundation
Sitemap
```ts
// app/sitemap.ts
import { MetadataRoute } from "next";

export default function sitemap(): MetadataRoute.Sitemap {
  return [
    {
      url: "https://nbarkiya.xyz",
      lastModified: new Date(),
      changeFrequency: "monthly",
      priority: 1.0,
    },
    {
      url: "https://nbarkiya.xyz/projects",
      lastModified: new Date(),
      changeFrequency: "monthly",
      priority: 0.8,
    },
    // ... more routes
  ];
}
```
Robots.txt
```txt
User-agent: *
Allow: /
Sitemap: https://nbarkiya.xyz/sitemap.xml
```
Simple. Open. Crawlable.
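If you’d rather generate that file from code, Next.js also supports an `app/robots.ts` route; here’s a sketch equivalent to the static file above:

```ts
// app/robots.ts — Next.js generates /robots.txt from this
import { MetadataRoute } from "next";

export default function robots(): MetadataRoute.Robots {
  return {
    rules: {
      userAgent: "*",
      allow: "/",
    },
    sitemap: "https://nbarkiya.xyz/sitemap.xml",
  };
}
```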
The Complete Checklist
Schema & Structured Data
- JSON‑LD `Person` schema on homepage
- Additional schemas for your content type (`SoftwareApplication`, `Article`, etc.)
Metadata
- `max-snippet: -1` in robots config
- Canonical URLs on every page
- `authors` field with name and URL
- Entity‑rich descriptions
Content
- Specific, quantified achievements
- Named entities (companies, tools, technologies)
- External verification links
- Semantic HTML (proper heading hierarchy, lists)
Technical
- Dynamic sitemap
- Open `robots.txt`
- Fast page loads (AI crawlers have timeouts too)
Final Thoughts
The future of search is AI‑first.
Google isn’t going anywhere, but it’s no longer the only game in town. If your content can’t be understood by LLMs, you’re invisible to a growing chunk of the internet.
The good news? It’s not that hard to fix.
- Add schema markup.
- Open up your snippets.
- Write specific, verifiable content.
That’s it. That’s the whole playbook.
I open‑sourced my entire portfolio template. You can see all of this implemented:
github.com/namanbarkiya/minimal-next-portfolio
Fork it. Use it. Make it yours.
And maybe someday I’ll ask ChatGPT for portfolio templates and see your site at #1. That’d be pretty cool.