From Static Portfolio to Indexed Decisions šŸ“ƒ

Published: February 8, 2026 at 10:00 PM EST
7 min read
Source: Dev.to

This is a submission for the Algolia Agent Studio Challenge: Consumer‑Facing Non‑Conversational Experiences

šŸ¦„ I instantly knew what to build as soon as I saw this challenge paired with the New Year, New You Google Challenge. Honestly, I’d been meaning to build a portfolio for a long time and never prioritized the work. This challenge finally interested me enough to take that idea and actually run with it.

Besides, it’s much more satisfying to show why something works when there’s a story attached. If you want to skip ahead, at least read the first part carefully.

Static portfolios treat decisions as narrative. This project treats them as data.

Badge: Human‑crafted, AI‑edited

What I Built

This backend‑focused project wasn’t built for pretty UIs; it was built for systems. I created a non‑conversational portfolio that behaves like a well‑oiled machine instead of a static showcase.

Traditional portfolios require interpretation. This system removes interpretation entirely.

When I first envisioned my portfolio site, I wanted it to stand apart from the usual LinkedIn rĆ©sumĆ© echoes. I’m strongly allergic to ā€œnormalā€ on the best days, but novelty alone doesn’t scale. I also knew the site had to be backed by infrastructure strong enough to survive my constant experiments and changing approaches over time.

Naturally, those ideas converged into a single decision: build a living journal of my projects, struggles, and decisions as they happened.

Decisions are first‑class records here, not explanatory prose.

Something future‑me could query months from now when I’m inevitably asking, ā€œWhat in the world were you thinking?ā€

As soon as I saw this challenge post, I started documenting every challenge, decision, outcome, and constraint. That process began with everything I could reconstruct from my existing GitHub projects.

šŸ¦„ Even if I documented every single time I changed my mind, this index structure could absolutely handle it. Don’t worry—I didn’t go that far.

Index Design

The index is the system. If it fails, nothing else matters.

Once I had a handle on controlling assistant agents on the UI side, the index became the real work. Designing it, breaking it, and curating it took the most time. Early patterns weren’t great for retrieval performance, but after studying Algolia best‑practice guidance, things finally clicked.

The result is a collection of small, atomic records optimized for retrieval.

These power a clean UX through facets and deterministic sorting using both signal strength and record creation time.

Here’s a real example pulled directly from the site:

[
  {
    "objectID": "card:project:challenge:algolia-agent-studio-2026-02",
    "title": "Algolia Agent Studio Challenge participation",
    "blurb": "An applied exploration of conversational retrieval.",
    "fact": "I participated in the Algolia Agent Studio DEV Challenge during February 2026, focusing on conversational and non‑conversational search behavior using indexed content.",
    "tags.lvl0": ["DEV Challenge", "Approach"],
    "tags.lvl1": [
      "DEV Challenge > Algolia Agent Studio",
      "Approach > Experimentation"
    ],
    "projects": ["System Notes"],
    "category": "Experience",
    "created_at": "2026-02-08T05:42:00-05:00",
    "signal": 5
  }
]

Why these fields exist

  • signal – controls relevance pressure under ranking
  • created_at – stabilizes ordering across time
  • hierarchical tags – enable narrowing without dilution
  • constrained categories – prevent ambiguous grouping
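As a sketch, these field roles could translate into Algolia index settings along the following lines. The attribute names mirror the record example above, but the values here are assumptions for illustration, not the project's actual configuration:

```javascript
// Illustrative index settings expressing the field roles above.
// Attribute names follow the record example; the exact values are
// assumptions, not the live configuration.
const settings = {
  // Facets enable narrowing without dilution: hierarchical tags
  // plus the constrained category field.
  attributesForFaceting: [
    "searchable(tags.lvl0)",
    "searchable(tags.lvl1)",
    "category",
  ],
  // Deterministic tie-breaking: signal strength first, then recency.
  // Note: Algolia custom ranking expects numeric or boolean values,
  // so created_at would typically also be stored as a numeric timestamp.
  customRanking: ["desc(signal)", "desc(created_at)"],
};

console.log(settings.customRanking[0]); // prints desc(signal)
```

With the official JavaScript client, a settings object like this would be applied once at index-setup time rather than per query.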

šŸ¦„ In case you were wondering, I didn’t write these by hand. I defined the rules and constraints, handed them to ChatGPT, and manually tracked the generated output in a JSON file stored in the repo at System Notes v2.0.0/Algolia.

This project includes both a conversational chat interface and an Ask AI Search experience.

For this entry, Ask AI is intentionally treated as a pure search surface, not a conversational agent.

The conversational state is optional; these results consider only the non‑conversational queries executed against the Algolia index.

I evaluated retrieval performance over time—specifically speed, relevance, and consistency—while making iterative improvements to index configuration, ranking rules, and facets.

If identical queries did not return identical results, the configuration was not finished.

The system now returns the correct indexed records quickly and predictably, without requiring query reformulation.
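That stability criterion can be checked mechanically. Below is a minimal sketch of the comparison logic only; the query runner itself is assumed (real runs would come from the Algolia search client). Two executions of the same query count as consistent only when they return the same objectIDs in the same order:

```javascript
// Compare two result sets from the same query: a finished configuration
// should yield identical objectIDs in identical order on every run.
function sameResults(runA, runB) {
  const idsA = runA.map((hit) => hit.objectID);
  const idsB = runB.map((hit) => hit.objectID);
  return idsA.length === idsB.length && idsA.every((id, i) => id === idsB[i]);
}

// Example with fixture hits (real hits would come from the search client):
const first = [{ objectID: "card:a" }, { objectID: "card:b" }];
const second = [{ objectID: "card:a" }, { objectID: "card:b" }];
console.log(sameResults(first, second)); // prints true
```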

Screenshot: 256 results returned in 1 ms

Screenshot: category filters and search results

Live Demo

The site is deployed at to keep it separate from the previous challenge submission.

Try searching for ā€œAlgoliaā€ or filter by the categories on the left to load relevant results.

Current canonical:

Source code: System Notes v2.0.0

šŸ¦„ If you want a full comparison snapshot, the original site remains live at . The difference between that version and the Algolia‑powered build is dramatic.

Algolia Agent Studio in Practice

A well‑designed index alone isn’t enough.

Retrieval quality is dictated by configuration discipline, not feature count.

I tested most options available in Algolia’s configuration panel while tuning this system. The most impactful changes involved aggressively limiting searchable attributes and tightening facet definitions.

I also discovered that overly generous synonym expansion negatively affected agent retrieval speed, so those were deliberately scaled back.
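To make "aggressively limiting searchable attributes" concrete, here is a hedged sketch of what a tightened attribute list might look like. The ordering and modifiers are assumptions for illustration, not the live configuration:

```javascript
// Tightened search scope: fewer searchable attributes means less noise
// for ranking to fight through. Order encodes priority; "unordered"
// removes position-based weighting inside an attribute.
const searchScope = {
  searchableAttributes: [
    "title",            // highest priority: topical match
    "unordered(fact)",  // the full decision statement
    "unordered(blurb)", // short summary, lowest priority
  ],
};

console.log(searchScope.searchableAttributes.length); // prints 3
```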

Screenshot of primary indexes in Algolia

Keeping the Index in Sync

To avoid duplicating content manually, I configured an Algolia crawler to index content from DEV using my AI‑optimized mirror site.

This keeps the index authoritative without human intervention.

The crawler is a lightweight JavaScript configuration managed directly from the Algolia dashboard.

Screenshot of Algolia crawler testing

šŸ’” The crawler configuration file is stored in the repo at
apps/api/algolia/sources/crawler.js (System Notes v2.0.0)
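For orientation, a dashboard-managed crawler config follows Algolia's Crawler format. The sketch below is illustrative only: the URLs, index name, and selectors are placeholders, not the contents of the repo's actual `crawler.js`:

```javascript
// Minimal Algolia Crawler configuration sketch. All URLs, index names,
// and selectors here are placeholders.
new Crawler({
  appId: "YOUR_APP_ID",
  apiKey: "YOUR_CRAWLER_API_KEY",
  startUrls: ["https://mirror.example.com/"],
  actions: [
    {
      indexName: "system_notes_posts",
      pathsToMatch: ["https://mirror.example.com/**"],
      // recordExtractor receives the fetched page; $ is a Cheerio-like handle.
      recordExtractor: ({ url, $ }) => [
        {
          objectID: url.href,
          title: $("h1").first().text().trim(),
          // Additional fields would mirror the card record shape above.
        },
      ],
    },
  ],
});
```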

Tuning with Analytics

An unfortunate API‑key mistake prevented me from retaining full historical analytics.

Even so, analytics were used to confirm that retrieval behavior stabilized under repeat queries.

Screenshot of Algolia search events

šŸ¦„ For the record, Algolia makes API‑key recovery painless if you record the original key. Naturally, I did not.

Why Fast, Predictable Retrieval Matters

Before Algolia, users had to rely on me to remember and document every meaningful decision tied to a project.

That does not scale. Retrieval does.

Now I have a system capable of rapidly retrieving hundreds of decision‑level records across active builds.

| Original Design | Paired with Algolia | Observed Improvement |
| --- | --- | --- |
| Project cards showing finished work | Choice cards indexed as search records | āœ… Enables decision‑level retrieval instead of content browsing |
| Projects shown as static artifacts | Searchable sequence of constrained decisions | āœ… Demonstrates retrieval‑first system thinking |
| Narrative explanations only | Retrieval‑backed records with rationale | āœ… Proves answers are grounded in indexed data |
| Generic portfolio navigation | Algolia‑powered discovery as primary UX | āœ… Makes Algolia structural to the experience |
| ā€œChat with AIā€ as a feature | AI layered over Algolia retrieval | āœ… Signals intentional AI restraint |
| Silent gaps when data is missing | Fallback logic surfaced in results | āœ… Shows real‑world constraint handling |

This system would not exist without Algolia. It isn’t an enhancement; it’s the foundation.

What’s Next

I ran out of time and this challenge had a hard stop.

Given the choice, I optimized retrieval stability over feature breadth.

When time allows, these are the next steps:

  • Wire custom URL routing so search results are directly addressable
  • Finalize recommendations driven by real user interaction events
  • Introduce a Supabase backing store for indexed records to support long‑term growth
  • Migrate existing project cards into the new indexed‑record format
  • Continue UI refinement and performance tuning

šŸ¦„ After winners are announced, this site will live solely at as my canonical portfolio.

šŸ›”ļø The Credits in the Margins

This piece was written by a human, with ChatGPT used along the way for editing, clarity passes, and structural tightening while drafting. The final shape, technical claims, and decisions are human‑made.
