'New Year, New You' Portfolio Challenge with Google AI

Published: January 16, 2026 at 05:46 PM EST
3 min read
Source: Dev.to

This is a submission for the New Year, New You Portfolio Challenge, presented by Google AI.

About Me

I’m a software engineer specializing in backend systems, data engineering, and cloud infrastructure. I built this portfolio to showcase two AI‑powered projects that solve real problems I face daily: information overload, and the lack of observability when LLMs auto‑review pull requests on my personal repos.

Portfolio

An overview of the two main projects is provided below.

How I Built It

Tech Stack

| Component | Technology |
| --- | --- |
| Backend | FastAPI on Cloud Run |
| AI/Scoring | Gemini 2.0 Flash API |
| Data Storage | BigQuery |
| Rate Limiting | Firestore + BudgetGuard |
| Observability | OpenTelemetry, Cloud Monitoring |

Projects

1. Content Intelligence Hub

An AI‑powered content curation system that transforms 500+ daily RSS articles into ~10 high‑value reads using a dual‑scoring approach:

  • Gemini AI analyzes personal relevance to my interests.
  • Community signals (Hacker News, Lobsters) validate quality.

This solves the “obscure blog post” problem—where AI alone can’t distinguish a random tutorial from a battle‑tested Netflix engineering post.

2. LLM Code Review Observability

End‑to‑end monitoring for AI code‑review systems, tracking:

  • RAG retrieval quality (embedding similarity scores)
  • Cost and latency trends
  • Request volume and error rates

Live dashboards query BigQuery directly for real‑time KPIs and time‑series charts.

Technical Deep Dive

Dual‑Scoring Algorithm (Content Intelligence Hub)

The system uses confidence‑based weighting that adapts based on available signals:

weights = {
    'high': (0.5, 0.5),    # Found on BOTH Hacker News and Lobsters
    'medium': (0.7, 0.3),  # Found on ONE platform
    'low': (0.9, 0.1)      # No community signals
}

Viral Override – when community_score >= 70 and ai_relevance >= 25, the weighting shifts to favor community signals:

if community_score >= 70 and ai_relevance >= 25:
    ai_weight, community_weight = 0.3, 0.7
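Putting the two snippets above together, the blended score can be sketched as a single function. The name final_score and the exact blending are my illustration of the described algorithm, not the project's actual API:

```python
def final_score(ai_relevance: float, community_score: float, confidence: str) -> float:
    """Blend AI relevance with community signals using confidence-based weights."""
    weights = {
        'high': (0.5, 0.5),    # found on BOTH Hacker News and Lobsters
        'medium': (0.7, 0.3),  # found on ONE platform
        'low': (0.9, 0.1),     # no community signals
    }
    ai_weight, community_weight = weights[confidence]

    # Viral override: a strong community signal shifts weight toward the crowd.
    if community_score >= 70 and ai_relevance >= 25:
        ai_weight, community_weight = 0.3, 0.7

    return ai_weight * ai_relevance + community_weight * community_score
```

The override means a widely upvoted post still surfaces even when the AI's personal-relevance score is modest.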

Structured Output with Gemini

Gemini returns type‑safe JSON validated by Pydantic, ensuring downstream code can rely on a known schema.
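A minimal sketch of that validation step, assuming hypothetical field names (ArticleScore, relevance, category) rather than the project's real schema:

```python
from pydantic import BaseModel, Field, ValidationError

class ArticleScore(BaseModel):
    relevance: int = Field(ge=0, le=100)  # 0-100 relevance to my interests
    category: str                         # e.g. "tutorial", "deep-dive", "news"
    summary: str

# Raw JSON as it might come back from the model.
raw = '{"relevance": 85, "category": "deep-dive", "summary": "GC tuning notes"}'
try:
    score = ArticleScore.model_validate_json(raw)
except ValidationError:
    score = None  # discard malformed responses rather than trust them
```

Downstream code then only ever sees a validated ArticleScore or an explicit failure, never free-form model output.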

StruQ Pattern for Safe NL→SQL

The chat assistant never generates raw SQL from user input. Instead, Gemini extracts a structured intent that maps to a parameterized query:

User: "Show me Python tutorials from this week"

SearchIntent {
    topics: ["Python"],
    time_range_days: 7,
    content_type: "tutorial"
}

Parameterized SQL (user input never touches the query)

LLM Observability Patterns (LLM Code Review Observability)

The observability pipeline tracks metrics that surface actionable insights:

| Metric Pattern | Meaning |
| --- | --- |
| High cost, low similarity | Sending lots of context that isn’t relevant – tune RAG |
| Low context utilization | Add more files or history to improve reviews |
| Embedding failures | Vertex AI quota/connectivity issues – check GCP console |
| Cost variance between repos | Some codebases need different review strategies |

Google AI Integration

Gemini 2.0 Flash powers:

  • Article relevance scoring with structured JSON output
  • Natural‑language chat interface (StruQ pattern for safe NL→SQL)
  • Content classification (tutorials, deep dives, news)

Security

5‑layer prompt‑injection defence at $0 additional cost:

  1. Input validation (20+ regex patterns)
  2. Secure prompt construction with delimiters
  3. Structured output schema enforcement (Pydantic)
  4. Output validation (schema enforcement, prompt leakage detection)
  5. Rate limiting ($2 / day budget cap via Firestore)
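Layers 1 and 2 can be sketched like this; the regex patterns and delimiter tags are illustrative examples, not the system's actual 20+ patterns:

```python
import re

# A few example injection signatures (the real system uses 20+ patterns).
INJECTION_PATTERNS = [
    r"ignore (all|previous) instructions",
    r"system prompt",
    r"you are now",
]

def is_suspicious(text: str) -> bool:
    """Layer 1: reject input matching known injection signatures."""
    return any(re.search(p, text, re.IGNORECASE) for p in INJECTION_PATTERNS)

def build_prompt(user_text: str) -> str:
    """Layer 2: wrap untrusted input in explicit delimiters."""
    if is_suspicious(user_text):
        raise ValueError("input rejected by injection filter")
    return (
        "Score the article below for relevance.\n"
        "<untrusted_input>\n"
        f"{user_text}\n"
        "</untrusted_input>"
    )
```

The delimiters make the trust boundary explicit, so the model's instructions can tell it to treat everything inside the tags as data, never as commands.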

What I’m Most Proud Of

  • Dual‑Scoring Innovation – Combines AI relevance with community validation, delivering personalized and battle‑tested recommendations.
  • Live Stats – Dashboards show real, up‑to‑date numbers. The Content Intelligence Hub queries BigQuery for actual article counts and scores; the LLM Observability tab displays live KPIs (total reviews, cost, latency, RAG similarity) pulled directly from the llm_observability.metrics table.
  • Cost Control – BudgetGuard caps spending at $2 / day with graceful degradation. The entire platform runs for ≈ $20 / month while processing hundreds of daily articles and sporadic PRs, thanks to Cloud Run’s scale‑to‑zero capability.
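The budget-cap behavior can be sketched as below; the real BudgetGuard persists counters in Firestore, while this in-memory version (with assumed method names) just shows the gating logic:

```python
import datetime

class BudgetGuard:
    """Cap daily LLM spend; callers skip the call when allow() returns False."""

    def __init__(self, daily_cap_usd: float = 2.0):
        self.daily_cap = daily_cap_usd
        self.spent: dict[str, float] = {}  # date -> dollars spent

    def allow(self, estimated_cost: float) -> bool:
        today = datetime.date.today().isoformat()
        if self.spent.get(today, 0.0) + estimated_cost > self.daily_cap:
            return False  # graceful degradation: serve cached/unscored content
        self.spent[today] = self.spent.get(today, 0.0) + estimated_cost
        return True
```

Refusing a call instead of raising keeps the pipeline running on cache hits when the budget is exhausted.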