Making Igala First-Class: My AI Safety Portfolio on Cloud Run
Source: Dev.to
Mission: Why Igala + AI Safety?
I’m Godwin Faruna Abuh, an AI safety and NLP researcher based in Abuja, Nigeria. Most AI safety research today happens in English, effectively treating low‑resource African languages like my native tongue, Igala (~2 million speakers), as “noise”.
If safety filters and interpretability tools don’t work for Igala, then the AI of the future isn’t safe for us. This portfolio demonstrates that we can build production‑grade, safety‑aligned systems for low‑resource languages using Gemini 3 Flash and Google Cloud Run.
Note: Qwiklabs sandbox expired post‑deadline. Live mirror at https://faruna.space. Labels applied during deploy: dev-tutorial=devnewyear2026.
Challenge Deployment Metadata
- Project ID: qwiklabs-gcp-00-aab206db1d7c
- Region: us-west4
- Services: portfolio-frontend, portfolio-backend
- Build Tool: Google Cloud Buildpacks (Next.js 14)
- Required Label: dev-tutorial=devnewyear2026
Pro‑Tip for Judges: Click the “Ask About My Work” widget. It’s a custom‑built AI Twin powered by a Gemini 3 Flash backend, grounded in my actual research notes and repository data. Try asking:
- “What did you find in the Igala red‑teaming project?”
- “How did you handle data scarcity for NMT?”
Technical Deep Dive: How I Built It
Frontend
Built with Next.js 14 (App Router) and Tailwind CSS. Serves as the primary interface for my seven research projects.
Backend
A FastAPI service running on Cloud Run that interfaces with the Gemini 3 Flash API.
AI Implementation
Used Google AI Studio to refine system prompts for the assistant, focusing on “Zero‑Hallucination” grounding.
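The grounding idea can be sketched in a few lines of Python. This is a hypothetical illustration, not the actual service code: the note contents, the `build_grounded_prompt` helper, and the `ask_gemini` stub are all placeholders, and the real backend calls the Gemini 3 Flash API.

```python
# Hypothetical sketch of "Zero-Hallucination" grounding: constrain the
# model to a fixed set of research notes. The Gemini call is stubbed out;
# the real service talks to the Gemini 3 Flash API.

RESEARCH_NOTES = {
    "red-teaming": "Adversarial jailbreaks were 45% more successful in Igala than English.",
    "nmt": "First public Igala-English NMT model; data scarcity handled with augmentation.",
}

def build_grounded_prompt(question: str) -> str:
    """Assemble a system prompt that restricts answers to the notes."""
    notes = "\n".join(f"- {v}" for v in RESEARCH_NOTES.values())
    return (
        "You are the researcher's AI Twin. Answer ONLY from these notes; "
        "if the answer is not in them, say you don't know.\n"
        f"Notes:\n{notes}\n\nQuestion: {question}"
    )

def ask_gemini(prompt: str) -> str:
    # Stub standing in for the real Gemini 3 Flash API call.
    return f"[model response grounded in a {len(prompt)}-char prompt]"

if __name__ == "__main__":
    print(ask_gemini(build_grounded_prompt("What did the red-teaming find?")))
```

The key design choice is that refusal ("say you don't know") is baked into the system prompt, so questions outside the notes don't invite invention.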
Infrastructure
Deployed via gcloud CLI. A hybrid approach: Cloud Run for the challenge‑verified backend and faruna.space for long‑term persistence.
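The deploy step looks roughly like this. The source directory is a placeholder; the service name, region, and label come from the metadata above.

```shell
# Illustrative Cloud Run deploy for the backend with the challenge's
# required label. The --source path is a placeholder.
gcloud run deploy portfolio-backend \
  --source ./backend \
  --region us-west4 \
  --labels dev-tutorial=devnewyear2026 \
  --allow-unauthenticated
```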
The 7 Projects (Research Highlights)
- Igala‑English NMT – The first public translation model for Igala.
- Igala GPT from Scratch – Study on how tiny datasets impact transformer learning.
- Red‑Teaming LLMs – Found adversarial jailbreaks are 45% more successful in Igala than in English, because current safety filters are “blind” to our syntax.
- Mechanistic Interpretability – Probing the “brains” of mBERT to see how it represents African linguistic structures.
- (Additional projects omitted for brevity; see the portfolio for full list.)
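The cross-lingual comparison behind the red-teaming result comes down to measuring jailbreak success rates per language. A toy sketch of that computation, with made-up records (the real dataset and its field names are not reproduced here):

```python
from collections import defaultdict

# Toy attack-result records; illustrative only, not the real dataset.
results = [
    {"lang": "igala", "jailbroken": True},
    {"lang": "igala", "jailbroken": True},
    {"lang": "igala", "jailbroken": False},
    {"lang": "english", "jailbroken": True},
    {"lang": "english", "jailbroken": False},
    {"lang": "english", "jailbroken": False},
]

def success_rates(records):
    """Fraction of attacks that succeeded, keyed by language."""
    wins, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["lang"]] += 1
        wins[r["lang"]] += r["jailbroken"]
    return {lang: wins[lang] / totals[lang] for lang in totals}

rates = success_rates(results)
# With this toy data: igala -> 2/3, english -> 1/3.
```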
What I’m Most Proud Of
The Interactive Proof: a live tool where users can query my research results in real‑time, turning a static résumé into an active demonstration.
What’s Next?
- Expand the red‑teaming dashboard to include more middle‑belt Nigerian languages.
- Fine‑tune Gemini models for cultural nuances that general‑purpose safety evaluations currently miss.
Repository
GitHub:
Screenshots
Thank you, Dev.to and Google AI team!

