Carific.ai: From AI Slop to Actionable Feedback - Structured Output with Zod and the AI SDK
Source: Dev.to
This is my third dev.to post. If you missed the previous ones: Building the Auth System and Building the AI Resume Analyzer.
“Spearheaded cross‑functional initiatives to leverage synergies…”
That’s what my AI resume analyzer suggested. I stared at the screen, realizing I’d built exactly what I was trying to help users avoid: generic, buzzword‑filled nonsense.
The resume analyzer worked. It streamed markdown. It looked impressive. But when I asked myself “Would I actually use this feedback?” the answer was no.
Below is the story of how I rebuilt the entire output system to be genuinely useful.
## The Stack
| Package | Version | Purpose |
|---|---|---|
| Next.js | 16.0.7 | App framework |
| AI SDK | 5.0.108 | generateObject for structured output |
| Zod | 4.1.13 | Schema validation |
| Lucide React | 0.555.0 | Icons |
| Sonner | 2.0.7 | Toast notifications |
## Chapter 1: The Problem with Streaming Markdown
The first version of the resume analyzer used streamText from the AI SDK:
```ts
// ❌ The old approach
const result = await streamText({
  model: MODEL,
  system: SYSTEM_PROMPT,
  prompt: `Analyze this resume...`,
});

return result.toTextStreamResponse();
```
The frontend received chunks of markdown and rendered them progressively—a cool demo but terrible UX.
### Problems
- No structure – The AI could return anything: bullet points, paragraphs, or a mix.
- No copy functionality – Users couldn’t easily copy suggested improvements.
- No persistence – Unstructured markdown can’t be saved meaningfully in a database.
- Generic advice – “Add more metrics to your bullet points” tells users nothing actionable.
The breaking point came when I tested it with my own resume. The AI suggested “Ready‑to‑Use Bullet Points” that had nothing to do with my actual experience, and it didn’t indicate where they should go.
## Chapter 2: The Switch to Structured Output
The AI SDK provides a generateObject function that returns typed JSON validated against a Zod schema. This solved the structural issues.
```ts
// ✅ The new approach
// lib/ai/resume-analyzer.ts
import { generateObject } from "ai";
import { ResumeAnalysisOutputSchema } from "@/lib/validations/resume-analysis";

export async function analyzeResume({
  resumeText,
  jobDescription,
}: {
  resumeText: string;
  jobDescription: string;
}) {
  const { object } = await generateObject({
    model: "google/gemini-2.5-flash-lite",
    schema: ResumeAnalysisOutputSchema,
    system: RESUME_ANALYSIS_SYSTEM_PROMPT,
    prompt: `Analyze this resume against the job description...`,
  });

  return object;
}
```
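One consequence of this design: when the model's JSON fails schema validation, `generateObject` rejects instead of returning a malformed object. A generic retry helper is one way to handle that; the sketch below is my own assumption about how you might wrap the call, not code from Carific's repo.

```typescript
// Hypothetical helper, not from the post: retry an async call a few times.
// Useful because generateObject rejects when the model's output fails
// schema validation, and a second attempt often succeeds.
export async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}

// Usage sketch: const analysis = await withRetry(() => analyzeResume(input));
```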
### Validation Schema
```ts
// lib/validations/resume-analysis.ts
import { z } from "zod";

export const ResumeAnalysisOutputSchema = z.object({
  score: z.number().min(0).max(100),
  scoreLabel: z.enum(["Poor", "Fair", "Good", "Strong", "Excellent"]),
  scoreSummary: z.string(),
  missingKeywords: z.array(MissingKeywordSchema).min(1),
  bulletFixes: z.array(BulletFixSchema),
  priorityActions: z.array(z.string()).min(1).max(3),
  sectionFeedback: z.array(SectionFeedbackSchema).length(5),
  lengthAssessment: z.object({
    currentLength: z.enum(["Too Short", "Appropriate", "Too Long"]),
    recommendation: z.string(),
  }),
});
```
Now the AI must fill every field, and Zod validates the response before it reaches the frontend.
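To make "validates the response" concrete, here is a hand-rolled sketch of two of the constraints the schema encodes. This is illustrative only; in the real project Zod performs all of these checks declaratively.

```typescript
// Illustrative only — the project relies on Zod, not manual checks.
// Mirrors two constraints from ResumeAnalysisOutputSchema:
// score must be 0–100, and priorityActions must hold 1–3 items.
function validateAnalysis(candidate: {
  score: number;
  priorityActions: string[];
}): string[] {
  const errors: string[] = [];
  if (candidate.score < 0 || candidate.score > 100) {
    errors.push("score must be between 0 and 100");
  }
  const count = candidate.priorityActions.length;
  if (count < 1 || count > 3) {
    errors.push("priorityActions must contain 1 to 3 items");
  }
  return errors;
}
```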
## Chapter 3: Making Feedback Actually Actionable
Having structured output was only the first step; the content still needed refinement.
### The “Before/After” Bullet Fix
Old approach: “Here are some sample bullet points you could use.”
Problem: Users didn’t know where to place them or how they related to their existing resume.
Fix: Identify weak bullets in the user’s resume and show exactly how to improve them.
```ts
export const BulletFixSchema = z.object({
  location: z
    .string()
    .describe("Where this bullet is, e.g. 'Experience → Acme Corp → 2nd bullet'"),
  original: z.string().describe("The exact text from the user's resume"),
  improved: z.string().describe("The suggested replacement"),
  reason: z.string().describe("Why this helps – reference job requirements"),
  impact: z.enum(["High", "Medium"]),
});
```
Prompt fragment enforcing the shape:
```
### Bullet Fixes
- original: Exact text from the resume (must match verbatim)
- improved: Rewritten with action verb, metrics, and relevance to job

Rules:
- original must be text that exists in the resume
- Target vague phrases: "Responsible for", "Worked on", "Helped with"
```
The UI now displays the original bullet, the improved version, and a copy button—no guessing required.
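The "must match verbatim" rule can also be enforced in code, not just in the prompt. A small post-processing guard, sketched below under my own naming assumptions (the post doesn't describe this step), drops any fix whose original text isn't actually in the resume:

```typescript
// Hypothetical guard: keep only bullet fixes whose `original` text
// appears verbatim in the resume, so the UI never shows a fix the
// user can't locate in their own document.
interface BulletFix {
  location: string;
  original: string;
  improved: string;
}

function keepVerbatimFixes(resumeText: string, fixes: BulletFix[]): BulletFix[] {
  return fixes.filter((fix) => resumeText.includes(fix.original));
}
```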
### Killing AI Slop
The first improved bullet started with “Spearheaded.” Classic AI resume speak. I added explicit rules to the prompt:
```
- DO NOT use these overused words: Spearheaded, Leveraged, Synergy,
  Utilize, Facilitated, Orchestrated, Pioneered, Revolutionized,
  Streamlined, Championed
- Use plain, professional verbs: Led, Built, Created, Reduced,
  Increased, Managed, Designed, Developed, Improved, Launched
```
Overall tone guidelines:
```
## Writing Style
- Direct and concise
- No filler phrases ("I'd recommend", "You might consider")
- No exclamation marks
- State facts, not opinions
```
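Prompt rules help, but models occasionally slip. A cheap safety net, which I'm sketching here as an assumption rather than something the post describes, is to scan the generated text for the banned verbs after the fact:

```typescript
// Hypothetical post-check: flag any generated text that still uses
// one of the banned "AI resume speak" words from the prompt.
const BANNED_WORDS = [
  "Spearheaded", "Leveraged", "Synergy", "Utilize", "Facilitated",
  "Orchestrated", "Pioneered", "Revolutionized", "Streamlined", "Championed",
];

function findSlop(text: string): string[] {
  const lower = text.toLowerCase();
  return BANNED_WORDS.filter((word) => lower.includes(word.toLowerCase()));
}
```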
The output now reads like feedback from a senior colleague rather than a chatbot.
## Chapter 4: Skill Gap Categorization
Missing keywords were originally a flat list (e.g., “Docker, Leadership, HIPAA, Python”), treating all items the same. Different skills require different actions, so I added categorization.
```ts
export const MissingKeywordSchema = z.object({
  keyword: z.string(),
  category: z.enum(["Hard Skill", "Soft Skill", "Domain"]),
  importance: z.enum(["Critical", "Important", "Nice to Have"]),
  whereToAdd: z.string(),
});
```
UI configuration for grouping:
```ts
import { BookOpen, Users, Wrench } from "lucide-react";

const CATEGORY_CONFIG = {
  "Hard Skill": {
    icon: Wrench,
    label: "Hard Skills",
    description: "Learnable, measurable skills you can add",
  },
  "Soft Skill": {
    icon: Users,
    label: "Soft Skills",
    description: "Reframe existing experience to highlight these",
  },
  Domain: {
    icon: BookOpen,
    label: "Domain Knowledge",
    description: "Industry‑specific expertise",
  },
};
```
Now users see, for example, “I’m missing 3 hard skills I can learn, 1 soft skill I need to reframe, and 2 domain areas where I might not be a fit.”
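Bucketing the flat `missingKeywords` array into those three categories is a small frontend step; here is a sketch under my own naming assumptions (the post doesn't show this grouping code):

```typescript
// Hypothetical grouping step: bucket missing keywords by category so
// the UI can render one card per CATEGORY_CONFIG entry.
type Category = "Hard Skill" | "Soft Skill" | "Domain";

interface MissingKeyword {
  keyword: string;
  category: Category;
}

function groupByCategory(
  keywords: MissingKeyword[]
): Record<Category, MissingKeyword[]> {
  const groups: Record<Category, MissingKeyword[]> = {
    "Hard Skill": [],
    "Soft Skill": [],
    Domain: [],
  };
  for (const kw of keywords) {
    groups[kw.category].push(kw);
  }
  return groups;
}
```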
## Chapter 5: Section Completeness & Length Assessment
### Section Feedback
Checks whether standard resume sections exist and are complete:
```ts
export const SectionFeedbackSchema = z.object({
  section: z.enum(["Contact", "Summary", "Experience", "Education", "Skills"]),
  status: z.enum(["Present", "Missing", "Incomplete"]),
  feedback: z.string(),
});
```
The UI only shows cards for sections with issues; if everything is present, the card is omitted.
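The filtering behind "only show cards for sections with issues" is a one-line predicate; a sketch with field names taken from the schema above (the helper name is my own):

```typescript
// Hypothetical UI helper: render cards only for sections that are
// Missing or Incomplete; sections marked Present are omitted.
interface SectionFeedback {
  section: string;
  status: "Present" | "Missing" | "Incomplete";
  feedback: string;
}

function sectionsWithIssues(all: SectionFeedback[]): SectionFeedback[] {
  return all.filter((s) => s.status !== "Present");
}
```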
### Length Assessment
```ts
lengthAssessment: z.object({
  currentLength: z.enum(["Too Short", "Appropriate", "Too Long"]),
  recommendation: z.string(),
}),
```
This gives users a clear indication of whether their resume is the right length and what to adjust.
By moving from raw markdown streams to validated, structured JSON and tightening the prompts, the resume analyzer now delivers actionable, copy‑ready, and context‑aware feedback that feels like a senior colleague reviewing your document.