🌟 The Ultimate Memory Hooks for AWS Certified AI Practitioner (AIF-C01)

Published: December 7, 2025 at 11:47 AM EST
2 min read
Source: Dev.to

Machine Learning Basics

Supervised vs Unsupervised

  • Labels → Supervised
  • No Labels → Unsupervised

✔ Supervised = Teacher + Correct Answers
✔ Unsupervised = Find patterns (clustering, segments)

Classification vs Regression

  • Classes → Classification
  • Numbers → Regression

Overfitting vs Underfitting

  • Overfitting = Too complex → Increase regularization
  • Underfitting = Too simple → Decrease regularization

Key Algorithms

  • Clustering – Group customers? No labels? → K‑Means
  • Image Classification – Flower classification → k‑NN or Decision Tree
  • Anomaly Detection – No labels + abnormal detection → Autoencoders
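To make the K-Means hook concrete, here is a minimal pure-Python sketch of the algorithm on 1-D "customer spend" data (a toy illustration, not a production implementation — real workloads would use a library such as scikit-learn):

```python
import random

def kmeans_1d(points, k, iters=20, seed=0):
    """Toy 1-D K-Means: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign the point to its nearest centroid
            nearest = min(range(k), key=lambda c: abs(p - centroids[c]))
            clusters[nearest].append(p)
        # recompute each centroid (keep the old one if a cluster emptied)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Two obvious spend groups: low spenders and high spenders, no labels needed
data = [1.0, 1.2, 0.8, 10.0, 10.5, 9.5]
print(kmeans_1d(data, k=2))  # centroids land near 1.0 and 10.0
```

Note that the algorithm discovers the two groups from the data alone — exactly the "no labels → K-Means" pattern above.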

GenAI Prompt Engineering

  • Show examples of the desired format → Few‑shot prompting
  • Break a multi‑step workflow into linked prompts → Prompt chaining
  • Reason + Action + Tool use → ReAct prompting

Temperature

  • Creativity ↑ → Temperature ↑
  • Consistency ↑ → Temperature ↓
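The temperature hook comes straight from the math: logits are divided by the temperature before the softmax, so a high temperature flattens the token distribution (more creative) and a low one sharpens it (more consistent). A minimal sketch:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/T before softmax: high T flattens the
    distribution, low T concentrates mass on the top token."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
print(softmax_with_temperature(logits, 0.5))  # sharp: top token dominates
print(softmax_with_temperature(logits, 2.0))  # flat: probabilities even out
```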

LLM Inference Parameters

  • Temperature – Creativity
  • Top‑K – Number of token choices
  • Top‑P – Probability bucket
  • Max Tokens – Output length
  • Frequency Penalty – Reduce repeated words
  • Presence Penalty – Discourage repeated topics
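Top-K and Top-P are easiest to remember as two ways of trimming the same token distribution before sampling. A hypothetical sketch (token indices stand in for real vocabulary entries):

```python
def top_k_filter(probs, k):
    """Keep only the k most likely tokens, then renormalize."""
    keep = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    total = sum(probs[i] for i in keep)
    return {i: probs[i] / total for i in keep}

def top_p_filter(probs, p):
    """Keep the smallest set of tokens whose cumulative probability
    reaches p (the 'probability bucket'), then renormalize."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    keep, cum = [], 0.0
    for i in order:
        keep.append(i)
        cum += probs[i]
        if cum >= p:
            break
    total = sum(probs[i] for i in keep)
    return {i: probs[i] / total for i in keep}

probs = [0.5, 0.3, 0.1, 0.1]
print(top_k_filter(probs, 2))    # tokens 0 and 1 survive (a fixed count)
print(top_p_filter(probs, 0.75)) # tokens 0 and 1 survive (a probability bucket)
```

Same surviving tokens here, but for different reasons: Top-K fixed the *count*, Top-P fixed the *cumulative probability*.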

Mapping

  • Creativity → Temperature / Top‑K / Top‑P
  • Length → Max Tokens
  • Repetition → Frequency & Presence

Retrieval‑Augmented Generation (RAG)

Purpose of Chunking

Chunking = Better retrieval → Better context
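A minimal sketch of fixed-size chunking with overlap (the sizes here are illustrative, not recommendations; real pipelines tune them per corpus):

```python
def chunk_text(text, chunk_size=200, overlap=50):
    """Split text into overlapping fixed-size chunks so each embedded
    piece is small enough to retrieve precisely, with overlap so a
    relevant passage is not cut in half at a boundary."""
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

doc = "x" * 500
pieces = chunk_text(doc, chunk_size=200, overlap=50)
print(len(pieces))  # chunks start at offsets 0, 150, 300, 450 → 4 chunks
```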

Batch Steps in RAG

  • ✔ Content embeddings
  • ✔ Build search index

Do not include query embeddings or response generation in this batch.

Text + Image queries → Multimodal model

Evaluating ML Models

Summarization Metrics

  • ROUGE (default)
  • If ROUGE missing → Choose BLEU

Translation Metrics

  • BLEU / METEOR

Classification Metrics

  • Imbalanced data → F1 Score
  • Balanced data → Accuracy
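Why F1 for imbalanced data: accuracy can look excellent even when the model is useless. A small pure-Python demonstration:

```python
def f1_score(y_true, y_pred, positive=1):
    """F1 = harmonic mean of precision and recall for the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# 95 negatives, 5 positives; a lazy model predicts all-negative
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 100
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(accuracy)                  # 0.95 — looks great
print(f1_score(y_true, y_pred))  # 0.0 — exposes the useless model
```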

Regression Metrics

  • Numeric prediction → MSE / RMSE
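Both regression metrics are one-liners; RMSE is just MSE brought back into the target's original units:

```python
import math

def mse(y_true, y_pred):
    """Mean squared error: average of squared residuals."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root of MSE, in the same units as the target variable."""
    return math.sqrt(mse(y_true, y_pred))

y_true = [3.0, 5.0, 7.0]
y_pred = [2.0, 5.0, 9.0]
print(mse(y_true, y_pred))   # (1 + 0 + 4) / 3
print(rmse(y_true, y_pred))
```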

LLM Quality

  • Perplexity – How surprised is the model?
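"How surprised is the model?" has a precise form: perplexity is the exponential of the average negative log-probability the model assigned to the tokens it actually saw. A sketch using made-up per-token probabilities:

```python
import math

def perplexity(token_probs):
    """exp of the mean negative log-probability of the observed tokens.
    Lower perplexity = the model was less surprised by the text."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

confident = [0.9, 0.8, 0.95]  # model expected these tokens
surprised = [0.1, 0.05, 0.2]  # model found these tokens unlikely
print(perplexity(confident))  # close to 1
print(perplexity(surprised))  # much higher
```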

AWS Services – Quick Memory Hooks

  • Model Cards – Governance + Documentation
  • Model Monitor – Detect drift in production
  • Ground Truth – Human labeling
  • JumpStart – Pre‑built models + quick deploy
  • SageMaker Canvas – No‑code ML (build and use models without writing code)
  • HealthScribe – Medical speech‑to‑text
  • Guardrails for Bedrock – Responsible AI (safety filters)
  • PartyRock – Experiment + Learn + No cost (Not for VPC, not for deployments)

GenAI Lifecycle

Design → Data → Train → Evaluate → Deploy → Monitor

Evaluation Stage

  • Accuracy testing
  • Safety + toxicity testing
  • Hallucination measurements

Inference

  • Train = Learn
  • Infer = Predict
  • Deploy = Serve

Embeddings

  • Embeddings = Meaning → Vectors
  • Reduced dimension → Same meaning → Similarity search
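"Same meaning → similarity search" usually means cosine similarity between embedding vectors. A sketch with toy 3-dimensional vectors (real embedding models produce hundreds of dimensions; the values below are invented for illustration):

```python
import math

def cosine_similarity(a, b):
    """Similarity of two embedding vectors: near 1.0 = same direction
    (similar meaning), near 0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings": related words point in similar directions
king = [0.9, 0.8, 0.1]
queen = [0.85, 0.82, 0.15]
banana = [0.1, 0.2, 0.95]
print(cosine_similarity(king, queen) > cosine_similarity(king, banana))  # True
```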

Foundational Concepts

Fine‑tuning

Teach a large model a small task well.

  • Domain‑specific labeled data
  • Improves specific task performance
  • Not retraining from scratch
  • Not updating model to recent events

Responsible AI

Safety + Filters + Detect toxicity → Use Guardrails

Final Thoughts

These memory hooks are designed to:

  • Make recall instant during the exam
  • Reduce confusion between similar concepts
  • Build confidence with patterns instead of memorising definitions

Prepared using insights from the QA/CloudAcademy course “AWS Certified AI Practitioner (AIF‑C01) Certification Preparation” by Danny Jessee.
