BeSA Batch 09 Week3 - Building Agents with SDKs and Improving Discovery with AI
Source: Dev.to
Disclaimer
These notes were drafted using AI for clarity, structure, and readability. They are intended solely for learning purposes.
These are the structured notes from Week 3, focused only on the two role‑plays. They serve as a quick revision for attendees and a concise recap for anyone who couldn’t make it.
Role Play 1 – Technical Session: Getting Started with Strands Agent
Context
The conversation centered on the practical challenges of building agents and how a standardized SDK approach can simplify development. The customer began with a basic understanding of what is needed to build an agentic AI system.
Core Components Required for an Agent
| Category | Details |
|---|---|
| Infrastructure | Cloud or on‑premise environment to run the workloads |
| Foundation Model | Acts as the “brain” of the agent |
| Supporting Services | • Security • Memory for conversations • Observability • Orchestration |
The solutions architect confirmed that this understanding is correct and forms the baseline for agent architectures.
Common Challenges When Building Agents
- Steep learning curve – multiple frameworks, different SDKs, rapidly evolving ecosystem
- Complex orchestration – managing how agents call tools, handling multi‑step workflows
- Black‑box behavior – limited visibility into what the agent is doing; hard to debug reasoning steps
- Language & framework fragmentation – switching between tools and languages increases complexity
Main theme: the need for standardization.
What Is Strands Agent?
Definition – an open‑source SDK for building agents with minimal code.
Conceptually it combines:
- Models (the brain)
- Tools (the hands)
This lets developers focus on agent behavior rather than infrastructure complexity.
Understanding SDK vs. Framework
| SDK (Software Development Kit) | Framework |
|---|---|
| Collection of tools, libraries, and documentation. Helps developers build applications faster. Provides reusable building blocks. | Defines architectural structure and rules. Determines how components interact. |
| Analogy: Lego pieces – you assemble existing blocks instead of creating everything from scratch. | Analogy: Blueprint for a building – it dictates the overall layout. |
Strands essentially provides both:
- The framework structure
- The SDK tools to implement it
Why Use Strands?
- Ease of use – few lines of code to build agents.
- Native AWS integrations – works naturally with AWS services.
- Model‑agnostic – supports Claude, OpenAI models, Llama, etc.
- Rapid experimentation – iterate and deploy faster.
Agent Interaction Flow
| Component | Role |
|---|---|
| Agent | Orchestrator |
| Prompt | User input that triggers the workflow |
| Model | Performs reasoning; decides which tools are needed |
| Tools | Execute actions (e.g., API calls, sending emails) |
| Response | Final output returned to the user |
The cycle operates continuously as an agentic loop:
Prompt → Reason → Tool Selection → Tool Execution → Response
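The loop above can be sketched in plain Python. This is a toy illustration only — the "model" here is a keyword stub, not a foundation model, and none of it is Strands code:

```python
# Toy agentic loop: Prompt -> Reason -> Tool Selection -> Tool Execution -> Response.
# A real agent delegates the reasoning step to a foundation model.

def weather_tool(city: str) -> str:
    return f"Sunny in {city}"  # stand-in for a real weather API call

def calculator_tool(expr: str) -> str:
    return str(eval(expr))  # toy only; never eval untrusted input

TOOLS = {"weather": weather_tool, "calc": calculator_tool}

def reason(prompt: str):
    """Stub 'model': decide which tool (if any) the prompt needs."""
    if "weather" in prompt.lower():
        return "weather", prompt.rsplit(" ", 1)[-1].strip("?")
    if any(op in prompt for op in "+-*/"):
        return "calc", prompt
    return None, prompt

def run_agent(prompt: str) -> str:
    tool_name, arg = reason(prompt)   # Reason + Tool Selection
    if tool_name is None:
        return f"Echo: {prompt}"      # no tool needed
    return TOOLS[tool_name](arg)      # Tool Execution -> Response

print(run_agent("What is the weather in Paris?"))  # -> Sunny in Paris
```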
Working with Models
```python
from strands import Agent

agent = Agent(
    model="claude-3.5-sonnet",  # or any supported model
    system_prompt="You are a helpful assistant.",
)

response = agent("What is the weather in Paris?")
```
Running models locally (e.g., via Ollama) enables:
- Local experimentation
- Avoiding cloud dependency during development
- Faster prototyping
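For local experimentation, Ollama exposes a REST endpoint at `http://localhost:11434/api/generate`. A minimal sketch of the request body it expects (field names follow Ollama's public API; the model name is just an example):

```python
import json

def ollama_request(model: str, prompt: str) -> str:
    """Build the JSON body for POST http://localhost:11434/api/generate."""
    return json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # one complete response instead of streamed chunks
    })

body = ollama_request("llama3", "What is the weather in Paris?")
# Send with any HTTP client once `ollama serve` is running locally.
```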
Tools in Strands
Tools are analogous to a carpenter’s toolbox—agents need the right tools to perform tasks.
| Type | Examples |
|---|---|
| Pre‑built tools | HTTP request tool, calculator tool |
| Custom tools | Defined with a simple Python decorator |
```python
from strands import tool

@tool
def get_customer_balance(customer_id: str) -> float:
    """Fetches the balance for a given customer."""
    # implementation here (e.g., call an internal billing API)
    return 123.45
```
Custom tools let agents interact with internal APIs or services.
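To see what a decorator like this does under the hood, here is a rough, framework‑free sketch of a tool registry. This is illustrative only — the actual Strands internals differ:

```python
# Minimal decorator-based tool registry, similar in spirit to @tool:
# the decorator records the function's name, docstring, and signature
# so an agent can advertise it to the model.
import inspect

TOOL_REGISTRY = {}

def tool(fn):
    """Register a function as a callable tool."""
    TOOL_REGISTRY[fn.__name__] = {
        "description": (fn.__doc__ or "").strip(),
        "signature": str(inspect.signature(fn)),
        "callable": fn,
    }
    return fn

@tool
def get_customer_balance(customer_id: str) -> float:
    """Fetches the balance for a given customer."""
    return 123.45  # stand-in for an internal API call

spec = TOOL_REGISTRY["get_customer_balance"]
print(spec["description"])       # Fetches the balance for a given customer.
print(spec["callable"]("C-42"))  # 123.45
```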
Model Context Protocol (MCP)
Definition: An open standard for connecting AI systems to external tools and services.
Analogy: A USB hub – your laptop may have one port, but the hub lets you connect many devices. Similarly, MCP lets agents interact with multiple systems through a standardized interface.
Benefits
- Reduced integration complexity
- Consistent communication format
- Easier expansion of agent capabilities
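Under the hood, MCP messages use JSON‑RPC 2.0 framing, and tool invocations go through the protocol's `tools/call` method. A toy illustration of the request shape an agent sends to an MCP server (the tool name here is hypothetical):

```python
import json

def mcp_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build an MCP 'tools/call' request (JSON-RPC 2.0 framing)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical tool exposed by some MCP server:
msg = mcp_tool_call(1, "get_customer_balance", {"customer_id": "C-42"})
print(msg)
```

Because every server speaks this same shape, the agent needs one integration instead of one per system — the USB‑hub idea in the analogy above.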
Role Play 2 – Behavioral Session: Using AI to Accelerate Discovery
Context
The conversation explored how architects can use AI tools to prepare for customer engagements and accelerate discovery. Scenario: a solutions architect must prepare for a new customer meeting with very little time.
Traditional Preparation vs. AI‑Assisted Preparation
| Traditional Workflow (Days) | AI‑Assisted Workflow (Hours) |
|---|---|
| Research the company | Prompt LLM with company name → receive concise overview |
| Understand industry trends | Ask LLM for latest trends in the relevant sector |
| Identify likely technical challenges | Generate a list of common pain points for similar customers |
| Prepare discovery questions | Get a ready‑to‑use questionnaire tailored to the prospect |
AI dramatically shortens the preparation cycle, allowing architects to focus on personalization and strategic thinking rather than data gathering.
Sample Prompt for Rapid Discovery Prep
You are a solutions architect preparing for a discovery call with <Company>, a mid‑size retailer in the US. Provide:
1. A 3‑sentence company overview.
2. Top 3 industry trends affecting retailers today.
3. 5 likely technical challenges they might face.
4. A list of 7 discovery questions to uncover their pain points.
The LLM returns a structured response that can be copied directly into a slide deck or meeting notes.
Benefits of AI‑Assisted Discovery
- Speed – reduces prep time from days to hours.
- Consistency – ensures all relevant topics are covered.
- Customization – prompts can be tweaked for different verticals or customer sizes.
- Collaboration – teams can share generated content instantly via shared docs or chat tools.
Best Practices
- Validate the output – double‑check facts and tailor language to the audience.
- Iterate prompts – refine wording to get more precise answers.
- Combine with human insight – use AI as a starting point, then add personal anecdotes or proprietary data.
- Document prompts – keep a library of effective prompts for future reuse.
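The "document prompts" practice can start as a tiny template library — a sketch, with illustrative placeholder names:

```python
# Reusable prompt templates with named placeholders, so effective
# prompts can be versioned and shared across the team.
DISCOVERY_PREP = (
    "You are a solutions architect preparing for a discovery call with "
    "{company}, a {size} {vertical} in {region}. Provide:\n"
    "1. A 3-sentence company overview.\n"
    "2. Top 3 industry trends affecting the {vertical} sector today.\n"
    "3. 5 likely technical challenges they might face.\n"
    "4. A list of 7 discovery questions to uncover their pain points."
)

def render(template: str, **fields: str) -> str:
    """Fill a template's placeholders with customer-specific values."""
    return template.format(**fields)

prompt = render(DISCOVERY_PREP, company="ExampleCo", size="mid-size",
                vertical="retail", region="the US")
```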
Closing Thoughts
Both role‑plays highlighted a common thread: leveraging standardized tools (Strands SDK, MCP) and AI assistance can dramatically accelerate development and discovery workflows, freeing engineers and architects to focus on higher‑value activities.
Using AI Changes This Process Significantly
Initial Research with AI
The architect begins by briefing the AI with basic information about the customer:
- Industry
- Market size
- Business trends
- Competitive pressures
The AI then generates insights such as:
- Regulatory environment (e.g., GDPR)
- Industry modernization challenges
- Technical considerations like latency sensitivity
This provides a fast baseline understanding.
Adding Human Context
AI does not understand internal personalities or organizational dynamics. This is where the TAM (Technical Account Manager) adds valuable insight.
Examples discussed:
- A cost‑focused CTO – strong requirement to reduce costs.
- A risk‑averse CISO – concerned about customer data protection.
- An engineering leader with a small team – limited capacity to manage complexity.
Combining AI insights with relationship knowledge produces much better preparation.
Anticipating Objections
The architect then uses AI to anticipate objections.
Example approach
- Feed AI the stakeholder concerns.
- Ask it to generate likely objections or concerns.
This allows the architect to prepare responses in advance rather than reacting in the meeting.
Generating a Discovery Framework
AI can also help generate a discovery framework, which includes:
- Business drivers
- Technical risks
- Modernization priorities
- Operational constraints
AI‑generated questions are often generic, so the architect must adapt them to the specific customer context.
| Generic Question | Contextual Question |
|---|---|
| What are your modernization goals? | How is your small engineering team managing technical debt during modernization? |
AI provides the structure, while the architect adds depth.
Using AI After Meetings
A useful technique discussed was the “raw notes dump.” After the meeting, the architect:
- Pastes rough notes into the AI tool.
- Asks it to identify:
- Explicit requirements
- Implicit concerns
- Risks
- Action items
The AI performs structured analysis on this unstructured input, converting messy meeting notes into organized documentation.
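The notes‑dump step boils down to wrapping the raw notes in a structured‑extraction prompt. A minimal sketch (the exact wording is illustrative; the four categories follow the list above):

```python
def notes_analysis_prompt(raw_notes: str) -> str:
    """Wrap rough meeting notes in a structured-extraction prompt."""
    return (
        "Analyze the meeting notes below. Return four sections:\n"
        "- Explicit requirements\n"
        "- Implicit concerns\n"
        "- Risks\n"
        "- Action items\n\n"
        f"Notes:\n{raw_notes}"
    )

prompt = notes_analysis_prompt(
    "CTO wants lower costs; CISO worried about data residency."
)
```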
Producing Clean Documentation
The final step is creating clear documentation to share with the customer, such as:
- Requirements summaries
- Key concerns identified
- Architecture considerations
- Next steps
This demonstrates that the architect is listening and thinking strategically.
Important Advice
Do not walk into a customer meeting with a generic presentation.
Better approach
- Use AI to understand the customer’s world.
- Combine that with the TAM’s relationship knowledge.
- Tailor discussions to real concerns.
The winning combination is: AI research + human insight.
Week 3 Consolidated Takeaways
From the Technical Role‑Play
- SDKs and frameworks can significantly reduce complexity when building agents.
- Standardization helps address fragmentation in the agent ecosystem.
- Tools and MCP enable agents to interact with external systems in a scalable way.
From the Behavioral Role‑Play
- AI can dramatically accelerate discovery preparation.
- Human context and relationships remain essential.
- AI works best as a research and analysis assistant rather than a decision maker.
This week shifted the focus from foundational concepts and architecture to practical workflows—both for building agents and for improving how architects engage with customers during discovery.
