Smaller Self-Contained Units: Writing Code That AI Can Work With
Source: Dev.to
Introduction
As AI becomes more involved in software development, an important question arises: how should software be shaped so that AI tools can contribute meaningfully? Modern AI systems can assist with coding and refactoring, but they often struggle when a codebase is large and many parts depend on one another. Even if future models can read far more context at lower cost, there is practical value in shaping software as a collection of smaller, self-contained units.
Benefits of Smaller Self‑Contained Units
AI tools tend to perform better when the area they work on is narrow and clearly defined. This usually means:
- One clear responsibility
- Minimal unnecessary dependence on other parts
- Predictable logic
- Behaviour that can be tested or understood on its own
When these properties exist, intent becomes clearer, misunderstandings are reduced, and the cost of using AI tools is lower. Larger structures often require more explanation, more context, and more caution.
This approach benefits development even without AI involvement and aligns well with how AI systems reason about behaviour.
Characteristics of a Self‑Contained Unit
A small self-contained unit is not just a smaller file; it is a piece of behaviour that can stand on its own, without requiring an understanding of the entire system. Typically such a unit has:
- A focused purpose
- Clear inputs and outputs
- Very little shared or hidden state
- Tests that express expected behaviour
These qualities reduce the amount of context needed before making a change, making it easier to improve or replace individual parts without affecting unrelated areas.
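As a rough sketch of what such a unit can look like (TypeScript here, with hypothetical names like LineItem and orderTotal), a focused function together with a small test already covers all four points:

```ts
import { strict as assert } from "node:assert";

// Hypothetical, self-contained pricing unit: one focused purpose,
// explicit inputs and outputs, and no shared or hidden state.
export interface LineItem {
  unitPrice: number;
  quantity: number;
}

// Pure function: its entire behaviour is visible from the signature and body.
export function orderTotal(items: LineItem[], discountRate = 0): number {
  const subtotal = items.reduce(
    (sum, item) => sum + item.unitPrice * item.quantity,
    0
  );
  return subtotal * (1 - discountRate);
}

// Small tests that express the expected behaviour on their own.
assert.equal(orderTotal([{ unitPrice: 10, quantity: 2 }]), 20);
assert.equal(orderTotal([{ unitPrice: 10, quantity: 2 }], 0.5), 10);
```

Nothing outside this snippet needs to be read to understand or change it, which is exactly the property that keeps the cost of a change low.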
Patterns That Encourage Clear Boundaries
Certain well‑known patterns naturally encourage clear boundaries and focused behaviour. They are useful in environments where interaction, state, and visual changes happen often.
- State machines – By expressing behaviour through states and transitions, logic becomes visible and easy to understand. Each state forms a clear and predictable unit (see the sketch below).
- Feature modules – Features can be created as independent pieces that attach to the rest of the system through simple boundaries, reducing unnecessary entanglement.
- Pure functions – Small pure functions free of side effects form dependable building blocks. They are easy to test and easy for AI tools to understand because their entire behaviour is contained within the function itself.
These approaches are not tied to any specific framework or paradigm; they simply support the idea of shaping software as smaller, understandable parts.
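As a small, illustrative sketch (TypeScript; the fetch-lifecycle states and the transition helper are assumptions, not tied to any library), here is a state machine whose transitions live in plain data and are driven by a single pure function, combining two of the patterns above:

```ts
import { strict as assert } from "node:assert";

// Hypothetical fetch-lifecycle state machine: states and events are explicit,
// and the whole behaviour is contained in one small unit.
type State = "idle" | "loading" | "success" | "failure";
type Event = "FETCH" | "RESOLVE" | "REJECT" | "RETRY";

// Transitions are plain data: easy to read, test, and extend.
const transitions: Record<State, Partial<Record<Event, State>>> = {
  idle:    { FETCH: "loading" },
  loading: { RESOLVE: "success", REJECT: "failure" },
  success: {},
  failure: { RETRY: "loading" },
};

// Pure function: given the current state and an event, return the next state.
// Events with no defined transition leave the state unchanged.
export function transition(state: State, event: Event): State {
  return transitions[state][event] ?? state;
}

// Behaviour can be verified without touching the rest of the system.
assert.equal(transition("idle", "FETCH"), "loading");
assert.equal(transition("loading", "REJECT"), "failure");
assert.equal(transition("success", "FETCH"), "success");
```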
Problematic Patterns
Certain styles tend to require far more context before making any change, raising the cognitive load for both humans and AI systems:
- Deep inheritance structures
- Classes with many responsibilities
- Behaviour scattered across many files
- Shared global state and heavy side effects
These patterns are not inherently wrong, but they make it harder for AI tools to operate efficiently.
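To make the contrast concrete, here is a hedged sketch (TypeScript, hypothetical names) of the same behaviour written against shared global state and then rewritten with an explicit input:

```ts
// Shared, hidden state: changing this safely requires knowing who else reads
// or writes `currentUser`, which is exactly the context an AI tool lacks.
let currentUser: { name: string } | null = null;

function greetingFromGlobal(): string {
  return currentUser ? `Hello, ${currentUser.name}` : "Hello, guest";
}

// Self-contained alternative: the dependency is an explicit input, so the
// function can be read, tested, and changed in isolation.
function greeting(user: { name: string } | null): string {
  return user ? `Hello, ${user.name}` : "Hello, guest";
}

currentUser = { name: "Ada" };
console.log(greetingFromGlobal()); // depends on state set somewhere else
console.log(greeting({ name: "Ada" })); // depends only on its argument
```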
Gradual Refactoring Approach
Adopting smaller units does not require a full rebuild; it can grow gradually:
- Extract focused areas from large, complex sections (see the sketch after this list).
- Introduce clear boundaries where communication happens.
- Reduce unnecessary shared state.
- Use tests to express the intended behaviour.
- Improve structure continuously rather than all at once.
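As one possible illustration of the first two steps (TypeScript; handleSignup, normalizeEmail, and saveToDatabase are hypothetical names), a focused unit can be extracted from a larger handler while the persistence boundary stays where it was:

```ts
// Before: validation, normalization, and persistence are tangled together,
// so any change requires understanding all three concerns at once.
async function handleSignup(email: string) {
  if (!email.includes("@")) throw new Error("invalid email");
  const normalized = email.trim().toLowerCase();
  await saveToDatabase({ email: normalized });
}

// After: the focused, testable part becomes its own unit with clear inputs
// and outputs; the boundary to persistence stays where it was.
export function normalizeEmail(email: string): string {
  if (!email.includes("@")) throw new Error("invalid email");
  return email.trim().toLowerCase();
}

async function handleSignupRefactored(email: string) {
  await saveToDatabase({ email: normalizeEmail(email) });
}

// Hypothetical persistence boundary, declared only so the sketch is self-contained.
async function saveToDatabase(record: { email: string }): Promise<void> {
  /* persistence omitted in this sketch */
}
```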
Smaller units produce smaller prompts for AI tools, reduce cost, and improve reliability. The same structure also benefits long‑term maintenance and human understanding.
Conclusion
As AI becomes a more active part of software creation, our architectural choices may shift toward structures that support clearer interaction between humans and AI systems. Smaller self‑contained units offer a promising direction: they reduce cognitive effort, strengthen boundaries, and make behaviour easier to understand.