📦 BoxFinder — Finding Stuff in Real Life with GitHub Copilot CLI
Source: Dev.to
Submission for the GitHub Copilot CLI Challenge
What I Built
I built BoxFinder, an iOS app that helps track physical storage containers—those mystery boxes in closets, garages, or storage units that everyone forgets about.
The MVP focuses on three core questions:
- What does this box look like? → box photo
- Where is it stored? → location photo
- What’s inside? → item photos + tags
Users can:
- Create boxes with photos
- Add photos of the contents
- Manually tag items (auto‑tagging later 😉)
- Search by keywords to quickly find which box contains something—and where that box lives
Demo
📸 Screenshots
Main tabs: Boxes, Search, and Settings. The Boxes screen lists stored containers with photos and locations, the Search screen lets users browse by box name or location, and the Settings screen shows app info and support options.
Screenshots of creating and managing a box, including adding photos of the box and its location, viewing items inside, and deleting a box with a confirmation alert.
My Experience with GitHub Copilot CLI
🧭 Starting from a Spec
Before writing code, I drafted a simple product spec to guide development:
```
# BoxFinder MVP Spec (iOS 17+)

Goal:
- Track storage containers ("boxes") with:
  1. box photo (how the box looks)
  2. location photo (where the box is stored)
- Add item photos to a box (photo of contents).
- Each item photo has tags (auto later; manual for MVP).
- Search by keyword → show matching boxes + where they are.

Tech:
- SwiftUI
- SwiftData
- Photos stored in app Documents directory; DB stores relative file paths.

Core screens:
- TabView:
  - Boxes: list, create box, box detail
  - Search: search by tags/name, show results
```
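The two models named in the spec can be sketched in SwiftData roughly like this. This is my own illustrative sketch, not Copilot's exact output; the property names are assumptions.

```swift
import Foundation
import SwiftData

// Hypothetical sketch of the two spec models; property names are illustrative.
@Model
final class Container {
    var name: String
    var boxPhotoPath: String?       // relative path under Documents (how the box looks)
    var locationPhotoPath: String?  // relative path under Documents (where it is stored)
    var createdAt: Date
    // Deleting a box also deletes its item photos.
    @Relationship(deleteRule: .cascade) var items: [ItemPhoto]

    init(name: String) {
        self.name = name
        self.createdAt = Date()
        self.items = []
    }
}

@Model
final class ItemPhoto {
    var photoPath: String   // relative path under Documents
    var tags: [String]      // manual tags for the MVP; auto-tagging later
    var container: Container?

    init(photoPath: String, tags: [String] = []) {
        self.photoPath = photoPath
        self.tags = tags
    }
}
```

Keeping only relative paths in the models is what lets the database survive reinstalls, since the absolute path to the app's Documents directory changes between installs.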
I fed this spec to Copilot CLI with the following command:
```
copilot -i "Read SPEC.md. Propose the minimal set of Swift files to implement:
- SwiftData models (Container, ItemPhoto)
- a PhotoStore to save/load images to Documents and return relative paths
- a basic TabView with Boxes list and Search screen
Output a step-by-step plan with file names and code blocks per file."
```
Copilot generated a reasonable file layout, SwiftData models, and a first pass at the UI structure. I then loaded everything into Xcode and iterated by pasting compiler errors and simulator issues back into Copilot.
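The PhotoStore idea from the spec (save images into Documents, store only relative paths in the database) looks roughly like this. Again, a sketch under my own assumptions rather than Copilot's exact code:

```swift
import UIKit

// Minimal sketch of a PhotoStore that writes JPEGs into Documents/Photos
// and hands back a relative path for the database to store.
struct PhotoStore {
    private static var documentsURL: URL {
        FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    }

    /// Saves an image and returns its path relative to Documents, e.g. "Photos/<uuid>.jpg".
    static func save(_ image: UIImage) throws -> String {
        let relativePath = "Photos/\(UUID().uuidString).jpg"
        let fileURL = documentsURL.appendingPathComponent(relativePath)
        try FileManager.default.createDirectory(
            at: fileURL.deletingLastPathComponent(),
            withIntermediateDirectories: true
        )
        guard let data = image.jpegData(compressionQuality: 0.8) else {
            throw CocoaError(.fileWriteUnknown)
        }
        try data.write(to: fileURL, options: .atomic)
        return relativePath
    }

    /// Resolves a stored relative path back to an image, or nil if the file is missing.
    static func load(relativePath: String) -> UIImage? {
        UIImage(contentsOfFile: documentsURL.appendingPathComponent(relativePath).path)
    }
}
```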
🛠 Debugging & Platform Friction
A real‑world challenge: I wanted to test on my own phone, which only supports iOS 16. My original spec said iOS 17+, so a lot of generated code used newer APIs, producing errors such as:
Views only available in iOS 17: AddItemView, BoxesListView, ItemPhotosGridView, SearchView
I had to:
- Convert APIs back to iOS 16‑compatible patterns.
- Ask Copilot to downgrade the features.
- Eventually update the spec itself to say iOS 16+.
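To give a flavor of what this downgrading looks like (an illustrative example; my app's actual errors involved different views), the two-parameter `.onChange(of:)` closure is iOS 17-only, so targeting iOS 16 means falling back to the single-parameter form:

```swift
import SwiftUI

// Illustrative iOS 17 → iOS 16 downgrade; not the exact APIs my project hit.
struct SearchField: View {
    @State private var query = ""
    var onQueryChange: (String) -> Void

    var body: some View {
        TextField("Search boxes", text: $query)
            // iOS 17+: .onChange(of: query) { oldValue, newValue in ... }
            // iOS 16-compatible single-parameter form:
            .onChange(of: query) { newValue in
                onQueryChange(newValue)
            }
    }
}
```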
That back‑and‑forth took more time than expected, but it highlighted how strongly the model follows the spec—sometimes too strongly.
📷 Camera vs. Photo Picker
Another iteration was enabling taking photos directly, not just selecting from the photo library. I explicitly asked:
“Why can I only select photos? Add the function for taking photos.”
This required several rounds of fixes across the image picker and permissions logic.
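Taking photos directly in SwiftUI still means wrapping UIKit's `UIImagePickerController`. The fix we converged on was along these lines (a sketch of the standard wrapper pattern, not my app's exact code; it also needs the `NSCameraUsageDescription` key in Info.plist for the permission prompt):

```swift
import SwiftUI
import UIKit

// Sketch of a camera wrapper; requires NSCameraUsageDescription in Info.plist.
struct CameraPicker: UIViewControllerRepresentable {
    var onImage: (UIImage) -> Void
    @Environment(\.dismiss) private var dismiss

    func makeUIViewController(context: Context) -> UIImagePickerController {
        let picker = UIImagePickerController()
        picker.sourceType = .camera   // .photoLibrary was the original, selection-only path
        picker.delegate = context.coordinator
        return picker
    }

    func updateUIViewController(_ uiViewController: UIImagePickerController, context: Context) {}

    func makeCoordinator() -> Coordinator { Coordinator(self) }

    final class Coordinator: NSObject, UIImagePickerControllerDelegate,
                             UINavigationControllerDelegate {
        let parent: CameraPicker
        init(_ parent: CameraPicker) { self.parent = parent }

        func imagePickerController(_ picker: UIImagePickerController,
                                   didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
            if let image = info[.originalImage] as? UIImage {
                parent.onImage(image)
            }
            parent.dismiss()
        }

        func imagePickerControllerDidCancel(_ picker: UIImagePickerController) {
            parent.dismiss()
        }
    }
}
```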
🎨 UI Feedback (Where It Struggled)
I’ll be honest: UI polish was the weakest part of the experience. Even after asking Copilot CLI (using the claude‑haiku‑4.5 model) to optimize the UI, the app still felt:
- Visually dated
- Very default‑SwiftUI
- Lacking modern spacing, hierarchy, and personality
Describing UI problems purely in text was tough. I often wanted to attach screenshots and say, “This part looks weird—how do I fix it?” That’s something ChatGPT does better today, since I can show visuals directly.
🌍 Multilingual Surprises
I’m a native Chinese speaker, so I tested asking some questions in Chinese mid‑development. It mostly worked, but at one point Copilot changed UI strings into Chinese inside the app 😅. Not wrong, but definitely unexpected—the model didn’t always stay consistent about localization boundaries.
🤔 Overall Takeaways
Where GitHub Copilot CLI struggled:
- UI refinement
- Long debugging loops for certain functions
- No way to reason over screenshots
Compared to my recent experiments building three other iOS tools with ChatGPT, GitHub Copilot CLI required more manual fixes and cleanup. To be fair, some of these problems may simply come down to me still learning how to use the tool 🥲. Still, it was impressive to drive an entire MVP mostly from the spec and a few CLI prompts.
Final Thoughts
BoxFinder is still rough, but it already answers a problem I actually have:
“Which box did I put that cable in… and where is it stored?”
This challenge pushed me to:
- write clearer specs
- rely on CLI‑based AI workflows
- treat Copilot more like a junior engineer that needs careful direction
Thanks for the challenge; this was a fun way to stress‑test an AI‑first iOS workflow.

