We just shipped a mobile-friendly version of Oh Dear. It touched 226 files, added over 5,000 lines, and modified 160+ Blade templates. The PR took three weeks from first commit to merge.
AI got us maybe 60% of the way there in a single evening. The remaining 40% took three weeks of manual work from three developers.
That ratio is a pretty good summary of where AI is useful today, and where it isn’t.
## Chief and the initial scaffold
We used Chief, a CLI tool that breaks big projects into numbered tasks and runs Claude Code in a loop until they’re done. You describe the project, Chief splits it into user stories, and each story gets its own commit.
For the Oh Dear mobile project, Chief generated 15 user stories:
- US-001: Add viewport meta tag, remove the hardcoded minimum width
- US-002: Make the navbar responsive with a hamburger menu
- US-003 through US-012: Make every section responsive (monitor list, detail pages, check results, settings, status pages, billing, profile, wizard)
- US-013: Make modals and dropdowns mobile-friendly
- US-014: Make maintenance periods and reports responsive
- US-015: Ensure touch targets meet the 44px minimum
All 15 stories were committed in a single evening. Each commit is co-authored by Claude Opus 4.6. Then a second pass of 6 more commits fixed issues from the first round.
That’s 21 AI-generated commits, touching the entire application, in roughly 3 hours.
## Where AI falls short
The AI output was structurally correct. It added responsive Tailwind classes. It wrapped tables in scrollable containers. It stacked layouts vertically on small screens. If you squinted at it on a phone, it looked like a mobile app.
But it didn’t feel right. And feeling right is what separates a mobile app from a responsive afterthought.
Here’s what AI doesn’t know about Oh Dear: when your site goes down at 3am and you’re checking the alert on your phone, you need specific information immediately. Which check failed? When did it start? What’s the error? The AI made everything fit on a phone screen, but it didn’t know what mattered most in each context.
That’s domain knowledge. That’s years of building and using the product. No prompt can substitute for it.
## The human work
After the AI scaffold, three of us (Freek, Nick, and I) spent the next three weeks doing the actual mobile work. Looking at the git log, the pattern is clear: the AI commits are neat numbered user stories, and then it’s weeks of commits titled “wip”, “tweaks”, “cleanup”, and “fixing what was broken”.
Some examples of what humans had to do:
Rethinking layouts, not just shrinking them. Nick built entirely separate mobile card layouts for the monitor list page. The AI had just squished the desktop table into a narrower viewport. The right answer was a completely different component, showing status indicators, tags, and performance summaries as cards instead of table columns.
Fixing bugs the AI introduced. I found a malformed class attribute (missing quote) in the monitor list row. A wrong segment check (‘sites’ vs ‘monitors’) in the mobile menu active state. Duplicated @php blocks. A {!! $slot !!} that should have been {{ $slot }}. Dead CSS breakpoint classes that referenced a mg: prefix that doesn’t exist.
UX decisions that require product context. We renamed “Request new run” to “Check again”, because that’s what a client would say, not a developer. We consolidated the “last checked X hours ago” indicator into a pill on the page title instead of a separate line. We made entire cron check rows clickable on mobile, because on a small screen you shouldn’t have to tap a tiny link.
Designing for urgency. We added mobile back buttons on detail pages, because when you’re deep in a certificate health report on your phone, you need a way out. We made the “post a new status update” button prominent on mobile status pages, because during an incident, that’s the one action that matters. We refactored copy-to-clipboard so it works properly on mobile, because when you’re debugging on your phone and need to share a URL or error with a colleague, that has to work on the first tap.
Attention to detail. The scrollable table fade-out effect (a gradient hint on the right edge that disappears when you’ve scrolled to the end) went through multiple iterations. Alert boxes and notification bubbles were redesigned for smaller screens.
None of this came from prompts. It came from using the product on a phone and noticing what felt wrong.
## The technical approach
Two patterns emerged for handling mobile layouts:
Dual layouts for complex pages. The monitor list, for instance, has completely separate desktop and mobile templates. On desktop you get a data-dense table. On mobile you get cards. This isn’t a CSS media query toggle; it’s separate Blade components rendered conditionally. More code, but a much better experience.
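A minimal sketch of that dual-layout pattern, with hypothetical component names (`x-monitor-table`, `x-monitor-card`) and an assumed `$isMobile` flag; Oh Dear’s actual components and switching mechanism may well differ:

```blade
{{-- Hypothetical sketch: two separate Blade trees, chosen server-side. --}}
{{-- $isMobile is an assumed flag (e.g. from user-agent detection); --}}
{{-- the component names are placeholders, not Oh Dear's real code. --}}
@if ($isMobile)
    <div class="space-y-3">
        @foreach ($monitors as $monitor)
            <x-monitor-card :monitor="$monitor" />
        @endforeach
    </div>
@else
    <x-monitor-table :monitors="$monitors" />
@endif
```

The cost of this approach is two templates to keep in sync whenever a field changes, which is presumably why it’s reserved for the complex, data-dense pages.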
Scrollable tables for simpler data. Pages like DNS history or cron log items use a horizontal scroll container with that gradient fade hint I mentioned. It’s a lighter approach that works well when the data structure is inherently tabular.
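One way to build that fade hint, sketched here with Tailwind and Alpine.js (an assumption on my part; the actual implementation may track scroll position differently):

```blade
{{-- Hypothetical sketch: a horizontally scrollable table with a gradient --}}
{{-- on the right edge that fades out once the user has scrolled to the end. --}}
<div class="relative" x-data="{ atEnd: false }">
    <div class="overflow-x-auto"
         @scroll="atEnd = $el.scrollLeft + $el.clientWidth >= $el.scrollWidth - 1">
        <table class="min-w-full">
            <!-- wide tabular data, e.g. DNS history rows -->
        </table>
    </div>
    {{-- Gradient overlay: white at the right edge, transparent to the left. --}}
    <div class="pointer-events-none absolute inset-y-0 right-0 w-8
                bg-gradient-to-l from-white transition-opacity"
         :class="{ 'opacity-0': atEnd }"></div>
</div>
```

The `pointer-events-none` on the overlay matters: without it, the gradient would swallow taps on the last visible column.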
We also replaced the traditional navbar with a floating action button in the bottom-right corner. On mobile, navigation needs to be within thumb reach, not hidden behind a hamburger icon at the top of the screen.
## What this taught me about AI-assisted development
I’ve written before about how AI made web development fun again. I stand by that. But this project showed me something more nuanced.
AI is great at the tedious parts. Adding responsive utility classes to 160 templates? Nobody wants to do that by hand. Wrapping every table in a scroll container? Perfect AI work. Making sure touch targets are 44px minimum? Tedious, repetitive, and exactly the kind of thing an LLM excels at.
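For the touch-target work, the fix is usually a one-line class change repeated across many templates. A hedged sketch of what that looks like in Tailwind (step 11 on Tailwind’s spacing scale is 2.75rem, i.e. 44px at the default root font size; `min-h-*`/`min-w-*` spacing utilities assume Tailwind v3.4+):

```blade
{{-- Hypothetical sketch: pad a small control up to the 44px minimum --}}
{{-- without changing its visual size much. min-h-11 = 2.75rem = 44px. --}}
<button class="min-h-11 min-w-11 inline-flex items-center justify-center">
    Check again
</button>
```

Exactly the kind of mechanical, verifiable change that is safe to hand to an LLM.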
AI is bad at the creative parts. What information should a mobile user see first? Should this be a card or a table? Is “Request new run” the right label? Should the back button be in the header or inline? These are product decisions that require context an AI simply doesn’t have.
The mistake would be thinking the AI part is the hard part. It’s not. The scaffolding is the easy part. The hard part is the three weeks of tweaking, testing on real devices, arguing about layouts, and refining details until it feels right.
## AI changes where you spend your time
Before AI, making Oh Dear mobile-friendly would have been a project I’d estimate at 6-8 weeks. With AI, it took 3 weeks. The savings are real, but they didn’t come from eliminating work. They came from skipping the boring parts so we could focus on the parts that actually matter.
The 21 AI-generated commits were the foundation. The 50+ human commits were the house. You need both, but only one of them is the reason people will enjoy using Oh Dear on their phone.
If you want to see the end result, Oh Dear is now mobile-friendly. Go check your sites on your phone.