
sideQuest: When AI Meets the Open Road

Building an on-device AI road trip companion at TUM's iPraktikum - with 8 students, one industry partner, and a toy Tesla

17 min read · 08.04.2026 · Justin Lanfermann
Team sideQuest on stage at the iPraktikum CAT presentation

At TUM, there's a course where you don't get a textbook. You get a client. iPraktikum pairs teams of students with real industry partners, hands them a vague brief and three months of sprint cycles, and expects a functional prototype by the end. No hand-holding, no toy assignments, just a room full of people who need to ship software that someone actually asked for.

I'd come off a good start. A few weeks earlier, during the introcourse, I'd built RadioAtlas and somehow walked away with first place. That felt great, but the introcourse is a solo sprint. iPraktikum is a completely different beast: eight people, a coach, an industry partner with real expectations, and weekly deadlines that don't care if you had a rough Tuesday. The stakes weren't just grades. We were building something a company wanted to actually use.

Our partner was Quartett mobile, and the brief sounded deceptively simple: build a smart road trip companion for iOS. The app would suggest personalized stops along your route using on-device AI. No servers, no cloud, no data leaving the phone. Just Apple Intelligence running locally on your device, trying to figure out where you and your passengers might want to stop.

The product name was sideQuest. The motto: "Turn Miles into Memories." And for the next three months, it kept us busy.

sideQuest promotional poster on a corkboard showing the app UI with a route to Neuschwanstein
The sideQuest poster: personalized stops, on-device AI, CarPlay integration.

The Brief: Turn Miles into Memories

Quartett mobile's pitch was rooted in a simple observation: road trips are boring between the starting point and the destination. You drive for hours, maybe stop at a gas station, and the most interesting thing that happens is an argument over which podcast to play. They wanted an app that could turn the in-between into the actual experience.

sideQuest Plan Trip screen showing a route from Munich to Berlin with four passengers
Planning a road trip to Berlin with four passengers.

The brief came with two persona scenarios that grounded everything we built. The first was Eve, a university student planning a trip from Cologne to Amsterdam with two friends. They want fun stops and to learn something along the route, with a 9pm arrival deadline. The second was Charlie, a parent of two kids aged six and ten, driving the family from Cologne to Munich. They create a profile, set an 8pm arrival time, and the app suggests events and small stops along the way, nudging them to get going again when time runs low. Beyond the personas, the optional requirements pushed further: CarPlay integration with suggestions appearing as action sheets on the infotainment display, and a Live Activity on the lock screen showing distance and a countdown timer.

And then the constraint that shaped everything: no servers. All AI processing had to happen on-device using Apple's FoundationModels framework. No data leaves the phone. No API keys, no server costs, no rate limits. Complete privacy. This sounds liberating until you realize an on-device model has a fraction of the context window and reasoning capability of something like GPT-5 or Claude. We were building an AI-powered app with what is essentially a very smart calculator.

The team immediately started poking holes. Jonathan pointed out that MapKit doesn't support turn-by-turn navigation or voice guidance, two things people expect from anything map-related. I asked about battery consumption, because running foundation models on-device is not exactly gentle on your phone. Christian raised questions about handling time constraints and route deviations. And there was this interesting tension baked into the requirements: "no server component" on one hand, but "use reliable internet sources to enhance suggestions" on the other. We spent a solid chunk of the first week just figuring out what the brief actually meant.

Designing the Brain: The AI Itinerary Engine

The core of sideQuest is a 4-phase AI pipeline that turns freeform passenger preferences into ranked, actionable stops along your route. Getting here took weeks of iteration, dozens of broken prompts, and more than a few moments of wondering if on-device AI was even up to the task.

Phase 0: Passenger Analysis

sideQuest suggesting Garching Forschungszentrum as a stop with Add Stop and Skip buttons
The AI suggests a stop. Skip it or add 30 minutes to your trip.

Each passenger has a freeform preferences field. Something like "loves old castles, can't walk long distances" or "needs to stretch legs, traveling with a toddler." The on-device LLM parses this into structured data: 5-8 searchable keywords per passenger, plus dealbreakers. "Loves the outdoors and quiet spots" becomes ["park", "forest", "nature_reserve", "viewpoint", "hiking"]. It's surprisingly good at this, even on-device.

Dealbreakers work the other way: they're hard filters that eliminate results before the AI even sees them. "Wheelchair user" flags all stops without accessibility data. "Traveling with a dog" deprioritizes indoor venues. The system doesn't just find what you want, it knows what to avoid.
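As a rough sketch of that hard-filter step, here is what eliminating POIs before the model ever ranks them could look like. The type and function names are illustrative, not the app's actual code, and for simplicity this version drops indoor venues for dog owners outright rather than merely deprioritizing them:

```swift
// Hypothetical sketch: dealbreakers are hard filters applied before ranking.
struct POI {
    let name: String
    let isWheelchairAccessible: Bool?  // nil = no accessibility data
    let isIndoor: Bool
}

enum Dealbreaker {
    case requiresWheelchairAccess
    case travelingWithDog
}

func hardFilter(_ pois: [POI], dealbreakers: [Dealbreaker]) -> [POI] {
    pois.filter { poi in
        for rule in dealbreakers {
            switch rule {
            case .requiresWheelchairAccess:
                // Missing accessibility data counts as a failure, not a maybe.
                if poi.isWheelchairAccessible != true { return false }
            case .travelingWithDog:
                // Simplification: drop indoor venues entirely.
                if poi.isIndoor { return false }
            }
        }
        return true
    }
}
```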

Phase 1: POI Search (The Tool-Calling Pattern)

This is where it gets interesting. The AI doesn't call APIs directly. Instead, it generates a SearchPlan specifying city + category pairs, and the app executes those searches in parallel against the Overpass API. It's a tool-calling pattern: the AI decides what to search for, but the app does the actual network calls. Up to 6 concurrent searches, 5 results each, spanning at least 3 different cities along the route.

Before the AI can plan anything, the app extracts 5-8 city waypoints from the route polyline using reverse geocoding. The sampling interval adapts to trip length: roughly 40km for short trips under 200km, scaling up to 100-120km for drives over 600km. This gives the AI a geographic vocabulary to work with.
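The adaptive interval might be sketched like this. The thresholds match the numbers above; the function name and the linear interpolation in the middle band are my assumptions:

```swift
// Adaptive waypoint sampling: shorter trips sample more densely.
func samplingIntervalKm(forTripLength lengthKm: Double) -> Double {
    switch lengthKm {
    case ..<200: return 40     // short trips: roughly every 40 km
    case 600...: return 110    // long drives: somewhere in the 100-120 km band
    default:
        // Assumed: scale linearly between the two bands for mid-length trips.
        let t = (lengthKm - 200) / 400
        return 40 + t * (110 - 40)
    }
}
```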

Apple's @Generable macro was central to making this work. It lets you define Swift structs that the on-device model knows how to produce as structured output. The @Guide annotations are essentially inline prompt engineering, telling the model exactly what each field means and what constraints to respect:
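A sketch of what such a struct could look like, with invented field names and guide text rather than sideQuest's actual definitions:

```swift
import FoundationModels

// Illustrative only: the shape of a @Generable search plan.
// Field names and @Guide wording are assumptions, not the app's code.
@Generable
struct SearchPlan {
    @Guide(description: "City + category pairs to search. Cover at least 3 different cities along the route.")
    let searches: [PlannedSearch]
}

@Generable
struct PlannedSearch {
    @Guide(description: "A city name taken from the extracted route waypoints.")
    let city: String

    @Guide(description: "An OpenStreetMap-style POI category, e.g. castle, park, museum.")
    let category: String
}
```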

The beauty of this pattern is that the AI's output is type-checked at generation time. If the model produces something that doesn't match the struct, it fails immediately instead of silently corrupting downstream logic. Here's how the actual search execution works, using Swift's structured concurrency to fire off all searches in parallel:
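Sketched with a task group, and with the actual Overpass call injected as a closure so the concurrency logic stands alone (the names and exact shape are assumptions):

```swift
struct FoundPOI { let name: String; let city: String }

// Fan out the planned searches, capped at 6 concurrent tasks,
// keeping at most 5 results from each.
func executeSearches(
    _ plan: [(city: String, category: String)],
    search: @escaping @Sendable (String, String) async -> [FoundPOI]
) async -> [FoundPOI] {
    await withTaskGroup(of: [FoundPOI].self) { group in
        for task in plan.prefix(6) {
            group.addTask { await search(task.city, task.category) }
        }
        var all: [FoundPOI] = []
        for await results in group {
            all.append(contentsOf: results.prefix(5))
        }
        return all
    }
}
```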

Phase 2: Stop Selection and Ranking

Once all searches return, the AI receives every cached POI and selects the best 2-3 diverse stops. Each stop gets a score (0.0-1.0), a reason, a suggested duration, and pre-planned activities. Deduplication ensures each stop is in a different city with a different primary purpose. Maximum one food stop, no repeats.

After selection, an enrichment phase queries nearby facilities within 600m (toilets, parking, WiFi, wheelchair access) and activities within 1200m (museums, viewpoints, parks). Wikipedia and Wikidata provide descriptions and images through three fallback strategies. Activities get priority-scored, with museums at the top and shopping at the bottom.

Buffer-Time Monitoring

The pipeline doesn't stop after the initial suggestions. sideQuest continuously monitors your time budget. After a 20-minute initial delay (to let the route stabilize), it checks every 10 minutes. If you have more than 20 minutes of buffer remaining, it suggests an additional stop. If your buffer jumps by 30+ minutes, say because you skipped a 45-minute stop, it triggers a complete replan with fresh context.
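The decision logic behind each check can be sketched in a few lines. The thresholds mirror the text; the enum and function names are illustrative:

```swift
enum MonitorAction: Equatable {
    case none
    case suggestAdditionalStop
    case fullReplan
}

// Called every 10 minutes once the initial 20-minute delay has passed.
func bufferAction(bufferMinutes: Double, previousBufferMinutes: Double) -> MonitorAction {
    // A jump of 30+ minutes (e.g. a skipped stop) triggers a full replan.
    if bufferMinutes - previousBufferMinutes >= 30 { return .fullReplan }
    // More than 20 minutes of slack leaves room for one more stop.
    if bufferMinutes > 20 { return .suggestAdditionalStop }
    return .none
}
```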

The Full Stack: One Map, Many Surfaces

Apple Maps showing the full route from Munich to Berlin with waypoints
The full Munich-to-Berlin route in Apple Maps.

If you opened sideQuest's codebase expecting a typical SwiftUI app with tab bars and navigation stacks, you'd be confused. The entire UI is a single full-screen map with a sheet on top. No TabView, no router, no coordinator pattern. Just a SheetState enum driving which view appears in the sheet, while the map stays interactive behind it via .presentationBackgroundInteraction(.enabled).

That enum has nine cases, each mapping to a specific view and presentation detent. From the initial profile setup, through search and ride details, to navigation, at-stop experiences, and trip completion, the entire user journey is a state machine. ContentView owns around 20 @State properties, with state flowing down via bindings and environment, and events flowing up via closures. It sounds heavy, but the single-view architecture means you never lose context. The map is always there, always interactive, always showing your route.
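A plausible shape for that state machine, with the nine case names reconstructed as guesses from the journey described above (and a framework-free stand-in for SwiftUI's PresentationDetent so the sketch stays self-contained):

```swift
// Guessed case names, not the app's actual code.
enum SheetState {
    case profileSetup, search, rideDetails, suggestion, planning,
         navigation, atStop, tripComplete, settings

    // Stand-in for SwiftUI's PresentationDetent.
    enum Detent: Equatable { case medium, large, fraction(Double) }

    // Each state picks its own sheet height, keeping the map visible behind it.
    var detent: Detent {
        switch self {
        case .navigation: return .fraction(0.25)
        case .atStop, .rideDetails: return .large
        default: return .medium
        }
    }
}
```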

Dual POI System with Failover

For POI data, we built a dual-source system. The Overpass API (OpenStreetMap) gives us exhaustive spatial coverage with flexible tag queries. Apple Maps via MKLocalSearch gives us rich metadata and curated results. The app queries Overpass for what exists, then enriches each result with Apple Maps metadata. Since Overpass runs on community servers that can go down or throttle you, we implemented a 3-endpoint failover chain: if the main server fails, it tries two backups before giving up.
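The chain itself is a few lines once the network call is injected. This is a sketch under assumed names; the endpoint URLs are public Overpass mirrors and should be treated as illustrative:

```swift
// Illustrative mirror list; the app's actual endpoints may differ.
let overpassEndpoints = [
    "https://overpass-api.de/api/interpreter",
    "https://overpass.kumi.systems/api/interpreter",
    "https://overpass.openstreetmap.ru/api/interpreter",
]

enum OverpassError: Error { case allEndpointsFailed }

// Try each endpoint in order; first success wins.
func queryWithFailover<T>(
    endpoints: [String] = overpassEndpoints,
    fetch: (String) async throws -> T
) async throws -> T {
    for endpoint in endpoints {
        if let result = try? await fetch(endpoint) { return result }
    }
    throw OverpassError.allEndpointsFailed
}
```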

CarPlay Integration

CarPlay runs in a completely separate UIScene managed by its own delegate. It has no access to SwiftUI's view hierarchy or @Environment. Communication between the main app and CarPlay happens entirely through NotificationCenter: the phone pushes trip and suggestion updates to CarPlay, and CarPlay sends back suggestion decisions. The phone is always the single source of truth. CarPlay is read-only. Suggestion notifications auto-dismiss after 15 seconds, because drivers shouldn't be staring at prompts.
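The bridge boils down to named notifications in both directions. The notification names and payload keys here are invented for illustration; the real app's names may differ:

```swift
import Foundation

extension Notification.Name {
    static let tripDidUpdate = Notification.Name("sideQuest.tripDidUpdate")
    static let suggestionDecision = Notification.Name("sideQuest.suggestionDecision")
}

// Phone side: publish the latest trip state for the CarPlay scene to render.
func pushTripUpdate(_ payload: [String: Any]) {
    NotificationCenter.default.post(name: .tripDidUpdate, object: nil, userInfo: payload)
}

// CarPlay side: report a driver decision back. The phone remains the
// single source of truth and applies the decision to the model.
func sendSuggestionDecision(accepted: Bool) {
    NotificationCenter.default.post(name: .suggestionDecision,
                                    object: nil,
                                    userInfo: ["accepted": accepted])
}
```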

Live Activities and the SwiftUI Timer Trick

The lock screen Live Activity shows your current stop, distance, progress, and a countdown timer. The timer updates every second, even when the app is in the background, and it doesn't need push notifications. How? One line of SwiftUI:
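The line in question is SwiftUI's interval-aware Text initializer, placed inside the Live Activity's view. Here `stopEndDate` stands in for whatever end date the activity's content state actually exposes:

```swift
// The system keeps this counting down every second, even with the
// app backgrounded, with no timers and no push updates from the app.
Text(timerInterval: Date.now...stopEndDate, countsDown: true)
```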

That's it. SwiftUI's built-in timer rendering handles the rest. The Dynamic Island shows the same data across its presentations: expanded (with full details and a progress bar), compact (icon + distance), and minimal (just the icon). If the app crashes, a 15-minute stale interval auto-expires the activity so your lock screen doesn't show zombie data forever.

Software Theatre: Where Code Meets Stage

iPraktikum has a tradition that sounds insane if you've never experienced it. Twice during the semester, every team has to present their work, but not just with slides and demos. Each team performs a software theatre: a scripted scene, acted live on stage, that demonstrates the app in a realistic scenario. Think of it as a product demo dressed up as a short film, complete with dialogue, props, and stage directions. There are two milestones: the Design Review (DR) midway through and the Client Acceptance Test (CAT) at the end.

Design Review: The Road to Berlin

For the Design Review, our four actors (Kartik, Cyrine, Raj, and Christian) performed a road trip from Garching to Berlin. Raj suggests using sideQuest to plan the trip. Cyrine wants nature, specifically "not a parking lot with one sad tree." Christian wants to see the Audi Forum in Ingolstadt. They set up the app, the AI suggests three stops, and they hit the road with our prop.

About that prop. We had a miniature red Tesla ride-on toy car. The kind a five-year-old would zoom around a driveway in. We used it as our "vehicle" for every stage scene. The dry run feedback included the note: "Nitpicking: clean the Tesla." We did not clean the Tesla.

After the theatre, Ha Vy and I presented the architecture. The dry run feedback for our section was encouraging: "AOM: Well done! Easy to follow, really good job!" That felt good. We'd spent a lot of time making complex decisions sound simple on a slide.

CAT Theatre: The Meta Road Trip

The CAT theatre was more ambitious. The concept: the entire presentation IS a sideQuest stop. The audience doesn't know this at first.

Scene 1. Corporate meeting room. Four of us (me, Ha Vy, Jonathan, Atalay) hunched over MacBooks. Max (our project lead) pokes his head through the door: "Hey, meeting's cancelled!" We all close our MacBooks in unison. Jonathan: "Finally. I was running out of ways to look busy."

I pull up sideQuest and try to search for a trip to Bielefeld. No results. "Ah true, forgot that it doesn't exist," I say. (For non-Germans: there's a running conspiracy joke that the city of Bielefeld doesn't actually exist.) I switch to Disneyland. But sideQuest suggests a stop in... Garching. Fifteen minutes away.

It says there's an... ongoing CAT presentation.

Ha Vy: "CAT? As in... the excavator thingies?" Me: "No. Don't you remember what course you picked this semester?" Ha Vy: "Why would anyone voluntarily go to a CAT presentation on their road trip to Disneyland?"

And then the line that got the biggest laugh of the night:

Note: If you skip this stop, you will receive a 5.0.

Scene 3. The sound of an engine grows louder. The presenter on stage looks confused. And then a Tesla glides onto the stage from stage right. I'm in the driver's seat. Ha Vy, Jonathan, and Atalay walk behind the vehicle. I park it right next to the podium.

Justin walking on stage during the CAT theatre scene, earpiece in place, Tesla prop visible behind
Walking on stage during the CAT theatre scene, earpiece in place, ready to "park" next to the podium.

From the audience, Dima (our coach) screams: "You can't park there!" Scripted. I hold up my phone: "According to sideQuest, this is the best entertainment possible." I tap the screen. The atStopView appears with a 30-minute countdown timer. Atalay reads the "Things to Do" section: "Watch presentation. Take notes. Question everything." I inspect the facilities: "That one's... a coffee cup with a question mark? And that one appears to be a crying emoji next to a textbook." Atalay: "Accurate."

sideQuest stop detail showing Watch Presentation, Take Notes, Question Everything as activities
The CAT as a sideQuest stop: "Watch Presentation. Take Notes."
sideQuest at-stop view with a 29-minute countdown timer and Continue Journey button
The 29-minute countdown timer ticking down during the presentation.

We sit in the front row. The Tesla stays parked beside the podium for the entire presentation. And at the end, when the actual content is done, we come back on stage. I whisper: "Wait, was this entire thing about the app?" Ha Vy: "I think this was a presentation about the thing we're using." Atalay: "Meta."

It was genuinely one of the most fun things I've done at university. The dry run feedback acknowledged it was "on the risky side" but concluded with "they like risks, it's fine." The audience gave us a standing ovation.

The Team Behind sideQuest

Eight developers, one coach, one project lead, one industry partner. Our coach was Dima Dmukh, and our project lead was Maximilian Anzinger.

I wore a few hats. Architecture presenter with Ha Vy at the Design Review. Theatre actor in both presentations, and the driver in the CAT.

What made the team work wasn't the process. It was the culture. We had weekly sprints with a new functional version each week, GitLab merge requests with automatic reviewer assignments, a 4-stage CI pipeline (assign reviewers, lint, build, release), and SwiftLint keeping everyone's code style consistent. We used XcodeGen to generate project files and avoid the classic merge-conflict nightmare that comes with .xcodeproj files.

But the sprints were just the structure. The soul of the team was the stuff that happened between the deadlines. Bowling nights. Pool nights where Dima played suspiciously precise shots. A winter trip to the Allianz Arena to film the trailer. Elevator selfies with twelve people crammed into a steel box in winter jackets, thumbs up, everyone laughing.

For the CAT, we had custom sideQuest jerseys. White shirts and numbered jerseys with "Turn Miles into Memories" and the sideQuest branding. The feedback from dry runs consistently pointed out that our branding was "the strongest of all teams." That felt earned. The jerseys, the motto, the Tesla prop, the theatre concept. Every piece of it was deliberate.

Looking Back: Lessons from the Road

Three months of building sideQuest left me with a much sharper understanding of where on-device AI shines and where it stumbles. The small context window means the model can produce repetitive or generic suggestions. It occasionally misjudges preferences in entertaining ways, like suggesting a brewery for a family with small children. We mitigated this by capping waypoints to 5-8 cities, limiting POI results to 5 per search, and writing increasingly specific prompts in the SelectionPromptBuilder. Very long trips with detailed passenger profiles could still trigger a contextWindowExceeded error. That's the ceiling you hit with on-device models, and there's no clever engineering around a hard hardware limit.

MapKit was both a blessing and a frustration. It gave us a beautiful, performant map with clustering and route rendering out of the box. But it doesn't support turn-by-turn navigation or voice guidance. Jonathan flagged this in his initial questions, and he was right. It meant sideQuest could suggest stops and show routes, but it could never be a full navigation replacement. We had to accept that and design around it.

The Overpass API was indispensable for spatial queries but unreliable in practice: it runs on community servers that return HTTP 429 when they're busy or go down entirely during peak hours. The 3-endpoint failover chain saved us multiple times during demos, but it's the kind of infrastructure dependency that keeps you nervous during live presentations.

What went right? The branding. The feedback from every dry run pointed to it: "Beautiful slides, the general slide design is on point." "You have such good branding, it is the strongest of all teams, use it more." The pipeline architecture. Separating AI decisions from API execution was the single best design choice we made. And the theatre, weaving the app into a live performance that doubled as a demo, made our work memorable in a way no slide deck could.

What was hard? Prompt engineering an on-device model is fundamentally different from working with cloud-based AI. You can't just throw more context at the problem. You have to be surgical about what information the model sees, when it sees it, and how you validate its output. The @Generable macro helped enormously with the validation side, but the prompt crafting was pure trial and error.

On a personal level, the theatre work pushed me out of my comfort zone more than any code review ever has. Standing on a stage with an earpiece, driving a toy car past a confused presenter, delivering scripted lines to a room of people. That's not something I thought I'd do at a technical university. The fact that it worked, that the audience genuinely laughed and clapped and gave us a standing ovation, was validation that software doesn't have to be presented like software.

Two sideQuest teammates seated on stage with laptops during the CAT theatre scene
Seated on stage during the CAT theatre, laptops open, pretending to look busy.

What Could Be Different

No project is ever done; it's just due. If we had another semester, here's what I'd push for:

  1. Server-side AI as an option. Cloud-based models would unlock larger context windows, better reasoning, and more consistent output. The repetitive suggestion problem largely disappears when you're not constrained by on-device compute. An opt-in cloud mode with the on-device path as a privacy-first fallback would be the best of both worlds.
  2. Proper turn-by-turn navigation. MapKit's limitations meant we couldn't compete with Apple Maps or Google Maps on navigation. Integration with a third-party navigation SDK, or Apple expanding MapKit's capabilities, would make sideQuest a true all-in-one road trip app.
  3. Real preference learning. The app already feeds accepted and declined suggestions back into the prompt for the next round. With more time, this could become a true learning loop, weighting preferences across multiple trips rather than just the current session.
  4. Full offline mode for POI data. Core journey data is persisted, but POI search still needs internet. Pre-caching POI data for a planned route before departure would make sideQuest usable in areas with spotty coverage.
  5. Weather-aware suggestions. We modeled weather conditions in a WeatherCondition enum but never connected it to a real weather API. Outdoor stops in the rain are a bad recommendation.
  6. Group profiles. Saving recurring passenger groups like "Family with Kids" or "Weekend Friends" for quick selection instead of re-entering preferences every trip.
  7. Apple Watch support. A companion app for at-stop information and basic trip controls from your wrist.

iPraktikum as a course is unlike anything else I've experienced in university. The combination of real clients, sprint deadlines, live presentations, and team dynamics creates pressure that no exam can replicate. You learn things about yourself that pure coding never teaches: how you communicate under stress, how you give and receive feedback, and whether your architectural decisions hold up when someone else has to build on top of them.

The sideQuest team on stage at the Client Acceptance Test in custom jerseys
The sideQuest team takes the stage at the Client Acceptance Test, wearing their custom "Turn Miles into Memories" jerseys.

Six teams presented at the CAT that semester: Bayerische Polizei (police fleet management), m3 management consulting (network measurement with Apple Watch + CarPlay), Quartett mobile (smart road trip companion), Siemens (machine sensor analysis on iPadOS), Maiß (gamified AI learning), and TUM LifeLong Learning (AI coaching companion). Every team shipped something real. Every theatre was different. Every demo had at least one moment where you could see months of work crystallize into something tangible on screen.

sideQuest started as a brief about making road trips fun. It ended as a working AI itinerary engine with CarPlay, Live Activities, dual POI sources, a toy Tesla, and a meta theatre concept that turned its own presentation into a stop on its own route, with enough memes to make our project lead slightly uncomfortable. The code is done for now. You can find the official project page on TUM's website.

The full iPraktikum Winter 2025/26 cohort in matching blue hoodies
The full iPraktikum cohort. Six teams, one semester, and a lot of shipped software.

Skills Improved

Swift +35
SwiftUI +20
MapKit +15
Presentation +10