Create a Dining Recommendation Micro App Using LLMs, Geolocation, and Group Preferences
Build a mobile-friendly dining micro app in 2026 — backend APIs, LLM prompt templates, geolocation, and PWA deploy steps to stop group decision fatigue.
Stop the "Where should we eat?" loop — build a tiny app that decides for you
Decision fatigue in group chats is real: everyone has an opinion, nobody wants to decide, and the thread dies. In late 2025 Rebecca Yu shipped a one-week micro app called Where2Eat to solve just that — a compact app that aggregates friends' tastes and returns a few solid dining picks. This tutorial recreates that idea for 2026: a developer-first walkthrough to build a dining micro app (PWA) that uses LLMs, geolocation, and prompt-driven group preference aggregation. You’ll get backend APIs, concrete prompt templates, and deployment steps so you can ship a mobile-friendly PWA fast.
Why now? What changed by 2026
Several platform and AI trends in late 2025 → early 2026 make this micro app pattern especially practical:
- LLMs are cheaper and faster: inference at edge/regions and compact models (GPT-4o-class and newer open families) mean lightweight prompt aggregation is cost-effective.
- Web APIs improved: better PWA support, more granular geolocation permissions, and push/web-share refinements make mobile-first micro apps snappy.
- On-device options: small LLMs can run locally for private apps; choose cloud for group aggregation, local for single-user privacy.
- Micro apps are mainstream: people build and publish tiny, personal apps (Rebecca Yu's Where2Eat is a canonical example) — fast feedback loops and iterative releases work well here.
High-level architecture
We’ll keep the stack minimal and practical for teams and solo devs:
- Frontend: React (or plain JS) PWA that collects preferences, shows map/location, and hits backend endpoints.
- Backend: Serverless function / Edge API that aggregates preferences using an LLM, queries a Places API, and returns recommendations.
- Places source: Google Places / Foursquare / OpenStreetMap-based service for POI data.
- Optional: vector DB for caching and retrieval-augmented generation (RAG), plus a user store for feedback and personalization.
Data model (example)
{
  "sessionId": "uuid-1234",
  "location": { "lat": 37.7749, "lng": -122.4194 },
  "radius_m": 1500,
  "members": [
    { "id": "alice", "prefs": "no-spicy, vegetarian-friendly, likes ramen" },
    { "id": "bob", "prefs": "likes spicy, budget <$20, no seafood" }
  ],
  "context": "lunch, weekday"
}
Backend API — endpoints and behavior
Build a minimal API surface so the frontend stays tiny. The key endpoints:
- POST /api/aggregate — Accepts member preferences, returns an aggregated preference JSON (LLM output).
- GET /api/search — Uses aggregated preferences + geolocation to search Places and return scored results.
- POST /api/feedback — Stores feedback to improve recommendations and tune prompts.
Example: Express + TypeScript serverless handler (skeleton)
import express from 'express'
// Node 18+ ships a global fetch; import node-fetch only on older runtimes

const app = express()
app.use(express.json())

// POST /api/aggregate — summarize member prefs into structured JSON via the LLM
app.post('/api/aggregate', async (req, res) => {
  const { members, context } = req.body
  // build the prompt and call the LLM (buildAggregationPrompt/callLLM wrap your provider)
  const prompt = buildAggregationPrompt(members, context)
  const llmResp = await callLLM({ prompt })
  // parse JSON output (see template later); LLMs occasionally emit invalid JSON, so guard it
  try {
    const aggregated = JSON.parse(llmResp.output)
    res.json({ aggregated })
  } catch {
    res.status(502).json({ error: 'LLM returned unparseable output' })
  }
})

app.listen(3000)
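The handler above assumes two helpers. Here is a hedged sketch of what they might look like — the endpoint URL, environment variable names, and response shape are placeholders for whatever LLM provider you actually use, not a real API:

```javascript
// Hypothetical helpers assumed by the /api/aggregate handler.
// buildAggregationPrompt is pure string assembly; callLLM wraps your provider.
function buildAggregationPrompt(members, context) {
  return [
    "System: You are an assistant that summarizes a group's dining preferences into JSON only.",
    `User: Members: ${JSON.stringify(members)}`,
    `Context: ${context}`,
    'Produce JSON with keys: group_summary, must_avoid, preferred_cuisines, price_sensitivity, dietary_constraints, tone, confidence.'
  ].join('\n')
}

async function callLLM({ prompt }) {
  // Placeholder: swap in your provider's SDK or HTTP API.
  // LLM_ENDPOINT and LLM_API_KEY are assumed env vars, not real defaults.
  const resp = await fetch(process.env.LLM_ENDPOINT, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.LLM_API_KEY}`
    },
    body: JSON.stringify({ prompt })
  })
  const data = await resp.json()
  return { output: data.output } // shape the handler above expects
}
```

Keeping these behind a thin wrapper makes it trivial to swap providers or drop in a smaller model later without touching the route logic.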
Prompt templates for group preference aggregation
A good prompt outputs structured JSON — predictable keys make downstream logic simple. Below are two templates: one to aggregate preferences, and one to rank restaurants.
1) Aggregation prompt (system + user style)
System: You are an assistant that summarizes a group's dining preferences into a compact JSON object.
Output only valid JSON. Do not add extra text.
User: Here are the members and their short preference descriptions:
{members}
Context: {context}
Return JSON with keys: "group_summary", "must_avoid", "preferred_cuisines", "price_sensitivity" (low/medium/high), "dietary_constraints", "tone" (casual/formal/date/quick), and "confidence" (0-1).
Example input -> output pair:
Input:
[ {"id":"alice","prefs":"vegetarian, likes ramen"}, {"id":"bob","prefs":"budget <$20, spicy ok"} ], context: lunch
Output:
{"group_summary":"Casual lunch with vegetarian options and inexpensive spicy-friendly choices","must_avoid":[],"preferred_cuisines":["Japanese","Asian fusion"],"price_sensitivity":"high","dietary_constraints":["vegetarian"],"tone":"casual","confidence":0.9}
Replace {members} and {context} programmatically. The key is to enforce a strict JSON schema so your backend can parse without NLP heuristics.
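One way to enforce that schema on the backend is a small validator that rejects anything missing the required keys or with an out-of-range confidence. This is a sketch — schema libraries like Zod or Ajv are sturdier choices for production:

```javascript
// Minimal schema check for the aggregation output.
// Returns the parsed object, or null if the LLM's reply would break downstream logic.
const REQUIRED_KEYS = ['group_summary', 'must_avoid', 'preferred_cuisines',
  'price_sensitivity', 'dietary_constraints', 'tone', 'confidence']

function validateAggregated(raw) {
  let parsed
  try { parsed = JSON.parse(raw) } catch { return null }
  if (typeof parsed !== 'object' || parsed === null) return null
  if (!REQUIRED_KEYS.every(k => k in parsed)) return null
  if (typeof parsed.confidence !== 'number' || parsed.confidence < 0 || parsed.confidence > 1) return null
  return parsed
}
```

On a null result, retry the LLM call once or fall back to a neutral default rather than surfacing an error to the group.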
2) Recommendation prompt (rank suggested places)
System: You are a recommendation engine that ranks nearby restaurants according to the group's aggregated preferences. Output only JSON array of restaurants with keys: name, reason (one-sentence), score (0-1), source_id, url.
User: Aggregated preferences: {aggregated}
Location: {lat},{lng}
Radius_m: {radius}
Places: {places_search_results_json}
Return top 5 restaurants with precise reasons tied to the aggregated preferences.
In production, ensure the LLM is bounded — include explicit instructions for score ranges and required keys.
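A minimal sketch of that bounding on the server side — clamp scores into [0, 1], drop malformed entries, and cap the list at five (the function name is illustrative):

```javascript
// Sanitize the ranking LLM's raw output before it reaches the frontend.
function sanitizeRanking(raw) {
  let list
  try { list = JSON.parse(raw) } catch { return [] }
  if (!Array.isArray(list)) return []
  return list
    .filter(r => r && typeof r.name === 'string')              // drop entries missing a name
    .map(r => ({ ...r, score: Math.min(1, Math.max(0, Number(r.score) || 0)) })) // clamp to [0,1]
    .sort((a, b) => b.score - a.score)
    .slice(0, 5)                                               // enforce the top-5 contract
}
```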
Practical Places integration and geolocation
For POIs you can choose between provider tradeoffs:
- Google Places — best POI coverage and metadata (hours, photos), paid.
- Foursquare — good local discovery, simpler licensing.
- OpenStreetMap / Nominatim — free, requires more enrichment work.
Browser geolocation (mobile friendly)
async function getLocation() {
  return new Promise((resolve, reject) => {
    navigator.geolocation.getCurrentPosition(
      pos => resolve({ lat: pos.coords.latitude, lng: pos.coords.longitude }),
      err => reject(err),
      { enableHighAccuracy: true, timeout: 5000 }
    )
  })
}
Ask for the least privileged permission scope and explain why to the user. On web, prompt UX matters: say "Share your location to get nearby picks" instead of a blind permission request.
PWA front-end: mobile-first UX and service worker
The PWA is the user's face: make the flow one-click for groups.
- Collect a minimal set of answers (name, one-line prefs) shareable via a group link or QR.
- Aggregate on the backend (POST /api/aggregate) once all members respond.
- Show ranked picks with map, open-hours, and ETA.
Manifest and service worker (skeleton)
{
  "name": "Where2Eat Clone",
  "short_name": "Where2Eat",
  "start_url": "/?source=pwa",
  "display": "standalone",
  "icons": [{ "src": "/icons/192.png", "sizes": "192x192", "type": "image/png" }],
  "background_color": "#ffffff",
  "theme_color": "#1a73e8"
}
// service-worker.js (very small cache-first strategy)
self.addEventListener('install', event => {
  event.waitUntil(caches.open('static-v1').then(c => c.addAll(['/', '/index.html', '/bundle.js'])))
})
// serve from cache first, fall back to the network
self.addEventListener('fetch', event => {
  event.respondWith(caches.match(event.request).then(cached => cached || fetch(event.request)))
})
React snippet: submit prefs and show results
import { useState } from 'react'

function PreferenceForm({ sessionId }) {
  const [name, setName] = useState('')
  const [prefs, setPrefs] = useState('')
  async function submit(e) {
    e.preventDefault()
    await fetch('/api/join', { method: 'POST', headers: { 'Content-Type': 'application/json' }, body: JSON.stringify({ sessionId, name, prefs }) })
  }
  return (
    <form onSubmit={submit}>
      <input value={name} onChange={e => setName(e.target.value)} placeholder="Your name" />
      <input value={prefs} onChange={e => setPrefs(e.target.value)} placeholder="One-line prefs (e.g. vegetarian, <$20)" />
      <button type="submit">Join</button>
    </form>
  )
}
Deployment: serverless + edge options
For micro apps, shipping fast matters more than over-architecting:
- Deploy frontend to Vercel, Netlify, or Cloudflare Pages (fast CDN + PWA support).
- Backend as Edge Functions (Vercel Edge, Cloudflare Workers) for low-latency LLM calls near users.
- Call Places APIs from the Edge/server region to avoid CORS issues and keep API keys out of the client.
- Use secrets manager for keys and rotate frequently.
Scaling & cost tips
- Cache places results for a session; don’t call LLM for every small change.
- Run aggregation LLM only after all or enough members have joined; provide manual refresh.
- Use a smaller LLM for routine tasks (aggregation) and a more capable model for final ranking if needed.
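The first tip can be as simple as an in-memory TTL cache keyed by session and rounded coordinates. This is a sketch — on serverless, where instances are ephemeral, you would back it with a KV store or Redis instead:

```javascript
// Per-session cache for Places results with a time-to-live.
// Coordinates are rounded so tiny GPS jitter doesn't bust the cache.
const placesCache = new Map()
const TTL_MS = 5 * 60 * 1000 // 5 minutes

function cacheKey(sessionId, lat, lng, radius) {
  return `${sessionId}:${lat.toFixed(3)},${lng.toFixed(3)}:${radius}`
}

function getCachedPlaces(key, now = Date.now()) {
  const entry = placesCache.get(key)
  if (!entry || now - entry.at > TTL_MS) return null // miss or expired
  return entry.places
}

function setCachedPlaces(key, places, now = Date.now()) {
  placesCache.set(key, { at: now, places })
}
```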
Testing, metrics and feedback loop
Deploy with observability and a simple feedback channel:
- Collect click-throughs and “picked” vs “rejected” flags for recommendations.
- Log which prompt produced which recommendation for prompt tuning and reproducibility.
- Use A/B testing to compare different prompt templates or scoring heuristics.
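For the A/B comparison, a deterministic bucketing function keeps every session on a single prompt variant, which makes results comparable. A simple hash sketch (variant names are placeholders):

```javascript
// Deterministically assign a session to a prompt-template variant.
// Same sessionId always maps to the same variant, so A/B cohorts stay stable.
function promptVariant(sessionId, variants = ['agg_v1', 'agg_v2']) {
  let h = 0
  for (const ch of sessionId) h = (h * 31 + ch.charCodeAt(0)) >>> 0
  return variants[h % variants.length]
}
```

Log the chosen variant alongside each recommendation so the click-through and picked/rejected signals can be split by template.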
Security, privacy and consent
Micro apps are lightweight but still must respect privacy:
- Minimize storing raw member text — persist only aggregated, anonymized signals when possible.
- Show clear consent for location usage and explain retention policies.
- Encrypt secrets and use short-lived tokens for Places/LLM APIs.
Advanced strategies — personalization and RAG
Once you have usage and feedback, upgrade intelligently:
- Vector store + RAG: Keep a small vector DB of places and past feedback for faster, context-aware suggestions.
- Personalization signals: weight recommendations by recent picks, saved favorites, or team-specific habits.
- On-device inference: for privacy-first users, run the aggregation LLM on-device and only send anonymized signals to the server.
Example full flow (sequence)
- Host creates session — returns sessionId and share link.
- Members open link → send one-line preferences and optional dietary tags.
- Host clicks "Aggregate" → frontend calls POST /api/aggregate with members.
- Backend calls LLM with aggregation template → returns structured JSON.
- Backend queries Places API with aggregated preferences → obtains candidate list.
- Backend calls ranking prompt to score candidates → returns top 5 picks to frontend.
- Members vote or provide feedback; backend records to improve future sessions.
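The ranking step above returns only name, reason, score, and source_id, so the backend needs to join those scores back onto the full Places candidates before responding. A sketch, assuming both sides share a source_id field:

```javascript
// Join the ranking LLM's output back onto Places candidates by source_id,
// so the frontend gets full metadata (url, hours, photos) plus score and reason.
function mergeRankedWithPlaces(ranked, places) {
  const byId = new Map(places.map(p => [p.source_id, p]))
  return ranked
    .filter(r => byId.has(r.source_id)) // drop hallucinated ids the LLM invented
    .map(r => ({ ...byId.get(r.source_id), score: r.score, reason: r.reason }))
}
```

The filter step doubles as a safety net: any restaurant the LLM invents that isn't in the real candidate list is silently discarded.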
Prompt template examples you can copy
// Aggregation template (fill placeholders programmatically)
const aggregationPrompt = `System: You are an assistant that summarizes a group's dining preferences into JSON only.
User: Members: ${JSON.stringify(members)}
Context: ${context}
Produce JSON with keys: group_summary, must_avoid, preferred_cuisines, price_sensitivity, dietary_constraints, tone, confidence.`
// Ranking template
const rankingPrompt = `System: Rank restaurants as JSON array using the aggregated preferences: ${JSON.stringify(aggregated)}
Candidates: ${JSON.stringify(candidates)}
Return top 5 with keys: name, reason, score, source_id`;
Lessons from Where2Eat and micro app best practices
Rebecca Yu built Where2Eat in a week: small scope, clear problem, rapid iterations. Micro apps succeed when they solve one high-friction pain point well.
Apply that here: start with a single mode (e.g., lunch near me), optimize the flow for one click, then expand features.
2026 trends to watch (short list)
- Federated personalization — models personalize without centralizing PII.
- Edge LLMs — ultra-low latency suggestions for geo-bound apps.
- Improvements in WebXR & Maps — richer map widgets and spatial POI discovery.
- Policy & privacy — improved browser geolocation controls and standardized permission UIs.
Actionable checklist — ship in a day
- Create a minimal PWA with a one-screen preference form.
- Implement POST /api/aggregate that returns strict JSON using an LLM.
- Hook to a Places provider and implement GET /api/search.
- Deploy frontend to Vercel/Netlify and backend as Edge Functions.
- Collect feedback and iterate on the prompts — keep the JSON schema stable.
Conclusion — why this micro app pattern wins
Micro apps combine narrow scope with powerful AI primitives. In 2026 you can build a compact, privacy-conscious dining recommender that cuts group decision time dramatically. Use structured prompt outputs, short serverless APIs, and a mobile-first PWA to iterate fast. Rebecca Yu's Where2Eat is proof: start small, ship fast, and optimize based on real usage.
Ready to build? Clone the starter repo, drop in your LLM and Places keys, and follow the checklist above to ship your first group dining micro app this afternoon.