SurviveAI
Offline survival assistant powered by on-device LLMs: a full agentic loop running on your phone with zero connectivity required. Voice in, markdown out, with a real compass, GPS, and SOS-pattern flashlight built into the app.
On-device survival Q&A
Natural-language survival questions answered by a local LLM (Qwen 1.7B via the Cactus SDK). Streams tokens, renders markdown, and shows tokens per second so you can trust what's happening.
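A rough sketch of the streaming loop. The Cactus SDK call is abstracted behind an injected `generate` function here, since its exact signature is an assumption; the live tokens-per-second figure is just elapsed-time arithmetic over the token callback.

```typescript
// Streaming chat sketch. `generate` stands in for the Cactus SDK call;
// its real signature will differ. The metrics logic is the point here.
type OnToken = (token: string) => void;

export async function askSurvivalQuestion(
  prompt: string,
  generate: (prompt: string, onToken: OnToken) => Promise<void>,
  render: (markdown: string, tokensPerSec: number) => void,
): Promise<void> {
  let text = '';
  let tokens = 0;
  const start = Date.now();

  await generate(prompt, (token) => {
    text += token;
    tokens += 1;
    const elapsed = (Date.now() - start) / 1000;
    // Re-render the partial markdown with a live tokens-per-second figure.
    render(text, elapsed > 0 ? tokens / elapsed : 0);
  });
}
```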
Hands-free speech-to-text
Whisper running on-device for voice input — for the moments your hands are full, cold, or bloody. Mic button on the chat surface, no network round-trip.
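Hold-to-talk, sketched with expo-av for capture; `transcribe` is a placeholder for the on-device Whisper binding, not a real import.

```typescript
import { Audio } from 'expo-av';

// Placeholder for the local Whisper binding: local file URI in,
// transcript out, no network round-trip.
declare function transcribe(uri: string): Promise<string>;

let recording: Audio.Recording | null = null;

// Start recording when the mic button is pressed...
export async function onMicPressIn(): Promise<void> {
  await Audio.requestPermissionsAsync();
  ({ recording } = await Audio.Recording.createAsync(
    Audio.RecordingOptionsPresets.HIGH_QUALITY,
  ));
}

// ...and transcribe locally on release.
export async function onMicPressOut(): Promise<string> {
  if (!recording) return '';
  await recording.stopAndUnloadAsync();
  const uri = recording.getURI();
  recording = null;
  return uri ? transcribe(uri) : '';
}
```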
Grounded in FM 21-76
The model can call into a local knowledge base via tool-use: first aid, water, shelter, fire, food, navigation, signaling, psychology — all derived from the US Army Survival Manual.
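A sketch of what the tool definition and local lookup might look like; the schema format, `kbIndex`, and chunk shape are illustrative, not the SDK's actual tool-calling contract.

```typescript
// Illustrative tool schema and local lookup. The index would be
// pre-chunked FM 21-76 text shipped in the app bundle; no network.
const KB_TOPICS = [
  'first-aid', 'water', 'shelter', 'fire',
  'food', 'navigation', 'signaling', 'psychology',
] as const;
type Topic = (typeof KB_TOPICS)[number];

export const searchKnowledgeBaseTool = {
  name: 'search_knowledge_base',
  description: 'Look up guidance from the US Army Survival Manual.',
  parameters: {
    topic: { type: 'string', enum: KB_TOPICS },
    query: { type: 'string' },
  },
};

// Hypothetical local index keyed by topic.
declare const kbIndex: Record<Topic, { heading: string; body: string }[]>;

export function searchKnowledgeBase(topic: Topic, query: string): string {
  const q = query.toLowerCase();
  return kbIndex[topic]
    .filter((chunk) => chunk.body.toLowerCase().includes(q))
    .slice(0, 3)
    .map((chunk) => `## ${chunk.heading}\n${chunk.body}`)
    .join('\n\n');
}
```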
Knows where it is
Every prompt gets device context injected — GPS coordinates, elevation, battery, time of day. The model's answers are scoped to what's actually possible from where you stand.
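Roughly what the context gathering looks like with expo-location and expo-battery; the field set and prompt format are illustrative.

```typescript
import * as Location from 'expo-location';
import * as Battery from 'expo-battery';

// Builds the context block prepended to every prompt.
export async function deviceContext(): Promise<string> {
  const { status } = await Location.requestForegroundPermissionsAsync();
  if (status !== 'granted') return 'Location unavailable.';

  const pos = await Location.getCurrentPositionAsync({});
  const level = await Battery.getBatteryLevelAsync(); // 0..1

  return [
    `GPS: ${pos.coords.latitude.toFixed(5)}, ${pos.coords.longitude.toFixed(5)}`,
    `Elevation: ${Math.round(pos.coords.altitude ?? 0)} m`,
    `Battery: ${Math.round(level * 100)}%`,
    `Local time: ${new Date().toLocaleTimeString()}`,
  ].join('\n');
}
```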
I'm Lost · I'm Injured · Wildlife
Three pre-built flows for the moments you can't think. S.T.O.P. protocol when lost, triage when injured, animal-specific guidance for bears, cougars, and snakes.
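One plausible way to model these flows as data rather than free text; the structure and step wording below are illustrative, abridged from the standard S.T.O.P. convention (Stop, Think, Observe, Plan).

```typescript
// Emergency flows as data: fixed steps first, with an optional handoff
// to the model once the user is oriented. Shape is illustrative.
type EmergencyFlow = {
  id: 'lost' | 'injured' | 'wildlife';
  title: string;
  steps: { instruction: string; askModel?: string }[];
};

export const lostFlow: EmergencyFlow = {
  id: 'lost',
  title: "I'm Lost",
  steps: [
    { instruction: 'Stop. Stay where you are and breathe.' },
    { instruction: 'Think. Where were you last certain of your position?' },
    { instruction: 'Observe. Landmarks, weather, remaining daylight.' },
    {
      instruction: 'Plan. Get guidance scoped to your situation.',
      askModel:
        'I am lost. Given my location and the time of day, what should I do next?',
    },
  ],
};
```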
Compass · GPS · SOS flashlight
Real magnetometer compass, precise coordinates in multiple formats, and a flashlight with an actual SOS morse pattern. The tools you'd reach for first, in one place.
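The SOS pattern is standard Morse timing (dot = 1 unit, dash = 3, letter gap = 3, word gap = 7); `setTorch` below stands in for whichever torch API the platform exposes.

```typescript
// Morse SOS: three shorts, three longs, three shorts, then repeat.
// `setTorch` is a placeholder for the platform torch call.
declare function setTorch(on: boolean): Promise<void>;

const UNIT = 250; // ms per Morse unit
const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

async function flash(onMs: number): Promise<void> {
  await setTorch(true);
  await sleep(onMs);
  await setTorch(false);
  await sleep(UNIT); // one-unit gap after every flash
}

export async function sosLoop(shouldStop: () => boolean): Promise<void> {
  while (!shouldStop()) {
    for (let i = 0; i < 3; i++) await flash(UNIT);     // S: dot dot dot
    await sleep(UNIT * 2);                             // letter gap (3 units total)
    for (let i = 0; i < 3; i++) await flash(UNIT * 3); // O: dash dash dash
    await sleep(UNIT * 2);                             // letter gap (3 units total)
    for (let i = 0; i < 3; i++) await flash(UNIT);     // S: dot dot dot
    await sleep(UNIT * 6);                             // word gap (7 units total)
  }
}
```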
Pre-built + custom checklists
Common scenarios shipped as starter checklists; users can build their own. Progress persists locally, so a crash or a dropped phone doesn't lose the loadout you built last week.
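Persistence is the standard Zustand persist middleware over AsyncStorage; the store shape below is illustrative.

```typescript
import AsyncStorage from '@react-native-async-storage/async-storage';
import { create } from 'zustand';
import { persist, createJSONStorage } from 'zustand/middleware';

// Checklist progress survives app restarts via AsyncStorage.
type ChecklistState = {
  checked: Record<string, boolean>; // itemId -> done
  toggle: (itemId: string) => void;
};

export const useChecklists = create<ChecklistState>()(
  persist(
    (set) => ({
      checked: {},
      toggle: (itemId) =>
        set((s) => ({
          checked: { ...s.checked, [itemId]: !s.checked[itemId] },
        })),
    }),
    { name: 'checklists', storage: createJSONStorage(() => AsyncStorage) },
  ),
);
```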
No cloud, no signal needed
Every model call, every search, every flashlight signal happens on-device. AsyncStorage for state. Zero network traffic by design — the whole app works in airplane mode at 12,000 feet.
The idea
I've spent significant time in the wilderness and in regions with zero cellular connectivity. Sometimes you have questions out there — about health, cooking, plant identification, basic survival — that would be helpful to answer in the moment. With everything LLMs have compressed into their weights, it felt obvious that you should be able to access that knowledge without a cell tower.
What I built
A React Native + Expo app running Qwen 1.7B on-device through the Cactus SDK, with tool-calling into a local knowledge base derived from the US Army Survival Manual (FM 21-76). The chat surface streams tokens, renders markdown, and shows live performance metrics. Whisper runs locally for hands-free speech input.
Around the AI sits a hardware layer — a real magnetometer compass, GPS coordinates in multiple formats, a flashlight with an SOS morse pattern — plus three emergency flows (Lost, Injured, Wildlife) for the moments where free-text isn't the right interaction. State is held in Zustand, persisted with AsyncStorage. The whole thing works in airplane mode.
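The compass is the simplest piece of that hardware layer to sketch: expo-sensors' magnetometer plus an atan2. This is a flat-device approximation that ignores tilt; a production compass would fuse the accelerometer too.

```typescript
import { Magnetometer } from 'expo-sensors';

// Heading in degrees from raw magnetometer readings. Flat-device
// approximation: ignores tilt, and axis conventions vary by platform.
export function watchHeading(onHeading: (deg: number) => void): () => void {
  Magnetometer.setUpdateInterval(100); // ms between readings
  const sub = Magnetometer.addListener(({ x, y }) => {
    let deg = Math.atan2(y, x) * (180 / Math.PI);
    deg = (deg + 360) % 360; // normalize to 0..360
    onHeading(deg);
  });
  return () => sub.remove(); // unsubscribe handle
}
```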
Why it matters
This points to where the world is heading. As small language models become more powerful, there's a growing case for keeping compute on-device — for privacy, for cost efficiency, and for reliability. If you architect these systems well with agentic tool calling, they can be surprisingly capable even at small parameter counts.
Beyond the wilderness use case, there's a broader infrastructure argument: systems that depend entirely on cloud connectivity make the network itself a single point of failure.
What I learned
Building SurviveAI taught me a lot about the current capabilities and limitations of smaller parameter models, how to construct effective agentic tool-calling patterns at the edge, and how to design UX for high-stress situations where simplicity is everything. It was also a nice full-circle connection to those 40 days in Alaska where I had nothing but a topographical map and my own judgment.