Internal Product · iOS + iPadOS + watchOS

Building on Apple Intelligence

How we built a production-grade AI fitness application using nothing but Apple's native frameworks — and what it proves about the platform everyone said was falling behind.

The Premise

By 2024, the tech narrative was settled: Apple had fallen behind on AI. While competitors shipped cloud-based chatbots and generative models, Apple seemed to have nothing to show. The consensus was that the company had bet wrong — that on-device processing was a dead end, and the future belonged to whoever had the most GPU clusters.

We disagreed. The question we asked was simple: could you build a genuinely useful, technically complex AI application — one that processes video in real time, understands human movement, and adapts to individual users — using nothing but what Apple ships in the box? No third-party ML libraries. No cloud inference endpoints. No API keys.

KineticForm

KineticForm is a native fitness platform for iPhone, iPad, and Apple Watch. At its core is a real-time form analysis engine: point your camera at yourself while lifting, and the app scores your technique frame-by-frame, counts reps, detects movement phases, and provides audio feedback — all while recording every set, tracking nutrition, managing a gamification system, and streaming live heart rate data from your wrist.

It's not a demo. It ships with 12 fully analyzed exercises, each with sport-science-derived angle rules gated to specific movement phases. It has a personalized calibration system that adjusts thresholds to your body proportions. It handles multi-person scenes by tracking the primary subject through occlusion and frame drops. And it does all of this at camera frame rate on a standard iPhone.

The Approach

The entire application was built on Apple's native stack. Every piece of intelligence runs on-device, powered by the Neural Engine. Here's how the platform came together:

Vision Framework — Real-Time Pose Detection

VNDetectHumanBodyPoseRequest provides 2D joint positions for every camera frame. For Elite-tier users on iOS 17+ devices, VNDetectHumanBodyPose3DRequest adds depth. We built a full analysis pipeline on top: angle calculation between joint triplets (sketched below), a LiftActivityDetector that classifies movement into Idle, Setup, Concentric, Eccentric, and Resting phases, and phase-gated rules that fire only when biomechanically relevant. A 5-frame rolling average smooths the scores to prevent flicker.
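To make the per-frame step concrete, here is a minimal sketch. The Vision calls are the framework's real API; the helper names (jointAngle, RollingAverage) and the 0.3 confidence cutoff are illustrative, not KineticForm's internals.

```swift
import Foundation
import CoreGraphics
import CoreVideo
import Vision

/// Interior angle in degrees at `vertex`, formed by the segments vertex→a and vertex→b.
func jointAngle(in pose: VNHumanBodyPoseObservation,
                vertex: VNHumanBodyPoseObservation.JointName,
                a: VNHumanBodyPoseObservation.JointName,
                b: VNHumanBodyPoseObservation.JointName) throws -> CGFloat? {
    let points = try pose.recognizedPoints(.all)
    guard let v = points[vertex], let pa = points[a], let pb = points[b],
          min(v.confidence, pa.confidence, pb.confidence) > 0.3 else { return nil }
    let u = CGVector(dx: pa.location.x - v.location.x, dy: pa.location.y - v.location.y)
    let w = CGVector(dx: pb.location.x - v.location.x, dy: pb.location.y - v.location.y)
    let magnitudes = hypot(u.dx, u.dy) * hypot(w.dx, w.dy)
    guard magnitudes > 0 else { return nil }
    let cosine = (u.dx * w.dx + u.dy * w.dy) / magnitudes
    return acos(max(-1, min(1, cosine))) * 180 / .pi
}

/// Per camera frame: run the 2D request, then read one joint-triplet angle.
func leftElbowAngle(in pixelBuffer: CVPixelBuffer) throws -> CGFloat? {
    let request = VNDetectHumanBodyPoseRequest()
    try VNImageRequestHandler(cvPixelBuffer: pixelBuffer).perform([request])
    guard let pose = request.results?.first else { return nil }
    return try jointAngle(in: pose, vertex: .leftElbow, a: .leftShoulder, b: .leftWrist)
}

/// 5-frame rolling average that smooths per-frame scores against flicker.
struct RollingAverage {
    private var window: [CGFloat] = []
    mutating func smooth(_ value: CGFloat) -> CGFloat {
        window.append(value)
        if window.count > 5 { window.removeFirst() }
        return window.reduce(0, +) / CGFloat(window.count)
    }
}
```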

HealthKit + WatchConnectivity — Biometric Intelligence

Apple Watch creates a live HKWorkoutSession and streams heart rate to the iPhone every 5 seconds via WatchConnectivity. The iPhone calculates real-time RPE using the Karvonen formula — mapping %HRR to a 1-10 scale with resting HR pulled from HealthKit. When a set is completed, the current HR and estimated RPE are stamped to the SetLog for historical analysis. HealthKit also provides sleep, HRV, steps, and body mass data that feed into the progress dashboard.
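Here is a hedged sketch of the iPhone-side receiver. The "heartRate" message key, the 220 − age max-HR estimate, and the linear 1–10 mapping are assumptions for illustration; in the app itself, resting HR comes from HealthKit.

```swift
import Foundation
import WatchConnectivity

/// iPhone-side receiver for the Watch's periodic heart-rate messages.
final class HRReceiver: NSObject, WCSessionDelegate {
    var restingHR: Double = 60   // in the app this is pulled from HealthKit
    var age: Double = 30         // used only for the max-HR estimate below

    func session(_ session: WCSession, didReceiveMessage message: [String: Any]) {
        guard let hr = message["heartRate"] as? Double else { return }  // assumed key
        let rpe = estimatedRPE(heartRate: hr)
        print("HR \(hr) bpm → estimated RPE \(rpe)")
    }

    /// Karvonen: %HRR = (HR − resting) / (max − resting), mapped linearly onto 1–10.
    func estimatedRPE(heartRate: Double) -> Double {
        let maxHR = 220 - age                     // common estimate; an assumption here
        guard maxHR > restingHR else { return 1 }
        let hrr = (heartRate - restingHR) / (maxHR - restingHR)
        return (min(max(hrr, 0), 1) * 9).rounded() + 1
    }

    // Required WCSessionDelegate plumbing on the iOS side.
    func session(_ session: WCSession, activationDidCompleteWith activationState: WCSessionActivationState, error: Error?) {}
    func sessionDidBecomeInactive(_ session: WCSession) {}
    func sessionDidDeactivate(_ session: WCSession) {}
}
```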

SwiftUI — Adaptive Multi-Device UI

A single codebase adapts across iPhone, iPad, and Apple Watch. The app uses the modern Tab API (introduced in iOS 18) for iPad sidebar navigation, falling back to the classic TabView on iOS 16+. iPad layouts use LazyVGrid and split views. The Watch app provides a complete workout companion: exercise list, set logging via the Digital Crown, rest timer with haptic feedback, and a session summary, all synced to the phone in real time.
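A condensed sketch of that adaptive shell, with placeholder tab names and views standing in for the app's actual screens:

```swift
import SwiftUI

struct RootView: View {
    var body: some View {
        if #available(iOS 18.0, *) {
            // Modern Tab API: collapses into a sidebar on iPad automatically.
            TabView {
                Tab("Workouts", systemImage: "dumbbell") { Text("Workouts") }
                Tab("Analysis", systemImage: "camera") { Text("Analysis") }
                Tab("Progress", systemImage: "chart.bar") { Text("Progress") }
            }
            .tabViewStyle(.sidebarAdaptable)
        } else {
            // Classic TabView fallback for earlier releases.
            TabView {
                Text("Workouts").tabItem { Label("Workouts", systemImage: "dumbbell") }
                Text("Analysis").tabItem { Label("Analysis", systemImage: "camera") }
                Text("Progress").tabItem { Label("Progress", systemImage: "chart.bar") }
            }
        }
    }
}
```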

SwiftData + StoreKit 2 — Persistence & Monetization

SwiftData @Model classes handle local persistence on iOS 17+, with a JSON-in-UserDefaults fallback for iOS 15–16 via a CompatibilityService router. StoreKit 2 powers a three-tier subscription model (Free/Pro/Elite) with feature gating, plus à la carte credit packs for form analysis sessions. The entire data layer is local-first: no backend is required for core functionality.
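Sketched below is the routing idea, assuming hypothetical names (SetRecord, a "setLogs" defaults key) in place of the app's real models:

```swift
import Foundation
import SwiftData

// Codable mirror used by the legacy path (illustrative, not the app's schema).
struct SetRecord: Codable {
    var exerciseName: String
    var reps: Int
    var heartRate: Double?
}

@available(iOS 17, *)
@Model
final class SetLog {
    var exerciseName: String
    var reps: Int
    var heartRate: Double?
    init(exerciseName: String, reps: Int, heartRate: Double? = nil) {
        self.exerciseName = exerciseName
        self.reps = reps
        self.heartRate = heartRate
    }
}

// One call site, two storage backends. `context` is typed loosely so the
// call compiles on deployment targets below iOS 17.
enum CompatibilityService {
    static func save(_ record: SetRecord, context: Any?) throws {
        if #available(iOS 17, *), let context = context as? ModelContext {
            context.insert(SetLog(exerciseName: record.exerciseName,
                                  reps: record.reps,
                                  heartRate: record.heartRate))
            try context.save()
        } else {
            // Legacy fallback: a JSON array in UserDefaults.
            var records = UserDefaults.standard.data(forKey: "setLogs")
                .flatMap { try? JSONDecoder().decode([SetRecord].self, from: $0) } ?? []
            records.append(record)
            UserDefaults.standard.set(try JSONEncoder().encode(records), forKey: "setLogs")
        }
    }
}
```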

What This Proves

Building KineticForm answered our original question decisively. Apple's on-device AI stack isn't just viable — for certain categories of application, it's superior to the cloud-first alternative. Here's what we took away:

Latency matters more than model size. Real-time form analysis requires frame-rate inference: even a 50ms cloud round-trip would make the skeleton overlay feel laggy and the rep counter unreliable. On-device inference on the Neural Engine keeps the entire loop on the phone, with no network round-trip at all.

Privacy isn't a limitation — it's a feature. Users are sharing their body, their heart rate, their health data. Nobody wants that streaming to a server. On-device processing is the only architecture that makes sense for biometric fitness data. Apple got this right from the start.

The ecosystem is the moat. No other platform lets you capture video on a phone, stream biometrics from a watch, adapt layouts for a tablet, unify health data across all three, and manage subscriptions — all through first-party frameworks with a single codebase. This kind of cross-device integration is what Apple built its platform for.

The "AI gap" was always a framing problem. Apple wasn't behind. They were building a different kind of AI — one that runs on your device, respects your data, and works without an internet connection. KineticForm ships with zero cloud dependencies for its core intelligence. That's not a compromise. That's the point.

The Stack

Vision: 2D/3D pose detection, text recognition (OCR import)
HealthKit: HR, HRV, sleep, workouts, body metrics
SwiftUI: Adaptive UI for iPhone, iPad, and Apple Watch
SwiftData: Local-first persistence with legacy fallback
StoreKit 2: Subscriptions, IAP, credit packs
WatchConnectivity: Real-time HR streaming to iPhone

12+ exercises with AI rules
6 Apple frameworks used
0 cloud AI dependencies
3 device form factors

Try KineticForm early

Get founding-member pricing and direct input on the roadmap.