Essay
Notice

The Body Knows First

Notice is in active beta — try it now. Join the TestFlight →
01 · The Gap

You have felt this before. Your shoulders are tight for twenty minutes before you notice you are bracing. Your heart rate climbs through a conversation you tell yourself is fine. Your breathing shallows during a meeting and you only register it when someone asks if you are okay.

The body leads. The story follows. There is always a gap.

The gap is measurable. Heart rate variability drops before conscious anxiety arrives. Heart rate rises before excitement registers. Skin conductance shifts before you can name the feeling. Bud Craig’s work on insular cortex mapping shows that interoceptive signals — the body’s report on its own internal state — travel a dedicated neural pathway, but conscious access to those signals varies enormously between individuals. The core finding: interoceptive accuracy is not a fixed trait. It is a skill. And it is trainable.

Notice exists to train that skill.

> The most valuable structures in our lives — emotional patterns, somatic intelligence, relational dynamics — are invisible until something makes them navigable.

This is part of a broader thesis I have been developing across two projects. Scholion makes the dependency structure of scientific claims visible and navigable — the hidden load-bearing walls behind published findings. Notice does the same thing for the dependency structure of your inner life: the connections between what your body is doing, what you are feeling, and how you are relating to it all. The full technical essay — You’re Already Feeling Something You Haven’t Noticed Yet — covers the architecture and interaction design in depth. Different domains, same problem. The structures that matter most are the ones you cannot see.

02 · What Notice Is

The core interaction is called a Frame Snap. When you notice something shifting — a tightening in your chest, a surge of energy, a settling you cannot quite name — you tap your Watch or phone. That tap captures a biometric snapshot: heart rate, heart rate variability, and contextual signals assembled by on-device intelligence. Then you debrief.

The same snap-debrief-reflection loop works across Apple Watch, Garmin, and iPhone — and Oura Ring feeds overnight baseline context into the system without requiring a conscious snap. Three hardware ecosystems, three different transport layers, three different trust topologies — all normalized through a single protocol boundary so downstream intelligence never knows which wrist the data came from.

Core Interaction: The Frame Snap

1. Tap: notice a shift (Watch or iPhone)
2. Capture: HR, HRV, context (HealthKit + EventKit + CoreLocation)
3. Name: describe, label, intensity (subjective before objective)
4. Reflect: see data + AI reflection (Claude streams contemplative response)

Key Design Decision

You commit your subjective assessment before you see the biometric data. You say "I feel tense" and then discover your HRV is 22ms below your weekly average. Over time, that feedback loop trains calibration between felt sense and physiology.

Every Frame Snap is a micro-dose of a well-documented regulatory technique, embedded in a gesture that takes three seconds.

···

The debrief is not a form. It is a felt-sense encounter. An emotion picker organized by somatic texture invites you to name what you notice. Six groups, three labels each, organized by where you feel them in your body:

Felt-Sense Taxonomy
Eighteen labels, organized by where you feel them in your body:

- Alive: energized, excited, vibrant
- Settled: calm, grounded, peaceful
- Open: curious, tender, receptive
- Heavy: weary, sad, numb
- Stirred: restless, anxious, agitated
- Tight: frustrated, guarded, constricted

Grouped by somatic texture, not valence. Every snap is a micro-dose of affect labeling.

The taxonomy is sized to a research sweet spot. Lieberman et al. (2007) showed that affect labeling — the act of putting feelings into words — downregulates amygdala reactivity via right ventrolateral prefrontal cortex activation. But the effect depends on granularity. Six labels are too coarse. Unrestricted free text is too effortful on a Watch screen. Eighteen labels, organized by somatic texture and grounded in Gendlin’s Focusing tradition, hits the range where the intervention works.

Then Claude — Anthropic’s AI — generates a contemplative reflection. Not advice. Not a diagnosis. A mirror. The reflection orients toward relation — how you are meeting your experience — never toward object — what the experience supposedly is.

> “You said calm, but your heart rate variability was lower than usual. That is not a contradiction — it is information. What happens when you hold both?”

This distinction, drawn from Barrett’s constructed emotion theory and Jhourney’s contemplative pedagogy, is the deepest design constraint in the product.

Emotion labels are frames, not facts. The app’s job is to support noticing without grasping.

···

Claude reflects at three timescales: a brief sentence at snap time — a small act of witnessing; an exploratory paragraph during debrief — an invitation toward curiosity; and daily and weekly synthesis — longitudinal pattern detection that surfaces what you cannot see from inside a single moment. The weekly reflections consume daily syntheses hierarchically, so the AI reasons over compressed patterns rather than re-aggregating raw data. Over weeks and months, these syntheses surface the recurring shapes of your inner life — the patterns you did not know you had.
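The hierarchical rollup can be sketched in a few lines. This is a Python illustration of the compression idea only; the summary fields and function names are assumptions, not the app's actual pipeline:

```python
# Sketch of hierarchical synthesis: the weekly pass reasons over seven
# compressed daily summaries, never over the raw snaps.
from dataclasses import dataclass

@dataclass
class Snap:
    label: str
    intensity: float  # 0.0 to 1.0

def daily_synthesis(snaps: list[Snap]) -> dict:
    """Compress one day of raw snaps into a small summary."""
    labels = [s.label for s in snaps]
    return {
        "snap_count": len(snaps),
        "dominant_label": max(set(labels), key=labels.count) if labels else None,
        "mean_intensity": round(sum(s.intensity for s in snaps) / len(snaps), 2)
            if snaps else 0.0,
    }

def weekly_synthesis(daily: list[dict]) -> dict:
    """Reason over the compressed daily summaries, not raw data."""
    dominants = [d["dominant_label"] for d in daily if d["dominant_label"]]
    return {
        "days_logged": sum(1 for d in daily if d["snap_count"] > 0),
        "recurring_label": max(set(dominants), key=dominants.count)
            if dominants else None,
    }

week = [daily_synthesis([Snap("tense", 0.7), Snap("tense", 0.6)])] + \
       [daily_synthesis([Snap("calm", 0.3)]) for _ in range(6)]
print(weekly_synthesis(week))  # a recurring shape surfaces from 7 summaries
```

The design choice matters for cost and context limits: the weekly model sees a bounded number of summaries regardless of how many snaps the week contained.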

03 · What’s Built

Notice is not a concept. It is a working product in closed beta via TestFlight. The core loop has been built three times across three hardware ecosystems — Apple Watch via WatchConnectivity, Garmin Enduro 3 via Connect IQ Companion SDK over BLE, and Oura Ring 3 via cloud REST API — each with different transport mechanics, different epoch formats, and different trust boundaries.

What made this tractable is the BiometricSnapshot protocol: a single abstraction that normalizes heart rate, HRV (SDNN and RMSSD tracked as separate fields), stress indicators, and device-specific metrics into one value type. Downstream consumers — AI reflections, historical analysis, user display — consume the snapshot, not the source. The architecture separates what data means from where it came from.

Voice-initiated snaps via Siri and AirPods let you capture a moment without looking at a screen. On-device intelligence handles context assembly — HealthKit trends, calendar, location, recent snaps — without any data leaving the device.

04 · Privacy by Architecture

Notice’s privacy model is not a policy. It is an architecture. And with three hardware sources, the question is no longer whether data is private but what path it traveled and what boundaries it crossed.

Architecture
Privacy by design. AI at two scales.

On-Device · The Read
Apple Foundation Models · ~3B parameters · Free, offline, no API keys

- HealthKit: HR, HRV, 7-day trends
- EventKit: calendar context ±2h
- CoreLocation: semantic location
- SwiftData: recent snap patterns
- Felt-Sense Interpreter: "tight jaw, buzzy" → suggests Stirred texture group → matching labels

Privacy Boundary
Raw health data, calendar, contacts, and location stay on-device; only a de-identified structured summary crosses.

Cloud · The Reflection
Claude API (Sonnet) · Streaming SSE · Contemplative system prompt

- Longitudinal pattern analysis
- Contemplative reframing
- Scaffolding-aware reflection
- Dam Model vocabulary

What Claude receives:

emotion: "tense" · intensity: 0.7 · description: "tight jaw, shoulders up"
biometric: HR elevated, HRV 22ms below weekly avg
context: "before a work meeting, third similar pattern this week"

···

Three trust paths converge on a single protocol:

Apple Watch — single trust domain. Data travels wrist → phone via WatchConnectivity. No network hop. Raw heart rate, HRV, and accelerometer data never leave the device pairing.

Garmin — single trust domain, different mechanics. Data travels wrist → phone via BLE through the Connect IQ Companion SDK. Different transport, different encoding (Garmin epoch, flat dictionaries), same topological guarantee: no cloud hop.

Oura Ring — cloud hop. Data travels ring → Oura app → Oura Cloud → REST API → iPhone. Authentication via OAuth. The raw data reaches the phone but never touches Notice infrastructure — it is a client-side fetch, topologically equivalent to the user reading their own Oura dashboard.

The SnapBiometricSource protocol normalizes these three paths. Relative descriptor functions — relativeHRV, relativeHrvRMSSD, relativeStressScore, relativeBodyBattery — translate device-specific values into contextual language. By the time data reaches Claude, all three architecturally distinct paths have been reduced to the same format: “HRV lower than your baseline.” “Stress elevated.” Never a raw number. Never a source identifier. The full architectural analysis — how three trust topologies converge on a single protocol — is documented in Trust Topologies.
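A relative descriptor can be sketched in a few lines. This Python sketch mirrors the spirit of relativeHRV described above; the threshold values are illustrative assumptions:

```python
# Sketch of a relative descriptor: translate a device-specific value into
# contextual language. The output is always a phrase, never a raw number
# or a source identifier. The 15% band is an assumed threshold.
def relative_hrv(current_ms: float, baseline_ms: float) -> str:
    if baseline_ms <= 0:
        return "HRV baseline unavailable"
    delta = (current_ms - baseline_ms) / baseline_ms
    if delta <= -0.15:
        return "HRV lower than your baseline"
    if delta >= 0.15:
        return "HRV higher than your baseline"
    return "HRV near your baseline"

# What reaches the cloud model: language, not physiology.
print(relative_hrv(30.0, 52.0))  # HRV lower than your baseline
print(relative_hrv(50.0, 52.0))  # HRV near your baseline
```

Because the stripping happens in the descriptor's return type, a raw value cannot leak past this layer by accident; the privacy property is enforced where the data is formatted, not in a policy document.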

A stateless Cloudflare Worker proxy holds the API key server-side and validates device identity via Apple’s App Attest before forwarding any request.

This is not just a privacy choice. It is a regulatory strategy. The FDA’s General Wellness Guidance classifies Notice as non-invasive wellness only if it avoids disease claims — one diagnostic-sounding phrase could trigger FDA jurisdiction. The FTC’s Health Breach Notification Rule applies because Notice combines biometric data with emotional self-reports: any unauthorized third-party disclosure triggers breach notification at $53,000 per violation. The topology-aware architecture strengthens the compliance posture: each data path has a documentable trust boundary, and the protocol normalization layer is where raw values get stripped — not by policy, but by the type system itself.

05 · What the Science Says

Three research threads converge in the product.

Affect Labeling

The act of putting feelings into words is not just expressive. It is neurologically active. Affect labeling activates the right ventrolateral prefrontal cortex and downregulates amygdala reactivity — a specific regulatory mechanism distinct from cognitive reappraisal or suppression. The emotion picker is an affect labeling intervention. Every Frame Snap is a micro-dose of this technique.

Emotional Granularity

Barrett’s research shows that people who make finer distinctions between emotional states — distinguishing irritated from frustrated from exasperated — demonstrate better emotion regulation. This is emotional granularity, and it is trainable. Notice measures it: label diversity over time is a behavioral proxy for granularity development. As users’ felt-sense vocabulary expands, their regulatory capacity should expand with it. The data can show whether it does.
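One way to operationalize label diversity is Shannon entropy over label usage, which rises as vocabulary use becomes both wider and more even. The metric choice here is an assumption for illustration; the essay specifies the proxy, not the formula:

```python
# Sketch of a label-diversity proxy for emotional granularity:
# Shannon entropy (bits) of the label distribution over a window.
import math
from collections import Counter

def label_diversity(labels: list[str]) -> float:
    """Entropy of label usage; higher = finer, more even distinctions."""
    counts = Counter(labels)
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

week_1  = ["anxious"] * 8 + ["calm"] * 2        # coarse: two labels
week_12 = ["anxious", "restless", "agitated", "calm",
           "grounded", "tender", "weary", "frustrated",
           "curious", "energized"]               # finer distinctions
print(round(label_diversity(week_1), 2))
print(round(label_diversity(week_12), 2))        # higher = more granular
```

With the 18-label taxonomy, the ceiling is log2(18) ≈ 4.17 bits; tracking the trend over weeks gives the behavioral signal described above.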

Interoceptive Lead Time

This is the metric I am most excited about, and as far as I can tell, no one else is measuring it.

Notice already collects both data streams: continuous biometric samples from HealthKit running in the background, and discrete conscious reports from Frame Snaps — the moment you notice a shift. With multiple devices, the system now tracks SDNN (Apple Watch) and RMSSD (Garmin, Oura) as separate fields with separate descriptor functions — preventing the category error of comparing metrics that look similar but measure different physiological signals. The temporal gap between when your body shifts and when you consciously register it is your interoceptive lead time. And it is trainable. Shrinking that gap is interoceptive development.

A month ago, your body would shift 40 minutes before you snapped. Now it is 15 minutes. You are noticing sooner.

···
Interoceptive Lead Time
The gap between when your body shifts and when you notice. In week 1, the body shifts (HRV drops) an average of 45 minutes before you consciously register it with a Frame Snap; by week 12, the gap has narrowed. That gap is your interoceptive lead time.
No other product measures this. Notice collects both data streams — continuous biometrics and discrete conscious reports — making the temporal correlation visible for the first time at ecological scale.
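The computation itself is simple: pair each Frame Snap with the most recent preceding biometric shift and measure the gap. A minimal Python sketch; the shift detector here (a fixed drop threshold) is an illustrative assumption, not Notice's algorithm:

```python
# Sketch of the lead-time computation: detect HRV shifts in the continuous
# stream, then measure how long after each shift the conscious snap lands.
def detect_shifts(hrv_series: list[tuple[int, float]],
                  drop_ms: float = 8.0) -> list[int]:
    """Timestamps (minutes) where HRV dropped by more than drop_ms."""
    return [t for (t, v), (_, prev) in zip(hrv_series[1:], hrv_series)
            if prev - v > drop_ms]

def lead_times(shifts: list[int], snap_times: list[int]) -> list[int]:
    """For each snap, minutes elapsed since the most recent prior shift."""
    out = []
    for snap in snap_times:
        prior = [s for s in shifts if s <= snap]
        if prior:
            out.append(snap - max(prior))
    return out

# Background HRV samples (minute, ms): a clear drop at t=20
series = [(0, 55.0), (10, 54.0), (20, 42.0), (30, 41.0), (65, 40.0)]
shifts = detect_shifts(series)     # shift detected at t=20
print(lead_times(shifts, [65]))    # snap at t=65 → [45] minutes of lead time
```

Averaging these gaps per week yields the trend line the scrubber above depicts: a shrinking number is interoceptive development you can see.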

That is a training outcome you can feel — not a score on a dashboard, but a mirror that shows you something true about your own development. If interoceptive lead time reduction correlates with MAIA-2 score improvement across beta users, Notice has demonstrated measurable interoceptive training. That is a peer-reviewable finding and the strongest possible evidence for the product’s core claim.

06 · An App That Gets Quieter

Most apps optimize for engagement. More time on screen. More sessions. More data. Notice is designed to do the opposite.

Scaffolding decay is the deliberate, progressive reduction of app support as you develop independent interoceptive capacity. The app starts rich and present. As your capacity grows, it steps back.

Scaffolding Decay
The app gets quieter as you get better: app presence falls as user capacity rises, through three phases: Full Support (weeks 1–8), Reduced (months 3–6), and Minimal (month 6+). Phase transitions are triggered by behavioral signals and confirmed by the user. The app never decides for you that you are ready. It notices, and invites.

The app gets quieter as you get better. That is the design.

···

- Full support: reflections after every snap, active felt-sense suggestions, full biometric context. This is the current app.
- Reduced: reflections shift to on-demand, suggestions fade, biometric display simplifies to trend arrows.
- Minimal: no automatic reflections; the app becomes a quiet archive you consult when you choose to.

Your interoceptive capacity is the primary instrument. Notice is documentation.

Phase transitions are triggered by behavioral signals — snap count thresholds, vocabulary stabilization, biometric-label convergence — and confirmed by the user. The app never decides for you that you are ready. It notices, and invites.
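The transition logic can be sketched as a propose-and-confirm loop. All thresholds below are illustrative assumptions; the essay names the signals, not the numbers:

```python
# Sketch of scaffolding-decay transitions: behavioral signals propose a
# quieter phase, and the user must confirm. Thresholds are assumed.
PHASES = ["full_support", "reduced", "minimal"]

def proposed_phase(snap_count: int, vocab_stable: bool,
                   label_biometric_convergence: float) -> str:
    """Map behavioral signals to a candidate phase."""
    if snap_count >= 300 and vocab_stable and label_biometric_convergence > 0.8:
        return "minimal"
    if snap_count >= 100 and vocab_stable:
        return "reduced"
    return "full_support"

def next_phase(current: str, proposal: str, user_confirmed: bool) -> str:
    """The app never decides for you: stepping back requires confirmation."""
    if PHASES.index(proposal) > PHASES.index(current) and user_confirmed:
        return proposal
    return current

p = proposed_phase(snap_count=140, vocab_stable=True,
                   label_biometric_convergence=0.6)
print(p)                                                     # reduced
print(next_phase("full_support", p, user_confirmed=False))   # full_support
print(next_phase("full_support", p, user_confirmed=True))    # reduced
```

Note the asymmetry: the signals can only propose, never demote silently, which keeps the user the final authority over how present the app is.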

This is a genuine commercial bet. An app that trains its users to need it less sounds counterintuitive in an industry built on retention metrics. But the value of Notice is not in the screen time it captures. It is in the capacity it builds. Users do not churn because they are bored. They graduate because they have developed the skill. And the on-device AI model that personalizes to their practice over months creates a switching cost no competitor can replicate.

07 · The On-Device Future

The Claude API is powerful, but it introduces ongoing cost, latency, and a data pathway outside the phone. The north star is every reflection tier running locally. No API dependency. No data disclosure. No marginal cost per reflection.

On-device LoRA adaptation means the model learns your phenomenological vocabulary, your somatic patterns, your characteristic ways of relating to experience. Not a generic wellness model shaped by population averages — a contemplative mirror that becomes more precise the longer you practice with it. The personalized weights never leave the phone. The privacy-maximizing architecture turns out to be the quality-maximizing architecture too — a rare alignment.

Timeline: on-device brief reflections are days away from shipping. Exploratory reflections on-device are plausible within weeks. Daily and weekly synthesis may require a generation of model capability improvement. The cloud API remains available as fallback throughout.

08 · The Market

Three product categories surround the space Notice occupies. None connect all three layers.

Product Landscape
Three categories. None training the underlying capacity.

- Meditation apps (Headspace, Calm, Waking Up): subjective focus, no biometrics.
- Mood trackers (How We Feel, Daylio, Bearable): wellness intent, no granularity.
- Health platforms (Oura, WHOOP, Apple Health): data plus labeling, no depth.
- Notice: all three, plus AI reflection.

The comparison spans five capabilities: biometric sensing, emotion labeling, contemplative grounding, AI reflection, and a subjective-first design. Each incumbent category covers only a slice; Notice is positioned as the only product combining all five.

Notice is not a meditation app that added HRV, or a tracker that added journaling. It is a new thing.

···

Calm and Headspace own guided meditation. WHOOP and Oura own biometric tracking. How We Feel and Daylio own mood logging. Rosebud owns AI journaling. None connect all three layers — and none support multi-device biometric integration. WHOOP reads only WHOOP. Oura reads only Oura. Calm and Headspace do not read biometrics at all. Notice works across Apple Watch, Garmin, and Oura Ring with unified AI reflection, which means users are not locked to one hardware ecosystem and the addressable market is the union of all three device populations.

The window to establish this position is narrowing. Calm is adding HRV biofeedback. Headspace is integrating with Oura Ring. WHOOP is publishing peer-reviewed mental health research and adding journal prompts. Each competitor is extending toward the triad from their corner. None have arrived yet, but the trajectories are visible. Multi-device support is a structural moat: each new integration widens the addressable market and increases the architectural distance competitors must cross to match it. Time-to-market for the core loop matters more than feature completeness.

Pricing is premium, anchored against WHOOP and Oura, not meditation apps.

09 · The Larger Vision

Notice’s trajectory follows a natural widening of the aperture of awareness — mirroring the developmental arc of contemplative practice itself.

Category Expansion
Each ring is gated by the one inside it: Individual Interoception (now), then Relational Attunement (later), then Collective Field Awareness (speculative).

Individual Interoception — Now

The current product. You learn to read your own internal states: noticing shifts, labeling felt sense, seeing patterns in how you relate to experience. The mirror faces inward.

Relational Attunement — Later

Co-regulation expands the mirror to face the space between two people. The full research program is documented in What If You Could Feel Someone Breathing From Across the Room. The research basis is Feldman’s bio-behavioral synchrony: mothers and infants synchronize heart rhythms, partners’ respiration converges during empathetic touch. The engineering question is whether you can transmit one person’s breathing rate — derived from Apple Watch accelerometers — to a partner in real-time via spatial audio and haptics, and whether the body responds to that mediated signal the way it responds to another body.

The research program is staged with kill criteria at each phase. Phase 0: can the hardware extract breathing rate within ±2 BPM? Phase 1: a sham-controlled study with 30 dyads to distinguish real entrainment from placebo. Phase 2: at-home ecological validation. Phase 3: regulatory pathway assessment. Polyvagal theory faces empirical criticism, individual variability is high, the hardware cost per couple is $3,600+. These are load-bearing constraints, not footnotes.
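The Phase 0 question reduces to signal processing. A minimal Python sketch of the core arithmetic only: real extraction from Watch accelerometers would need bandpass filtering and motion-artifact rejection, and the zero-crossing approach here is an illustrative assumption:

```python
# Sketch of the Phase 0 check: estimate breathing rate from an oscillating
# chest-motion signal by counting rising zero crossings.
import math

def breathing_rate_bpm(samples: list[float], sample_hz: float) -> float:
    """Estimate breaths per minute from a periodic motion signal."""
    mean = sum(samples) / len(samples)
    centered = [s - mean for s in samples]
    crossings = sum(1 for a, b in zip(centered, centered[1:])
                    if a < 0 <= b)          # one rising crossing per breath
    duration_min = len(samples) / sample_hz / 60.0
    return crossings / duration_min

# Synthetic 60 s of breathing at 15 breaths/min (0.25 Hz), sampled at 10 Hz.
# The small phase offset keeps samples off exact zeros.
hz = 10.0
sig = [math.sin(2 * math.pi * 0.25 * (i / hz) - 0.3) for i in range(600)]
print(round(breathing_rate_bpm(sig, hz)))   # 15, inside the ±2 BPM target
```

The hard part of Phase 0 is not this arithmetic but whether wrist accelerometry yields a signal clean enough for it, which is exactly what the kill criterion tests.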

Collective Field Awareness — Speculative

The furthest horizon. If dyadic co-regulation works, the same architecture extends to small groups — a meditation sangha, a therapy group, a team. Group physiological coherence is measurable. Notice could surface how a collective field forms and dissolves, who anchors it, how individual states propagate. This is genuinely speculative. But it is the logical terminus of the thesis: making invisible structures visible and navigable, applied to the most invisible structure of all — the felt sense of being in a room together.

Each expansion is gated by the one before it.

At the individual level, Notice competes with mood trackers. At the relational level, there are no competitors. At the collective level, the field does not exist yet.

···

Each expansion widens the moat.

10 · What Comes Next

Week 1: Validate Foundation Models on hardware. Incorporate beta feedback. Deploy API proxy. Begin on-device training pipeline.
Week 2: Ship on-device brief reflections. Implement scaffolding decay. Measure interoceptive lead time across beta cohort.
Week 3+: Expand beta. Launch notice.tools. App Store submission.

The on-device reflection model is the key technical milestone. Moving brief reflections on-device eliminates the largest cost center, makes the Core tier economically viable at zero marginal cost, and delivers the privacy promise in its strongest form.

For Apple Watch and Garmin snaps with on-device reflections, nothing leaves your phone — the entire path from wrist to insight stays within a single trust domain. For Oura baseline context, the data transits Oura’s cloud but never touches Notice infrastructure. The privacy guarantee is topological, not absolute: you can trace exactly which boundaries each datum crossed.

“Nothing leaves your phone” becomes literally true for the most common interaction.

···

Notice is built on a conviction I keep returning to: the most important thing you can learn is how to read your own experience with honesty and precision. Not to fix it. Not to optimize it. To see it clearly enough that you can choose how to respond rather than being carried by reflex.

The technology is in service of that learning. A temporary scaffold that builds a permanent capacity. The biometric data is a mirror for the body you already have. The emotion label is a frame for the feeling you already feel. The AI reflection is an invitation to look at what you already know but have not yet noticed.

The app gets quieter as you get better. That is the design.

If this resonates — as a user, investor, or collaborator — reach out. Join the TestFlight →
thbrdy@pando.industries

APPENDICES

Tiered model anchored against biometric wearables, not meditation apps.
| | Core | Full |
| --- | --- | --- |
| Price | ~$80/year | $149–199/year |
| Reflection tiers | Brief (on-device) | Brief + Exploratory + Daily + Weekly synthesis |
| AI runtime | On-device model (zero marginal cost) | Claude API (cloud) |
| Biometric context | Basic | Full enrichment via Foundation Models |
| Anchor comparison | Oura ($70/yr + $300 hardware) | WHOOP ($239/yr) |
The economic logic: once the on-device model ships brief reflections, the Core tier costs nothing to serve at scale. The premium tier gates longitudinal pattern analysis — daily and weekly synthesis reflections that require Claude’s reasoning capacity — not the core snap-debrief loop. A 7-day free trial with full features is the conversion mechanism. Gate nothing during the aha window.
Willingness-to-pay research suggests 2–4% conversion at the premium tier from a qualified audience. This is viable if the funnel delivers contemplative practitioners and serious quantified-self trackers, not casual meditation-curious downloaders.
Three-phase channel sequence, ordered by community-product fit and cost per qualified user.
Phase 1 — Community validation (now). Jhourney contemplative community: 1,000–3,000 practitioners, high philosophical alignment, produces testimonial quality needed to seed other channels. Simultaneously: r/quantifiedself and r/ouraring — technically literate, wearable-native, actively seeking what Notice does in different vocabulary (“HRV-correlated emotional pattern tracking”). Highest ROI per post of any channel.
Phase 2 — Niche amplification (post-launch). Podcasts: Buddhist Geeks, Technology for Mindfulness, Quantified Self podcast, Ten Percent Happier. Contemplative press: Tricycle Magazine. Communities: r/biohackers, Oura partner program, QS local meetups. Beta-listing sites (Product Hunt, BetaList) timed to App Store launch.
Phase 3 — Adjacent expansion (months 3–6). Insight Timer community, Waking Up / Sam Harris community, r/meditation. Each community requires segment-specific vocabulary: contemplative practitioners hear “noticing practice,” biohackers hear “biometric-emotional correlation,” the QS audience hears “interoceptive accuracy training.” The product doesn’t change; the framing does.
Key insight from competitive analysis: the QS/biohacker audience may be the faster growth channel, not just the later one. They’re already tracking HRV, already frustrated by the numbers-without-meaning problem, and already primed for exactly the layer Notice adds.
The north star is every reflection tier running locally — no API dependency, no data disclosure, no marginal cost per reflection.
Runtime reality. MLX is blocked for 3B models on iPhone. MLX’s memory overhead (~15 GB for Llama 3.2 3B 4-bit vs. llama.cpp’s ~3.67 GB) stems from three architectural decisions: no memory-mapped weight loading, an aggressive buffer cache that never returns memory to the OS, and per-operation Metal buffer allocation for intermediates. The theoretical minimum for Llama 3.2 3B at 4-bit with 1K context is ~1.95 GB; llama.cpp gets within 25–50% of this floor. Two viable paths: llama.cpp for 3B models (4–8 second generation, higher quality) or MLX for 1B–1.7B models (under 2 seconds, lower ceiling).
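The ~1.95 GB floor is back-of-envelope recoverable. A sketch of the arithmetic; the model config values (layers, KV heads, head dim) are assumptions for Llama 3.2 3B, and the essay's figure also includes runtime overhead this ignores:

```python
# Back-of-envelope for the on-device memory floor quoted above.
params = 3.21e9            # Llama 3.2 3B parameter count
bits_per_weight = 4.5      # ~4-bit quantization plus scales/zeros overhead
weights_gb = params * bits_per_weight / 8 / 1e9

# KV cache for 1K context, fp16: layers * 2 (K and V) * kv_heads * head_dim
layers, kv_heads, head_dim, ctx, kv_bytes = 28, 8, 128, 1024, 2
kv_gb = layers * 2 * kv_heads * head_dim * ctx * kv_bytes / 1e9

print(f"weights ~ {weights_gb:.2f} GB, KV cache ~ {kv_gb:.2f} GB, "
      f"total ~ {weights_gb + kv_gb:.2f} GB")
```

The sum lands near the quoted ~1.95 GB floor, which makes MLX's ~15 GB footprint the anomaly the paragraph describes: the overhead is architectural, not inherent to the model.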
Model candidates. SmolLM3-3B is the leading candidate — purpose-built for on-device, strong instruction-following, Apache 2.0. Llama 3.2 3B is the safe default. Qwen3 1.7B for the MLX/small-model path. Recommendation: benchmark SmolLM3-3B via llama.cpp against Qwen3 1.7B via MLX on target hardware. Let quality evaluation decide.
Training pipeline. Teacher-student distillation from Claude API. Target: 1,200 reflection examples covering diverse snap sequences plus 150 correction examples that demonstrate constraint boundaries — what the model should not say. The correction examples are critical: LoRA fine-tuning on domain-specific output degrades general instruction-following if training data doesn’t include instruction-following examples alongside contemplative reflections.
Hybrid routing. .brief reflections: on-device primary, cloud fallback. .exploratory: cloud primary for now, on-device plausible for 3B models as training data accumulates. .daily and .weekly synthesis: always cloud — requires analytical reasoning over variable-length sequences that exceeds small model capability. On-device .brief covers ~80% of API calls.
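The routing rule above can be sketched directly. Tier names mirror the essay's .brief/.exploratory/.daily/.weekly; everything else is an illustrative assumption:

```python
# Sketch of the hybrid routing rule: brief reflections prefer the local
# model with cloud fallback; everything heavier goes to the cloud.
def route(tier: str, on_device_available: bool) -> str:
    if tier == "brief":
        return "on_device" if on_device_available else "cloud"  # fallback
    if tier == "exploratory":
        return "cloud"   # cloud primary for now; on-device plausible later
    return "cloud"       # daily/weekly synthesis: always cloud

calls = ["brief"] * 8 + ["exploratory", "daily"]  # ~80% brief in practice
local = sum(1 for c in calls if route(c, True) == "on_device")
print(f"{local}/{len(calls)} calls served without an API request")
```

With the brief tier local, roughly four in five reflections never generate an API request, which is where the zero-marginal-cost claim for the Core tier comes from.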
Three-tier evaluation before shipping. Tier 1: automated constraint gate (no diagnostic language, no raw biometric values, no prescriptive framing). Tier 2: LLM-as-judge scoring relational orientation, phenomenological precision, novelty, tone. Tier 3: blind A/B with experienced practitioners. Ship threshold: Tier 1 >99% pass, Tier 2 within 15% of Claude baseline, Tier 3 preference >40%.
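The Tier 1 gate is mechanical enough to sketch. The banned patterns below are illustrative assumptions, not the shipped rule set:

```python
# Sketch of the Tier 1 automated constraint gate: reject reflections that
# use diagnostic language, raw biometric values, or prescriptive framing.
import re

BANNED = [
    r"\b(diagnos|disorder|depression)\w*\b",   # diagnostic language
    r"\b\d+(\.\d+)?\s?(ms|bpm)\b",             # raw biometric values
    r"\byou (should|must|need to)\b",          # prescriptive framing
]

def passes_constraint_gate(reflection: str) -> bool:
    """True if no banned pattern appears (case-insensitive)."""
    return not any(re.search(p, reflection, re.IGNORECASE) for p in BANNED)

print(passes_constraint_gate(
    "You named calm while your body stayed activated. "
    "What happens when you hold both?"))                          # True
print(passes_constraint_gate("Your HRV was 22 ms below average."))  # False
```

Running this over every candidate output makes the >99% Tier 1 pass threshold a cheap, fully automated check before the more expensive judge and human tiers.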
Long-term differentiator. On-device LoRA adaptation: the model learns your phenomenological vocabulary, your somatic patterns, your ways of relating to experience. The personalized weights never leave the phone. This is architecturally impossible with a cloud API and represents a genuine moat — the value increases monotonically with use, tied to your specific device and practice history.
| Week | Milestone | Key Deliverables |
| --- | --- | --- |
| 1 | Foundation & Feedback | Validate Foundation Models on physical hardware (interpreter <1s, context assembly <3s). Incorporate beta taxonomy feedback. Deploy Cloudflare Worker proxy with App Attest. Push new TestFlight build (sessions 11–15). Begin synthetic training corpus generation. |
| 2 | On-Device & Measurement | Ship on-device brief reflections via llama.cpp or MLX (runtime fork resolved by benchmark). Implement scaffolding decay phase 1 triggers. Begin measuring interoceptive lead time across beta cohort. Fix keyboard drawer default on debrief screen. |
| 3+ | Launch Preparation | Expand beta beyond Jhourney (QS/Reddit channels). Launch notice.tools landing page. Finalize App Store metadata and screenshots (FDA-compliant language). Privacy settings view. App Store submission. |
Each week’s deliverables are gated by the previous week’s outcomes. If Foundation Models latency exceeds budget, the fallback to direct framework calls is straightforward. If the on-device model doesn’t meet the three-tier evaluation threshold, cloud reflections remain the primary path with no user-visible degradation.
Keyword strategy targets low-competition, high-fit terms that no major competitor owns.
Primary keywords: “interoception,” “somatic awareness,” “HRV journal,” “biofeedback journal,” “felt sense.” These are high-intent, low-competition — the contemplative-tech and quantified-self audiences search for them, but Calm and Headspace don’t target them.
Secondary keywords: “body awareness,” “emotional awareness,” “mindful HRV.”
Avoid: “mindfulness” and “meditation” — saturated, dominated by competitors with 8-figure marketing budgets.
App Store subtitle (30 chars): “Body Awareness & HRV Journal” or “Somatic Awareness Training.”
Screenshot story: Lead with wisdom, not dashboard. First screenshot: a reflection that names something the user didn’t consciously know. Second: the emotion picker with biometric context. Third: the privacy architecture diagram. The aha moment sells; the data display supports.
Category: Health & Fitness (primary), Lifestyle (secondary).
Critical constraint: No disease-specific language anywhere in metadata, descriptions, or screenshots. The FDA General Wellness Guidance classification depends on what the app says, not just what it measures.