Presage Breathing Model Card


THE FOLLOWING INFORMATION, AND PRESAGE'S SDK AND APP, ARE OFFERED FOR GENERAL WELLNESS AND INFORMATIONAL PURPOSES ONLY. NONE OF THEM HAS BEEN CLEARED BY THE FDA, AND NONE MAY BE USED FOR MEDICAL DIAGNOSIS OR TREATMENT.

1. Model Details

Basic info: Presage's video-based vitals analysis generates breathing-rate metrics from a video of a subject.

Organization: Presage Technologies

Model date: 2026-03-18

Model version: 3.0.0-rc.8

Model type: A proprietary computer vision and signal processing pipeline that estimates breathing rate and breathing waveform from video of a subject.

License: The algorithm is currently proprietary, and licenses are granted under a predefined agreement.

Contact: Questions can be sent to: support@presagetech.com


2. Intended Use

Model Uses

This breathing model is intended for non-diagnostic analysis of breathing mechanics. It is intended to be used with video from a stationary device that keeps the subject's face, chest, and shoulders in view. It requires approximately 45 seconds of uninterrupted data, and it is only intended to measure breathing rates in the range of 4-40 breaths per minute.
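As a rough illustration, an integrator could gate recordings against these intended-use constraints before trusting an estimate. The constant names and helper below are hypothetical sketches based on the constraints stated above, not part of the Presage SDK:

```python
# Hypothetical pre-check mirroring the intended-use constraints above.
MIN_DURATION_S = 45.0          # approximate minimum of uninterrupted data
BR_RANGE_BRPM = (4.0, 40.0)    # supported breathing-rate range

def within_intended_use(duration_s, br_estimate=None):
    """True when a recording meets the duration requirement and, if an
    estimate is supplied, the estimate falls inside the supported range."""
    if duration_s < MIN_DURATION_S:
        return False
    if br_estimate is not None:
        lo, hi = BR_RANGE_BRPM
        return lo <= br_estimate <= hi
    return True
```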

Out-of-Scope Uses

As noted above, the Presage breathing model is not intended for diagnostic purposes. It provides no alarms, and it is not an apnea monitoring or detection model.


3. Validation

Reference Standard: ETCO2 capnography via Biopac MP160. For each SDK breathing-rate timestamp t, the CO2 reference breathing rate is computed from CO2 peaks detected in a 30-second lookback window ending at t:

BR_CO2 = 60 · (N − 1) / (t_last_peak − t_first_peak)

where N is the number of CO2 peaks in the window, t_last_peak is the time of the last peak, and t_first_peak is the time of the first peak. This produces a continuous ground-truth breathing rate aligned to each SDK output timestamp, rather than a simple count of peaks per window.
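The lookback computation above can be sketched in code as follows. This is a minimal illustration of the formula, assuming CO2 peak times have already been detected; the function name and 30-second default are taken from the description above, not from the Presage SDK:

```python
import numpy as np

def br_from_co2_peaks(peak_times, t, window=30.0):
    """Reference breathing rate (BrPM) at SDK timestamp t, from CO2 peak
    times (seconds) falling in a lookback window ending at t.
    Returns None when fewer than two peaks fall in the window."""
    peak_times = np.asarray(peak_times, dtype=float)
    in_window = peak_times[(peak_times > t - window) & (peak_times <= t)]
    if in_window.size < 2:
        return None
    n = in_window.size
    # BR_CO2 = 60 * (N - 1) / (t_last_peak - t_first_peak)
    return 60.0 * (n - 1) / (in_window[-1] - in_window[0])

# Peaks every 4 seconds correspond to 15 breaths per minute:
print(br_from_co2_peaks([2, 6, 10, 14, 18, 22, 26], t=30.0))  # -> 15.0
```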

Comparison Methodology: Breathing measurements from the camera-based system were compared against time-aligned reference measurements. Ground truth signals were checked for quality using labeled signal annotations; segments with poor signal quality were excluded from analysis.


4. Data Demographics

| Category | Distribution |
| --- | --- |
| Total | 93 subjects, 184 videos |
| Camera (videos) | Logitech C920: 93, Samsung S24 Rear 24mm (tripod): 91 |
| Sex (subjects) | Female: 49, Male: 44 |
| Age Group (subjects) | 18-25: 31, 26-35: 38, 36-45: 11, 46-55: 7, 56-65: 5, 65+: 1 |
| Fitzpatrick (subjects) | Type I: 14, Type II: 13, Type III: 7, Type IV: 24, Type V: 22, Type VI: 13 |
| Lighting (videos) | Ring Light: 184 |

Reference standard: ETCO2 via Biopac, with template-based peak detection providing the breathing-rate ground truth.


5. Data Provenance

Reference Instrumentation: Biopac research-grade physiological sensors: ETCO2 capnography (for breathing rate ground truth).

Camera Devices Tested: Samsung S24 Rear 24mm (tripod), Logitech C920 (tripod).

Average Camera Distance: Logitech C920: 36", Samsung S24 Rear 24mm: 39"

Data Handling: All subject data is de-identified. Derived metrics and anonymized identifiers are retained. Data is securely stored with access restricted to trained researchers.


6. Factors

The breathing metric model requires face and pose detection to identify the subject's chest region for motion analysis.


These factors can affect model performance:

Lighting Conditions Tested: Ring Light (all 184 videos).

Other factors: camera type and distance, subject framing (face, chest, and shoulders in view), Fitzpatrick skin type, sex, and age group. Per-group results are reported in the Quantitative Analysis section.


7. Metrics

(at 80% Return Rate, Confidence >= 70)

  1. MAE: 0.55 BrPM
  2. RMSE: 0.95 BrPM
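For reference, MAE and RMSE over paired camera and capnography breathing rates are computed in the standard way. The sketch below is a generic NumPy implementation; the function name and sample values are illustrative, not drawn from the validation data:

```python
import numpy as np

def mae_rmse(pred, ref):
    """Mean absolute error and root-mean-square error, in BrPM."""
    err = np.asarray(pred, dtype=float) - np.asarray(ref, dtype=float)
    return float(np.mean(np.abs(err))), float(np.sqrt(np.mean(err ** 2)))

# Illustrative values only: three paired breathing-rate samples.
mae, rmse = mae_rmse([15.2, 12.1, 18.0], [15.0, 12.5, 17.4])
print(mae, rmse)  # MAE 0.40 BrPM; RMSE is slightly larger, as it weights outliers more
```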

8. Quantitative Analysis

Computed vs Ground Truth at Confidence Thresholds

Bland-Altman Plot

Confidence Lookup Table

| Confidence >= | MAE (BrPM) | RMSE (BrPM) | Pearson r | Return Rate (%) | N (samples) |
| --- | --- | --- | --- | --- | --- |
| 0 | 0.83 | 1.72 | 0.901 | 100.0 | 1556 |
| 10 | 0.80 | 1.58 | 0.916 | 99.3 | 1545 |
| 20 | 0.76 | 1.42 | 0.932 | 97.4 | 1516 |
| 30 | 0.73 | 1.36 | 0.936 | 95.3 | 1483 |
| 40 | 0.71 | 1.33 | 0.937 | 93.4 | 1454 |
| 50 | 0.69 | 1.28 | 0.941 | 91.1 | 1417 |
| 60 | 0.62 | 1.09 | 0.957 | 86.4 | 1345 |
| 65 | 0.56 | 0.96 | 0.966 | 82.1 | 1278 |
| 70 | 0.55 | 0.95 | 0.968 | 80.0 | 1250 |
| 75 | 0.54 | 0.92 | 0.969 | 78.4 | 1220 |
| 80 | 0.52 | 0.87 | 0.973 | 76.6 | 1192 |
| 85 | 0.51 | 0.86 | 0.973 | 75.1 | 1169 |
| 90 | 0.49 | 0.84 | 0.974 | 72.5 | 1128 |
| 95 | 0.49 | 0.84 | 0.974 | 72.2 | 1123 |
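The accuracy/return-rate tradeoff in the lookup table can be reproduced from raw per-sample outputs along the following lines. This is a hypothetical sketch: the Presage SDK's actual output format is not specified here, and the sample values are illustrative:

```python
import numpy as np

def threshold_metrics(pred, ref, conf, threshold):
    """Return rate (%) and MAE (BrPM) over samples whose confidence
    meets the threshold; raising the threshold drops low-confidence
    estimates, trading return rate for accuracy."""
    pred, ref, conf = (np.asarray(a, dtype=float) for a in (pred, ref, conf))
    keep = conf >= threshold
    return_rate = 100.0 * float(keep.mean())
    mae = float(np.abs(pred[keep] - ref[keep]).mean()) if keep.any() else float("nan")
    return return_rate, mae

# Illustrative values: two of four samples survive a threshold of 70.
rr, mae = threshold_metrics(
    pred=[15.0, 12.0, 20.0, 9.0],
    ref=[15.5, 12.0, 18.0, 9.0],
    conf=[90, 50, 30, 80],
    threshold=70,
)  # rr -> 50.0, mae -> 0.25
```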

Performance

(at 80% Return Rate, Confidence >= 70)

By Camera Type

| Camera Type | N (samples) | Return Rate (%) | MAE (BrPM) | RMSE (BrPM) | Pearson r |
| --- | --- | --- | --- | --- | --- |
| Samsung S24 Rear 24mm (tripod) | 613 | 80.7 | 0.49 | 0.88 | 0.971 |
| Logitech C920 | 637 | 80.0 | 0.61 | 1.01 | 0.966 |

By Fitzpatrick Skin Type

| Fitzpatrick | N (samples) | Return Rate (%) | MAE (BrPM) | RMSE (BrPM) | Pearson r |
| --- | --- | --- | --- | --- | --- |
| Type I | 163 | 79.5 | 0.70 | 1.06 | 0.970 |
| Type II | 163 | 79.5 | 0.36 | 0.52 | 0.990 |
| Type III | 72 | 62.6 | 0.62 | 0.97 | 0.935 |
| Type IV | 304 | 79.6 | 0.41 | 0.71 | 0.987 |
| Type V | 332 | 86.2 | 0.55 | 0.97 | 0.966 |
| Type VI | 216 | 81.8 | 0.76 | 1.30 | 0.884 |

Note: All Fitzpatrick types were tested under Ring Light only.

By Sex

| Sex | N (samples) | Return Rate (%) | MAE (BrPM) | RMSE (BrPM) | Pearson r |
| --- | --- | --- | --- | --- | --- |
| Male | 609 | 78.9 | 0.61 | 1.07 | 0.962 |
| Female | 641 | 81.8 | 0.49 | 0.81 | 0.975 |

By Age Group

| Age Group | N (samples) | Return Rate (%) | MAE (BrPM) | RMSE (BrPM) | Pearson r |
| --- | --- | --- | --- | --- | --- |
| 18-25 | 431 | 76.1 | 0.55 | 0.93 | 0.963 |
| 26-35 | 478 | 83.3 | 0.60 | 1.05 | 0.965 |
| 36-45 | 163 | 85.8 | 0.36 | 0.51 | 0.985 |
| 46-55 | 112 | 79.4 | 0.72 | 1.19 | 0.962 |
| 56-65 | 66 | 85.7 | 0.35 | 0.51 | 0.994 |

By Lighting Type

| Lighting | N (samples) | Return Rate (%) | MAE (BrPM) | RMSE (BrPM) | Pearson r |
| --- | --- | --- | --- | --- | --- |
| Ring Light | 1250 | 80.3 | 0.55 | 0.95 | 0.968 |

Waveform Example

Waveform confidence metric coming soon.


9. Fairness & Equity

Bias Assessment Methodology: Performance is stratified by Fitzpatrick skin type (I-VI), sex, camera type, and age group. Per-group metrics at the reported operating point (Confidence >= 70) are given in the Quantitative Analysis tables above.


10. Ethical Considerations

As a remote sensing system, the risks posed to subjects in the trial are minimal; the primary risk is the association of each subject with their corresponding biometric data. These risks are mitigated by de-identifying all subject data, including videos, prior to storage. Additionally, all data is securely stored, with access restricted to a select number of trained researchers.

The model is not intended for life-critical decisions, diagnosis, or prognostication.

11. Limitations and Tradeoffs