Presage Breathing Model Card


THE FOLLOWING INFORMATION, AND PRESAGE'S SDK AND APP, ARE OFFERED FOR GENERAL WELLNESS AND INFORMATIONAL PURPOSES ONLY. NEITHER HAS BEEN CLEARED BY THE FDA, AND NEITHER MAY BE USED FOR MEDICAL DIAGNOSIS OR TREATMENT.

1. Model Details

Basic info: Presage's vitals-by-video analysis generates breathing-rate metrics from a video of a subject.

Organization: Presage Technologies

Model date: 2026-04-01

Model version: 3.0.0-rc.11

Model type: A proprietary computer vision and signal processing pipeline that estimates breathing rate and breathing waveform from video of a subject.

License: The algorithm is currently proprietary, and licenses are granted under a predefined agreement.

Contact: Questions can be sent to: support@presagetech.com


2. Intended Use

Model Uses

This breathing model is intended for non-diagnostic analysis of breathing mechanics. It is intended to be used with video from a stationary device that keeps the subject's face, chest, and shoulders in view. It requires approximately 45 seconds of uninterrupted data and is only intended to measure breathing rates in the range of 4-40 breaths per minute.
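As a sketch, the intended-use constraints above can be enforced client-side before requesting a measurement. The constants and function names here are illustrative only and are not part of the Presage SDK:

```python
# Illustrative pre-checks for the intended-use constraints above.
# These names are hypothetical and not part of the Presage SDK.

MIN_DURATION_S = 45.0        # approximate uninterrupted footage required
BR_RANGE_BPM = (4.0, 40.0)   # supported breathing-rate range

def is_measurable(duration_s: float) -> bool:
    """Return True if the clip is long enough for a measurement."""
    return duration_s >= MIN_DURATION_S

def is_in_supported_range(br_bpm: float) -> bool:
    """Return True if a reported rate falls in the supported 4-40 BrPM range."""
    lo, hi = BR_RANGE_BPM
    return lo <= br_bpm <= hi
```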

Out-of-Scope Uses

As noted above, the Presage breathing model is not intended for diagnostic purposes. It provides no alarms and is not an apnea monitoring or detection model.


3. Validation

Reference Standard: ETCO2 capnography via Biopac MP160. For each SDK breathing-rate timestamp t, the CO2 reference breathing rate is computed from CO2 peaks detected in a 30-second lookback window ending at t:

BR_CO2 = 60 · (N − 1) / (t_last_peak − t_first_peak)

where N is the number of CO2 peaks in the window, t_last_peak is the time of the last peak, and t_first_peak is the time of the first peak. This produces a continuous ground-truth breathing rate aligned to each SDK output timestamp, rather than a simple count of peaks per window.
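The reference computation above can be sketched directly from the formula; given the CO2 peak times detected in a 30-second lookback window, it returns breaths per minute:

```python
def reference_breathing_rate(peak_times_s):
    """Breathing rate (BrPM) from CO2 peak times in a lookback window:
    BR = 60 * (N - 1) / (t_last_peak - t_first_peak)."""
    n = len(peak_times_s)
    if n < 2:
        return None  # not enough peaks to form a rate
    span_s = peak_times_s[-1] - peak_times_s[0]
    return 60.0 * (n - 1) / span_s
```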

Comparison Methodology: Breathing measurements from the camera-based system were compared against time-aligned reference measurements. Ground truth signals were checked for quality using labeled signal annotations; segments with poor signal quality were excluded from analysis.
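The comparison above reduces to standard agreement statistics over time-aligned prediction/reference pairs. A minimal pure-Python sketch of the metrics reported later in this card (MAE, RMSE, Pearson r):

```python
import math

def agreement_metrics(pred, ref):
    """MAE, RMSE, and Pearson r for time-aligned prediction/reference pairs."""
    n = len(pred)
    assert n == len(ref) and n > 1, "need equal-length series with n > 1"
    errs = [p - r for p, r in zip(pred, ref)]
    mae = sum(abs(e) for e in errs) / n
    rmse = math.sqrt(sum(e * e for e in errs) / n)
    mean_p = sum(pred) / n
    mean_r = sum(ref) / n
    cov = sum((p - mean_p) * (r - mean_r) for p, r in zip(pred, ref))
    sd_p = math.sqrt(sum((p - mean_p) ** 2 for p in pred))
    sd_r = math.sqrt(sum((r - mean_r) ** 2 for r in ref))
    pearson = cov / (sd_p * sd_r)
    return mae, rmse, pearson
```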


4. Data Demographics

Category Distribution
Total: 94 subjects, 185 videos
Camera (videos): Logitech C920: 93; Samsung S24 Rear 24mm (tripod): 92
Sex (subjects): Female: 50; Male: 44
Age Group (subjects): 18-25: 31; 26-35: 39; 36-45: 11; 46-55: 7; 56-65: 5; 65+: 1
Fitzpatrick (subjects): Type I: 14; Type II: 14; Type III: 7; Type IV: 24; Type V: 22; Type VI: 13
Lighting (videos): Ring Light: 185

Reference standard: ETCO2 via Biopac template peak detection for breathing-rate ground truth.


5. Data Provenance

Reference Instrumentation: Biopac research-grade physiological sensors: ETCO2 capnography (for breathing rate ground truth).

Camera Devices Tested: Samsung S24 Rear 24mm (tripod), Logitech C920 (tripod).

Average Camera Distance: Logitech C920: 36", Samsung S24 Rear 24mm: 39"

Data Handling: All subject data is de-identified. Derived metrics and anonymized identifiers are retained. Data is securely stored with access restricted to trained researchers.


6. Factors

The breathing metric model requires face and pose detection to identify the subject's chest region for motion analysis.


These factors can affect model performance:

Lighting Conditions Tested: Ring Light only (see Data Demographics).

Other factors: camera stability and distance, and visibility of the subject's face, chest, and shoulders (see Intended Use).


7. Metrics

(at 80% Return Rate, Confidence >= 44)

  1. MAE: 0.62 BrPM
  2. RMSE: 1.45 BrPM

8. Quantitative Analysis

Computed vs Ground Truth at Confidence Thresholds

Bland-Altman Plot
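The Bland-Altman plot referenced above summarizes agreement as the mean difference (bias) and the 95% limits of agreement (bias ± 1.96 × SD of the differences). A minimal sketch of that standard computation, independent of the Presage SDK:

```python
import math

def bland_altman(pred, ref):
    """Bias and 95% limits of agreement for paired measurements."""
    diffs = [p - r for p, r in zip(pred, ref)]
    n = len(diffs)
    bias = sum(diffs) / n
    # Sample standard deviation of the differences (n - 1 denominator).
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```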

Confidence Lookup Table

Confidence >= MAE (BrPM) RMSE (BrPM) Pearson r Return Rate (%) N (samples)
0 0.97 2.25 0.839 100.0 1562
10 0.81 1.89 0.875 95.6 1493
20 0.77 1.87 0.877 90.7 1417
30 0.71 1.76 0.890 86.6 1352
40 0.67 1.71 0.895 82.8 1294
44 0.62 1.45 0.923 80.0 1254
50 0.61 1.43 0.925 79.6 1243
60 0.56 1.30 0.940 71.4 1115
65 0.54 1.15 0.953 68.0 1062
70 0.52 1.03 0.963 64.1 1002
75 0.50 1.01 0.963 61.5 960
80 0.50 1.02 0.964 59.5 930
85 0.48 0.97 0.966 57.0 890
90 0.49 0.99 0.966 54.1 845
95 0.49 1.00 0.966 53.8 840
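The lookup table above encodes an accuracy/coverage tradeoff: raising the confidence threshold lowers MAE and RMSE but also lowers the return rate. Given a minimum acceptable return rate, one can pick the highest threshold that still meets it. A sketch, with a few rows excerpted from the table for illustration:

```python
# (confidence_threshold, MAE_BrPM, RMSE_BrPM, return_rate_pct) rows
# excerpted from the confidence lookup table above.
LOOKUP = [
    (0, 0.97, 2.25, 100.0),
    (44, 0.62, 1.45, 80.0),
    (70, 0.52, 1.03, 64.1),
    (85, 0.48, 0.97, 57.0),
]

def pick_threshold(min_return_rate_pct: float):
    """Highest confidence threshold whose return rate still meets the target,
    or None if no row qualifies."""
    eligible = [row for row in LOOKUP if row[3] >= min_return_rate_pct]
    return max(eligible, key=lambda row: row[0]) if eligible else None
```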

Performance

(at 80% Return Rate, Confidence >= 44)

By Camera Type

Camera Type N (samples) Return Rate (%) MAE (BrPM) RMSE (BrPM) Pearson r
Samsung S24 Rear 24mm (tripod) 644 83.3 0.50 0.93 0.969
Logitech C920 610 77.3 0.75 1.85 0.881

By Fitzpatrick Skin Type

Fitzpatrick N (samples) Return Rate (%) MAE (BrPM) RMSE (BrPM) Pearson r
Type I 169 74.8 0.65 1.04 0.971
Type II 161 75.9 0.54 0.81 0.978
Type III 83 75.5 1.37 2.61 0.687
Type IV 314 85.6 0.41 0.60 0.991
Type V 346 88.7 0.63 2.02 0.838
Type VI 181 70.4 0.67 1.19 0.879

Note: All Fitzpatrick types were tested under Ring Light only.

By Sex

Sex N (samples) Return Rate (%) MAE (BrPM) RMSE (BrPM) Pearson r
Male 597 77.0 0.78 1.93 0.880
Female 657 83.5 0.47 0.80 0.973

By Age Group

Age Group N (samples) Return Rate (%) MAE (BrPM) RMSE (BrPM) Pearson r
18-25 424 76.3 0.57 0.94 0.963
26-35 481 80.8 0.75 2.04 0.875
36-45 162 89.5 0.40 0.64 0.975
46-55 129 86.6 0.70 1.16 0.953
56-65 56 70.9 0.30 0.43 0.994
65+ 2 100.0 0.54 0.54 1.000

By Lighting Type

Lighting N (samples) Return Rate (%) MAE (BrPM) RMSE (BrPM) Pearson r
Ring Light 1254 80.3 0.62 1.45 0.923

Waveform Example

Waveform confidence metric coming soon.


9. Fairness & Equity

Bias Assessment Methodology: Performance is stratified by Fitzpatrick skin type (I-VI), sex, camera type, and age group. Per-group metrics at the Confidence >= 44 operating point are reported in the Quantitative Analysis tables above.


10. Ethical Considerations

As a remote sensing technology, the risks posed to subjects in the trial are minimal; the primary risk is the association of a subject with their biometric data. Mitigations include de-identifying all subject data, including videos, before storage. Additionally, all data is securely stored, with access restricted to a small number of trained researchers.

The model is not intended for life-critical decisions, diagnosis, or prognostication.

11. Limitations and Tradeoffs