Basic info:
Presage vitals by video analysis generates breathing metrics (breath rate and breath trace) from a video clip containing a subject’s face and chest. An API has also been developed to give users easy access to these metrics for commercial and scientific applications.
Organization developing model: Presage Technologies
Model date: 20250911t173426
Model version: 1.6.0
Model type: A deterministic computer vision model with two primary stages. The first identifies and tracks key feature points on the subject’s chest. The second uses signal processing to analyze the temporal movement of these features to isolate and quantify the physiological breathing trace and rate (an illustrative sketch follows this section).
License: The algorithm is currently proprietary; licenses are granted under a predefined agreement.
Where to send questions: support@presagetech.com
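As an illustration of the two-stage design described under "Model type", the sketch below tracks feature points in a chest region of interest with optical flow, then band-pass filters their vertical motion to recover a breathing trace and rate. Every implementation choice here (OpenCV Shi-Tomasi corners, Lucas-Kanade tracking, a Butterworth band-pass over the 8-31 bpm band, a spectral peak for the rate, a chest ROI assumed to come from an upstream face/pose detector) is an assumption for illustration only and is not the proprietary Presage algorithm.

```python
# Illustrative sketch only -- NOT the proprietary Presage algorithm.
# Stage 1: track chest feature points; Stage 2: signal-process their motion.
import cv2
import numpy as np
from scipy.signal import butter, filtfilt, periodogram

def estimate_breathing_rate(video_path, chest_roi, fps=None):
    """chest_roi = (x, y, w, h), assumed to be supplied by an upstream
    face/pose detector (e.g. MediaPipe); passed in here for brevity."""
    cap = cv2.VideoCapture(video_path)
    fps = fps or cap.get(cv2.CAP_PROP_FPS)
    x, y, w, h = chest_roi

    ok, frame = cap.read()
    if not ok:
        raise ValueError("could not read video")
    prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Stage 1: Shi-Tomasi corners inside the chest ROI, tracked frame-to-frame
    # with Lucas-Kanade optical flow.
    mask = np.zeros_like(prev_gray)
    mask[y:y + h, x:x + w] = 255
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50, qualityLevel=0.01,
                                  minDistance=7, mask=mask)
    trace = []  # mean vertical position of the tracked points per frame
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        pts = pts[status.flatten() == 1].reshape(-1, 1, 2)
        trace.append(pts[:, 0, 1].mean())  # chest rise/fall -> vertical motion
        prev_gray = gray
    cap.release()

    # Stage 2: band-pass to the physiological band (8-31 bpm = 0.13-0.52 Hz),
    # then take the dominant spectral peak as the breathing rate.
    sig = np.asarray(trace) - np.mean(trace)
    b, a = butter(2, [8 / 60, 31 / 60], btype="band", fs=fps)
    breath_trace = filtfilt(b, a, sig)
    freqs, power = periodogram(breath_trace, fs=fps)
    rate_bpm = 60 * freqs[np.argmax(power)]
    return breath_trace, rate_bpm
```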
Model uses:
This aggregate breathing rate model is intended for use by qualified clinicians and researchers for non-diagnostic analysis of breathing and respiration mechanics. It is intended to be used with video from a stationary device (such as a handheld, mobile, or laptop camera) that keeps the subject’s face, chest, and shoulders in view, is at least 15 consecutive seconds long, and has a frame rate of at least 5 fps. The subject’s face, chest, and shoulders must be unobstructed for at least 15 consecutive seconds within the video. The model is only intended to measure breathing rate values in the range of 8-31 bpm.
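A caller might screen clips against these requirements before computing metrics. The sketch below checks only the duration and frame-rate constraints stated above using OpenCV; it is an illustrative assumption, not part of the Presage API, and it does not verify that the face, chest, and shoulders are visible.

```python
# Hedged pre-check against the intended-use constraints stated above
# (>= 15 consecutive seconds of video at >= 5 fps).
import cv2

MIN_DURATION_S = 15.0
MIN_FPS = 5.0

def clip_meets_requirements(video_path: str) -> bool:
    cap = cv2.VideoCapture(video_path)
    if not cap.isOpened():
        return False
    fps = cap.get(cv2.CAP_PROP_FPS)
    n_frames = cap.get(cv2.CAP_PROP_FRAME_COUNT)
    cap.release()
    duration = n_frames / fps if fps else 0.0
    return fps >= MIN_FPS and duration >= MIN_DURATION_S
```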
Out-of-scope uses:
The Presage breathing rate and breathing trace model is not intended for diagnostic purposes. Do not self-diagnose or self-medicate on the basis of its measurements. No alarms are provided, and it is not an apnea monitoring or apnea detection model. It is not currently intended for use in highly dynamic environments or with a rapidly moving camera. All users must acknowledge and agree to our license agreement and terms of service prior to use.
The breathing metric model first requires MediaPipe’s face and pose detection algorithms to identify the subject’s face and pose. If these features are not identifiable by MediaPipe, breath metrics cannot be calculated.
Several factors can affect the ability to detect the subject’s face and pose; these are described in the MediaPipe model cards for the Full-Range Face Detection model and the Lite Pose Detection model.
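Because breath metrics depend on MediaPipe finding the face and pose, one way to pre-screen a clip is to run the same detectors on a representative frame. The sketch below uses MediaPipe’s Python solutions API with the full-range face detector (model_selection=1) and the lite pose model (model_complexity=0); checking a single frame, and the particular confidence threshold, are simplifying assumptions.

```python
# Pre-flight check: can MediaPipe find a face and a pose in a frame?
# Uses the MediaPipe "solutions" Python API on one representative frame.
import cv2
import mediapipe as mp

def face_and_pose_detectable(frame_bgr) -> bool:
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    # model_selection=1 -> full-range face detector; model_complexity=0 -> lite pose model
    with mp.solutions.face_detection.FaceDetection(model_selection=1,
                                                   min_detection_confidence=0.5) as fd, \
         mp.solutions.pose.Pose(model_complexity=0) as pose:
        face_ok = fd.process(rgb).detections is not None
        pose_ok = pose.process(rgb).pose_landmarks is not None
    return face_ok and pose_ok
```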
These factors can affect model performance:
Other factors:
Evaluation data:
The evaluation data consists of 223 videos. Reference breath rates were measured with a Biopac research-grade strain gauge breathing sensor. 30 s clips from each video were run through the Presage breath rate model, each returning a single measurement, for a total of 642 samples. Each video was acquired from a different user, covering a range of demographic variability including age, gender, and Fitzpatrick skin type.
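For reference, the error figures reported below can be computed from paired model outputs and Biopac reference values roughly as sketched here. The bootstrap confidence interval and the NaN convention for clips without a returned value are assumptions; the model card does not state how the 95% CI or the return rate were computed.

```python
# Sketch of the evaluation metrics: RMSD, MAE with a 95% CI, and return rate.
# The bootstrap CI is an assumption; the card does not specify the CI method.
import numpy as np

def evaluate(pred, ref, n_boot=10_000, seed=0):
    """pred/ref: paired breathing rates (bpm); NaN in pred = no value returned."""
    pred, ref = np.asarray(pred, float), np.asarray(ref, float)
    returned = ~np.isnan(pred)
    return_rate = returned.mean()              # fraction of clips with a result
    err = pred[returned] - ref[returned]
    rmsd = float(np.sqrt(np.mean(err ** 2)))
    mae = float(np.mean(np.abs(err)))
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, err.size, size=(n_boot, err.size))
    boot_mae = np.abs(err[idx]).mean(axis=1)   # bootstrap resampling over samples
    lo, hi = np.percentile(boot_mae, [2.5, 97.5])
    return {"RMSD": rmsd, "MAE": mae, "MAE 95% CI": (float(lo), float(hi)),
            "Mean Return Rate": float(return_rate)}
```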
Distribution of error figures:
Skin Tone (Fitzpatrick) | Proportion of Dataset (num samples) | RMSD (bpm) | MAE (bpm) [95% CI] | Mean Return Rate |
---|---|---|---|---|
1 | 0.18 (115) | 2.60 | 1.70 [1.23, 2.17] | 0.68 |
2 | 0.14 (91) | 2.30 | 1.59 [1.12, 2.06] | 0.67 |
3 | 0.10 (61) | 1.98 | 1.44 [0.95, 1.94] | 0.51 |
4 | 0.15 (97) | 1.96 | 1.28 [0.89, 1.67] | 0.58 |
5 | 0.12 (74) | 2.41 | 1.62 [1.14, 2.10] | 0.72 |
6 | 0.15 (94) | 1.92 | 1.29 [0.90, 1.69] | 0.73 |
Sex | Proportion of Dataset (num samples) | RMSD (bpm) | MAE (bpm) [95% CI] | Mean Return Rate |
---|---|---|---|---|
M | 0.36 (232) | 2.40 | 1.56 [1.26, 1.86] | 0.62 |
F | 0.47 (300) | 2.12 | 1.45 [1.21, 1.69] | 0.68 |
Camera Type | Proportion of Dataset (num samples) | RMSD (bpm) | MAE (bpm) [95% CI] | Mean Return Rate |
---|---|---|---|---|
Android | 0.49 (315) | 2.48 | 1.64 [1.37, 1.91] | 0.74 |
Econ | 0.51 (327) | 2.14 | 1.42 [1.18, 1.65] | 0.57 |