The EU AI Act became fully applicable to high-risk AI systems on 2 August 2026. If you’re building or integrating an AI system that touches human body measurements — for sizing, ergonomics, health screening, or fitness — there’s a reasonable chance you need to care about this.
“A reasonable chance” is worth being more precise about. The Act’s requirements differ substantially depending on where your system sits in its risk classification scheme. Here’s the breakdown.
The risk classification scheme
The EU AI Act divides AI systems into four risk tiers. Where your system lands determines what you must do.
Unacceptable risk (prohibited): Systems that deploy subliminal manipulation, exploit the vulnerabilities of specific groups, perform social scoring, or run real-time remote biometric identification in public spaces for law enforcement. No body measurement application falls here under normal circumstances.
High risk: AI systems used in specific regulated domains: employment decisions, credit scoring, essential services access, education, law enforcement, border control, and — critically — biometric identification and categorization. This is the tier where most body measurement applications need to carefully assess their classification.
Limited risk: Systems such as chatbots or AI-generated content, which carry transparency obligations but no conformity assessment requirements.
Minimal risk: Most AI systems. No specific requirements beyond general product safety law.
Is a body measurement API “high risk”?
This depends on its purpose and how it’s used.
Biometric categorization is explicitly covered. The Act defines biometric categorization as “assigning natural persons to specific categories on the basis of their biometric data,” and Annex III lists categorization according to sensitive or protected attributes as high-risk. If your system uses body measurements to categorize people (for example: assigning people to risk categories for insurance pricing, or filtering job applicants by physical requirements), that’s high-risk biometric categorization.
Sizing and fitting — predicting a person’s likely clothing size from height and weight — is closer to the minimal/limited risk end. The purpose is product fit, not biometric identification or categorization of people. However, if the same system is repurposed (for example, by an employer using body dimensions to screen job candidates), the downstream use could trigger high-risk classification.
Health screening applications that use body measurements to assess health risk (BMI-based screening, growth monitoring for malnutrition detection) are in ambiguous territory. If they influence clinical decisions or access to health services, high-risk classification is plausible.
The Act applies to the deployer (the organization putting the system into use) as well as the provider (the developer). A stateless prediction API that returns body dimensions is easier to classify as minimal risk; an integrated system that uses those dimensions to make consequential decisions about individuals is harder.
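To make the stateless case concrete, here’s a minimal sketch of what such a prediction service can look like. Everything in it is illustrative: the type names, the fields, and the placeholder coefficients are invented for this post, not taken from any real API.

```python
from dataclasses import dataclass

@dataclass
class SizingRequest:
    height_cm: float
    weight_kg: float

@dataclass
class SizingResponse:
    chest_cm: float
    waist_cm: float
    hip_cm: float

def predict_dimensions(req: SizingRequest) -> SizingResponse:
    """Stateless: the output depends only on the request.

    No identifier is accepted and nothing is persisted, so this
    service alone cannot link a response back to a natural person.
    """
    bmi = req.weight_kg / (req.height_cm / 100) ** 2  # kg/m^2
    # Placeholder linear coefficients standing in for a real regressor.
    return SizingResponse(
        chest_cm=0.45 * req.height_cm + 0.9 * bmi,
        waist_cm=0.35 * req.height_cm + 1.3 * bmi,
        hip_cm=0.45 * req.height_cm + 1.1 * bmi,
    )
```

The design choice that matters is what the function signature excludes: no user ID, no session, no storage. That is what keeps the minimal-risk argument easy to make.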
What high-risk classification requires
If your system is high-risk, the obligations are substantial:
Risk management system: A documented process for identifying and mitigating risks throughout the system lifecycle.
Data governance: Documentation of training data provenance, coverage across demographic groups, and bias assessment results.
Technical documentation: Detailed description of the system’s purpose, intended use cases, performance metrics, accuracy by population segment, and known limitations.
Transparency: The system must come with clear instructions for use, and people interacting with it must be informed that they’re dealing with an AI system.
Human oversight: High-risk systems must be designed to allow human intervention and override.
Accuracy, robustness, and cybersecurity: Documented accuracy metrics with breakdowns by relevant population groups, resilience to errors and misuse, and protection against manipulation.
Conformity assessment: Registration in the EU AI Act database and either internal self-assessment (for most high-risk categories) or assessment involving a notified body (for certain biometric systems).
What minimal/limited risk requires
If your body measurement system is minimal risk — which covers most pure sizing applications — the requirements are light:
- No prohibited techniques (no subliminal manipulation)
- Standard product safety law compliance
- No registration requirement
If it’s limited risk (for example, systems that interact with humans in ways users might not recognize as AI), transparency requirements apply: users must be informed they’re interacting with an AI system.
Practical steps for integrators (developers using body measurement APIs)
If you’re integrating a third-party body measurement API into your product:
1. Document the use case. What decisions does the system influence? Sizing recommendations have lower regulatory weight than health risk assessments. If your intended use is high-risk, document that you’re operating under the high-risk framework.
2. Review the API provider’s documentation. A compliant API provider should make their technical documentation available — training data provenance, accuracy metrics by demographic group, known limitations, intended use cases. If this documentation doesn’t exist, that’s a risk signal.
3. Assess population coverage. The Act requires high-risk AI systems to achieve appropriate accuracy, and its data governance rules call for examining bias across demographic groups. For body measurement APIs, this means coverage across sexes, ages, and regional populations. Ask specifically: what datasets were used for each population, and what are the accuracy metrics for your target user base?
4. Watch for repurposing. If your system starts in a minimal-risk sizing use case but you later consider using the body data for something consequential (insurance, hiring), that’s a new system requiring a fresh classification assessment.
5. Keep the data minimal. The Act aligns with GDPR’s data minimization principle. Only process body measurements that are necessary for the stated purpose; see the sketch after this list.
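To illustrate step 5, here’s a minimal data-minimization sketch. The `REQUIRED_FIELDS` set and the record fields are hypothetical; the pattern is what matters: whitelist at the integration boundary so unnecessary personal data never reaches the third-party API.

```python
# Fields the sizing model actually needs; everything else is dropped.
REQUIRED_FIELDS = {"height_cm", "weight_kg"}

def minimized_payload(user_record: dict) -> dict:
    # Forward only what the stated purpose requires. Names, emails,
    # and photos never leave your system.
    return {k: v for k, v in user_record.items() if k in REQUIRED_FIELDS}

payload = minimized_payload({
    "name": "Jane Example",       # not needed for sizing: dropped
    "email": "jane@example.com",  # not needed: dropped
    "height_cm": 172.0,
    "weight_kg": 68.5,
})
assert set(payload) == REQUIRED_FIELDS
```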
What API providers must do
If you’re providing a body measurement API to other developers:
Publish technical documentation. At minimum: the training datasets used, population coverage, accuracy metrics disaggregated by sex and region, known limitations and failure modes, intended and prohibited use cases.
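One way to keep that documentation honest is to maintain it as structured data that ships with the model, loosely in the spirit of the model card convention. The schema below is invented for illustration; the Act prescribes content, not a format.

```python
# Illustrative documentation record; every field name and value here
# is a placeholder, not an official schema.
TECHNICAL_DOCUMENTATION = {
    "intended_uses": ["apparel sizing", "ergonomics"],
    "prohibited_uses": ["biometric identification", "health diagnosis"],
    "training_data": {
        "datasets": ["internal-anthropometry-v3"],  # hypothetical name
        "population_coverage": {
            "sex": ["female", "male"],
            "age_range_years": [18, 75],
            "regions": ["Europe", "North America", "East Asia"],
        },
    },
    "accuracy": {
        "metric": "mean absolute error (cm)",
        "overall": 1.8,  # placeholder; see the disaggregated report
        "disaggregated_report": "docs/accuracy_by_group.md",
    },
    "known_limitations": [
        "reduced accuracy outside the 150-200 cm height range",
    ],
}
```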
Label intended use cases clearly. Describe what the API is designed for (sizing, ergonomics) and what it’s not designed for (biometric identification, health diagnosis).
Provide information for downstream classification. Developers integrating your API need to classify their own systems. Give them the information they need to do so accurately.
Disaggregate accuracy metrics. A single overall accuracy number is insufficient. Report separately for male/female, by age category, and by regional population group.
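A sketch of what disaggregated reporting looks like in practice, assuming a hypothetical evaluation set of (group, predicted, actual) records; the group keys and numbers are invented:

```python
from collections import defaultdict

def mae_by_group(records):
    """Mean absolute error per group from (group, predicted, actual) rows."""
    errors = defaultdict(list)
    for group, predicted_cm, actual_cm in records:
        errors[group].append(abs(predicted_cm - actual_cm))
    return {group: sum(errs) / len(errs) for group, errs in errors.items()}

evaluation = [
    (("female", "18-29"), 92.1, 90.4),
    (("female", "30-49"), 88.7, 89.9),
    (("male", "18-29"), 101.3, 99.8),
    (("male", "30-49"), 98.0, 98.6),
]
for group, mae in mae_by_group(evaluation).items():
    print(group, f"MAE = {mae:.1f} cm")
```

This is also what integrators should be asking for under step 3 above: per-group numbers for their target user base, not a single headline figure.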
The EU AI Act and GDPR interaction
The EU AI Act doesn’t replace GDPR — it adds to it. Body measurement data that constitutes biometric data under GDPR Article 4(14) requires a lawful basis under Article 9 (explicit consent being the most common), independently of AI Act classification.
The AI Act adds documentation, transparency, and human oversight obligations on top of GDPR’s data protection requirements. For high-risk systems, both frameworks apply simultaneously.
The practical implication: your compliance roadmap should treat GDPR and the AI Act as parallel workstreams, not alternatives. A stateless API architecture that minimizes personal data processing helps with both.
Timeline
- 2 February 2025: Prohibited AI practices provisions applied
- 2 August 2025: Obligations for general-purpose AI (GPAI) models applied
- 2 August 2026: High-risk AI system obligations fully apply
- 2 August 2027: Obligations apply for high-risk AI systems embedded in products covered by other EU product safety legislation
If you’re building a body measurement application launching after August 2026 and it might be high-risk, compliance work should already be underway. The conformity assessment process, technical documentation, and registration in the EU AI Act database take months.
The EU AI Act’s practical impact on most body measurement applications is modest — sizing and ergonomics tools are unlikely to be classified high-risk in typical use cases. The more significant risk is downstream repurposing: a system built for sizing that later gets used for consequential decisions about individuals. Build with use case documentation from the start, and you have a clear record of intended purpose if classification is ever questioned.