Applications that handle body measurement data sit in a category that attracts both security scrutiny and regulatory attention. Whether the data is technically special-category biometric data under GDPR depends on the processing purpose, but the defensive approach is to treat it as sensitive regardless.
The security architecture for a body measurement application has a few specific concerns that differ from generic API security: the upstream prediction API must be called server-side (never from the client), stored profiles deserve access controls beyond simple authentication, and audit trails matter for erasure compliance.
Rule 1: Never call the prediction API from the client
The most important security rule for body measurement applications: the API key for the upstream prediction service must live on your server, never in client-side code.
Why this matters:
- Client-side code (JavaScript, mobile app binaries) can be inspected; an API key shipped in client-side code is effectively public.
- A leaked API key lets unauthorized parties make prediction requests billed to your account.
- Direct client-to-API calls bypass your own access controls and audit logging.
The correct architecture:
```
Client (browser/mobile)
        │
        │  POST /api/sizing/recommend
        │  Authorization: Bearer <user_JWT>
        ▼
Your Server (holds API key securely)
        │
        │  POST /v1/predict
        │  X-RapidAPI-Key: <YOUR_API_KEY>
        ▼
Prediction API
```
Your server validates the user’s JWT, enforces access control, calls the prediction API with your server-side key, and returns results. The prediction API key never leaves your infrastructure.
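The proxy step in the middle of the diagram can be sketched as a small request-building helper. This is illustrative only: the endpoint URL and the shape of the `measurements` payload are assumptions, not a real provider's API.

```python
def build_upstream_request(api_key: str, measurements: dict) -> dict:
    """Build the server-side request to the prediction API.

    The API key is attached as a request header here, on the server.
    The client's request to /api/sizing/recommend never contains it.
    """
    return {
        # Hypothetical endpoint — substitute your provider's URL
        "url": "https://prediction-api.example.com/v1/predict",
        "headers": {
            "X-RapidAPI-Key": api_key,
            "Content-Type": "application/json",
        },
        "json": measurements,
    }
```

The point of isolating this in one function is auditability: there is exactly one place in the codebase where the upstream key is used, which makes it easy to verify it never appears in any response sent back to the client.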
JWT design for measurement applications
JWT (JSON Web Token) is a good fit for user authentication in measurement applications because it can carry claims that control access without a database lookup per request.
Design your JWT claims to support least-privilege access for different access paths:
```python
from datetime import datetime, timedelta, timezone
from typing import Any

import jwt  # PyJWT library

# Load from an environment variable or secrets manager in practice —
# never commit the signing key to source control.
SECRET_KEY = "your-secret-key-min-32-bytes"
ALGORITHM = "HS256"


def create_user_token(
    user_id: str,
    measurement_profile_id: str | None = None,
    scopes: list[str] | None = None,
) -> str:
    """
    Create a JWT for a user with claims controlling measurement data access.

    Scopes available:
    - "sizing:read"   — read own size recommendations
    - "sizing:write"  — update own measurement profile
    - "sizing:delete" — delete own measurement profile (erasure request)
    - "admin:read"    — read any user's data (admin only)
    """
    now = datetime.now(timezone.utc)
    payload: dict[str, Any] = {
        "sub": user_id,
        "iat": now,
        "exp": now + timedelta(hours=24),
        "scope": scopes or ["sizing:read"],
        "jti": f"{user_id}-{int(now.timestamp())}",  # Unique token ID for revocation
    }
    # Include profile ID if available — allows profile-specific access control
    if measurement_profile_id:
        payload["profile_id"] = measurement_profile_id
    return jwt.encode(payload, SECRET_KEY, algorithm=ALGORITHM)


def verify_token(token: str, required_scope: str) -> dict:
    """
    Verify a JWT and check that it has the required scope.
    Raises ValueError if invalid.
    """
    try:
        payload = jwt.decode(token, SECRET_KEY, algorithms=[ALGORITHM])
    except jwt.ExpiredSignatureError:
        raise ValueError("Token expired")
    except jwt.InvalidTokenError as e:
        raise ValueError(f"Invalid token: {e}")

    token_scopes = payload.get("scope", [])
    # Handle both list and space-separated string formats
    if isinstance(token_scopes, str):
        token_scopes = token_scopes.split()
    if required_scope not in token_scopes and "admin:read" not in token_scopes:
        raise ValueError(f"Insufficient scope: {required_scope} required")
    return payload
```
Server-side API key management
Your prediction API key should be stored in environment variables or a secrets manager, never in source code:
```python
import os
from functools import lru_cache

import boto3  # or equivalent for your cloud provider


@lru_cache(maxsize=1)
def get_api_key() -> str:
    """
    Retrieve the prediction API key from the appropriate source.
    Cached after first call.
    """
    # In production: retrieve from secrets manager
    if os.environ.get("ENVIRONMENT") == "production":
        client = boto3.client("secretsmanager", region_name="eu-west-1")
        response = client.get_secret_value(SecretId="prediction-api-key")
        return response["SecretString"]

    # In development: use environment variable
    api_key = os.environ.get("PREDICTION_API_KEY")
    if not api_key:
        raise RuntimeError("PREDICTION_API_KEY not set")
    return api_key
```
Rotate API keys periodically. Most prediction API providers support multiple active keys or key rotation — use this capability rather than running on a single long-lived key indefinitely.
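Rotation is less risky if the key-retrieval layer understands two active keys at once: deploy the new key alongside the old, verify it works, then retire the old one. A sketch, assuming a second environment variable `PREDICTION_API_KEY_NEXT` (a name invented here):

```python
import os


def get_active_keys() -> list[str]:
    """Return all currently valid upstream keys, newest first.

    During a rotation window both keys are set; callers try the first
    key and fall back to the second on an auth failure. Once the new
    key is confirmed working, promote it to PREDICTION_API_KEY and
    unset PREDICTION_API_KEY_NEXT.
    """
    candidates = [
        os.environ.get("PREDICTION_API_KEY_NEXT"),
        os.environ.get("PREDICTION_API_KEY"),
    ]
    return [key for key in candidates if key]
```

This keeps the rotation mechanics out of request-handling code: handlers just iterate over whatever `get_active_keys()` returns.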
Row-level access control for measurement profiles
A user’s JWT should only grant access to their own measurement profiles. Implement this at the database level with row-level security (PostgreSQL example), so that even application-layer bugs can’t expose another user’s data:
```sql
-- Enable RLS on measurement profiles
ALTER TABLE measurement_profiles ENABLE ROW LEVEL SECURITY;

-- Users can only read their own profiles
CREATE POLICY profile_select ON measurement_profiles
    FOR SELECT
    USING (
        subject_id IN (
            SELECT id FROM measurement_subjects
            WHERE external_user_id = current_setting('app.current_user_id')
        )
    );

-- Users can only insert/update their own profiles
CREATE POLICY profile_write ON measurement_profiles
    FOR ALL
    USING (
        subject_id IN (
            SELECT id FROM measurement_subjects
            WHERE external_user_id = current_setting('app.current_user_id')
        )
    );

-- Admins bypass RLS
CREATE POLICY admin_bypass ON measurement_profiles
    USING (current_setting('app.is_admin', true)::boolean = true);
```
Set the session variable before any query:
```python
def run_query_as_user(conn, user_id: str, query: str, params: tuple):
    with conn.cursor() as cur:
        # SET LOCAL cannot take bind parameters; set_config() can, and
        # its final argument (is_local=true) makes the setting
        # transaction-local, matching SET LOCAL semantics.
        cur.execute(
            "SELECT set_config('app.current_user_id', %s, true)", (user_id,)
        )
        cur.execute(query, params)
        return cur.fetchall()
```
Access control for third-party integrations
If you expose sizing APIs to partner brands or developers (rather than end users), use OAuth 2.0 Client Credentials flow for machine-to-machine authentication:
```python
import httpx
from fastapi import FastAPI, Header, HTTPException

app = FastAPI()


async def verify_client_token(authorization: str) -> dict:
    """
    Verify a client_credentials token from a partner brand.
    """
    if not authorization.startswith("Bearer "):
        raise HTTPException(status_code=401, detail="Missing bearer token")
    token = authorization[7:]

    # Introspect the token with your authorization server
    async with httpx.AsyncClient() as client:
        response = await client.post(
            "https://auth.yourdomain.com/oauth/introspect",
            data={"token": token},
            auth=("introspect_client_id", "introspect_client_secret"),
        )
    if response.status_code != 200:
        raise HTTPException(status_code=401, detail="Token verification failed")
    token_data = response.json()
    if not token_data.get("active"):
        raise HTTPException(status_code=401, detail="Token inactive or expired")
    return token_data


@app.get("/v1/sizing/recommend")
async def recommend_size(
    gender: str,
    height_cm: float,
    weight_kg: float,
    authorization: str = Header(...),  # read from the Authorization header
):
    client_data = await verify_client_token(authorization)
    # Log access with client identity for audit
    log_api_access(client_data["client_id"], "sizing_recommend", {
        "gender": gender,
        # Don't log exact values in access logs — log ranges
        "height_range": f"{int(height_cm // 10) * 10}-{int(height_cm // 10) * 10 + 9}cm",
    })
    # ... proceed with sizing recommendation
```
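The range computation inlined in the access log above is easier to keep consistent as a small helper. A sketch (`bucket_cm` is a name introduced here, not part of the original code):

```python
def bucket_cm(value_cm: float, width: int = 10) -> str:
    """Round a measurement down to a bucket label for audit logs,
    so logs record a range rather than an exact body measurement."""
    low = int(value_cm // width) * width
    return f"{low}-{low + width - 1}cm"
```

For example, `bucket_cm(172.5)` yields `"170-179cm"`. Using one helper everywhere also means a later decision to widen the buckets is a one-line change.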
Audit logging for erasure compliance
The GDPR right to erasure requires that you can demonstrate erasure was completed, which means keeping an audit trail of data access and deletion events. The audit log itself should be append-only — no DELETE permissions on the audit table:
```python
import json
from datetime import datetime, timezone


def log_measurement_event(
    event_type: str,  # "access", "create", "update", "delete", "erasure_complete"
    user_id: str,
    actor_id: str,  # who performed the action (could be the user, or an admin)
    metadata: dict | None = None,
) -> None:
    """
    Write an immutable audit record. This function should only write, never read.
    The audit table has no UPDATE or DELETE grants for the application role.
    """
    record = {
        "event_type": event_type,
        "user_id": user_id,
        "actor_id": actor_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "metadata": json.dumps(metadata or {}),
    }
    # Write to the append-only audit table.
    # Implementation depends on your database setup.
    _write_to_audit_log(record)
```
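The append-only property should be enforced by database grants rather than application discipline. A sketch in PostgreSQL, assuming an application role named `app_rw` and an audit table named `measurement_audit_log` (both names illustrative):

```sql
-- The application role may only append to the audit log
GRANT INSERT, SELECT ON measurement_audit_log TO app_rw;
REVOKE UPDATE, DELETE, TRUNCATE ON measurement_audit_log FROM app_rw;
```

With these grants in place, even a compromised application server cannot rewrite history: the worst it can do is append records, which are themselves evidence.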
The principle of least privilege
The architectural summary: each component of your system should access only what it needs for its specific function.
| Component | What it should access |
|---|---|
| Client (browser/mobile) | None — no direct access to measurement data or prediction API |
| Your API server | User’s own profile (via RLS), prediction API (via server-side key) |
| Background job (prewarm cache) | Prediction API only — no user data |
| Admin dashboard | All profiles (via admin JWT with elevated scope) |
| Analytics system | Aggregate metrics only — not individual profiles |
| Erasure worker | Delete permission on subject’s records — no read of other subjects |
This design limits blast radius: a compromised client doesn’t expose the API key; a compromised cache-warming job doesn’t expose user data; a compromised analytics service doesn’t expose individual measurement profiles.
Security architecture for measurement applications isn’t fundamentally different from other sensitive data applications — it’s the same principles of least privilege, authentication, authorization, and audit logging applied to a domain where the data deserves more care than most. Start with the rule about API keys (server-side only), add RLS for profile isolation, and you’ve covered the most likely failure modes before they become incidents.