Fair Skin Care Essentials: Products, Protection, and Common Concerns

Published May 15, 2026 · 21 min read

Why Fair-Skin Logic Belongs in Your Ingredient API Layer, Not Your Frontend

Your PM walks over with a Figma frame and a question: "Can we show users with fair skin different ingredient warnings than users with deeper tones?" The naive path is a switch statement in the React layer — if (profile.tone === 'fair') warn('fragrance') — and you can ship it Friday. By Q2, that switch becomes a 400-line file. By Q3, regulatory guidance shifts on octinoxate, an ingredient appears under three INCI synonyms in your dataset, and nobody remembers which threshold rule covers β-pinene vs. Linalool vs. 2,6-dimethyl-2,7-octadien-6-ol. The fair-skin feature your team scoped as a UI ticket has quietly become an unmaintainable rules engine living in the wrong layer.

The fix is structural. Skin-tone-aware features — including any fair skin sensitivity scoring you ship — are an ingredient-data contract problem, not a frontend problem. The decisions your product needs to make ("flag this ingredient for this profile") depend on fields that only a properly indexed cosmetic ingredient database returns: irritancy_score on a 0–5 scale, comedogenicity_score on a 0–5 scale, a severity label, a safety_status aggregate, plus CAS and EC identifiers and a synonyms array that collapses the Linalool problem into one record.

Dermalytics API exposes those primitives over a documented REST contract. Two endpoints — a single-ingredient lookup at GET /v1/ingredients/{name} and a batch INCI analyzer at POST /v1/analyze — cover the read patterns nearly every skincare app actually needs. The index covers 25,000+ ingredients, draws from FDA, EU CosIng, and Health Canada, holds sub-100ms median latency, ships under a 99.9% uptime SLA, and bills only on successful matches. Official SDKs live on npm and PyPI; the OpenAPI 3 contract is published at api.dermalytics.dev for teams generating clients in Go, Rust, Swift, or anything else.

The architectural payoff: when fair-skin thresholds live next to the data that resolves "Vitamin E" and "α-Tocopherol" to a single CAS number, the rules become auditable. A compliance reviewer can trace any user-facing flag back to a regulator-sourced score. A backend engineer can change one threshold in config and see consistent behavior across iOS, Android, and the e-commerce PDP. A new hire can read the rule file and understand what the product does.

Fair-skin sensitivity is not a frontend feature. It is an ingredient-data contract, and it belongs in the layer that already knows what Linalool is.
[Image: a developer's monitor showing a JSON response payload from a skincare API in a VS Code dark-theme editor, with a smartphone beside it displaying a consumer-facing ingredient scanner UI flagging "fragrance" and "linalool" with sensitivity warnings.]

What follows is a working guide for engineering teams building those features: the endpoint contracts you'll integrate against, a configurable threshold model for fair-skin sensitivity logic, the INCI parsing pitfalls that will break your pipeline if you ignore them, a competitive view of what's available in the cosmetic ingredient API market, performance and cost patterns for high-volume scanners, three persona walkthroughs, and a sprint-sized build checklist.


The Two Endpoints That Power Skin-Tone-Aware Skincare Features

The API surface is intentionally narrow. Two endpoints cover the read patterns nearly every skincare app actually needs, and constraining the surface keeps client integrations boring — which is what you want when the data behind them is regulatory and the consumer-facing claims are sensitive.

GET /v1/ingredients/{name} — single-ingredient lookup

This is the endpoint behind autocomplete fields, detail-pane drill-downs, and barcode flows where OCR has already isolated one ingredient at a time. The name path parameter accepts INCI names and registered synonyms, which means a client can pass Linalool, 2,6-dimethyl-2,7-octadien-6-ol, or any other variant in the synonyms array and resolve the same canonical record.

A successful response returns the fields a fair-skin logic layer cares about: severity (the qualitative label), irritancy_score and comedogenicity_score on the 0–5 scale, cas and ec identifiers for downstream regulatory cross-referencing, the synonyms array, and the per-record safety_status.

GET /v1/ingredients/linalool
Authorization: Bearer $DERMALYTICS_KEY

{
  "name": "Linalool",
  "cas": "78-70-6",
  "ec": "201-134-4",
  "synonyms": ["2,6-dimethyl-2,7-octadien-6-ol", "Linalol"],
  "severity": "moderate",
  "irritancy_score": 3,
  "comedogenicity_score": 0,
  "safety_status": "restricted",
  "sources": ["EU CosIng", "Health Canada"]
}

Credits are charged only on successful matches. A typo, a trade-name token, or an OCR artifact that fails to resolve does not consume credits — which materially changes how you design ingest pipelines.

POST /v1/analyze — batch INCI list analyzer

This is the workhorse for any flow that handles a full product label. A user uploads a moisturizer's ingredient panel; your backend posts the tokenized list; the response returns a per-ingredient record array plus an aggregate safety_status for the formulation.

Batch analysis is where fair-skin modeling pays off, because you post-process the array against profile-specific thresholds and surface a single verdict to the user. One billable call covers a list of 40–80 ingredients, which is the typical INCI length for a finished cosmetic.

POST /v1/analyze
Authorization: Bearer $DERMALYTICS_KEY
Content-Type: application/json

{"ingredients": ["Aqua", "Glycerin", "Linalool", "Fragrance", "Tocopherol"]}
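The response body isn't reproduced in the excerpt above, so the shape below is an inference: the per-ingredient records reuse the fields from the single-ingredient lookup, plus the aggregate safety_status the text describes. The `unresolved` field is an assumption; treat the OpenAPI contract at api.dermalytics.dev as the authoritative schema.

```typescript
// Inferred response shape for POST /v1/analyze. Field names mirror the
// GET /v1/ingredients/{name} example above; `unresolved` is an assumption.
interface IngredientRecord {
  name: string;
  cas: string;
  ec: string;
  synonyms: string[];
  severity: 'low' | 'moderate' | 'high';
  irritancy_score: number;      // 0-5 scale
  comedogenicity_score: number; // 0-5 scale
  safety_status: string;
  sources: string[];
}

interface AnalyzeResponse {
  ingredients: IngredientRecord[]; // one record per resolved token
  unresolved: string[];            // tokens that did not match (not billed)
  safety_status: string;           // aggregate verdict for the formulation
}
```

Generating these types from the published spec, rather than hand-writing them, keeps them version-pinned to the contract.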

Capability comparison

| Capability | GET /v1/ingredients/{name} | POST /v1/analyze |
| --- | --- | --- |
| Input shape | One INCI name or synonym | Array of INCI tokens |
| Typical use case | Autocomplete, detail page, scan-resolve | Full label analysis, audits |
| Returned fields | Single record with all fields | Array + aggregate safety_status |
| Fair-skin modeling fit | Cache warming, ingredient drill-down | Profile-thresholded label verdict |
| Credit billing | One charge per successful match | One charge per analyzed list |
| Latency profile | Sub-100ms median | Sub-100ms median |

The architectural rule: use the lookup endpoint for one ingredient, the analyze endpoint for one product. Mixing them — calling lookup in a loop across an INCI list — is a common antipattern that inflates request count, breaks per-request rate limits, and produces no aggregate safety_status field. The batch endpoint exists precisely because cosmetics analysis is inherently a list operation.

Layering fair-skin logic on top of either endpoint follows the same pattern: the API returns the score; your application defines the threshold. Pseudocode for the simplest case looks like if (profile.fair && ingredient.irritancy_score >= 2) flag(ingredient). The threshold value is a business rule. It is not a clinical recommendation, and the API doesn't pretend to make it one. That separation — score in the data layer, threshold in the application layer — is what keeps the system maintainable as regulations shift and product requirements evolve.
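As a concrete sketch, that pseudocode translates to a few lines of TypeScript. The threshold value here is the same illustrative business-rule default, not a clinical cutoff:

```typescript
// Runnable version of the pseudocode above. The threshold is a
// configurable business rule, not a clinical recommendation.
interface Profile {
  tone: 'fair' | 'medium' | 'deep';
  irritancyThreshold: number;
}

function shouldFlag(profile: Profile, irritancyScore: number): boolean {
  return profile.tone === 'fair' && irritancyScore >= profile.irritancyThreshold;
}

// shouldFlag({ tone: 'fair', irritancyThreshold: 2 }, 3) -> true
// shouldFlag({ tone: 'deep', irritancyThreshold: 2 }, 3) -> false
```

Because the threshold arrives as data, swapping 2 for another value is a config change, not a code change.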


Modeling Fair-Skin Sensitivity as a Configurable Rule Layer

The technical core of any fair-skin-aware skincare feature is a pipeline with five stages: profile capture, threshold configuration, API call, response post-processing, and severity override. Each stage is small. The discipline is in keeping them decoupled so non-engineers can tune thresholds without redeploying client code.

Stage 1: User profile capture

Collect three fields at onboarding. skin_tone is a self-selected enum (fair, medium, deep). sensitivity_level is low, moderate, or high. concerns is an array of optional flags like rosacea_prone, acne_prone, or photodamage_concern. Frame this clearly in your UI as self-classification — your app is collecting preferences, not performing clinical assessment.

interface SkinProfile {
  skin_tone: 'fair' | 'medium' | 'deep';
  sensitivity_level: 'low' | 'moderate' | 'high';
  concerns: Array<'rosacea_prone' | 'acne_prone' | 'photodamage_concern'>;
}

Stage 2: Threshold configuration

Map profile combinations to integer cutoffs on the 0–5 scales the API returns. Keep this map in a config file — YAML, JSON, or a feature-flag service — so a product manager or compliance reviewer can adjust thresholds without touching application code.

profiles:
  fair_high_sensitivity:
    irritancy_threshold: 2
    comedogenicity_threshold: 3
  fair_acne_prone:
    irritancy_threshold: 3
    comedogenicity_threshold: 2
  fair_rosacea_prone:
    irritancy_threshold: 2
    comedogenicity_threshold: 4

These values are illustrative engineering defaults. They are not dermatology recommendations, and your release notes should say so. The point is the structure: thresholds are data, not code.
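A minimal sketch of consuming that config once it has been parsed into an object (any YAML loader or feature-flag client will do). The profileKey helper and its fallback key are hypothetical, shown only to make the point that profile-to-threshold resolution is a data lookup:

```typescript
// Thresholds live in parsed config data, not in code. This object mirrors
// the YAML example above.
interface Thresholds {
  irritancy_threshold: number;
  comedogenicity_threshold: number;
}

const profiles: Record<string, Thresholds> = {
  fair_high_sensitivity: { irritancy_threshold: 2, comedogenicity_threshold: 3 },
  fair_acne_prone:       { irritancy_threshold: 3, comedogenicity_threshold: 2 },
  fair_rosacea_prone:    { irritancy_threshold: 2, comedogenicity_threshold: 4 },
};

// Hypothetical helper: derive a config key from the captured profile.
// The precedence order and the `_default` fallback are illustrative choices.
function profileKey(p: {
  skin_tone: string;
  sensitivity_level: string;
  concerns: string[];
}): string {
  if (p.concerns.includes('rosacea_prone')) return `${p.skin_tone}_rosacea_prone`;
  if (p.concerns.includes('acne_prone')) return `${p.skin_tone}_acne_prone`;
  if (p.sensitivity_level === 'high') return `${p.skin_tone}_high_sensitivity`;
  return `${p.skin_tone}_default`;
}
```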

Stage 3: Call /v1/analyze

Post the tokenized INCI list to the batch endpoint. Treat the response as the source of truth for ingredient identity and scores. Do not enrich, override, or rescore client-side — that's what makes the system auditable.

const response = await dermalytics.analyze({ ingredients: tokens });

Stage 4: Response post-processing

Iterate the response array. For each ingredient, compare its irritancy_score and comedogenicity_score against the active profile's thresholds and assign one of three states: pass, warn, or block. Surface the worst offender and the count of flags.

function evaluate(item, thresholds) {
  // Block strictly above threshold + 1; warn at or above the threshold.
  if (item.irritancy_score > thresholds.irritancy_threshold + 1) return 'block';
  if (item.irritancy_score >= thresholds.irritancy_threshold) return 'warn';
  if (item.comedogenicity_score >= thresholds.comedogenicity_threshold) return 'warn';
  return 'pass';
}

Stage 5: Severity override and source attribution

The API also returns a severity field independent of the numeric scores. Treat severity == "high" as an automatic block regardless of profile — this is the layer where regulator-flagged ingredients get blocked unconditionally. Surface the regulator name (FDA, EU CosIng, or Health Canada) alongside the flag so the user sees provenance, not opinion. "Restricted by EU CosIng" reads very differently from "Our app doesn't like this ingredient."
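Stages 4 and 5 compose into one evaluation function. A sketch, using the field names from the lookup example and a block tier strictly above threshold + 1, consistent with the worked example that follows:

```typescript
type Verdict = 'pass' | 'warn' | 'block';

interface Scored {
  severity: string;
  irritancy_score: number;
  comedogenicity_score: number;
}

interface Thresholds {
  irritancy_threshold: number;
  comedogenicity_threshold: number;
}

// Severity-high blocks unconditionally (the regulator-flagged layer);
// everything else falls through to the numeric threshold comparison.
function evaluateWithOverride(item: Scored, t: Thresholds): Verdict {
  if (item.severity === 'high') return 'block';
  if (item.irritancy_score > t.irritancy_threshold + 1) return 'block';
  if (item.irritancy_score >= t.irritancy_threshold) return 'warn';
  if (item.comedogenicity_score >= t.comedogenicity_threshold) return 'warn';
  return 'pass';
}
```

The override sits first deliberately: no profile configuration can un-block a regulator-flagged ingredient.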

The API returns the score. The application decides the threshold. That separation is what makes fair-skin logic auditable instead of opinionated.

Worked example

Consider a sunscreen INCI panel posted to the analyze endpoint. The response includes Avobenzone (severity: moderate, irritancy_score: 3), Zinc Oxide (severity: low, irritancy_score: 1, comedogenicity_score: 1), and Fragrance (severity: moderate, irritancy_score: 4).

For the fair_high_sensitivity profile defined above (irritancy threshold 2, comedogenicity threshold 3):

  • Fragrance: block. Irritancy score 4 exceeds threshold + 1.
  • Avobenzone: warn. Irritancy score 3 exceeds the threshold but not the block tier.
  • Zinc Oxide: pass. Both scores below thresholds.

The UI surfaces: "Not recommended for your profile. Flagged: Fragrance (high irritancy, EU CosIng). 1 warning: Avobenzone." The user sees the why, the source, and the specifics — not a black-box verdict.
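Those three verdicts can be checked mechanically. A small self-contained sketch, implementing the tiering described in the bullets (warn at or above the threshold, block strictly above threshold + 1):

```typescript
interface Item { irritancy_score: number; comedogenicity_score: number; }
interface Thresholds { irritancy_threshold: number; comedogenicity_threshold: number; }

function evaluate(item: Item, t: Thresholds): 'pass' | 'warn' | 'block' {
  if (item.irritancy_score > t.irritancy_threshold + 1) return 'block';
  if (item.irritancy_score >= t.irritancy_threshold) return 'warn';
  if (item.comedogenicity_score >= t.comedogenicity_threshold) return 'warn';
  return 'pass';
}

const fairHighSensitivity = { irritancy_threshold: 2, comedogenicity_threshold: 3 };

evaluate({ irritancy_score: 4, comedogenicity_score: 0 }, fairHighSensitivity); // Fragrance   -> 'block'
evaluate({ irritancy_score: 3, comedogenicity_score: 0 }, fairHighSensitivity); // Avobenzone  -> 'warn'
evaluate({ irritancy_score: 1, comedogenicity_score: 1 }, fairHighSensitivity); // Zinc Oxide  -> 'pass'
```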

This pipeline scales. Adding a new profile is a config edit. Tightening thresholds after a regulatory update is a config edit. Adding a fourth concern flag is a profile-table change plus a UI checkbox. Nothing in the API contract changes, which means client apps on iOS, Android, and web all consume the same rules without coordinated redeploys.


INCI Parsing Pitfalls That Break Fair-Skin Detection

The analyze endpoint is only as accurate as the token list you post to it. Most failures developers hit in production come from parsing INCI labels — not from the API itself. The synonyms array and CAS/EC identifiers resolve most identity ambiguity, but only if your tokenizer produces clean inputs.

  • Synonym fragmentation. A product label says "Tocopherol"; another says "Vitamin E"; a third says "α-Tocopherol." A naive string-match lookup misses two of three and produces inconsistent fair-skin flags across products that contain the same antioxidant. The API's synonyms array normalizes all variants to one canonical record with one CAS number (10191-41-0 for DL-α-Tocopherol), but only if your token reaches the endpoint intact. Don't pre-filter "unknown" tokens client-side — let the API resolve them.
  • Delimiter chaos. INCI lists arrive comma-separated, semicolon-separated, line-broken, with bullet points, or with inline parentheticals like Water (Aqua). Tokenize defensively before posting:
    const tokens = raw
      .replace(/\([^)]*\)/g, '')        // strip parentheticals
      .split(/[,;\n•]/)                  // multiple delimiters
      .map(t => t.trim().toLowerCase())
      .filter(Boolean);
    
  • Trade-name contamination. Marketing copy on labels sometimes includes registered trade names like "Matrixyl 3000" or "Hyaluronic Filling Spheres™" that aren't INCI entries. These won't resolve and won't burn credits — billing fires only on successful matches. Log unresolved tokens as a label-quality signal; high unresolved rates correlate with marketing-heavy formulations where the underlying INCI listing was probably truncated or paraphrased.
  • Concentration stripping. Some labels embed percentages inline ("Niacinamide 5%", "Salicylic Acid 0.5%"). Strip numeric suffixes before lookup — concentration is not part of INCI identity and will break resolution.
    token = token.replace(/\s*\d+(\.\d+)?\s*%?\s*$/, '');
    
  • Case and Unicode normalization. "PEG-100 Stearate", "peg-100 stearate", and "PEG‑100 Stearate" (the last with a Unicode non-breaking hyphen, U+2011) are three different strings to a byte-level matcher. Lowercase, normalize Unicode hyphens and dashes to ASCII hyphens, and collapse whitespace before posting. This single Unicode-hyphen bug is responsible for a surprising fraction of "ingredient not found" reports in production scanners.
  • Order encodes concentration, but only outside the API call. INCI lists are sequenced in descending concentration. An irritant in position 2 of a leave-on product matters more than the same ingredient in position 40. The API doesn't weight by position — your post-processing layer should. Multiply the per-ingredient warning weight by a position factor (e.g., positions 1–5 = 1.0×, 6–15 = 0.6×, 16+ = 0.3×) when computing the aggregate verdict surfaced to the user. Keep this logic in your post-processor, not in the API call.

A clean tokenizer pays for itself within the first hundred products you analyze. Dirty inputs don't just produce false negatives — they produce inconsistent verdicts across products that contain the same ingredient under different labels, which is the single fastest way to lose user trust in a fair-skin filtering feature.
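The position-weighting rule from the last bullet is a few lines in the post-processor. The tier boundaries below are the illustrative ones from the text, not validated constants:

```typescript
// INCI order descends by concentration, so earlier positions weigh more:
// positions 1-5 full weight, 6-15 at 0.6x, 16+ at 0.3x (illustrative tiers).
function positionFactor(position: number): number { // 1-based INCI position
  if (position <= 5) return 1.0;
  if (position <= 15) return 0.6;
  return 0.3;
}

// Aggregate a position-weighted warning score across a label.
// warningWeights is whatever per-ingredient weight your post-processor
// assigns (e.g. the irritancy score of flagged ingredients, 0 for passes).
function weightedAggregate(warningWeights: number[]): number {
  return warningWeights.reduce(
    (sum, weight, i) => sum + weight * positionFactor(i + 1),
    0,
  );
}
```

Keeping this in the post-processor, not the API call, means tuning the tiers is another config-level decision.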


Cosmetic Ingredient Data Sources for Developers

The cosmetic ingredient data market has a mix of consumer-facing tools, B2B APIs, and scraped community databases. The developer-meaningful questions are narrow: is there a public REST contract, are SDKs available, does batch INCI analysis exist as a single call, are regulatory sources disclosed, and how does billing work?

The table below records only what each provider publishes on its own public site. Where information isn't published, the cell reads "Not published" — invented values would defeat the purpose of a comparison.

| Provider | Public REST API | Official SDKs | Batch INCI Analysis | Regulatory Sources | Billing Model |
| --- | --- | --- | --- | --- | --- |
| Dermalytics API | Yes (OpenAPI 3) | npm, PyPI | Yes (/v1/analyze) | FDA, EU CosIng, Health Canada | Credit on successful match |
| cosmethics.com | Not published | Not published | Not published | Not published | Not published |
| skincareapi.dev | Yes | Not published | Not published | Not published | Not published |
| pro.incibeauty.com | Yes (B2B portal) | Not published | Yes | Not published | Contact sales |
| incidecoder.com | No (web only) | N/A | Web tool only | Not published | N/A (consumer site) |
| api.store | Marketplace listing | Varies by listing | Varies | Varies | Varies |

Three developer-meaningful differentiators come out of this view.

First, the credit-on-successful-match billing model de-risks experimentation with messy INCI inputs. OCR-driven scanners produce noisy tokens; a per-request billing model penalizes you for label-quality problems that aren't in your control. A match-only model means dirty inputs are diagnostic rather than expensive — failed lookups become label-quality telemetry instead of cost overruns.

Second, regulatory-source plurality matters for apps with international users. Single-source databases (typically FDA-only or EU-only) produce inconsistent verdicts depending on where the user is. An ingredient restricted by EU CosIng but unrestricted by FDA needs to surface that nuance to a user in Berlin differently than a user in Atlanta. Apps that ship in multiple markets either need multiple data subscriptions or a provider that has already done the union.

Third, a published OpenAPI 3 contract lets your team generate type-safe clients in any language. The official SDKs cover JavaScript/TypeScript and Python — which captures most web and backend work — but mobile teams writing native Swift or Kotlin, or backend teams in Go or Rust, can generate clients directly from the spec at api.dermalytics.dev. That eliminates the "we're blocked on SDK availability" delay that frequently stalls integration projects in non-mainstream stacks.

What the table does not show, and what your evaluation should include, is API stability and changelog discipline. Read each provider's changelog or release notes before committing. Cosmetic regulation is dynamic — Health Canada's Hotlist updates several times a year, EU CosIng publishes opinions on a rolling basis — and an API provider that doesn't version response contracts will break your client on a Thursday afternoon when you have something else to do.


Latency, Caching, and Cost Patterns for High-Volume Scanners

Consumer ingredient scanners are latency-sensitive in a way that surprises teams used to building B2B tooling. A barcode-scan-to-result flow on mobile typically has a 400ms perceptual budget — beyond that, users perceive the app as sluggish and abandon the scan. Spending 100ms on ingredient resolution leaves headroom for network round-trip, JSON parsing, and render. The sub-100ms median latency Dermalytics publishes is engineered for exactly this constraint; the 99.9% uptime SLA gives you a defensible availability story when partner teams ask.

Caching the lookup endpoint

Ingredient records are slow-changing. Regulatory updates happen on the order of months, not minutes, which makes /v1/ingredients/{name} an ideal candidate for aggressive caching. A 24-hour TTL on the client or edge captures the freshness/cost tradeoff well for most consumer apps. The cache key should be the normalized ingredient token (lowercase, ASCII-only, no leading/trailing whitespace), not the raw user input — otherwise Tocopherol and tocopherol produce two cache entries with identical payloads.

const key = name.normalize('NFKD').toLowerCase().trim();
const cached = await cache.get(`ing:${key}`);
if (cached) return JSON.parse(cached);

const fresh = await dermalytics.ingredients.get(key);
await cache.set(`ing:${key}`, JSON.stringify(fresh), { ttl: 86400 });
return fresh;

Why not to cache /v1/analyze responses

INCI lists are long-tail and effectively unique per product. Caching whole-list responses produces low hit rates and large cache footprints. The better pattern: after each analyze response, iterate the per-ingredient breakdown and populate your single-ingredient cache with the records. The analyze call effectively pre-warms the lookup cache for the next thousand scans, which compounds the value of the one billable analyze invocation.
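A sketch of that pre-warming pass, assuming a generic async key-value cache (Redis, edge KV) and the record shape from the lookup example; names are illustrative:

```typescript
// Fan the per-ingredient records from an analyze response out into the
// single-ingredient cache. The cache interface is an assumption standing
// in for whatever string store you run.
interface Cache {
  set(key: string, value: string, opts: { ttl: number }): Promise<void>;
}

async function prewarmFromAnalyze(
  cache: Cache,
  records: Array<{ name: string; [field: string]: unknown }>,
): Promise<number> {
  let warmed = 0;
  for (const record of records) {
    // Same normalized key as the lookup-cache example above.
    const key = record.name.normalize('NFKD').toLowerCase().trim();
    await cache.set(`ing:${key}`, JSON.stringify(record), { ttl: 86400 });
    warmed++;
  }
  return warmed;
}
```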

Credit-on-successful-match implications

The billing model changes scanner architecture in one specific way: unknown ingredients are free. For OCR-driven flows where camera noise produces garbage tokens ("Vvater", "G1ycerin"), the API penalizes only resolved tokens. This means you can be liberal with token submission and treat the API itself as a partial validator. Build a lightweight token-validation pass (minimum length, character class) before posting if you want to keep request payloads small, but you don't need to build a full INCI dictionary client-side to control cost.
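A lightweight validation pass might look like the following. The length bounds and character class are illustrative guesses, tuned to drop obvious OCR garbage without rejecting legitimate INCI tokens:

```typescript
// Cheap pre-filter before posting OCR tokens: keeps payloads small
// without a client-side INCI dictionary. Bounds are illustrative.
function plausibleToken(token: string): boolean {
  const t = token.normalize('NFKD').trim();
  if (t.length < 3 || t.length > 80) return false;
  // Letters, lowercase Greek (e.g. α-bisabolol), digits, hyphens,
  // slashes, periods, and spaces only.
  return /^[a-z0-9α-ω\-/. ]+$/i.test(t);
}
```

Note the filter is deliberately liberal: "Vvater" still passes, because the match-only billing makes the API itself the cheaper validator for near-misses.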

99.9% uptime in practice

99.9% allows for roughly 8.76 hours of downtime per year. That's a real number — graceful degradation matters. When the API is unreachable, fall back to your cached records and surface a "limited data — last updated X hours ago" UI state. Failing the user's scan because your dependency had a bad five minutes is a worse outcome than surfacing slightly stale data with a transparent label.
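One way to sketch that fallback, with the live fetch and the cache read injected so the strategy is testable; all names here are illustrative:

```typescript
// Try the live API; on failure, fall back to cached records with a
// staleness flag the UI can render as "limited data -- last updated X".
interface ScanResult {
  data: unknown;
  stale: boolean;
  cachedAtMs?: number;
}

async function resolveWithFallback(
  fetchLive: () => Promise<unknown>,
  cacheGet: () => Promise<{ data: unknown; cachedAtMs: number } | null>,
): Promise<ScanResult> {
  try {
    return { data: await fetchLive(), stale: false };
  } catch {
    const cached = await cacheGet();
    if (cached) {
      return { data: cached.data, stale: true, cachedAtMs: cached.cachedAtMs };
    }
    // Only fail the scan when there is genuinely nothing to show.
    throw new Error('ingredient data unavailable and no cached fallback');
  }
}
```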

SDK and client choice

The npm package serves JS/TS apps and Node backends; PyPI serves Python data pipelines and ML preprocessing. For other languages — Go services, Rust backends, native Swift on iOS, Kotlin on Android — generate clients from the OpenAPI 3 spec at api.dermalytics.dev using the standard generator for your toolchain (openapi-generator-cli, oapi-codegen, etc.). The generated client is type-safe, version-pinned to the contract revision you generated against, and gives you compile-time safety on response field access — which matters when those fields drive user-facing safety verdicts.

Unknown ingredients are free. That single line in the pricing model changes how you design an OCR-driven scanner.

The cost model nudges toward sensible architecture. Cache aggressively at the ingredient level. Use the batch endpoint to amortize the cost of full-label analysis across many cached lookups. Treat unresolved tokens as label-quality telemetry, not failure. Build graceful degradation. None of this is exotic engineering — it's the discipline that separates a scanner that holds up at 100,000 daily scans from one that quietly breaks at 10,000.


Three Developer Personas Shipping Fair-Skin Features This Quarter

Concrete persona walkthroughs make the integration shape easier to evaluate against your own roadmap. Each persona ships a different feature, calls a different endpoint pattern, and makes a different tradeoff.

[Image: split-screen mockup. Left: a mobile scanner UI with a green "Suitable for fair skin" badge over a moisturizer product. Right: a backend dashboard with a catalog table and ingredient-suitability columns populated.]

The Mobile Scanner Engineer

Ships an iOS app that scans product barcodes, runs OCR on the back-of-pack INCI list, and surfaces a per-product verdict for the user's profile.

  • Endpoint pattern: one POST /v1/analyze call per scanned product, with the OCR-tokenized list as payload.
  • Fair-skin rule: profile captured at onboarding stores skin_tone: fair, sensitivity_level: high. Thresholds set to irritancy: 2, comedogenicity: 3. Severity-high ingredients auto-block.
  • Client generation: Swift client generated from the OpenAPI 3 contract — no native SDK needed for iOS.
  • Caching: per-ingredient records cached with 24-hour TTL using URLCache; analyze responses are not cached but pre-warm the ingredient cache.
  • Tradeoff: accepted higher cold-start latency on rare ingredients to keep the on-device cache footprint under 50 MB. Cache LRU evicts ingredients unused for 14 days.

The DTC Beauty Brand Backend Lead

Adds an ingredient-transparency widget to the product detail pages of a Shopify-hosted beauty brand serving customers across the US and EU.

  • Endpoint pattern: nightly cron job posts each SKU's INCI list to /v1/analyze once per catalog refresh; results persisted to Postgres.
  • Fair-skin rule: PDP renders a "Suitable for fair / sensitive skin" badge computed at ingestion time, not request time. Zero per-pageview API cost.
  • Source surfacing: the regulator that flagged any restricted ingredient (FDA / EU CosIng / Health Canada) is displayed in the "Why?" expandable, building trust through provenance.
  • Caching: full per-product result stored in Postgres; refreshed only when the upstream catalog changes the INCI list for that SKU.
  • Tradeoff: accepted that regulatory updates lag by up to 7 days (the refresh cadence) in exchange for predictable, capped API spend per catalog cycle.

The Wellness App Data Engineer

Enriches a product catalog of 50,000 SKUs feeding a wellness app's "find products matching your profile" feature.

  • Endpoint pattern: Python ETL pipeline using the PyPI SDK iterates products, posts each INCI list to /v1/analyze, persists irritancy_score, comedogenicity_score, severity, and safety_status per ingredient to a normalized schema.
  • Fair-skin rule: derived columns fair_skin_score and acne_prone_score computed in dbt from the persisted scores; product search re-ranks results based on the active user profile.
  • Cost shape: approximately 50,000 successful analyze calls at catalog ingest; incremental cost is roughly proportional to catalog churn (typically 5–10% per quarter).
  • Pipeline structure: idempotent — re-running against an unchanged INCI list produces an identical result row, allowing safe retries on transient failures.
  • Tradeoff: chose to denormalize scores into product rows rather than join at query time. Storage cost is trivial; query latency dropped from ~80ms to ~12ms on the recommendation endpoint.

The common thread across all three: the API call is at the ingestion or scan boundary, the threshold logic is in application config, and the user-facing verdict is computed deterministically from persisted or cached scores. None of these teams is making API calls per page view or per UI interaction. That's not an accident — it's the architecture the credit-billing model rewards, and it's also the architecture that holds up under load.


Build Checklist: Ship a Fair-Skin-Aware Feature in One Sprint

A two-week sprint is enough time to ship a working fair-skin filter on top of an existing skincare product. The checklist below is ordered for execution: each section depends on the one before it, and every item is verifiable.

Account and contract setup

  • Register at dermalytics.dev and obtain an API key
  • Pull the OpenAPI 3 spec from api.dermalytics.dev
  • Install the official SDK (npm i @dermalytics/sdk or pip install dermalytics) OR generate a typed client in your stack's language from the OpenAPI spec
  • Verify auth with a single GET /v1/ingredients/water smoke test
  • Add the API key to your secrets manager (not committed to source control)

Schema and profile design

  • Define the user profile schema with skin_tone, sensitivity_level, and concerns[] fields
  • Define the threshold map: per-profile irritancy_threshold and comedogenicity_threshold integers on the 0–5 scale
  • Document thresholds in a config file (YAML, JSON, or feature-flag service) — not in application code
  • Add a release-notes line stating that thresholds are configurable engineering defaults, not clinical recommendations

INCI ingestion

  • Implement the INCI tokenizer: strip parentheticals, normalize delimiters, lowercase, normalize Unicode hyphens to ASCII, strip percentage suffixes
  • Log unresolved tokens to your observability stack for label-quality monitoring
  • Add a unit-test suite of 20 known-messy labels covering each tokenizer edge case

Analysis pipeline

  • Call POST /v1/analyze with the tokenized list
  • Iterate the response array and tag each ingredient pass | warn | block per the active profile thresholds
  • Auto-block any ingredient where severity == "high" regardless of profile
  • Apply position weighting in post-processing (concentration descends with INCI order)
  • Compute and persist the aggregate verdict and the worst-offender ingredient name

UX and provenance

  • Surface the worst-offender ingredient name in the UI when blocking
  • Display the regulator source (FDA / EU CosIng / Health Canada) alongside each flag
  • Implement graceful degradation: cached fallback when the API is unreachable, with a clear "limited data" UI state

Performance and cost

  • Cache /v1/ingredients/{name} responses with a 24-hour TTL keyed on normalized ingredient name
  • Backfill the lookup cache from /v1/analyze response arrays
  • Set up credit-usage monitoring and alert at 80% of the monthly budget
  • Add p50 and p95 latency dashboards for both endpoints

Pre-launch verification

  • Run analyze on 20 reference products with known irritants; confirm flag accuracy against expected results
  • Verify p95 end-to-end latency stays within your product budget (typically 400ms for mobile scan flows)
  • Confirm the graceful-degradation path renders correctly with the API disabled in staging
  • Run a final end-to-end test using a fair-skin + high-sensitivity profile against a known-irritating product and confirm the expected block verdict

Twenty-eight checkboxes, four to six engineering days of work, and a fair-skin-aware ingredient feature ships to production. The structure outlasts the sprint: thresholds tune in config, regulatory updates propagate through API responses, and adding a fourth profile or a fifth concern flag is a data change rather than a code change. That's what putting the logic in the ingredient API layer — instead of the frontend — actually buys you.