Situational API Roadmap — v1.1

The Situational API scores how well a given situation (role, environment, context) aligns with a person's Multiple Natures profile. It uses LLM-augmented reasoning to produce a narrative alignment report alongside quantitative scores.

Estimated availability: Q2 2027, subject to engineering capacity.


What it will do

  • Accept a subject_id + a situation_context (free-text description of role, environment, team dynamics, etc.)
  • Return an async job (202 Accepted) — results delivered via webhook or polling
  • Provide quantitative alignment scores plus a narrative explanation
  • Apply LLM reasoning grounded in the MN framework (not generic AI output)

Async-only. LLM processing takes seconds to minutes; polling and webhook delivery are both supported.
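The client-side polling half of this flow can be sketched as follows. This is a minimal illustration only: the status values ("processing", "complete", "failed") and the response fields are assumptions, not the published spec; the API Reference is authoritative once released.

```python
import time

def poll_until_complete(fetch, interval=2.0, timeout=300.0):
    """Call `fetch()` (e.g. GET /v1/situational/assessments/{id})
    until the job leaves the assumed "processing" state, with a
    fixed polling interval and an overall timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = fetch()
        if job.get("status") != "processing":
            return job
        time.sleep(interval)
    raise TimeoutError("assessment did not complete in time")

# Stubbed fetch standing in for the real HTTP call: the job
# "finishes" on the third poll.
_responses = iter([
    {"status": "processing"},
    {"status": "processing"},
    {"status": "complete", "result": {"alignment_score": 0.82}},
])

job = poll_until_complete(lambda: next(_responses), interval=0.01)
print(job["status"])  # complete
```

Webhook delivery avoids this loop entirely: the same terminal payload would be POSTed to your endpoint when the job finishes.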


Access model

  • Sandbox keys (test_*): free, unrestricted
  • Production keys (live_*): require a brief application and approval (per ADR-012). This is a lightweight verification, not a long vetting process — primarily to ensure responsible use and collect initial partner feedback on the LLM-augmented output.

Why it matters

The Situational API is the highest-differentiation product in the platform. No other psychometric API provider exposes situation-based alignment scoring. The theory grounding (Multiple Natures × situation context) is what distinguishes it from generic "culture fit" scoring tools.

If you have a use case that depends on the Situational API, reach out to support — we factor partner demand into the roadmap sequencing.


Planned endpoints

POST /v1/situational/assessments — start a situational assessment (async)
GET /v1/situational/assessments/{id} — retrieve result when complete
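As a sketch of what a request body might look like, built from the inputs described above. Every field name here (subject_id, situation_context, webhook_url) mirrors the terminology in this document but should be treated as an assumption until the spec is final:

```python
import json

# Hypothetical payload for POST /v1/situational/assessments.
payload = {
    "subject_id": "subj_123",  # placeholder identifier
    "situation_context": (
        "Senior engineer joining a fast-moving, remote-first platform "
        "team with rotating on-call duties."
    ),
    "webhook_url": "https://example.com/hooks/mn",  # optional; omit to poll
}

body = json.dumps(payload)
print(json.loads(body)["subject_id"])  # subj_123
```

The expected response to this request would be a 202 Accepted containing the assessment id used with the GET endpoint above.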

The full spec is published in the API Reference.