# AWF Reflect Route

Inside the AWF runtime network, the AWF API proxy exposes `GET /reflect` at `http://api-proxy:10000/reflect`.
Use this route when building shared workflows, tools, or extensions that need runtime model routing.
## Why use /reflect

`/reflect` returns the currently configured inference providers and their model availability for the active run. This allows a shared workflow or tool to:
- Discover which gateway endpoints are available
- Check whether each endpoint is configured
- Read or refresh model availability
- Select a provider/model dynamically at runtime
## Response shape

The response includes an `endpoints` array and a `models_fetch_complete` flag:

- `endpoints[].provider`: provider identifier (e.g. `openai`, `anthropic`, `copilot`, `gemini`)
- `endpoints[].base_url`: gateway base URL for inference calls
- `endpoints[].configured`: whether credentials/config are present for that provider
- `endpoints[].models`: discovered model IDs, or `null` when model discovery is not yet complete
- `endpoints[].models_url`: gateway URL used to query models for that provider
- `models_fetch_complete`: whether startup model discovery is complete
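A response might look like the following sketch. The field names match the list above, but the specific providers, URLs, and model IDs shown here are illustrative, not guaranteed values:

```json
{
  "endpoints": [
    {
      "provider": "openai",
      "base_url": "http://api-proxy:10000/openai",
      "configured": true,
      "models": ["gpt-4o", "gpt-4o-mini"],
      "models_url": "http://api-proxy:10000/openai/v1/models"
    },
    {
      "provider": "anthropic",
      "base_url": "http://api-proxy:10000/anthropic",
      "configured": false,
      "models": null,
      "models_url": "http://api-proxy:10000/anthropic/v1/models"
    }
  ],
  "models_fetch_complete": false
}
```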
## Recommended selection flow for shared tools

1. Query `/reflect` at the start of execution.
2. Filter endpoints to `configured: true`.
3. Prefer endpoints with a non-empty `models` list.
4. Match requested model aliases/patterns against available models.
5. Route inference to the selected endpoint's `base_url`.
6. If `models` is `null`, retry discovery with bounded backoff (for example, every 3 seconds up to 5 attempts) before failing.
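The selection steps above can be sketched as a small helper. The function name and the glob-style alias matching are assumptions for illustration, not part of the AWF API:

```python
import fnmatch


def select_endpoint(reflect_response, model_pattern):
    """Pick a (base_url, model) pair from a /reflect response.

    Filters to configured endpoints, prefers those with a populated
    model list, and matches model IDs against a glob pattern such as
    "gpt-4*". Returns None when no configured endpoint matches.
    """
    configured = [
        ep for ep in reflect_response.get("endpoints", [])
        if ep.get("configured")
    ]
    for ep in configured:
        models = ep.get("models") or []  # models may be null mid-discovery
        matches = fnmatch.filter(models, model_pattern)
        if matches:
            return ep["base_url"], matches[0]
    return None


# Usage with a hypothetical /reflect response:
resp = {
    "endpoints": [
        {"provider": "anthropic", "base_url": "http://api-proxy:10000/anthropic",
         "configured": False, "models": None},
        {"provider": "openai", "base_url": "http://api-proxy:10000/openai",
         "configured": True, "models": ["gpt-4o", "gpt-4o-mini"]},
    ],
    "models_fetch_complete": True,
}
print(select_endpoint(resp, "gpt-4o*"))  # → ('http://api-proxy:10000/openai', 'gpt-4o')
```

A caller would wrap this in the bounded-backoff retry from step 6, re-querying `/reflect` while a preferred endpoint's `models` is `null` and `models_fetch_complete` is `false`.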
This keeps shared tooling portable across repositories and environments where available providers differ.
## Example request

```shell
curl -s http://api-proxy:10000/reflect
```