GitHub Agentic Workflows

AWF Reflect Route

Inside the AWF runtime network, the AWF API proxy exposes GET /reflect at http://api-proxy:10000/reflect.

Use this route when building shared workflows, tools, or extensions that need runtime model routing.
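
For instance, a shared tool running inside the runtime network could query the route as in the sketch below (TypeScript on a runtime with the global fetch API, such as Node 18+; the fetchReflect helper name is illustrative, not part of AWF):

// Minimal sketch: query the AWF reflect route from inside the runtime network.
// Only the URL comes from this page; the helper name is illustrative.
async function fetchReflect(): Promise<unknown> {
  const res = await fetch("http://api-proxy:10000/reflect");
  if (!res.ok) {
    throw new Error(`reflect request failed: ${res.status}`);
  }
  return res.json();
}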

/reflect returns the currently configured inference providers and their model availability for the active run. This allows a shared workflow or tool to:

  • Discover which gateway endpoints are available
  • Check whether each endpoint is configured
  • Read or refresh model availability
  • Select a provider/model dynamically at runtime

The response includes an endpoints array and a models_fetch_complete flag:

  • endpoints[].provider: provider identifier (e.g., openai, anthropic, copilot, gemini)
  • endpoints[].base_url: gateway base URL for inference calls
  • endpoints[].configured: whether credentials/config are present for that provider
  • endpoints[].models: discovered model IDs, or null when model discovery is not yet complete
  • endpoints[].models_url: gateway URL used to query models for that provider
  • models_fetch_complete: whether startup model discovery is complete
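
As a rough sketch, the fields above can be modeled with TypeScript types like the following (the type names are illustrative, not an official AWF schema):

// Illustrative types mirroring the documented /reflect fields.
// Names are assumptions, not an official AWF schema.
interface ReflectEndpoint {
  provider: string;        // e.g. "openai", "anthropic", "copilot", "gemini"
  base_url: string;        // gateway base URL for inference calls
  configured: boolean;     // whether credentials/config are present
  models: string[] | null; // discovered model IDs, or null while discovery is pending
  models_url: string;      // gateway URL used to query models for this provider
}

interface ReflectResponse {
  endpoints: ReflectEndpoint[];
  models_fetch_complete: boolean; // whether startup model discovery is complete
}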

Recommended selection flow for shared tools

  1. Query /reflect at the start of execution.
  2. Filter endpoints to configured: true.
  3. Prefer endpoints with a non-empty models list.
  4. Match requested model aliases/patterns against available models.
  5. Route inference to the selected endpoint base_url.
  6. If models is null, retry discovery with bounded backoff (for example, every 3 seconds up to 5 attempts) before failing.

This keeps shared tooling portable across repositories and environments where available providers differ.
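
A minimal sketch of that flow, assuming the fetchReflect helper and the ReflectEndpoint/ReflectResponse types from the earlier sketches (helper names and retry parameters are illustrative):

// Sketch of the selection flow above; helper names are illustrative.

// Steps 1-5: pick a configured endpoint whose model list matches the request.
function selectEndpoint(
  reflect: ReflectResponse,
  modelPattern: RegExp
): { base_url: string; model: string } | null {
  const configured = reflect.endpoints.filter((e) => e.configured);
  const withModels = configured.filter((e) => e.models && e.models.length > 0);
  for (const endpoint of withModels) {
    const model = endpoint.models!.find((id) => modelPattern.test(id));
    if (model) {
      return { base_url: endpoint.base_url, model };
    }
  }
  return null;
}

// Step 6: if discovery is still running, retry with bounded backoff
// (here every 3 seconds, up to 5 attempts) before failing.
async function selectWithRetry(modelPattern: RegExp) {
  for (let attempt = 0; attempt < 5; attempt++) {
    const reflect = (await fetchReflect()) as ReflectResponse;
    const selected = selectEndpoint(reflect, modelPattern);
    if (selected) {
      return selected; // route inference to selected.base_url with selected.model
    }
    if (reflect.models_fetch_complete) {
      break; // discovery finished and nothing matched; retrying will not help
    }
    await new Promise((resolve) => setTimeout(resolve, 3000));
  }
  throw new Error("no configured endpoint offers a matching model");
}

Breaking out of the loop once models_fetch_complete is true avoids waiting out the full backoff when discovery has finished but no configured provider offers a matching model.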

Example request:

curl -s http://api-proxy:10000/reflect