# GitHub Agentic Workflows: Cost Management

The cost of running an agentic workflow is the sum of two components: GitHub Actions minutes consumed by the workflow jobs, and inference costs charged by the AI provider for each agent run.

Every workflow job consumes Actions compute time billed at standard GitHub Actions pricing. A typical agentic workflow run includes at least two jobs:

| Job | Purpose | Typical duration |
|---|---|---|
| Pre-activation / detection | Validates the trigger, runs membership checks, evaluates `skip-if-match` conditions | 10–30 seconds |
| Agent | Runs the AI engine and executes tools | 1–15 minutes |

Each job also incurs approximately 1.5 minutes of runner setup overhead on top of its execution time.
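A back-of-envelope sketch of how this compounds, assuming a 0.5-minute pre-activation job (within the 10–30 second range above, plus margin) and per-minute rounding per job; the figures are illustrative, not measured:

```python
import math

# Rough Actions-minutes estimate for one run: a pre-activation job plus an
# agent job, each carrying ~1.5 minutes of runner setup overhead, with each
# job's compute time rounded up to the whole minute.
SETUP_OVERHEAD_MIN = 1.5

def billed_minutes_per_run(agent_minutes, preactivation_minutes=0.5):
    jobs = [preactivation_minutes, agent_minutes]
    return sum(math.ceil(m + SETUP_OVERHEAD_MIN) for m in jobs)

# A 5-minute agent job: ceil(0.5 + 1.5) + ceil(5 + 1.5) = 2 + 7 = 9 billed minutes
print(billed_minutes_per_run(agent_minutes=5))
```

The overhead dominates for short runs, which is one reason skipping the agent job entirely (covered below) saves more than merely shortening it.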

The agent job invokes an AI engine to process the prompt and call tools. Inference is billed by the provider:

| Engine | Billed to | Unit |
|---|---|---|
| `copilot` | Account owning `COPILOT_GITHUB_TOKEN` | Premium requests (1–2 per run; see Copilot billing) |
| `claude` | Anthropic account for `ANTHROPIC_API_KEY` | Tokens |
| `codex` | OpenAI account for `OPENAI_API_KEY` | Tokens |

The `gh aw logs` command surfaces per-run metrics (elapsed duration, token usage, and estimated inference cost) before you decide what to optimize. Use `gh aw audit <run-id>` to deep-dive into a single run's token usage, tool calls, and inference spend; its Metrics and Performance Metrics sections cover token counts, effective tokens, turn counts, and estimated cost in one place. For cost trends across multiple runs, use `gh aw logs --format markdown [workflow]` to generate a cross-run report with anomaly detection.

```shell
# Overview table for all agentic workflows (last 10 runs)
gh aw logs

# Narrow to a single workflow
gh aw logs issue-triage-agent

# Last 30 days for Copilot workflows
gh aw logs --engine copilot --start-date -30d
```

The overview table includes a Duration column showing elapsed wall-clock time per run. Because GitHub Actions bills compute time by the minute (rounded up per job), duration is the primary indicator of Actions spend.
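The per-job rounding matters when reading durations: two short jobs can bill more minutes than their combined wall-clock time suggests. A minimal sketch, assuming per-job durations in seconds:

```python
import math

# Billed Actions minutes: each job's duration rounds up to the next whole
# minute, so short jobs are billed disproportionately.
def billed_minutes(job_seconds):
    return sum(math.ceil(s / 60) for s in job_seconds)

# Jobs of 25s and 272s bill 1 + 5 = 6 minutes, although total wall-clock
# time is just under 5 minutes.
print(billed_minutes([25, 272]))
```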

Use `--json` to get structured output suitable for scripting or trend analysis:

```shell
# Write JSON to a file for further processing
gh aw logs --start-date -1w --json > /tmp/logs.json

# List per-run duration, tokens, and cost across all workflows
gh aw logs --start-date -30d --json | \
  jq '.runs[] | {workflow: .workflow_name, duration: .duration, cost: .estimated_cost}'

# Total cost grouped by workflow over the past 30 days
gh aw logs --start-date -30d --json | \
  jq '[.runs[]] | group_by(.workflow_name) |
    map({workflow: .[0].workflow_name, runs: length, total_cost: (map(.estimated_cost) | add // 0)})'
```

Each run under `.runs[]` includes `duration`, `token_usage`, `estimated_cost`, `workflow_name`, and `agent`. For orchestrated workflows, the same JSON includes deterministic lineage under `.episodes[]` and `.edges[]` — see the next section.

`gh aw logs --json` emits three views of the same data: `.runs[]` (individual workflow runs), `.episodes[]` (related runs grouped into one logical execution: orchestrator, workers, `workflow_call` follow-ups, and reporting passes), and `.edges[]` (the inferred parent-child lineage). Use `.runs[]` to find which specific run was expensive; use `.episodes[]` to answer "what did this job cost end-to-end?". For non-orchestrated workflows, an episode collapses to a single run and the two views are equivalent.
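A sketch of how edge lineage groups runs into episodes: union runs connected by parent→child edges, then aggregate per group. The field names (`run_id`, `parent`, `child`) and sample values here are illustrative assumptions, not the exact `gh aw` JSON schema:

```python
from collections import defaultdict

# Hypothetical sample data shaped like .runs[] and .edges[]
runs = [
    {"run_id": 1, "workflow_name": "orchestrator", "estimated_cost": 0.12},
    {"run_id": 2, "workflow_name": "worker", "estimated_cost": 0.30},
    {"run_id": 3, "workflow_name": "standalone", "estimated_cost": 0.05},
]
edges = [{"parent": 1, "child": 2}]  # run 2 was spawned by run 1

# Union-find over run ids: runs linked by an edge share an episode root
parent = {r["run_id"]: r["run_id"] for r in runs}

def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

for e in edges:
    parent[find(e["child"])] = find(e["parent"])

# Sum estimated cost per episode; run 3 stays its own single-run episode
episode_cost = defaultdict(float)
for r in runs:
    episode_cost[find(r["run_id"])] += r["estimated_cost"]

print(dict(episode_cost))
```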

Useful episode fields for cost analysis:

| Field | Meaning |
|---|---|
| `total_runs` | Workflow runs in the logical execution |
| `total_tokens` / `total_effective_tokens` | Raw and effective token aggregates; prefer `total_effective_tokens` for Copilot |
| `total_duration` | Wall-clock duration across grouped runs |
| `primary_workflow` | Main workflow label |
| `resource_heavy_node_count` | Runs flagged as resource-heavy |
| `blocked_request_count` | Aggregate blocked-network pressure |

For Copilot runs, treat `total_estimated_cost` as a heuristic: Copilot does not expose billing-grade cost data, so `total_effective_tokens` is the more reliable proxy.

Safe-output actuation also appears in both `gh aw logs --json` (run- and repo-level) and `gh aw audit <run-id>` (under `safe_output_summary`). The relevant fields (`temporary_id_map_status`, `temporary_id_mappings`, `chained_target_count`, `chained_followup_action_count`, `delegated_temp_target_count`, `closed_temp_target_count`, and their repo-level aggregates) show how often a workflow follows up on its own outputs. When `temporary_id_map_status` is missing or invalid, chain counts fall back to 0 rather than guessing from incomplete data.

```shell
# Top 10 heaviest logical executions over the past 30 days by effective tokens
gh aw logs --start-date -30d --json | \
  jq '[.episodes[] | {episode: .episode_id, workflow: .primary_workflow, runs: .total_runs, effective_tokens: (.total_effective_tokens // 0)}]
    | sort_by(.effective_tokens) | reverse | .[:10]'
```

The primary cost lever for most workflows is how often they run. Some events are inherently high-frequency:

| Trigger type | Risk | Notes |
|---|---|---|
| `push` | High | Every commit to any matching branch fires the workflow |
| `pull_request` | Medium–High | Fires on open, sync, re-open, label, and other subtypes |
| `issues` | Medium–High | Fires on open, close, label, edit, and other subtypes |
| `check_run`, `check_suite` | High | Can fire many times per push in busy repositories |
| `issue_comment`, `pull_request_review_comment` | Medium | Scales with comment activity |
| `schedule` | Low (predictable) | Fires at a fixed cadence; easy to budget |
| `workflow_dispatch` | Low | Human-initiated; naturally rate-limited |

## Use Deterministic Checks to Skip the Agent

The most effective cost reduction is skipping the agent job entirely when it is not needed. The `skip-if-match` and `skip-if-no-match` conditions run during the low-cost pre-activation job and cancel the workflow before the agent starts:

```yaml
on:
  issues:
    types: [opened]
  skip-if-match: 'label:duplicate OR label:wont-fix'
```

```yaml
on:
  issues:
    types: [labeled]
  skip-if-no-match: 'label:needs-triage'
```

Use these to filter out noise before incurring inference costs. See Triggers for the full syntax.
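The intended semantics of a simple label expression can be sketched as below. This mimics the behavior for `OR`-joined `label:` clauses only; it is not `gh aw`'s actual parser, and the full syntax supports more than this:

```python
# Illustrative-only sketch: skip the agent when any OR'd label clause
# matches one of the issue's labels.
def skip_if_match(expr, issue_labels):
    clauses = [c.strip() for c in expr.split(" OR ")]
    labels = set(issue_labels)
    return any(c.removeprefix("label:") in labels
               for c in clauses if c.startswith("label:"))

print(skip_if_match("label:duplicate OR label:wont-fix", ["bug", "duplicate"]))  # agent skipped
print(skip_if_match("label:duplicate OR label:wont-fix", ["needs-triage"]))      # agent runs
```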

The `engine.model` field selects the AI model. Smaller or faster models cost significantly less per token while still handling many routine tasks:

```yaml
engine:
  id: copilot
  model: gpt-4.1-mini
```

```yaml
engine:
  id: claude
  model: claude-haiku-4-5
```

Reserve frontier models (GPT-5, Claude Sonnet, etc.) for complex tasks. Use lighter models for triage, labeling, summarization, and other structured outputs.
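Why model choice dominates inference spend for high-volume workflows can be shown with arithmetic. The per-token prices below are placeholders, not real provider pricing:

```python
# Hypothetical USD prices per 1M tokens (NOT actual provider rates)
PRICE_PER_1M_TOKENS = {"frontier-model": 10.00, "small-model": 0.50}

def monthly_inference_cost(model, tokens_per_run, runs_per_month):
    return PRICE_PER_1M_TOKENS[model] / 1_000_000 * tokens_per_run * runs_per_month

# 100 triage runs/month at ~50k tokens each: a 20x price ratio becomes a
# 20x difference in monthly spend.
for model in PRICE_PER_1M_TOKENS:
    print(model, round(monthly_inference_cost(model, 50_000, 100), 2))
```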

Inference cost scales with prompt size. Write focused prompts, avoid whole-file reads when only a few lines matter, cap result counts in tool calls, and use imports to compose a smaller subset of prompt sections at runtime.

Use `user-rate-limit` to cap how many times a user can trigger the workflow in a given window, and rely on concurrency controls to serialize runs rather than letting them pile up:

```yaml
user-rate-limit:
  max-runs-per-window: 3
  window: 60 # 3 runs per hour per user
```

See Rate Limiting Controls and Concurrency for details.
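The sliding-window semantics of a 3-runs-per-60-minutes cap can be sketched as below; this is illustrative logic, not `gh aw`'s implementation:

```python
from collections import defaultdict, deque

WINDOW_MINUTES = 60
MAX_RUNS = 3
history = defaultdict(deque)  # user -> timestamps (minutes) of recent runs

def allow_run(user, now_minutes):
    runs = history[user]
    # Drop runs that have aged out of the window
    while runs and now_minutes - runs[0] >= WINDOW_MINUTES:
        runs.popleft()
    if len(runs) >= MAX_RUNS:
        return False  # cap reached: this trigger is rejected
    runs.append(now_minutes)
    return True

# 4th attempt inside the hour is denied; allowed again once t=0 ages out
results = [allow_run("alice", t) for t in (0, 10, 20, 30, 70)]
print(results)
```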

Scheduled workflows fire at a fixed cadence, making cost easy to estimate and cap:

```yaml
schedule: daily on weekdays
```

One scheduled run per weekday = five agent invocations per week. See Schedule Syntax for the full fuzzy schedule syntax.

The `agentic-workflows` MCP tool exposes the same operations as the CLI (logs, audit, status) to any workflow agent, so a scheduled meta-agent can inspect and optimize other agentic workflows automatically: fetching aggregate cost data, deep-diving into individual runs, and proposing frontmatter changes (cheaper model, tighter `skip-if-match`, lower `user-rate-limit`) via a pull request.

```yaml
description: Weekly Actions minutes cost report
on: weekly
permissions:
  actions: read
engine: copilot
tools:
  agentic-workflows:
```
| Signal | Automatic action |
|---|---|
| High token count per run | Switch to a smaller model (`gpt-4.1-mini`, `claude-haiku-4-5`) |
| Frequent runs with no safe-output produced | Add or tighten `skip-if-match` |
| Long queue times due to concurrency | Lower `user-rate-limit.max-runs-per-window` or add a concurrency group |
| Workflow running too often | Change trigger to `schedule` or add `workflow_dispatch` |

These are rough estimates to help with budgeting. Actual costs vary by prompt size, tool usage, model, and provider pricing.

| Scenario | Frequency | Actions minutes/month | Inference/month |
|---|---|---|---|
| Weekly digest (schedule, 1 repo) | 4×/month | ~1 min | ~4–8 premium requests (Copilot) |
| Issue triage (issues opened, 20/month) | 20×/month | ~10 min | ~20–40 premium requests |
| PR review on every push (busy repo, 100 pushes/month) | 100×/month | ~100 min | ~100–200 premium requests |
| On-demand via slash command | User-controlled | Varies | Varies |
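Projections like these can be made with a tiny helper, feeding in per-run averages measured via `gh aw logs`; the per-run figures below are placeholders, not measured values:

```python
# Budgeting helper: project monthly Actions minutes and Copilot premium
# requests (1-2 per run) from per-run averages.
def monthly_estimate(runs_per_month, minutes_per_run, requests_per_run=(1, 2)):
    lo, hi = requests_per_run
    return {
        "actions_minutes": runs_per_month * minutes_per_run,
        "premium_requests": (runs_per_month * lo, runs_per_month * hi),
    }

# e.g. 20 triage runs/month at ~0.5 billed minutes each
print(monthly_estimate(20, minutes_per_run=0.5))
```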