
MemoryOps

MemoryOps enables workflows to persist state across runs using cache-memory and repo-memory. Build workflows that remember their progress, resume after interruptions, share data between workflows, and avoid API throttling.

Use MemoryOps for incremental processing, trend analysis, multi-step tasks, and workflow coordination.

cache-memory provides fast, ephemeral storage backed by the GitHub Actions cache (7-day retention):

tools:
  cache-memory:
    key: my-workflow-state

Use for: Temporary state, session data, short-term caching
Location: /tmp/gh-aw/cache-memory/

repo-memory provides persistent, version-controlled storage in a dedicated Git branch:

tools:
  repo-memory:
    branch-name: memory/my-workflow
    file-glob: ["*.json", "*.jsonl"]

Use for: Historical data, trend tracking, permanent state
Location: /tmp/gh-aw/repo-memory/default/

Track progress through large datasets with todo/done lists to ensure complete coverage across multiple runs.

Your goal: “Process all items in a collection, tracking which ones are done so I can resume if interrupted.”

How to state it in your workflow:

Analyze all open issues in the repository. Track your progress in cache-memory
so you can resume if the workflow times out. Mark each issue as done after
processing it. Generate a final report with statistics.

What the agent will implement: Maintain a state file with items to process (todo) and completed items (done). After processing each item, immediately update the state so the workflow can resume if interrupted.

Example structure the agent might use:

{
  "todo": [123, 456, 789],
  "done": [101, 102],
  "errors": [],
  "last_run": 1705334400
}
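
A minimal Python sketch of this pattern, assuming the state lives in the cache-memory directory shown above; the file name state.json and the process_issue() helper are illustrative, not part of the tool's API:

import json
import os

STATE_FILE = "/tmp/gh-aw/cache-memory/state.json"  # illustrative file name

def load_state(all_issue_numbers):
    """Load existing state, or initialize it on the first run."""
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)
    return {"todo": list(all_issue_numbers), "done": [], "errors": []}

def save_state(state):
    """Persist state immediately so an interrupted run can resume."""
    with open(STATE_FILE, "w") as f:
        json.dump(state, f)

def run(all_issue_numbers, process_issue):
    state = load_state(all_issue_numbers)
    while state["todo"]:
        issue = state["todo"][0]
        try:
            process_issue(issue)           # hypothetical per-item work
            state["done"].append(issue)
        except Exception:
            state["errors"].append(issue)
        state["todo"].remove(issue)
        save_state(state)                  # checkpoint after every item
    return state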

Real examples: .github/workflows/repository-quality-improver.md, .github/workflows/copilot-agent-analysis.md

Save workflow checkpoints to resume long-running tasks that may time out.

Your goal: “Process data in batches, saving progress so I can continue where I left off in the next run.”

How to state it in your workflow:

Migrate 10,000 records from the old format to the new format. Process 500
records per run and save a checkpoint. Each run should resume from the last
checkpoint until all records are migrated.

What the agent will implement: Store a checkpoint with the last processed position. Each run loads the checkpoint, processes a batch, then saves the new position.

Example checkpoint the agent might use:

{
  "last_processed_id": 1250,
  "batch_number": 13,
  "total_migrated": 1250,
  "status": "in_progress"
}
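
A sketch of the resume logic in Python, assuming a checkpoint file in cache-memory and two hypothetical helpers, fetch_records_after() and migrate_record(), supplied by the workflow:

import json
import os

CHECKPOINT = "/tmp/gh-aw/cache-memory/checkpoint.json"  # illustrative path
BATCH_SIZE = 500

def load_checkpoint():
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)
    return {"last_processed_id": 0, "batch_number": 0,
            "total_migrated": 0, "status": "in_progress"}

def run_batch(fetch_records_after, migrate_record):
    """Process one batch per run, resuming from the saved checkpoint."""
    cp = load_checkpoint()
    records = fetch_records_after(cp["last_processed_id"], BATCH_SIZE)
    for record in records:
        migrate_record(record)             # hypothetical migration step
        cp["last_processed_id"] = record["id"]
        cp["total_migrated"] += 1
    cp["batch_number"] += 1
    if not records:
        cp["status"] = "complete"          # nothing left to fetch
    with open(CHECKPOINT, "w") as f:
        json.dump(cp, f)
    return cp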

Real examples: .github/workflows/daily-news.md, .github/workflows/cli-consistency-checker.md

Share data between workflows using repo-memory branches.

Your goal: “Collect data in one workflow and analyze it in other workflows.”

How to state it in your workflow:

Producer workflow:

Every 6 hours, collect repository metrics (issues, PRs, stars) and store them
in repo-memory so other workflows can analyze the data later.

Consumer workflow:

Load the historical metrics from repo-memory and compute weekly trends.
Generate a trend report with visualizations.

What the agent will implement: One workflow (producer) collects data and stores it in repo-memory. Other workflows (consumers) read and analyze the shared data using the same branch name.

Configuration both workflows need:

tools:
  repo-memory:
    branch-name: memory/shared-data # Same branch for producer and consumer
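
A sketch of both sides in Python, assuming the shared branch is mounted at the documented default location /tmp/gh-aw/repo-memory/default/; the metrics.jsonl file name is illustrative:

import json
from datetime import datetime, timezone

METRICS_FILE = "/tmp/gh-aw/repo-memory/default/metrics.jsonl"  # illustrative file name

def record_metrics(open_issues, open_prs, stars):
    """Producer: append one snapshot per run (JSON Lines keeps writes append-only)."""
    snapshot = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "open_issues": open_issues,
        "open_prs": open_prs,
        "stars": stars,
    }
    with open(METRICS_FILE, "a") as f:
        f.write(json.dumps(snapshot) + "\n")

def load_metrics():
    """Consumer: read every snapshot for trend analysis."""
    with open(METRICS_FILE) as f:
        return [json.loads(line) for line in f if line.strip()]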

Real examples: .github/workflows/metrics-collector.md (producer), trend analysis workflows (consumers)

Cache API responses to avoid rate limits and reduce workflow time.

Your goal: “Avoid hitting rate limits by caching API responses that don’t change frequently.”

How to state it in your workflow:

Fetch repository metadata and contributor lists. Cache the data for 24 hours
to avoid repeated API calls. If the cache is fresh, use it. Otherwise, fetch
new data and update the cache.

What the agent will implement: Before making expensive API calls, check whether cached data exists and is still fresh. If the cache is valid (based on its TTL), use the cached data; otherwise, fetch fresh data and update the cache. A sketch of this check follows the TTL guidelines below.

TTL guidelines to include in your prompt:

  • Repository metadata: 24 hours
  • Contributor lists: 12 hours
  • Issues/PRs: 1 hour
  • Workflow runs: 30 minutes
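
A minimal sketch of the TTL check in Python, assuming cached responses are stored as JSON files under the cache-memory directory; the cached_fetch name and the fetch function are illustrative:

import json
import os
import time

CACHE_DIR = "/tmp/gh-aw/cache-memory"

def cached_fetch(name, ttl_seconds, fetch_fresh):
    """Return cached data if younger than ttl_seconds, otherwise refresh the cache."""
    path = os.path.join(CACHE_DIR, f"{name}.json")
    if os.path.exists(path) and time.time() - os.path.getmtime(path) < ttl_seconds:
        with open(path) as f:
            return json.load(f)
    data = fetch_fresh()               # expensive API call, supplied by the caller
    with open(path, "w") as f:
        json.dump(data, f)
    return data

# Example: repository metadata cached for 24 hours
# metadata = cached_fetch("repo-metadata", 24 * 3600, fetch_repo_metadata)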

Real examples: .github/workflows/daily-news.md

Store time-series data and compute trends, moving averages, and statistics.

Your goal: “Track metrics over time and identify trends.”

How to state it in your workflow:

Collect daily build times and test times. Store them in repo-memory as
time-series data. Compute 7-day and 30-day moving averages. Generate trend
charts showing whether performance is improving or declining over time.

What the agent will implement: Append new data points to a history file (JSON Lines format). Load the historical data to compute trends and moving averages, and generate visualizations using Python.
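
A sketch of the append and moving-average steps in Python, assuming the history lives in the default repo-memory location; the build-times.jsonl file name is illustrative:

import json
from statistics import mean

HISTORY = "/tmp/gh-aw/repo-memory/default/build-times.jsonl"  # illustrative file name

def append_point(date, build_seconds):
    """Append today's measurement without rewriting the history."""
    with open(HISTORY, "a") as f:
        f.write(json.dumps({"date": date, "build_seconds": build_seconds}) + "\n")

def moving_average(window):
    """Average of the most recent `window` data points."""
    with open(HISTORY) as f:
        points = [json.loads(line) for line in f if line.strip()]
    recent = points[-window:]
    return mean(p["build_seconds"] for p in recent) if recent else None

# avg_7 = moving_average(7)
# avg_30 = moving_average(30)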

Real examples: .github/workflows/daily-code-metrics.md, .github/workflows/shared/charts-with-trending.md

Use multiple memory instances for different purposes and retention policies.

Your goal: “Organize data with different lifecycles—temporary session data, historical metrics, configuration, and archived snapshots.”

How to state it in your workflow:

Use cache-memory for temporary API responses during this run. Store daily
metrics in one repo-memory branch for trend analysis. Keep data schemas in
another branch. Archive full snapshots in a third branch with compression.

What the agent will implement: Separate hot data (cache-memory) from historical data (repo-memory). Use different repo-memory branches for metrics vs. configuration vs. archives.

Configuration to include:

tools:
  cache-memory:
    key: session-data # Fast, temporary
  repo-memory:
    - id: metrics
      branch-name: memory/metrics # Time-series data
    - id: config
      branch-name: memory/config # Schema and metadata
    - id: archive
      branch-name: memory/archive # Compressed backups
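
A sketch of routing data to the right tier in Python. The per-id mount paths below are an assumption extrapolated from the documented default path /tmp/gh-aw/repo-memory/default/; verify the actual mount points for your setup:

# Assumed layout: each repo-memory id gets its own directory (not confirmed by the docs).
CACHE_DIR = "/tmp/gh-aw/cache-memory"            # session data (temporary)
METRICS_DIR = "/tmp/gh-aw/repo-memory/metrics"   # assumed per-id path
CONFIG_DIR = "/tmp/gh-aw/repo-memory/config"     # assumed per-id path
ARCHIVE_DIR = "/tmp/gh-aw/repo-memory/archive"   # assumed per-id path

def storage_dir(kind):
    """Pick the storage tier that matches the data's lifecycle."""
    return {
        "session": CACHE_DIR,     # discarded after the cache retention period
        "metrics": METRICS_DIR,   # long-lived time series
        "schema": CONFIG_DIR,     # configuration and metadata
        "snapshot": ARCHIVE_DIR,  # compressed backups
    }[kind]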

JSON Lines is an append-only format ideal for logs and metrics:

# Append without reading entire file
echo '{"date": "2024-01-15", "value": 42}' >> data.jsonl

Document your data structure:

{
  "dataset": "performance-metrics",
  "schema": {
    "date": "YYYY-MM-DD",
    "value": "integer"
  },
  "retention": "90 days"
}

Rotate data to prevent unbounded growth:

# Keep only last 90 entries
tail -n 90 history.jsonl > history-trimmed.jsonl
mv history-trimmed.jsonl history.jsonl

Check state-file integrity before processing:

if [ -f state.json ] && jq empty state.json 2>/dev/null; then
  echo "Valid state"
else
  echo "Corrupt state, reinitializing..."
  echo '{}' > state.json
fi

Store aggregate statistics, never individual user data:

# ✅ GOOD - Aggregate statistics
echo '{"open_issues": 42}' > metrics.json
# ❌ BAD - Individual user data
echo '{"user": "alice", "email": "alice@example.com"}' > users.json

Cache not persisting: Verify that the cache key is consistent across runs

Repo memory not updating: Check that the file-glob patterns match your files and that the files are within the max-file-size limit

Out of memory errors: Process data in chunks instead of loading it all at once, and implement data rotation

Merge conflicts: Use JSON Lines format (append-only), use separate branches per workflow, or add a run ID to filenames