# MemoryOps
MemoryOps enables workflows to persist state across runs using cache-memory and repo-memory. Build workflows that remember their progress, resume after interruptions, share data between workflows, and avoid API throttling.
Use MemoryOps for incremental processing, trend analysis, multi-step tasks, and workflow coordination.
## How to Use These Patterns

## Memory Types

### Cache Memory

Fast, ephemeral storage using GitHub Actions cache (7-day retention):

```yaml
tools:
  cache-memory:
    key: my-workflow-state
```

Use for: Temporary state, session data, short-term caching

Location: `/tmp/gh-aw/cache-memory/`
### Repository Memory

Persistent, version-controlled storage in a dedicated Git branch:

```yaml
tools:
  repo-memory:
    branch-name: memory/my-workflow
    file-glob: ["*.json", "*.jsonl"]
```

Use for: Historical data, trend tracking, permanent state

Location: `/tmp/gh-aw/repo-memory/default/`
## Pattern 1: Exhaustive Processing

Track progress through large datasets with todo/done lists to ensure complete coverage across multiple runs.

```
Analyze all open issues in the repository. Track your progress in cache-memory
so you can resume if the workflow times out. Mark each issue as done after
processing it. Generate a final report with statistics.
```

The agent maintains a state file with items to process and completed items, updating it after each item so the workflow can resume if interrupted:

```json
{
  "todo": [123, 456, 789],
  "done": [101, 102],
  "errors": [],
  "last_run": 1705334400
}
```

Real examples: `.github/workflows/repository-quality-improver.md`, `.github/workflows/copilot-agent-analysis.md`
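A minimal shell sketch of that state update, assuming the state file lives at `state.json` under the cache-memory mount and `jq` is available (both are assumptions, not the workflows' actual code):

```sh
# Assumed location of the state file in cache-memory
STATE=/tmp/gh-aw/cache-memory/state.json

# Initialize the todo list on the first run
[ -f "$STATE" ] || echo '{"todo": [123, 456, 789], "done": [], "errors": []}' > "$STATE"

# After successfully processing issue 123, move it from todo to done
# and record the run timestamp so the next run can resume cleanly.
jq --argjson id 123 \
   '.todo -= [$id] | .done += [$id] | .last_run = (now | floor)' \
   "$STATE" > "$STATE.tmp" && mv "$STATE.tmp" "$STATE"
```

Writing to a temporary file and renaming keeps the state valid even if the run is killed mid-write.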
## Pattern 2: State Persistence

Save workflow checkpoints to resume long-running tasks that may time out.

```
Migrate 10,000 records from the old format to the new format. Process 500
records per run and save a checkpoint. Each run should resume from the last
checkpoint until all records are migrated.
```

The agent stores a checkpoint with the last processed position and resumes from it each run:

```json
{
  "last_processed_id": 1250,
  "batch_number": 13,
  "total_migrated": 1250,
  "status": "in_progress"
}
```

Real examples: `.github/workflows/daily-news.md`, `.github/workflows/cli-consistency-checker.md`
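One way to sketch the checkpoint handling in shell; the `checkpoint.json` name and the batch arithmetic are illustrative assumptions:

```sh
CKPT=/tmp/gh-aw/cache-memory/checkpoint.json   # assumed checkpoint path
BATCH=500

# Resume from the last checkpoint, or start from scratch on the first run
if [ -f "$CKPT" ]; then
  LAST=$(jq '.last_processed_id' "$CKPT")
  BATCHNO=$(jq '.batch_number' "$CKPT")
else
  LAST=0
  BATCHNO=0
fi

# ... migrate records LAST+1 through LAST+BATCH here ...

# Write the new checkpoint so the next run picks up where this one stopped
jq -n --argjson last "$((LAST + BATCH))" --argjson b "$((BATCHNO + 1))" \
   '{last_processed_id: $last, batch_number: $b, total_migrated: $last, status: "in_progress"}' \
   > "$CKPT"
```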
## Pattern 3: Shared Information

Share data between workflows using repo-memory branches. A producer workflow stores data; consumers read it using the same branch name.

Producer workflow:

```
Every 6 hours, collect repository metrics (issues, PRs, stars) and store them
in repo-memory so other workflows can analyze the data later.
```

Consumer workflow:

```
Load the historical metrics from repo-memory and compute weekly trends.
Generate a trend report with visualizations.
```

Both workflows reference the same branch:

```yaml
tools:
  repo-memory:
    branch-name: memory/shared-data
```

Real examples: `.github/workflows/metrics-collector.md` (producer), trend analysis workflows (consumers)
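A sketch of the handoff in shell, assuming both workflows see the shared branch checked out at the documented location and writing to a hypothetical `metrics.jsonl` file:

```sh
# Shared repo-memory checkout (branch memory/shared-data); file name is assumed
DATA=/tmp/gh-aw/repo-memory/default/metrics.jsonl

# Producer: append one snapshot per run (JSON Lines keeps writes append-only)
echo '{"date": "2024-01-15", "open_issues": 42, "open_prs": 7, "stars": 1203}' >> "$DATA"

# Consumer: slurp the history and read the most recent star count
jq -s '.[-1].stars' "$DATA"
```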
## Pattern 4: Data Caching

Cache API responses to avoid rate limits and reduce workflow time. The agent checks for fresh cached data before making API calls, using suggested TTLs: repository metadata (24h), contributor lists (12h), issues/PRs (1h), workflow runs (30m).

```
Fetch repository metadata and contributor lists. Cache the data for 24 hours
to avoid repeated API calls. If the cache is fresh, use it. Otherwise, fetch
new data and update the cache.
```

Real examples: `.github/workflows/daily-news.md`
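The freshness check can be sketched in shell; the cache file name, the 24-hour TTL, and the commented-out refresh call are illustrative assumptions:

```sh
CACHE=/tmp/gh-aw/cache-memory/repo-metadata.json   # assumed cache file
TTL=$((24 * 3600))                                 # 24h TTL for repo metadata

NOW=$(date +%s)
MTIME=$(stat -c %Y "$CACHE" 2>/dev/null || echo 0)  # GNU stat (Linux runners)

if [ $((NOW - MTIME)) -lt "$TTL" ]; then
  echo "cache is fresh, using cached data"
else
  echo "cache is stale, refreshing"
  # gh api repos/OWNER/REPO > "$CACHE"   # placeholder refresh call
fi
```

A missing cache file falls through to the refresh branch, since `stat` fails and the age defaults to the full epoch.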
## Pattern 5: Trend Computation

Store time-series data and compute trends, moving averages, and statistics. The agent appends new data points to a JSON Lines history file and computes trends using Python.

```
Collect daily build times and test times. Store them in repo-memory as
time-series data. Compute 7-day and 30-day moving averages. Generate trend
charts showing whether performance is improving or declining over time.
```

Real examples: `.github/workflows/daily-code-metrics.md`, `.github/workflows/shared/charts-with-trending.md`
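A moving average over such a history file can also be sketched with `tail` and `jq`; the `build-times.jsonl` name and `seconds` field are assumptions:

```sh
HIST=/tmp/gh-aw/repo-memory/default/build-times.jsonl  # assumed history file

# Daily collection appends one record per run
echo '{"date": "2024-01-15", "seconds": 312}' >> "$HIST"

# 7-day moving average: take the last 7 entries and average the field
tail -n 7 "$HIST" | jq -s 'map(.seconds) | add / length'
```

The same pipeline with `tail -n 30` gives the 30-day average.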
## Pattern 6: Multiple Memory Stores

Use multiple memory instances for different lifecycles: cache-memory for temporary session data, and separate repo-memory branches for metrics, configuration, and archives.

```
Use cache-memory for temporary API responses during this run. Store daily
metrics in one repo-memory branch for trend analysis. Keep data schemas in
another branch. Archive full snapshots in a third branch with compression.
```

```yaml
tools:
  cache-memory:
    key: session-data                # Fast, temporary
  repo-memory:
    - id: metrics
      branch-name: memory/metrics    # Time-series data
    - id: config
      branch-name: memory/config     # Schema and metadata
    - id: archive
      branch-name: memory/archive    # Compressed backups
```

## Best Practices
### Use JSON Lines for Time-Series Data

An append-only format, ideal for logs and metrics:

```sh
# Append without reading the entire file
echo '{"date": "2024-01-15", "value": 42}' >> data.jsonl
```

### Include Metadata

Document your data structure:

```json
{
  "dataset": "performance-metrics",
  "schema": { "date": "YYYY-MM-DD", "value": "integer" },
  "retention": "90 days"
}
```

### Implement Data Rotation

Prevent unbounded growth:

```sh
# Keep only the last 90 entries
tail -n 90 history.jsonl > history-trimmed.jsonl
mv history-trimmed.jsonl history.jsonl
```

### Validate State

Check integrity before processing:

```sh
if [ -f state.json ] && jq empty state.json 2>/dev/null; then
  echo "Valid state"
else
  echo "Corrupt state, reinitializing..."
  echo '{}' > state.json
fi
```

## Security Considerations
Memory stores are visible to anyone with repository access. Never store credentials, API tokens, PII, or secrets; store only aggregate statistics and anonymized data.

```sh
# ✓ GOOD - aggregate statistics
echo '{"open_issues": 42}' > metrics.json

# ✗ BAD - individual user data
echo '{"user": "alice", "email": "alice@example.com"}' > users.json
```

## Troubleshooting
- **Cache not persisting**: verify that the cache key is consistent across runs.
- **Repo memory not updating**: check that the `file-glob` patterns match your files and that each file is within the `max-file-size` limit.
- **Out-of-memory errors**: process data in chunks instead of loading it entirely, and implement data rotation.
- **Merge conflicts**: use the JSON Lines format (append-only), separate branches per workflow, or add a run ID to filenames.
## Related Documentation

- MCP Servers - Memory MCP server configuration
- Deterministic Patterns - Data preprocessing
- Safe Outputs - Storing workflow outputs
- Frontmatter Reference - Configuration options