GitHub Agentic Workflows

TaskOps Strategy

The TaskOps strategy is a scaffolded approach to using AI agents for systematic code improvements. This strategy keeps developers in the driver’s seat by providing clear decision points at each phase while leveraging AI agents to handle the heavy lifting of research, planning, and implementation.

The strategy follows three distinct phases:

Research Phase: A research agent (typically scheduled daily or weekly) investigates the repository from a specific angle and generates a comprehensive report. Using Model Context Protocol (MCP) tools for deep analysis (static analysis, logging data, semantic search), it creates a detailed discussion or issue with findings, recommendations, and supporting data. Cache memory maintains historical context so the agent can track trends over time.
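In gh-aw terms, such a research agent is a markdown workflow file with YAML frontmatter. A minimal sketch follows; the field names mirror gh-aw's frontmatter conventions, but the cron schedule, the cache-memory placement, and the prompt wording are illustrative assumptions, not a verbatim workflow:

```yaml
---
# Illustrative research-agent frontmatter (values are assumptions)
on:
  schedule:
    - cron: "0 6 * * 1"     # weekly, Monday 06:00 UTC
permissions:
  contents: read
tools:
  cache-memory: true         # persist findings across runs to track trends
safe-outputs:
  create-discussion:         # publish the report as a discussion
    max: 1
---

# Research: <angle>

Investigate the repository from the chosen angle and post a report
with findings, recommendations, and supporting data.
```

The `safe-outputs` block is what lets the agent publish its report without broad write permissions.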

Plan Phase: The developer reviews the research report to determine whether worthwhile improvements were identified. If the findings merit action, the developer invokes a planner agent to convert the research into specific, actionable issues. The planner splits complex work into smaller, focused tasks sized for Copilot coding agent success, formatting each issue with clear objectives, file paths, acceptance criteria, and implementation guidance.
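A planner agent invoked on demand could be sketched as a command-triggered workflow. This is a hedged sketch: the `/plan` command name and the issue cap are assumptions chosen for illustration:

```yaml
---
# Illustrative planner frontmatter (command name and limits are assumptions)
on:
  command:
    name: plan              # invoked by commenting "/plan" on the report
permissions:
  contents: read
safe-outputs:
  create-issue:
    max: 5                  # split the research into a few focused issues
---

# Plan

Convert the linked research report into small, well-scoped issues,
each with clear objectives, file paths, and acceptance criteria.
```

Keeping the planner behind an explicit command preserves the developer decision point between research and issue creation.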

Assign Phase: The developer reviews the generated issues and decides which to execute. Approved issues are assigned to Copilot for automated implementation and can run sequentially or in parallel, depending on dependencies. Copilot opens a pull request with the implementation for the developer to review and merge.

Use this strategy when code improvements require systematic investigation before action, when work needs to be broken down for optimal AI agent execution, or when research findings vary in priority and require developer oversight at each phase.

Example: Static Analysis → Plan → Fix

Research Phase: static-analysis-report.md

Runs daily to scan all agentic workflows with security tools (zizmor, poutine, actionlint), creating a comprehensive security discussion that clusters findings by tool and issue type and includes severity assessments, fix prompts, and historical trends.
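For an agent to run these scanners, the workflow must allow the corresponding shell commands. A sketch of the relevant tools section follows; the allowlist syntax mirrors gh-aw's bash tool configuration, but the exact patterns are assumptions:

```yaml
# Illustrative tools section (command patterns are assumptions)
tools:
  bash:
    - "actionlint *"        # lint workflow syntax and common mistakes
    - "zizmor *"            # audit GitHub Actions workflows for security issues
    - "poutine *"           # check workflows for supply-chain weaknesses
```

Restricting the agent to these named commands keeps the research run auditable.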

Plan Phase: Developer reviews the security discussion and uses the /plan command to convert high-priority findings into issues.

Assign Phase: Developer assigns generated issues to Copilot for automated fixes.

Example: Duplicate Code Detection → Plan → Refactor


Research Phase: duplicate-code-detector.md

Runs daily, using the Serena MCP for semantic code analysis to identify exact, structural, and functional duplication. Creates one issue per distinct pattern (at most 3 per run); each issue is pre-assigned to Copilot (via assignees: copilot in the workflow config), since duplication fixes are typically straightforward.
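The pre-assignment described above can be expressed directly in the workflow's safe-outputs configuration. A sketch, using the max-3 cap and copilot assignee stated in this example (the surrounding field nesting is an assumption):

```yaml
# Illustrative safe-outputs fragment for the duplicate-code detector
safe-outputs:
  create-issue:
    max: 3                  # at most three issues per run
    assignees: copilot      # pre-assign each issue to Copilot
```

Because issues arrive already scoped and assigned, no separate planner invocation is needed here.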

Plan Phase: Since issues are already well-scoped, the plan phase is implicit in the research output.

Assign Phase: Issues are created and assigned to Copilot (via assignees: copilot) for automated refactoring.

Adapt the TaskOps strategy by customizing:

- Research focus: static analysis, performance metrics, documentation quality, security, code duplication, test coverage
- Frequency: daily, weekly, or on-demand
- Report format: discussions vs. issues
- Planning approach: automatic vs. manual
- Assignment method: pre-assign via assignees: copilot in the workflow config, assign manually through the GitHub UI, or mix both
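For instance, an on-demand variant that files issues instead of opening a discussion could be sketched by swapping two frontmatter blocks (the workflow_dispatch trigger and issue cap are illustrative assumptions):

```yaml
# Illustrative customization: on-demand trigger, issues instead of a discussion
on:
  workflow_dispatch:        # run manually rather than on a schedule
safe-outputs:
  create-issue:
    max: 3                  # report findings directly as issues
```

The rest of the workflow (prompt, tools, permissions) can stay unchanged.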

The three-phase approach takes longer than direct execution and requires developers to review research reports and generated issues. Research agents may flag issues that don't require action (false positives), and the multiple phases require workflow coordination and clear handoffs. Research agents also often need specialized MCP servers (Serena, Tavily, etc.).