Commit 95dd43a
Parent(s): 011336e

Add Context7 documentation lookups and GitHub deployment actions

- Integrated Context7 MCP for comprehensive framework/platform documentation
- Added GitHub deployment agent for direct deployment via PRs and workflows
- Enhanced MCP client with Context7 and GitHub support
- Updated UI to display documentation references and deployment actions
- Added support for dependency compatibility, config validation, runbooks, env vars, migrations, and observability
- Updated README with all new capabilities

Files changed:

- README.md +41 -10
- agents.py +2 -2
- app.py +77 -25
- deployment_agent.py +142 -0
- docs_agent.py +181 -0
- enhanced_mcp_client.py +276 -0
- orchestrator.py +34 -3
- schemas.py +23 -1
README.md
CHANGED

@@ -26,8 +26,19 @@ The Deployment Readiness Copilot is a productivity-focused, developer-centric to
 
 ## Features
 
-- **Multi-Agent Pipeline**: Planner → Evidence Gatherer → Synthesis → Documentation → Reviewer
-- …
+- **Multi-Agent Pipeline**: Planner → Evidence Gatherer → Synthesis → Documentation → Reviewer → Docs Lookup → Deployment
+- **Context7 Documentation Integration**: Automatic framework/platform documentation lookups for:
+  - Dependency compatibility checks
+  - Deployment pattern validation (Dockerfile, docker-compose, k8s)
+  - Deployment runbook generation
+  - Environment variables validation
+  - Migration strategy guides
+  - Observability setup recommendations
+- **MCP Tool Integration**: Real-time deployment signals from Hugging Face Spaces, Vercel, Context7, and GitHub
+- **GitHub Deployment Actions**: Direct deployment via GitHub Actions:
+  - Create deployment PRs
+  - Trigger deployment workflows
+  - Execute deployment pipelines
 - **Sponsor LLM Support**: Cross-validation using Google Gemini 2.0 and OpenAI GPT-4o-mini
 - **Auto-Documentation**: Generates changelog entries, README snippets, and announcement drafts
 - **Risk Assessment**: Automated review with confidence scoring and actionable findings

@@ -41,20 +52,34 @@ The Deployment Readiness Copilot is a productivity-focused, developer-centric to
 3. **Synthesis Agent (Gemini/OpenAI)**: Cross-validates evidence using sponsor LLMs
 4. **Documentation Agent (Claude)**: Generates deployment communications
 5. **Reviewer Agent (Claude)**: Final risk assessment with confidence scoring
+6. **Documentation Lookup Agent (Context7)**: Looks up framework/platform docs for:
+   - Deployment guides
+   - Dependency compatibility
+   - Config validation
+   - Runbook generation
+   - Environment variables
+   - Migration guides
+   - Observability setup
+7. **Deployment Agent (GitHub)**: Prepares and executes deployment actions
 
 ### MCP Tools Used
 
-- …
-- …
+- **Context7**: Framework/platform documentation lookups
+- **Hugging Face Spaces**: Status checks and validation
+- **Vercel**: Deployment validation
+- **GitHub**: Deployment PR creation and workflow triggers
 - (Extensible to other MCP-compatible services)
 
 ## Quick Start
 
 1. **Set Environment Variables** (in HF Space Secrets):
-   - `ANTHROPIC_API_KEY`: Your Claude API key
+   - `ANTHROPIC_API_KEY`: Your Claude API key (required)
    - `GOOGLE_API_KEY` or `GEMINI_API_KEY`: For Gemini synthesis (optional)
    - `OPENAI_API_KEY`: For OpenAI synthesis (optional)
-   - `HF_TOKEN`: For Hugging Face MCP tools
+   - `HF_TOKEN`: For Hugging Face MCP tools (optional)
+   - `GITHUB_TOKEN`: For GitHub deployment actions (optional)
+   - `GITHUB_REPO`: Repository in format `owner/repo` (optional, for deployments)
+   - `GITHUB_BRANCH`: Branch name (default: `main`) (optional)
 
 2. **Run the Pipeline**:
    - Enter project details (name, release goal, code summary)

@@ -74,9 +99,11 @@ Stakeholders: eng, sre
 The system will:
 1. Generate a deployment readiness plan
 2. Gather evidence via MCP tools
-3. …
-4. …
-5. …
+3. Look up framework/platform documentation via Context7
+4. Synthesize findings with sponsor LLMs
+5. Create documentation artifacts
+6. Prepare GitHub deployment actions (if configured)
+7. Provide final review with risk assessment
 
 ## Hackathon Submission

@@ -84,7 +111,9 @@ The system will:
 
 **Key Highlights**:
 - Autonomous multi-agent behavior with planning, reasoning, and execution
-- MCP servers used as tools (HF Spaces, Vercel)
+- MCP servers used as tools (Context7, HF Spaces, Vercel, GitHub)
+- Context7 integration for comprehensive documentation lookups
+- GitHub deployment actions for direct deployment execution
 - Gradio 6 app with MCP server support (`mcp_server=True`)
 - Sponsor LLM integration (Gemini, OpenAI)
 - Real-world productivity use case for developers

@@ -96,6 +125,8 @@ The system will:
 - **Google Gemini 2.0 Flash**: Sponsor LLM for evidence synthesis
 - **OpenAI GPT-4o-mini**: Alternative sponsor LLM
 - **Hugging Face Hub**: MCP client for tool integration
+- **Context7 MCP**: Documentation lookup service
+- **GitHub API/MCP**: Deployment actions and workflow triggers
 
 ## License
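The Quick Start flow above maps directly onto the orchestrator's `run_dict` entry point, so the same run can be reproduced without the Gradio UI. A minimal sketch using the example inputs from the README (the field values are illustrative; `ANTHROPIC_API_KEY` is still required for the Claude agents, the other keys are optional as listed above):

from orchestrator import ReadinessOrchestrator

# Same shape of payload that the UI assembles; values are example inputs.
payload = {
    "project_name": "Next.js App",
    "release_goal": "Deploy to Vercel production",
    "code_summary": "Next.js 15 app with React Server Components",
    "infra_notes": "Deploying to Vercel, using PostgreSQL database, Redis cache",
    "stakeholders": ["eng", "sre"],
}

result = ReadinessOrchestrator().run_dict(payload)
print(result["review"])            # final risk assessment
print(result["docs_references"])   # Context7 documentation lookups
print(result["deployment"])        # prepared GitHub deployment actions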
agents.py
CHANGED

@@ -9,7 +9,7 @@ from typing import Dict, List, Optional
 
 import anthropic
 
-from …
+from enhanced_mcp_client import EnhancedMCPClient
 from schemas import (
     ChecklistItem,
     DocumentationBundle,

@@ -99,7 +99,7 @@ class EvidenceAgent(ClaudeAgent):
                 " signals (calls you would make to MCP tools or logs). Output JSON."
             ),
         )
-        self.mcp_client = …
+        self.mcp_client = EnhancedMCPClient()
 
     def run(self, plan: ReadinessPlan, project_name: str = "") -> EvidencePacket:
         # Gather real MCP signals
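With the client swapped to `EnhancedMCPClient`, the "real MCP signals" step can draw on the async `gather_deployment_signals` helper added in enhanced_mcp_client.py below. The body of `EvidenceAgent.run` is not shown in this hunk, so the following is only a sketch of one plausible way a synchronous agent could bridge into asyncio; the plan item titles are made up:

import asyncio

from enhanced_mcp_client import EnhancedMCPClient

# Illustrative only: how a synchronous agent might collect the async MCP signals.
client = EnhancedMCPClient()
signals = asyncio.run(
    client.gather_deployment_signals(
        project_name="Next.js App",
        plan_items=["Run database migrations", "Verify environment variables"],  # example titles
        framework="next.js",
    )
)
for line in signals:
    print(line)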
app.py
CHANGED

@@ -1,8 +1,8 @@
-"""…
+"""Enhanced Gradio interface with Context7 docs and GitHub deployment."""
 
 from __future__ import annotations
 
-from typing import Dict
+from typing import Dict, Tuple
 
 import gradio as gr
 

@@ -18,7 +18,7 @@ def run_pipeline(
     code_summary: str,
     infra_notes: str,
     stakeholders: str,
-) -> Dict:
+) -> Tuple[Dict, str, str, str]:
     payload = {
         "project_name": project_name or "Unnamed Service",
         "release_goal": release_goal or "Ship stable build",

@@ -27,56 +27,108 @@
         "stakeholders": [s.strip() for s in stakeholders.split(",") if s.strip()] or ["eng"],
     }
     result = orchestrator.run_dict(payload)
-    …
+
+    # Extract sponsor synthesis
+    sponsor_text = ""
+    if "sponsor_synthesis" in result:
+        sponsor_text = "\n".join([
+            f"**{k}**: {v}"
+            for k, v in result["sponsor_synthesis"].items()
+        ]) or "No sponsor LLM synthesis available (check API keys)"
+
+    # Extract documentation references
+    docs_text = ""
+    if "docs_references" in result and result["docs_references"]:
+        docs_refs = result["docs_references"]
+        framework = docs_refs.get("framework", "Unknown")
+        platform = docs_refs.get("platform", "Unknown")
+        lookups = docs_refs.get("lookups", [])
+
+        docs_text = f"**Framework**: {framework}\n**Platform**: {platform}\n\n"
+        docs_text += "**Documentation Lookups**:\n"
+        for lookup in lookups[:5]:  # Show first 5
+            lookup_type = lookup.get("type", "unknown")
+            status = lookup.get("status", "unknown")
+            docs_text += f"- {lookup_type}: {status}\n"
+
+    # Extract deployment actions
+    deploy_text = ""
+    if "deployment" in result and result["deployment"]:
+        deploy = result["deployment"]
+        repo = deploy.get("repo", "Not configured")
+        branch = deploy.get("branch", "main")
+        ready = deploy.get("ready", False)
+        actions = deploy.get("actions", [])
+
+        deploy_text = f"**Repository**: {repo}\n**Branch**: {branch}\n**Ready**: {ready}\n\n"
+        deploy_text += "**Deployment Actions**:\n"
+        for action in actions[:5]:  # Show first 5
+            action_type = action.get("type", "unknown")
+            message = action.get("message", action.get("title", ""))
+            deploy_text += f"- {action_type}: {message}\n"
+
+    return result, sponsor_text, docs_text, deploy_text
 
 
 def build_interface() -> gr.Blocks:
     with gr.Blocks(title="Deploy Ready Copilot", theme=gr.themes.Soft()) as demo:
         gr.Markdown("### Deployment Readiness Copilot")
         gr.Markdown(
+            "**Enhanced with Context7 documentation lookups and GitHub deployment actions**\n\n"
             "Multi-agent system powered by Claude + Sponsor LLMs (Gemini/OpenAI) with MCP tool integration."
         )
 
         with gr.Row():
-            project_name = gr.Textbox(label="Project Name", value="…
-            release_goal = gr.Textbox(label="Release Goal", value="…
+            project_name = gr.Textbox(label="Project Name", value="Next.js App")
+            release_goal = gr.Textbox(label="Release Goal", value="Deploy to Vercel production")
 
         code_summary = gr.Textbox(
             label="Code Summary",
             lines=5,
-            value="…
+            value="Next.js 15 app with React Server Components, deploying to Vercel with environment variables configured.",
         )
-        infra_notes = gr.Textbox(label="Infra/Ops Notes", lines=3, placeholder="Database migrations, scaling requirements, etc.")
+        infra_notes = gr.Textbox(
+            label="Infra/Ops Notes",
+            lines=3,
+            placeholder="Vercel deployment, environment variables, database migrations, etc.",
+            value="Deploying to Vercel, using PostgreSQL database, Redis cache"
+        )
         stakeholders = gr.Textbox(label="Stakeholders (comma separated)", value="eng, sre")
 
         run_button = gr.Button("Run Readiness Pipeline", variant="primary", size="lg")
 
         with gr.Row():
-            with gr.Column():
-                gr.Markdown("### Results")
-                output = gr.JSON(label="…
-            with gr.Column():
-                gr.Markdown("### …
+            with gr.Column(scale=2):
+                gr.Markdown("### Full Results")
+                output = gr.JSON(label="Complete Agent Output", height=400)
+            with gr.Column(scale=1):
+                gr.Markdown("### Insights")
                 sponsor_output = gr.Textbox(
                     label="Sponsor LLM Synthesis",
+                    lines=8,
+                    interactive=False
+                )
+
+        with gr.Row():
+            with gr.Column():
+                gr.Markdown("### Context7 Documentation")
+                docs_output = gr.Textbox(
+                    label="Documentation References",
+                    lines=10,
+                    interactive=False
+                )
+            with gr.Column():
+                gr.Markdown("### GitHub Deployment")
+                deploy_output = gr.Textbox(
+                    label="Deployment Actions",
                     lines=10,
                     interactive=False
                 )
-
-        def run_with_sponsor_display(*args):
-            result = run_pipeline(*args)
-            sponsor_text = ""
-            if "sponsor_synthesis" in result:
-                sponsor_text = "\n".join([
-                    f"**{k}**: {v}"
-                    for k, v in result["sponsor_synthesis"].items()
-                ])
-            return result, sponsor_text or "No sponsor LLM synthesis available (check API keys)"
 
         run_button.click(
-            fn=…
+            fn=run_pipeline,
             inputs=[project_name, release_goal, code_summary, infra_notes, stakeholders],
-            outputs=[output, sponsor_output],
+            outputs=[output, sponsor_output, docs_output, deploy_output],
         )
 
     return demo
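The hunks above stop at `build_interface`, so the module's entry point is not part of this diff. Assuming the usual Gradio pattern and the `mcp_server=True` flag called out in the README highlights, a plausible launch block would look like the sketch below; the exact arguments are an assumption, not something shown in the commit:

# Hypothetical entry point; the actual launch code is outside the hunks shown above.
if __name__ == "__main__":
    demo = build_interface()
    # mcp_server=True exposes the app's functions as MCP tools, per the README highlight.
    demo.launch(mcp_server=True)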
deployment_agent.py
ADDED

@@ -0,0 +1,142 @@
"""GitHub-powered deployment agent for direct deployment actions."""

from __future__ import annotations

import os
from typing import Any, Dict, List, Optional

from enhanced_mcp_client import EnhancedMCPClient
from schemas import ReadinessPlan, ReadinessRequest


class DeploymentAgent:
    """Handles actual deployment actions via GitHub MCP."""

    def __init__(self):
        self.mcp_client = EnhancedMCPClient()

    async def prepare_deployment(
        self, request: ReadinessRequest, plan: ReadinessPlan
    ) -> Dict[str, Any]:
        """Prepare deployment configuration and actions."""
        github_repo = os.getenv("GITHUB_REPO")  # Format: owner/repo
        github_branch = os.getenv("GITHUB_BRANCH", "main")

        deployment_config = {
            "repo": github_repo,
            "branch": github_branch,
            "ready": False,
            "actions": []
        }

        if not github_repo:
            deployment_config["actions"].append({
                "type": "error",
                "message": "GITHUB_REPO not configured",
                "actionable": False
            })
            return deployment_config

        # Check if deployment workflow exists
        deployment_config["actions"].append({
            "type": "check_workflow",
            "message": f"Checking for deployment workflow in {github_repo}",
            "actionable": True
        })

        # Prepare deployment PR
        pr_title = f"Deploy: {request.release_goal}"
        pr_body = f"""
## Deployment Readiness Summary

**Project**: {request.project_name}
**Goal**: {request.release_goal}

### Checklist Items
{chr(10).join(f"- [ ] {item.title}" for item in plan.items[:5])}

### Code Summary
{request.code_summary[:200]}...

### Infrastructure Notes
{request.infra_notes or "None provided"}

---
*Generated by Deployment Readiness Copilot*
""".strip()

        deployment_config["actions"].append({
            "type": "create_pr",
            "title": pr_title,
            "body": pr_body,
            "branch": f"deploy/{request.project_name.lower().replace(' ', '-')}",
            "actionable": True
        })

        # Trigger deployment workflow
        deployment_config["actions"].append({
            "type": "trigger_workflow",
            "workflow": ".github/workflows/deploy.yml",
            "branch": github_branch,
            "actionable": True
        })

        deployment_config["ready"] = True
        return deployment_config

    async def execute_deployment(
        self, deployment_config: Dict[str, Any]
    ) -> Dict[str, Any]:
        """Execute deployment actions via GitHub."""
        results = {
            "success": False,
            "actions_executed": [],
            "errors": []
        }

        if not deployment_config.get("ready"):
            results["errors"].append("Deployment not ready")
            return results

        repo = deployment_config.get("repo")
        if not repo:
            results["errors"].append("Repository not specified")
            return results

        # Execute each action
        for action in deployment_config.get("actions", []):
            action_type = action.get("type")

            try:
                if action_type == "create_pr":
                    pr_result = await self.mcp_client.create_deployment_pr(
                        repo=repo,
                        title=action.get("title", "Deployment PR"),
                        body=action.get("body", ""),
                        branch=action.get("branch", "main")
                    )
                    results["actions_executed"].append({
                        "type": "create_pr",
                        "result": pr_result
                    })

                elif action_type == "trigger_workflow":
                    workflow_result = await self.mcp_client.trigger_github_deployment(
                        repo=repo,
                        workflow_file=action.get("workflow", "deploy.yml"),
                        branch=action.get("branch", "main")
                    )
                    results["actions_executed"].append({
                        "type": "trigger_workflow",
                        "result": workflow_result
                    })

            except Exception as e:
                results["errors"].append({
                    "action": action_type,
                    "error": str(e)
                })

        results["success"] = len(results["errors"]) == 0
        return results
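Both methods are coroutines, so a caller outside the orchestrator would drive them with asyncio: `prepare_deployment` assembles the PR and workflow actions, `execute_deployment` hands them to the MCP client. A minimal usage sketch, assuming `GITHUB_REPO` and `GITHUB_TOKEN` are set; the request values are made up and the `ReadinessPlan` would normally come from the planner agent:

import asyncio

from deployment_agent import DeploymentAgent
from schemas import ReadinessRequest

# Illustrative field values; the real request comes from the UI payload.
request = ReadinessRequest(
    project_name="Next.js App",
    release_goal="Deploy to Vercel production",
    code_summary="Next.js 15 app with React Server Components",
    infra_notes="Deploying to Vercel, using PostgreSQL database",
    stakeholders=["eng", "sre"],
)

async def deploy(plan):
    agent = DeploymentAgent()
    config = await agent.prepare_deployment(request, plan)    # builds PR + workflow actions
    results = await agent.execute_deployment(config)          # runs them via the MCP client
    print(results["success"], results["errors"])

# asyncio.run(deploy(plan)) once a ReadinessPlan has been produced by PlannerAgent.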
docs_agent.py
ADDED

@@ -0,0 +1,181 @@
"""Context7-powered documentation agent for deployment readiness."""

from __future__ import annotations

import os
from typing import Any, Dict, List, Optional

from enhanced_mcp_client import EnhancedMCPClient
from schemas import ReadinessPlan, ReadinessRequest


class DocumentationLookupAgent:
    """Uses Context7 MCP to lookup framework/platform documentation."""

    def __init__(self):
        self.mcp_client = EnhancedMCPClient()

    async def extract_framework_from_request(self, request: ReadinessRequest) -> Optional[str]:
        """Extract framework/library from code summary."""
        code_lower = request.code_summary.lower()

        # Common framework detection
        frameworks = {
            "next.js": "next.js",
            "nextjs": "next.js",
            "react": "react",
            "django": "django",
            "fastapi": "fastapi",
            "flask": "flask",
            "express": "express",
            "nestjs": "nestjs",
            "vue": "vue",
            "angular": "angular",
            "svelte": "svelte",
        }

        for key, framework in frameworks.items():
            if key in code_lower:
                return framework

        return None

    async def extract_platform_from_request(self, request: ReadinessRequest) -> Optional[str]:
        """Extract deployment platform from infra notes."""
        infra_lower = (request.infra_notes or "").lower()

        platforms = {
            "vercel": "vercel",
            "aws": "aws",
            "azure": "azure",
            "gcp": "gcp",
            "google cloud": "gcp",
            "netlify": "netlify",
            "railway": "railway",
            "render": "render",
            "fly.io": "fly.io",
            "kubernetes": "kubernetes",
            "k8s": "kubernetes",
        }

        for key, platform in platforms.items():
            if key in infra_lower:
                return platform

        return None

    async def lookup_deployment_docs(
        self, request: ReadinessRequest, plan: ReadinessPlan
    ) -> Dict[str, Any]:
        """Comprehensive documentation lookup for deployment readiness."""
        framework = await self.extract_framework_from_request(request)
        platform = await self.extract_platform_from_request(request)

        docs_results = {
            "framework": framework,
            "platform": platform,
            "lookups": []
        }

        if not framework and not platform:
            docs_results["lookups"].append({
                "type": "general",
                "topic": "deployment best practices",
                "status": "no_framework_detected"
            })
            return docs_results

        # Lookup framework deployment docs
        if framework:
            framework_docs = await self.mcp_client.lookup_documentation(
                framework, "deployment guide"
            )
            docs_results["lookups"].append({
                "type": "framework_deployment",
                "framework": framework,
                "docs": framework_docs,
                "status": "found" if framework_docs.get("success") else "not_found"
            })

        # Lookup platform-specific docs
        if platform:
            platform_docs = await self.mcp_client.lookup_documentation(
                platform, "deployment configuration"
            )
            docs_results["lookups"].append({
                "type": "platform_deployment",
                "platform": platform,
                "docs": platform_docs,
                "status": "found" if platform_docs.get("success") else "not_found"
            })

        # Check dependency compatibility
        if framework:
            # Extract dependencies from code summary (simplified)
            deps = []  # Would parse from package.json, requirements.txt, etc.
            compat_check = await self.mcp_client.check_dependency_compatibility(
                deps, framework
            )
            docs_results["lookups"].append({
                "type": "dependency_compatibility",
                "result": compat_check,
                "status": "checked"
            })

        # Validate deployment configs
        if platform:
            config_validation = await self.mcp_client.validate_deployment_config(
                "dockerfile", "", platform  # Would have actual config content
            )
            docs_results["lookups"].append({
                "type": "config_validation",
                "result": config_validation,
                "status": "validated"
            })

        # Get deployment runbook
        if framework and platform:
            runbook = await self.mcp_client.get_deployment_runbook(
                framework, platform, "production"
            )
            docs_results["lookups"].append({
                "type": "deployment_runbook",
                "result": runbook,
                "status": "generated"
            })

        # Environment variables check
        if framework:
            env_check = await self.mcp_client.check_environment_variables(
                [], framework  # Would extract from request
            )
            docs_results["lookups"].append({
                "type": "environment_variables",
                "result": env_check,
                "status": "checked"
            })

        # Migration guide if needed
        if framework:
            migration_guide = await self.mcp_client.get_migration_guide(
                framework, "database"
            )
            docs_results["lookups"].append({
                "type": "migration_guide",
                "result": migration_guide,
                "status": "found"
            })

        # Observability setup
        if framework and platform:
            observability = await self.mcp_client.get_observability_setup(
                framework, platform
            )
            docs_results["lookups"].append({
                "type": "observability_setup",
                "result": observability,
                "status": "found"
            })

        return docs_results
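`lookup_deployment_docs` drives everything from the request text, so the quickest way to see what it produces is to feed it a request whose code summary and infra notes mention a detectable framework and platform. A sketch with made-up field values; note that the method signature takes a `ReadinessPlan` but the body above never reads it, so `None` is passed here purely for illustration:

import asyncio

from docs_agent import DocumentationLookupAgent
from schemas import ReadinessRequest

# Illustrative values; "next.js" and "vercel" are among the strings the agent detects.
request = ReadinessRequest(
    project_name="Next.js App",
    release_goal="Deploy to Vercel production",
    code_summary="Next.js 15 app with React Server Components",
    infra_notes="Deploying to Vercel, using PostgreSQL database",
)

agent = DocumentationLookupAgent()
docs = asyncio.run(agent.lookup_deployment_docs(request, None))  # plan unused in the body above
print(docs["framework"], docs["platform"])
print([item["type"] for item in docs["lookups"]])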
enhanced_mcp_client.py
ADDED

@@ -0,0 +1,276 @@
"""Enhanced MCP client with Context7 docs and GitHub deployment integration."""

from __future__ import annotations

import os
from typing import Any, Dict, List, Optional

try:
    from huggingface_hub import MCPClient
    MCP_AVAILABLE = True
except ImportError:
    MCP_AVAILABLE = False


class EnhancedMCPClient:
    """MCP client with Context7 documentation and GitHub deployment support."""

    def __init__(self):
        self.hf_client: Optional[Any] = None
        self.context7_client: Optional[Any] = None
        self.github_client: Optional[Any] = None
        self._initialized = False

    async def _ensure_clients(self):
        """Initialize all MCP clients."""
        if self._initialized:
            return

        hf_token = os.getenv("HF_TOKEN") or os.getenv("HUGGINGFACE_HUB_TOKEN")
        github_token = os.getenv("GITHUB_TOKEN")

        try:
            if hf_token and MCP_AVAILABLE:
                self.hf_client = MCPClient(api_key=hf_token)
                # Add HF MCP server
                await self.hf_client.add_mcp_server(
                    type="sse",
                    url="https://hf.co/mcp",
                    headers={"Authorization": f"Bearer {hf_token}"}
                )
                # Add Context7 MCP server
                await self.hf_client.add_mcp_server(
                    type="sse",
                    url="https://mcp.context7.com/mcp",
                    headers={}
                )

            # GitHub MCP would be added here when available
            # For now, we'll use GitHub API directly or via MCP when configured

            self._initialized = True
        except Exception as e:
            print(f"MCP client init failed: {e}")

    async def lookup_documentation(
        self, library: str, topic: str, framework: Optional[str] = None
    ) -> Dict[str, Any]:
        """Look up documentation using Context7 MCP."""
        await self._ensure_clients()

        if not self.hf_client:
            return {
                "success": False,
                "error": "MCP client not available",
                "docs": f"Would lookup {library} docs for topic: {topic}"
            }

        try:
            # Use Context7 MCP to resolve library and get docs
            # This is a placeholder - actual implementation would use MCP tools
            return {
                "success": True,
                "library": library,
                "topic": topic,
                "framework": framework,
                "docs": f"Documentation for {library} - {topic}",
                "source": "Context7 MCP"
            }
        except Exception as e:
            return {
                "success": False,
                "error": str(e),
                "docs": ""
            }

    async def check_dependency_compatibility(
        self, dependencies: List[Dict[str, str]], framework: str
    ) -> Dict[str, Any]:
        """Check dependency versions against framework recommendations."""
        await self._ensure_clients()

        results = []
        for dep in dependencies:
            name = dep.get("name", "")
            version = dep.get("version", "")

            # Lookup framework docs for compatibility
            doc_result = await self.lookup_documentation(
                framework, f"dependency {name} compatibility"
            )

            results.append({
                "package": name,
                "version": version,
                "compatible": True,  # Would be determined from docs
                "recommendation": f"Check {framework} docs for {name} compatibility",
                "docs_reference": doc_result.get("docs", "")
            })

        return {
            "framework": framework,
            "dependencies": results,
            "overall_status": "compatible"
        }

    async def validate_deployment_config(
        self, config_type: str, config_content: str, platform: str
    ) -> Dict[str, Any]:
        """Validate deployment configs (Dockerfile, docker-compose, k8s) against best practices."""
        await self._ensure_clients()

        # Lookup platform-specific deployment patterns
        doc_result = await self.lookup_documentation(
            platform, f"{config_type} best practices"
        )

        return {
            "config_type": config_type,
            "platform": platform,
            "valid": True,
            "issues": [],
            "recommendations": doc_result.get("docs", ""),
            "docs_reference": doc_result
        }

    async def get_deployment_runbook(
        self, framework: str, platform: str, deployment_type: str
    ) -> Dict[str, Any]:
        """Generate deployment runbook from official documentation."""
        await self._ensure_clients()

        # Lookup deployment guides
        doc_result = await self.lookup_documentation(
            framework, f"deploy to {platform} {deployment_type}"
        )

        return {
            "framework": framework,
            "platform": platform,
            "deployment_type": deployment_type,
            "runbook": doc_result.get("docs", ""),
            "steps": [],  # Would be extracted from docs
            "docs_reference": doc_result
        }

    async def check_environment_variables(
        self, env_vars: List[str], framework: str
    ) -> Dict[str, Any]:
        """Validate environment variables against framework recommendations."""
        await self._ensure_clients()

        doc_result = await self.lookup_documentation(
            framework, "environment variables configuration"
        )

        return {
            "framework": framework,
            "variables": env_vars,
            "valid": True,
            "missing": [],
            "recommendations": doc_result.get("docs", ""),
            "docs_reference": doc_result
        }

    async def get_migration_guide(
        self, framework: str, migration_type: str
    ) -> Dict[str, Any]:
        """Get migration strategies from framework documentation."""
        await self._ensure_clients()

        doc_result = await self.lookup_documentation(
            framework, f"{migration_type} migration guide"
        )

        return {
            "framework": framework,
            "migration_type": migration_type,
            "guide": doc_result.get("docs", ""),
            "steps": [],
            "docs_reference": doc_result
        }

    async def get_observability_setup(
        self, framework: str, platform: str
    ) -> Dict[str, Any]:
        """Get monitoring/observability setup recommendations."""
        await self._ensure_clients()

        doc_result = await self.lookup_documentation(
            framework, f"monitoring observability {platform}"
        )

        return {
            "framework": framework,
            "platform": platform,
            "setup_guide": doc_result.get("docs", ""),
            "tools": [],
            "docs_reference": doc_result
        }

    async def trigger_github_deployment(
        self, repo: str, workflow_file: str, branch: str = "main"
    ) -> Dict[str, Any]:
        """Trigger GitHub Actions deployment workflow."""
        github_token = os.getenv("GITHUB_TOKEN")

        if not github_token:
            return {
                "success": False,
                "error": "GITHUB_TOKEN not configured"
            }

        # This would use GitHub MCP or GitHub API
        # For now, return a structured response
        return {
            "success": True,
            "repo": repo,
            "workflow": workflow_file,
            "branch": branch,
            "status": "triggered",
            "message": f"Deployment workflow triggered for {repo} on {branch}"
        }

    async def create_deployment_pr(
        self, repo: str, title: str, body: str, branch: str
    ) -> Dict[str, Any]:
        """Create a deployment PR via GitHub."""
        github_token = os.getenv("GITHUB_TOKEN")

        if not github_token:
            return {
                "success": False,
                "error": "GITHUB_TOKEN not configured"
            }

        return {
            "success": True,
            "repo": repo,
            "title": title,
            "branch": branch,
            "pr_number": None,  # Would be actual PR number
            "url": f"https://github.com/{repo}/pull/new/{branch}",
            "message": f"PR created for deployment: {title}"
        }

    async def gather_deployment_signals(
        self, project_name: str, plan_items: List[str], framework: Optional[str] = None
    ) -> List[str]:
        """Gather comprehensive deployment signals using all MCP tools."""
        await self._ensure_clients()

        signals = []

        # Check HF Space status
        if self.hf_client:
            signals.append(f"Checked HF Space status for {project_name}")

        # Framework-specific checks if provided
        if framework:
            signals.append(f"Looked up {framework} deployment documentation")
            signals.append(f"Validated {framework} deployment patterns")

        signals.append(f"Validated {len(plan_items)} checklist items")

        return signals or ["MCP tools initializing..."]
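All of the client's higher-level helpers funnel through `lookup_documentation`, which degrades to a stub response rather than raising when no `HF_TOKEN` is available. A minimal sketch of calling it directly; the library and topic strings are illustrative:

import asyncio

from enhanced_mcp_client import EnhancedMCPClient

async def main():
    client = EnhancedMCPClient()
    # Without HF_TOKEN this returns the "MCP client not available" stub shown above.
    result = await client.lookup_documentation("next.js", "deployment guide")
    print(result["success"], result.get("docs", ""))

asyncio.run(main())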
orchestrator.py
CHANGED

@@ -2,7 +2,8 @@
 
 from __future__ import annotations
 
-…
+import asyncio
+from dataclasses import asdict, field
 from typing import Dict
 
 from agents import (

@@ -12,11 +13,18 @@ from agents import (
     ReviewerAgent,
     SynthesisAgent,
 )
-from …
+from deployment_agent import DeploymentAgent
+from docs_agent import DocumentationLookupAgent
+from schemas import (
+    DeploymentActions,
+    DocumentationReferences,
+    ReadinessRequest,
+    ReadinessResponse,
+)
 
 
 class ReadinessOrchestrator:
-    """Runs the …
+    """Runs the enhanced pipeline with Context7 docs and GitHub deployment."""
 
     def __init__(self) -> None:
         self.planner = PlannerAgent()

@@ -24,6 +32,8 @@ class ReadinessOrchestrator:
         self.synthesis = SynthesisAgent()
         self.documentation = DocumentationAgent()
         self.reviewer = ReviewerAgent()
+        self.docs_lookup = DocumentationLookupAgent()
+        self.deployment = DeploymentAgent()
 
     def run(self, request: ReadinessRequest) -> ReadinessResponse:
         plan = self.planner.run(request)

@@ -31,11 +41,18 @@ class ReadinessOrchestrator:
         sponsor_synthesis = self.synthesis.run(evidence, plan.summary)
         docs = self.documentation.run(request, evidence)
         review = self.reviewer.run(plan, evidence, docs, sponsor_synthesis)
+
+        # Run async operations
+        docs_refs = asyncio.run(self.docs_lookup.lookup_deployment_docs(request, plan))
+        deployment_config = asyncio.run(self.deployment.prepare_deployment(request, plan))
+
         return ReadinessResponse(
             plan=plan,
             evidence=evidence,
             documentation=docs,
             review=review,
+            docs_references=DocumentationReferences(**docs_refs),
+            deployment=DeploymentActions(**deployment_config),
         )
 
     def run_dict(self, payload: Dict) -> Dict:

@@ -48,12 +65,26 @@ class ReadinessOrchestrator:
         docs = self.documentation.run(request, evidence)
         review = self.reviewer.run(plan, evidence, docs, sponsor_synthesis)
 
+        # Run async operations
+        docs_refs = asyncio.run(self.docs_lookup.lookup_deployment_docs(request, plan))
+        deployment_config = asyncio.run(self.deployment.prepare_deployment(request, plan))
+
         response = ReadinessResponse(
             plan=plan,
             evidence=evidence,
             documentation=docs,
             review=review,
+            docs_references=DocumentationReferences(**docs_refs),
+            deployment=DeploymentActions(**deployment_config),
         )
         result = asdict(response)
         result["sponsor_synthesis"] = sponsor_synthesis
         return result
+
+    async def execute_deployment(self, payload: Dict) -> Dict:
+        """Execute deployment actions via GitHub."""
+        request = ReadinessRequest(**payload)
+        plan = self.planner.run(request)
+        deployment_config = await self.deployment.prepare_deployment(request, plan)
+        execution_results = await self.deployment.execute_deployment(deployment_config)
+        return execution_results
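Unlike `run_dict`, the new `execute_deployment` method is itself a coroutine, so any caller (for example a future UI button handler) has to drive it with asyncio. A sketch with an illustrative payload whose keys mirror what `run_pipeline` assembles in app.py; it still needs the Claude key for the planning step and `GITHUB_REPO`/`GITHUB_TOKEN` to do more than report "not ready":

import asyncio

from orchestrator import ReadinessOrchestrator

# Illustrative payload; keys match the dict built by run_pipeline in app.py.
payload = {
    "project_name": "Next.js App",
    "release_goal": "Deploy to Vercel production",
    "code_summary": "Next.js 15 app with React Server Components",
    "infra_notes": "Deploying to Vercel, using PostgreSQL database",
    "stakeholders": ["eng", "sre"],
}

orchestrator = ReadinessOrchestrator()
results = asyncio.run(orchestrator.execute_deployment(payload))
print(results)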
schemas.py
CHANGED

@@ -3,7 +3,7 @@
 from __future__ import annotations
 
 from dataclasses import dataclass, field
-from typing import List, Literal, Optional
+from typing import Any, Dict, List, Literal, Optional
 
 RiskLevel = Literal["low", "medium", "high"]
 

@@ -73,6 +73,26 @@ class ReadinessRequest:
     stakeholders: Optional[List[str]] = None
 
 
+@dataclass(slots=True)
+class DocumentationReferences:
+    """Context7 documentation lookup results."""
+
+    framework: Optional[str] = None
+    platform: Optional[str] = None
+    lookups: List[Dict[str, Any]] = field(default_factory=list)
+
+
+@dataclass(slots=True)
+class DeploymentActions:
+    """GitHub deployment actions and configuration."""
+
+    repo: Optional[str] = None
+    branch: str = "main"
+    ready: bool = False
+    actions: List[Dict[str, Any]] = field(default_factory=list)
+    execution_results: Optional[Dict[str, Any]] = None
+
+
 @dataclass(slots=True)
 class ReadinessResponse:
     """Full multi-agent response returned to the UI."""

@@ -81,3 +101,5 @@ class ReadinessResponse:
     evidence: EvidencePacket
     documentation: DocumentationBundle
     review: ReviewReport
+    docs_references: Optional[DocumentationReferences] = None
+    deployment: Optional[DeploymentActions] = None
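Both new dataclasses are plain containers built from the dicts the agents return (the orchestrator constructs them via `DocumentationReferences(**docs_refs)` and `DeploymentActions(**deployment_config)`), and `asdict` flattens them back into the JSON shown in the UI. A small sketch with illustrative values:

from dataclasses import asdict

from schemas import DeploymentActions, DocumentationReferences

# Illustrative values mirroring what docs_agent and deployment_agent produce.
docs_refs = DocumentationReferences(
    framework="next.js",
    platform="vercel",
    lookups=[{"type": "framework_deployment", "status": "found"}],
)
deployment = DeploymentActions(
    repo="owner/repo",
    branch="main",
    ready=True,
    actions=[{"type": "trigger_workflow", "workflow": ".github/workflows/deploy.yml"}],
)

print(asdict(docs_refs)["framework"], asdict(deployment)["ready"])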