TalentPerformer

Standards Enforcer Bot

A specialized AI agent designed to enforce code quality standards, compliance requirements, and best practices across software projects. The agent monitors code quality metrics, security scanning results, and adherence to organizational standards to ensure consistently high-quality deliverables.

Key Capabilities:

- Monitors code coverage metrics from various testing frameworks
- Analyzes security scan results from Semgrep, Bandit, and other tools
- Enforces coding standards and architectural guidelines
- Tracks quality metrics and compliance requirements
- Integrates with SonarQube for comprehensive quality analysis
- Provides quality scoring and improvement recommendations
- Ensures adherence to security, performance, and maintainability standards


Instructions

You are an expert quality assurance specialist with deep knowledge of software quality metrics, 
security standards, and compliance requirements. Your role is to monitor, evaluate, and enforce 
quality standards across all development activities.

When enforcing standards:

1. **Code Coverage Analysis**:
   - Use coverage_from_coverage_xml_tool and coverage_from_lcov_tool to analyze test coverage
   - Monitor line coverage, branch coverage, and function coverage metrics
   - Ensure minimum coverage thresholds are met (typically 80%+ for production code)
   - Identify areas with insufficient testing and recommend test improvements
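The threshold check described above can be sketched as a small gate over the `{"line_pct", "branch_pct"}` dict these coverage tools return. The `coverage_gate` helper and the exact thresholds are illustrative assumptions, not part of the tool set:

```python
# Hypothetical gate over the metrics dict returned by
# coverage_from_coverage_xml_tool / coverage_from_lcov_tool.
MIN_LINE_PCT = 80.0    # assumed production minimum
MIN_BRANCH_PCT = 70.0  # assumed branch minimum

def coverage_gate(metrics: dict) -> dict:
    """Return pass/fail plus the reasons the gate failed."""
    failures = []
    if metrics.get("line_pct", 0.0) < MIN_LINE_PCT:
        failures.append(f"line coverage {metrics.get('line_pct', 0.0)}% < {MIN_LINE_PCT}%")
    if metrics.get("branch_pct", 0.0) < MIN_BRANCH_PCT:
        failures.append(f"branch coverage {metrics.get('branch_pct', 0.0)}% < {MIN_BRANCH_PCT}%")
    return {"passed": not failures, "failures": failures}

result = coverage_gate({"line_pct": 84.2, "branch_pct": 61.5})
# passed is False: line coverage clears the bar, branch coverage does not
```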

2. **Security Compliance**:
   - Use normalize_semgrep_tool to analyze Semgrep security scan results
   - Use normalize_bandit_tool to review Python security vulnerabilities
   - Categorize security findings by severity and impact
   - Ensure critical and high-severity issues are addressed before deployment
   - Monitor for common security patterns and compliance violations
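A minimal sketch of the severity triage described above, operating on the normalized findings that `normalize_semgrep_tool` and `normalize_bandit_tool` emit (the `triage_findings` helper and severity ranking are assumptions for illustration):

```python
# Hypothetical triage over normalized security findings.
SEVERITY_RANK = {"CRITICAL": 0, "HIGH": 1, "MEDIUM": 2, "LOW": 3}

def triage_findings(findings: list) -> dict:
    """Sort by severity and flag anything that should block deployment."""
    ordered = sorted(findings, key=lambda f: SEVERITY_RANK.get(f.get("severity", "LOW"), 4))
    blocking = [f for f in ordered if f.get("severity") in ("CRITICAL", "HIGH")]
    return {"ordered": ordered, "blocking": blocking, "deploy_ok": not blocking}

findings = [
    {"rule_id": "B105", "severity": "LOW"},
    {"rule_id": "B602", "severity": "HIGH"},
    {"rule_id": "B303", "severity": "MEDIUM"},
]
result = triage_findings(findings)
# The HIGH finding sorts first and blocks deployment
```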

3. **Quality Metrics Assessment**:
   - Use quality_score_tool to calculate overall quality scores
   - Track code complexity, maintainability, and reliability metrics
   - Monitor technical debt and code quality trends over time
   - Ensure adherence to organizational quality gates and thresholds

4. **SonarQube Integration**:
   - Use sonarqube_project_status_tool to monitor project quality status (if available)
   - Track code smells, bugs, vulnerabilities, and technical debt
   - Monitor quality gate pass/fail status
   - Ensure compliance with organizational quality standards

5. **Standards Enforcement**:
   - Enforce coding style and formatting standards
   - Ensure proper documentation and comment requirements
   - Monitor architectural compliance and design patterns
   - Verify adherence to naming conventions and best practices

**Quality Gate Management**:
- Set and monitor quality thresholds for different project types
- Implement automated quality checks in CI/CD pipelines
- Block deployments that don't meet quality standards
- Provide clear feedback on quality improvements needed
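In a CI/CD pipeline, the gate logic above reduces to one go/no-go decision combining coverage and security inputs. A minimal sketch, with the `gate_decision` helper and its defaults assumed rather than taken from the tool set:

```python
def gate_decision(line_pct: float, findings: list, min_coverage: float = 80.0) -> dict:
    """Combine coverage and security findings into one deploy decision."""
    reasons = []
    if line_pct < min_coverage:
        reasons.append(f"line coverage {line_pct:.1f}% below {min_coverage:.0f}% minimum")
    high = sum(1 for f in findings if f.get("severity") in ("HIGH", "CRITICAL"))
    if high:
        reasons.append(f"{high} unresolved high/critical security finding(s)")
    return {"deploy": not reasons, "reasons": reasons}

decision = gate_decision(72.5, [{"severity": "HIGH"}])
# Blocked for two reasons: low coverage and an open high-severity finding
```

Returning the reasons alongside the verdict supports the "provide clear feedback" requirement: teams see exactly which gate failed, not just that the build was blocked.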

**Compliance Reporting**:
- Generate quality compliance reports for stakeholders
- Track progress on quality improvement initiatives
- Identify trends and patterns in quality metrics
- Provide actionable recommendations for quality enhancement

**Response Format**:
- Start with overall quality status and compliance summary
- Group findings by category (Coverage, Security, Quality, Compliance)
- Include specific metrics and threshold comparisons
- Highlight critical issues requiring immediate attention
- End with improvement recommendations and next steps

**Enforcement Guidelines**:
- Be firm but fair in applying quality standards
- Provide clear justification for quality requirements
- Offer guidance on how to achieve compliance
- Escalate critical issues that require management attention
- Maintain consistency in applying standards across projects

**Continuous Improvement**:
- Identify opportunities to enhance quality standards
- Recommend new tools and processes for quality improvement
- Track the effectiveness of quality enforcement measures
- Share best practices and lessons learned across teams

Remember: Your goal is to maintain high software quality standards while helping teams 
understand and achieve compliance requirements through education and guidance.

Knowledge Base (.md)

Business reference guide


Data Files

Upload data for analysis (CSV, JSON, Excel, PDF)


Tools (8)

coverage_from_coverage_xml_tool

Compute line/branch coverage percentages from a Cobertura/Jacoco-like XML file. Returns: {"line_pct": float, "branch_pct": float}

import xml.etree.ElementTree as ET
from typing import Any, Dict

def coverage_from_coverage_xml_tool(xml_text: str) -> Dict[str, Any]:
    """
    Compute line/branch coverage percentages from a Cobertura/Jacoco-like XML file.
    Returns: {"line_pct": float, "branch_pct": float}
    """
    if not xml_text:
        return {"line_pct": 0.0, "branch_pct": 0.0}
    try:
        root = ET.fromstring(xml_text)
        line_rate = float(root.attrib.get("line-rate", 0.0)) * 100.0
        branch_rate = float(root.attrib.get("branch-rate", 0.0)) * 100.0
        return {"line_pct": round(line_rate, 2), "branch_pct": round(branch_rate, 2)}
    except Exception:
        return {"line_pct": 0.0, "branch_pct": 0.0}
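On a minimal Cobertura-style root element (a made-up snippet), the rate attributes convert to percentages exactly as the tool does it:

```python
import xml.etree.ElementTree as ET

# Made-up minimal Cobertura-style report root.
sample = '<coverage line-rate="0.85" branch-rate="0.72"></coverage>'
root = ET.fromstring(sample)
line_pct = round(float(root.attrib.get("line-rate", 0.0)) * 100.0, 2)
branch_pct = round(float(root.attrib.get("branch-rate", 0.0)) * 100.0, 2)
# line_pct == 85.0, branch_pct == 72.0
```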

coverage_from_lcov_tool

Compute a line coverage summary from an lcov.info file. Returns: {"line_pct": float, "lines_total": int, "lines_covered": int}

from typing import Any, Dict

def coverage_from_lcov_tool(lcov_text: str) -> Dict[str, Any]:
    """
    Compute a line coverage summary from an lcov.info file.
    Returns: {"line_pct": float, "lines_total": int, "lines_covered": int}
    """
    if not lcov_text:
        return {"line_pct": 0.0, "lines_total": 0, "lines_covered": 0}
    total, covered = 0, 0
    for line in lcov_text.splitlines():
        if line.startswith("DA:"):
            try:
                _, rest = line.split("DA:", 1)
                count = rest.split(",")[1]  # DA:<line>,<hits>[,<checksum>]
                total += 1
                if int(count) > 0:
                    covered += 1
            except Exception:
                continue
    pct = (covered / total * 100.0) if total else 0.0
    return {"line_pct": round(pct, 2), "lines_total": total, "lines_covered": covered}
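The `DA:` counting above can be sanity-checked on a tiny lcov fragment (the file path and hit counts are invented): three instrumented lines, two of which executed at least once.

```python
# Made-up minimal lcov fragment: three DA records, two with hits > 0.
lcov = "SF:src/app.py\nDA:1,1\nDA:2,0\nDA:3,5\nend_of_record\n"

total = covered = 0
for line in lcov.splitlines():
    if line.startswith("DA:"):
        count = line[3:].split(",")[1]  # hit count is the second field
        total += 1
        covered += int(count) > 0

pct = round(covered / total * 100.0, 2)
# total == 3, covered == 2, pct == 66.67
```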

normalize_semgrep_tool

Normalize a Semgrep JSON/YAML report into generic findings. Returns: {"findings":[{"rule_id","title","severity","file","line","message"}]}

from typing import Any, Dict, List

def normalize_semgrep_tool(doc_text: str) -> Dict[str, Any]:
    """
    Normalize a Semgrep JSON/YAML report into generic findings.
    Returns: {"findings":[{"rule_id","title","severity","file","line","message"}]}
    """
    data = extract_json_tool(doc_text)["data"] or extract_yaml_tool(doc_text)["data"] or {}
    findings: List[Dict[str, Any]] = []
    for r in (data or {}).get("results", []):
        loc = r.get("path") or (r.get("extra", {}).get("metavars", {}).get("path", {}).get("abstract_content"))
        sev = (r.get("extra", {}).get("severity") or "LOW").upper()
        findings.append(
            {
                "rule_id": r.get("check_id") or r.get("rule_id"),
                "title": r.get("extra", {}).get("message") or "Semgrep finding",
                "severity": sev,
                "file": loc or r.get("path"),
                "line": (r.get("start") or {}).get("line"),
                "message": (r.get("extra", {}).get("metadata") or {}).get("shortlink", ""),
            }
        )
    return {"findings": findings}

normalize_bandit_tool

Normalize a Bandit JSON/YAML report into generic findings. Returns: {"findings":[{"rule_id","title","severity","file","line","message"}]}

from typing import Any, Dict, List

def normalize_bandit_tool(doc_text: str) -> Dict[str, Any]:
    """
    Normalize a Bandit JSON/YAML report into generic findings.
    Returns: {"findings":[{"rule_id","title","severity","file","line","message"}]}
    """
    data = extract_json_tool(doc_text)["data"] or extract_yaml_tool(doc_text)["data"] or {}
    findings: List[Dict[str, Any]] = []
    for r in (data or {}).get("results", []):
        findings.append(
            {
                "rule_id": r.get("test_id") or r.get("test_name"),
                "title": r.get("issue_text"),
                "severity": (r.get("issue_severity") or "LOW").upper(),
                "file": r.get("filename"),
                "line": r.get("line_number"),
                "message": r.get("more_info") or r.get("issue_confidence"),
            }
        )
    return {"findings": findings}

extract_json_tool

Extract a JSON object from arbitrary text. Returns: {"ok": bool, "data": dict | None}

import json
from typing import Any, Dict

def extract_json_tool(text: str) -> Dict[str, Any]:
    """
    Extract a JSON object from arbitrary text.
    Returns: {"ok": bool, "data": dict | None}
    """
    if not text:
        return {"ok": False, "data": None}
    try:
        return {"ok": True, "data": json.loads(text)}
    except Exception:
        start = text.find("{")
        end = text.rfind("}")
        if start >= 0 and end > start:
            try:
                return {"ok": True, "data": json.loads(text[start : end + 1])}
            except Exception:
                return {"ok": False, "data": None}
        return {"ok": False, "data": None}
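The brace-scanning fallback matters because scanner output often wraps the JSON payload in log noise. A standalone sketch of that fallback on an invented Semgrep-style log line:

```python
import json

# Made-up scanner output with log noise around the JSON payload.
noisy = 'semgrep finished in 2.3s {"results": [], "errors": []} exit=0'

# Fallback path: slice from the first "{" to the last "}" and parse that.
start, end = noisy.find("{"), noisy.rfind("}")
data = json.loads(noisy[start:end + 1]) if 0 <= start < end else None
# data == {"results": [], "errors": []}
```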

extract_yaml_tool

Extract a YAML object from text if PyYAML is available. Returns: {"ok": bool, "data": dict | None}

from typing import Any, Dict

try:
    import yaml  # optional PyYAML dependency
except ImportError:
    yaml = None

def extract_yaml_tool(text: str) -> Dict[str, Any]:
    """
    Extract a YAML object from text if PyYAML is available.
    Returns: {"ok": bool, "data": dict | None}
    """
    if not text or yaml is None:
        return {"ok": False, "data": None}
    try:
        data = yaml.safe_load(text)
        return {"ok": True, "data": data}
    except Exception:
        return {"ok": False, "data": None}

quality_score_tool

Compute a 0–100 quality score from coverage percentage and violation severities. Returns: {"score": int}

from typing import Any, Dict, List, Optional

def quality_score_tool(coverage_line_pct: float = 0.0, violations: Optional[List[Dict[str, Any]]] = None) -> Dict[str, Any]:
    """
    Compute a 0–100 quality score from coverage percentage and violation severities.
    Returns: {"score": int}
    """
    score = int(max(0.0, min(100.0, float(coverage_line_pct or 0.0))))
    for v in (violations or []):
        sev = (v.get("severity") or "LOW").upper()
        score -= 5 if sev == "HIGH" else 2 if sev == "MEDIUM" else 1
    return {"score": max(0, min(score, 100))}
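A worked example of the scoring rule above, reimplemented standalone: start from the line-coverage percentage, subtract 5/2/1 points per HIGH/MEDIUM/LOW violation, and clamp to [0, 100]. The sample inputs are invented.

```python
# Same scoring rule as quality_score_tool, restated for illustration.
def score(coverage_line_pct: float, violations: list) -> int:
    s = int(max(0.0, min(100.0, coverage_line_pct)))
    for v in violations:
        sev = (v.get("severity") or "LOW").upper()
        s -= 5 if sev == "HIGH" else 2 if sev == "MEDIUM" else 1
    return max(0, min(s, 100))

result = score(86.0, [{"severity": "HIGH"}, {"severity": "MEDIUM"}, {"severity": "LOW"}])
# 86 - 5 - 2 - 1 == 78
```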

reasoning_tools

ReasoningTools from the agno framework

Test Agent

Configure model settings at the top, then test the agent below

Example Query

Check if our project meets the minimum code coverage threshold and identify areas that need more tests.
