
Archie AI Engineering Operations Analytics Documentation

Overview

Archie AI Engineering Operations Analytics is a comprehensive tool designed to provide in-depth insights into your software development process. It offers three main types of metrics:

  1. DORA Metrics: Focused on deployment frequency, lead time for changes, time to restore service, and change failure rate.
  2. Throughput Metrics: Measuring the volume and pace of work completed.
  3. Quality Metrics: Evaluating the quality of pull requests and code reviews.

These metrics are crucial for understanding and improving your development process:

  • Velocity metrics (including DORA): Tell you how fast your team is moving.
  • Throughput metrics: Show you how much work is being completed in aggregate.
  • Quality metrics: Ensure that speed and volume don't come at the cost of code quality.

By tracking these metrics, engineering teams can identify bottlenecks, improve efficiency, and maintain high standards of code quality while increasing productivity.

DORA Metrics

DORA (DevOps Research and Assessment) metrics are industry-standard measurements for software development and operations performance.

What DORA Metrics Measure

We're starting by offering Cycle Time: the total time from the first commit to deployment, broken down into the following phases (a small computation sketch follows the list):

  • Coding Time
  • Pickup Time
  • Review Time
  • Deploy Time
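
As a rough illustration, the sketch below shows how these four phases could be derived from per-PR event timestamps. The field names (first_commit_at, pr_opened_at, first_review_at, pr_merged_at, deployed_at) are assumptions made for this example, not an Archie AI schema; Archie AI reports these phases for you, and the sketch is only meant to make the definitions concrete.

```python
from datetime import datetime, timedelta

# Hypothetical per-PR event timestamps; field names are illustrative only
# and do not reflect a real Archie AI schema.
pr_events = {
    "first_commit_at": datetime(2024, 5, 1, 9, 0),
    "pr_opened_at":    datetime(2024, 5, 1, 16, 30),
    "first_review_at": datetime(2024, 5, 2, 10, 15),
    "pr_merged_at":    datetime(2024, 5, 2, 14, 45),
    "deployed_at":     datetime(2024, 5, 2, 17, 0),
}

def cycle_time_breakdown(e):
    """Split total cycle time into the four phases listed above."""
    return {
        "coding_time": e["pr_opened_at"] - e["first_commit_at"],  # first commit -> PR opened
        "pickup_time": e["first_review_at"] - e["pr_opened_at"],  # PR opened -> first review
        "review_time": e["pr_merged_at"] - e["first_review_at"],  # first review -> merge
        "deploy_time": e["deployed_at"] - e["pr_merged_at"],      # merge -> deployment
    }

breakdown = cycle_time_breakdown(pr_events)
for phase, duration in breakdown.items():
    print(f"{phase}: {duration}")
print("total cycle time:", sum(breakdown.values(), timedelta()))
```

Summing the four phases recovers the total cycle time, which makes it easy to spot which phase dominates for a given PR.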

In upcoming releases, we plan to incorporate additional DORA metrics such as:

  • Deployment Frequency
  • Lead Time for Changes
  • Time to Restore Service
  • Change Failure Rate

These additions will provide a more comprehensive view of your DevOps performance and align more closely with the full set of DORA metrics.

How to Use DORA Metrics

  • Deployment Frequency: Aim for more frequent, smaller deployments to reduce risk and get faster feedback.
  • Lead Time for Changes: Reduce this to get new features to users faster. Break down the lead time into coding time, pickup time, review time, and deploy time to identify specific areas for improvement.
  • Time to Restore Service: Minimize this to ensure reliability. Implement automated rollback processes and improve monitoring.
  • Change Failure Rate: Keep this low to maintain stability. Implement thorough testing and code review processes.

Use these metrics to benchmark your team's performance against industry standards and to set improvement goals.
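
For teams that want to sanity-check these numbers against their own data, here is a minimal sketch of how Deployment Frequency and Change Failure Rate could be computed from a deployment log. The record structure is an assumption for illustration; real data would come from your CI/CD and incident-tracking systems.

```python
from datetime import date

# Illustrative deployment log for one reporting window; the field names here
# are assumptions for this sketch, not a documented schema.
deployments = [
    {"day": date(2024, 5, 1), "caused_incident": False},
    {"day": date(2024, 5, 1), "caused_incident": True},
    {"day": date(2024, 5, 2), "caused_incident": False},
    {"day": date(2024, 5, 3), "caused_incident": False},
]

days_in_window = 7  # length of the reporting window in days

# Deployment Frequency: deployments per day over the window.
deployment_frequency = len(deployments) / days_in_window

# Change Failure Rate: share of deployments that led to an incident.
change_failure_rate = sum(d["caused_incident"] for d in deployments) / len(deployments)

print(f"Deployment frequency: {deployment_frequency:.2f} deploys/day")
print(f"Change failure rate: {change_failure_rate:.0%}")
```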

Throughput Metrics

Throughput metrics measure the pace at which software is implemented and the volume of work completed.

What Throughput Metrics Measure

  • Code Changes: Total lines of code added or changed.
  • Commits: Number of commits pushed across all relevant branches.
  • PRs Opened: Number of new pull requests created.
  • Merge Frequency: Number of pull requests merged.
  • Reviews: Number of reviews completed on open pull requests.

How to Use Throughput Metrics

  • Code Changes: Use this to understand the volume of work, but remember that more lines of code don't always mean more productivity.
  • Commits: Track this to see how often developers are pushing changes. Frequent, smaller commits are often a sign of good practices.
  • PRs Opened and Merged: These metrics help you understand your team's output and how efficiently PRs are moving through the pipeline.
  • Reviews: Monitor this to ensure that code is being adequately reviewed before merging.

Use these metrics to get a high-level view of your team's output and to identify any bottlenecks in your development process.
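
As a concrete reference, the sketch below aggregates these throughput counts from a small set of per-PR records. The field names are assumptions made for this example rather than a documented schema; Archie AI derives the same figures from your repository activity automatically.

```python
# Illustrative PR records; a real pipeline would pull these from your Git
# hosting provider, and the field names are assumptions for this sketch.
pull_requests = [
    {"opened": True, "merged": True,  "commits": 4, "lines_changed": 180, "reviews": 2},
    {"opened": True, "merged": False, "commits": 2, "lines_changed": 60,  "reviews": 1},
    {"opened": True, "merged": True,  "commits": 6, "lines_changed": 420, "reviews": 3},
]

throughput = {
    "code_changes":    sum(pr["lines_changed"] for pr in pull_requests),
    "commits":         sum(pr["commits"] for pr in pull_requests),
    "prs_opened":      sum(pr["opened"] for pr in pull_requests),
    "merge_frequency": sum(pr["merged"] for pr in pull_requests),
    "reviews":         sum(pr["reviews"] for pr in pull_requests),
}

for metric, value in throughput.items():
    print(f"{metric}: {value}")
```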

Quality Metrics

Quality metrics ensure that increased speed and volume don't come at the cost of code quality.

What Quality Metrics Measure

  • Review Depth: Average number of comments per pull request review.
  • PRs Merged Without Review: Number of pull requests merged without any review.
  • PR Size: Average size of pull requests in lines of code.
  • Number of Reviewers: Average number of reviewers per pull request.
  • AI-Generated Insights: Qualitative data generated by Archie AI, including AI reviews, labels, and PR summaries.

How to Use Quality Metrics

  • Review Depth: Aim for a balance. Too few comments might indicate superficial reviews, while too many (e.g., >11) might suggest issues with design or communication.
  • PRs Merged Without Review: Keep this number as low as possible to ensure all code is properly vetted.
  • PR Size: Encourage smaller PRs (e.g., fewer than 400 lines) for easier review and a lower risk of introducing bugs.
  • Number of Reviewers: Aim for 2-3 reviewers per PR. More than 3 can significantly increase review time.
  • AI-Generated Insights: Use these to get additional context on PRs and to identify potential issues that human reviewers might miss.

Use these metrics to maintain and improve code quality, ensuring that your codebase remains maintainable and reliable as your team moves faster.
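
To make the definitions above concrete, here is a minimal sketch that computes Review Depth, PRs Merged Without Review, average PR Size, and average Number of Reviewers from a handful of PR records. The field names are illustrative assumptions, not a real schema, and the qualitative AI-Generated Insights are not reproduced here.

```python
# Illustrative PR data; field names are assumptions for this sketch, not a
# documented Archie AI schema.
pull_requests = [
    {"review_comments": 5, "reviewers": 2, "lines_changed": 220, "reviewed": True},
    {"review_comments": 0, "reviewers": 0, "lines_changed": 45,  "reviewed": False},
    {"review_comments": 8, "reviewers": 3, "lines_changed": 510, "reviewed": True},
]

n = len(pull_requests)
quality = {
    "review_depth":              sum(pr["review_comments"] for pr in pull_requests) / n,
    "prs_merged_without_review": sum(not pr["reviewed"] for pr in pull_requests),
    "avg_pr_size":               sum(pr["lines_changed"] for pr in pull_requests) / n,
    "avg_reviewers":             sum(pr["reviewers"] for pr in pull_requests) / n,
}

# Flag PRs that exceed the ~400-line size guideline mentioned above.
oversized = [pr for pr in pull_requests if pr["lines_changed"] > 400]

for metric, value in quality.items():
    print(f"{metric}: {value}")
print(f"oversized PRs (>400 lines): {len(oversized)}")
```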

Conclusion

Archie AI Engineering Operations Analytics provides a comprehensive view of your software development process through DORA, Throughput, and Quality metrics. By regularly monitoring and acting on these metrics, you can:

  • Increase deployment frequency and reduce lead time for changes, getting features to users faster.
  • Improve code quality and reduce the likelihood of introducing bugs.
  • Balance speed and quality to achieve sustainable, long-term productivity improvements.

By leveraging Archie AI's advanced analytics, including AI-generated insights, you can make data-driven decisions to continuously improve your software development process and deliver better results for your users and stakeholders.
