Project Usage & Resource Tracking

View detailed usage data for your Origin project, including agent activity, token consumption, session counts, and cost breakdowns to monitor and optimize your resource usage over time.

The Project Usage page tracks resource consumption, token usage, and development activity across the project. It covers compute costs, AI model usage, code edit activity, team contributions, and a detailed log of individual runs.

Summary Metrics

At the top of the page, four cards summarize total consumption for the project:

  • Compute Cost: total spend on sandbox, workspace, and trigger execution. A breakdown line shows the sandbox billing contribution separately.
  • Total Tokens: the number of tokens consumed across all model calls in the project.
  • LLM Cost: spend attributed specifically to AI model usage.
  • Grand Total: combined cost across all services.
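The relationship between the four cards can be sketched as simple arithmetic. The variable names and figures below are illustrative only, not actual product fields:

```python
# Illustrative roll-up for the four summary cards.
# All names and amounts are hypothetical examples.
compute_cost = 12.40   # sandbox + workspace + trigger execution
llm_cost = 7.85        # spend attributed to AI model usage
total_tokens = 1_250_000  # tokens consumed across all model calls

grand_total = compute_cost + llm_cost  # combined cost across all services

print(f"Compute Cost: ${compute_cost:.2f}")
print(f"LLM Cost:     ${llm_cost:.2f}")
print(f"Grand Total:  ${grand_total:.2f}")
```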

Sandbox Costs

The Sandbox Costs table breaks down compute spend by individual sandbox instance.

The header shows the per-session billing rate, total number of billable sessions, and the period total. Each row in the table shows:

  • Workspace: the sandbox instance name and any linked trials
  • Status: whether the sandbox is currently Running or Stopped
  • Resources: allocated CPU, memory, and disk (for example, 4 CPU / 8 GB / 10 GB)
  • Trials: the number of trials linked to that sandbox

A Billed Total row at the bottom of the table shows the total charged for all sandbox instances in the period.
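Since the header reports a per-session billing rate and a count of billable sessions, the Billed Total row can be thought of as rate times sessions, summed over sandboxes. The rate and session counts below are made-up examples, not real prices:

```python
# Hypothetical derivation of the Billed Total row:
# per-session rate multiplied by billable sessions, summed per sandbox.
PER_SESSION_RATE = 0.05  # example rate only, not an actual price

sandboxes = [
    {"workspace": "sandbox-a", "sessions": 120},
    {"workspace": "sandbox-b", "sessions": 45},
]

billed_total = sum(s["sessions"] * PER_SESSION_RATE for s in sandboxes)
print(f"Billed Total: ${billed_total:.2f}")  # sum over all sandbox instances
```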

AI Model Usage

The AI Model Usage section shows token consumption broken down by model. Once models have been used in the project, each model appears here with its token count, making it straightforward to see which models are driving the most usage and cost.

AI Line Edits

The AI Line Edits section visualizes code edit activity over time using a contribution-style heatmap. Each cell represents a day, with color intensity indicating the volume of edits, from fewer (lighter) to more (darker).

The heatmap can be filtered using three tabs:

  • ALL: combined view of all AI-driven edits
  • LLM: edits attributed to language model calls
  • TASKS: edits attributed to task execution

A download button allows you to export the activity data.

Below the heatmap, four summary stats are shown:

  • Most Active Month: the calendar month with the highest edit volume
  • Most Active Day: the single most active day across the project
  • Longest Streak: the longest consecutive run of days with edit activity
  • Current Streak: the number of consecutive active days up to today

Usage Leaderboard

The Usage Leaderboard shows activity by team member. Once team members start using the project, each contributor appears here with their usage metrics, giving visibility into who is driving activity across the project.

Detailed Usage Log

At the bottom of the page, the Detailed Usage Log provides a paginated record of individual runs. Each entry represents a discrete execution (a task run, agent session, or triggered workflow), giving a granular audit trail of all activity in the project.