Model & AI Configuration
Configure the AI models used in your Origin project. Select and customize the language models that power your agent sessions, task execution, and code generation workflows.
The Models page is where you configure which AI model the project uses by default and which model categories are available across the project's interfaces.
Code Model
The Code Model section contains a single dropdown for choosing the primary model used by the project assistant. This model becomes the default across chat sessions, task execution, and code generation unless overridden in a specific session.
Clicking the dropdown opens a searchable model selector.
Models are grouped by provider. Each provider offers a different category of models:
OLLM: TEE-backed models running on confidential infrastructure. All models in this group are tagged TEE and are appropriate for sensitive inference. Examples include DeepSeek, GLM, GPT-OSS, Llama, and Qwen variants running on NEAR or Phala.
OPENCODE: standard zero-data-retention models tagged with Reasoning and Text or Multimodal.
Vercel: a broad selection of multimodal models, including Claude Haiku, Sonnet, and Opus variants across multiple versions, as well as other providers. Most Vercel models carry Reasoning and Multimodal tags.
Additional providers may be available depending on your plan and configuration.
Selecting a model from the list shows a detail panel on the right with:
- Provider and capability tags (for example, Reasoning, Tool Calling, Attachments, Multimodal)
- Accepted inputs: Images, Documents, Text
- Context window size (for example, 1,000,000 tokens)
- Output token limit (for example, 128,000 tokens)
- The model's identifier string (for example, vercel:anthropic/claude-opus-4.6)
Use this panel to compare models before making a selection.
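The identifier string shown in the detail panel follows a provider:vendor/model pattern. As a minimal sketch (the helper name and the exact format rules are assumptions, not part of the product API), it can be split into its parts like this:

```python
def parse_model_id(model_id: str) -> dict:
    # Split "provider:vendor/model" on the first ":" and first "/".
    # Hypothetical helper for illustration only.
    provider, _, path = model_id.partition(":")
    vendor, _, model = path.partition("/")
    return {"provider": provider, "vendor": vendor, "model": model}

parse_model_id("vercel:anthropic/claude-opus-4.6")
```

This mirrors how the identifier reads in the panel: the gateway or provider group first, then the upstream vendor and model name.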
The model selected here sets the default for the project, but it can be changed at any time from within the workspace itself. Any model switch made inside the workspace applies to that session only and does not update the project default set here.
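The precedence rule above can be sketched in a few lines, assuming a hypothetical resolver (the function and parameter names are illustrative, not Origin's actual API):

```python
from typing import Optional

def resolve_model(session_override: Optional[str], project_default: str) -> str:
    # A model switch made inside the workspace applies only to that
    # session; otherwise the project default from the Models page is used.
    return session_override if session_override else project_default
```

A session override never writes back to the project default, so new sessions always start from the model configured here.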
Model Access
The Model Access section controls which categories of models are visible across project-scoped interfaces, including the chat UI and trial or worktree creation.
Two categories can be toggled independently:
- Allow ZDR Models: enables standard zero-data-retention models, including Vercel AI Gateway models and non-TEE OpenCode providers. Suitable for routine development work.
- Allow TEE Models: enables attested TEE models, including NEAR, Phala, and o.llm-backed models running on confidential infrastructure. Appropriate for sensitive inference.
At least one category must remain enabled at all times so the project always has an available model. Changes take effect as soon as you click the Instant Apply button.
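The invariant that at least one category stays enabled can be expressed as a small validation check (a sketch with hypothetical names, not Origin's implementation):

```python
def validate_model_access(allow_zdr: bool, allow_tee: bool) -> None:
    # Reject a configuration that would leave the project
    # with no available model category.
    if not (allow_zdr or allow_tee):
        raise ValueError("At least one model category must remain enabled")
```

Under this rule, disabling one toggle is always allowed as long as the other is still on.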
Sandbox Configuration & Settings
Configure sandbox environments for your Origin project. Set compute resources, define runtime parameters, and manage isolated execution environments for AI-assisted development workflows.
Secrets & Credentials Management
Securely store and manage environment secrets in Origin. Learn how to add, reference, and rotate API keys, tokens, and credentials used by your AI agents and project workflows.