developers.openai.com
Docs, videos, and demo apps for building with OpenAI
Protocol: Like MCP, codex app-server supports bidirectional communication using JSON-RPC 2.0 messages (with the "jsonrpc":"2.0" header omitted on the wire). Supported transports: stdio (--listen stdio://, default), which uses newline-delimited JSON (JSONL); and websocket (--listen ws://IP:PORT, experimental), which carries one JSON-RPC message per WebSocket text frame. In WebSocket mode, app-server uses bounded queues. When re
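The JSONL framing described above can be sketched in a few lines: each message is one compact JSON object terminated by a newline, and the "jsonrpc":"2.0" field is dropped on the wire. The `initialize` method name in the usage below is illustrative, not taken from the app-server protocol.

```python
import json

def encode_line(message: dict) -> bytes:
    """Frame one JSON-RPC-style message as a single JSONL line.

    Per the protocol notes above, the usual "jsonrpc": "2.0" field is
    omitted on the wire, so strip it if the caller included it.
    """
    message = {k: v for k, v in message.items() if k != "jsonrpc"}
    # JSONL framing: compact JSON, no embedded newlines, newline terminator.
    return (json.dumps(message, separators=(",", ":")) + "\n").encode("utf-8")

def decode_lines(buffer: bytes):
    """Split a byte stream into complete messages; return (messages, remainder)."""
    messages = []
    while b"\n" in buffer:
        line, buffer = buffer.split(b"\n", 1)
        if line.strip():
            messages.append(json.loads(line))
    return messages, buffer
```

A partial trailing line (common when reading from a pipe or socket) is simply carried over as the remainder until the next read completes it.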
If you’re new to Codex or coding agents in general, this guide will help you get better results faster. It covers the core habits that make Codex more effective across the CLI, IDE extension, and the Codex app, from prompting and planning to validation, MCP, skills, and automations. Codex works best when you treat it less like a one-off assistant and more like a teammate you configure and improve
Keep workflows in the repo In these repos, we use skills to capture repository-specific workflows. A skill is a small package of operational knowledge: a SKILL.md manifest, plus optional scripts/, references/, and assets/. The Codex customization docs describe why this works well: skills are a good fit for repeatable workflows because they can carry richer instructions, scripts, and references wit
Open-source maintainers do critical work, often behind the scenes, to keep the software ecosystem healthy. Over the past year, the Codex Open Source Fund ($1 million) has supported projects that need API credits, including teams using Codex to power GitHub pull request workflows. OpenAI is grateful to the maintainers who keep that work moving. The fund now supports eligible maintainers by offering
GPT-5.4, our newest mainline model, is designed to balance long-running task performance, stronger control over style and behavior, and more disciplined execution across complex workflows. Building on advances from GPT-5 through GPT-5.3-Codex, GPT-5.4 improves token efficiency, sustains multi-step workflows more reliably, and performs well on long-horizon tasks. GPT-5.4 is designed for production-
Compaction: Manage long-running conversations with server-side and standalone compaction. Overview: To support long-running interactions, you can use compaction to reduce context size while preserving state needed for subsequent turns. Compaction helps you balance quality, cost, and latency as conversations grow. Server-side compaction: You can enable server-side compaction in a Responses create requ
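The idea behind compaction can be sketched client-side: keep the most recent turns verbatim and collapse everything older into a single summary message. This is only an illustration of the concept, not the Responses API's server-side mechanism; the placeholder summarizer stands in for whatever actually produces the compacted state (e.g. a model call).

```python
def compact(history, keep_last=4, summarize=None):
    """Collapse older turns into one summary message, keeping recent turns verbatim.

    `history` is a list of {"role", "content"} dicts. `summarize` is a
    stand-in for whatever produces the compacted state.
    """
    if len(history) <= keep_last:
        return history  # nothing worth compacting yet
    old, recent = history[:-keep_last], history[-keep_last:]
    summary_text = (summarize or (lambda turns: f"[summary of {len(turns)} earlier turns]"))(old)
    # One system message carries the compacted state forward for subsequent turns.
    return [{"role": "system", "content": summary_text}] + recent
```

The trade-off named in the excerpt shows up directly here: a smaller `keep_last` cuts cost and latency but risks losing state the next turn needs.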
Shell + Skills + Compaction: Tips for long-running agents that do real work. Practical patterns for building with skills, hosted shell, and server-side compaction in the Responses API. We’re shifting from single-turn assistants to long-running agents that handle real knowledge work: reading large datasets, updating files, and writing apps. Based on developer feedback and our own experience building
Upload, manage, and attach reusable skills to hosted environments. Agent Skills let you upload and reuse versioned bundles of files in hosted and local shell environments. For the full reference, see the Skills documentation. What is a skill? A skill is a reusable bundle of files (instructions + scripts + assets), packaged as a folder and anchored by a required SKILL.md manifest. OpenAI copies tha
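The bundle shape described above (a folder anchored by a required SKILL.md, plus optional scripts/, references/, and assets/) is easy to lint locally. This checker encodes only what the excerpt states; treating any other subdirectory as unexpected is a local convention for illustration, not part of the skills standard.

```python
from pathlib import Path

REQUIRED_MANIFEST = "SKILL.md"
OPTIONAL_DIRS = ("scripts", "references", "assets")

def check_skill(folder: str) -> list[str]:
    """Return a list of problems; an empty list means the folder looks like a skill bundle."""
    root = Path(folder)
    problems = []
    # The manifest is the one hard requirement named in the docs excerpt.
    if not (root / REQUIRED_MANIFEST).is_file():
        problems.append(f"missing required manifest {REQUIRED_MANIFEST}")
    # Flag subdirectories outside the three optional ones (illustrative lint rule).
    for entry in root.iterdir():
        if entry.is_dir() and entry.name not in OPTIONAL_DIRS:
            problems.append(f"unexpected directory {entry.name}/")
    return problems
```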
The Codex app is a focused desktop experience for working on Codex threads in parallel, with built-in worktree support, automations, and Git functionality. Multitask across projects: Use one Codex app window to run tasks across projects. Add a project for each codebase and switch between them as needed. If you’ve used the Codex CLI, a project is like starting a session in a specific directory. If yo
Rules: Control which commands Codex can run outside the sandbox. Create a rules file: Create a .rules file under ~/.codex/rules/ (for example, ~/.codex/rules/default.rules). Add a rule. This example prompts before allowing gh pr view to run outside the sandbox. # Prompt before running commands with the prefix `gh pr view` outside the sandbox. prefix_rule( # The prefix to match. pattern = ["gh", "pr",
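The semantics of the truncated prefix_rule example above are simple token-prefix matching: a command matches when its leading tokens equal the rule's pattern. The sketch below illustrates that idea only; the `decide` helper and its "sandbox" default are assumptions for illustration, not the Codex rules engine.

```python
def matches_prefix(command: list[str], pattern: list[str]) -> bool:
    """True when the command's leading tokens equal the rule's prefix pattern."""
    return command[: len(pattern)] == pattern

def decide(command: list[str], rules: list[tuple[list[str], str]]) -> str:
    """Return the first matching rule's action; falling back to the
    sandbox when nothing matches is an assumed default."""
    for pattern, action in rules:
        if matches_prefix(command, pattern):
            return action
    return "sandbox"
```

So `["gh", "pr", "view", "123"]` matches the prefix `["gh", "pr", "view"]`, while `["gh", "issue", "list"]` does not.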
Codex and the gpt-5.2-codex model (recommended) can be used to implement complex tasks that take significant time to research, design, and implement. The approach described here is one way to prompt the model to implement these tasks and to steer it towards successful completion of a project. These plans are thorough design documents and “living documents”. As a user of Codex, you can use these d
Codex models advance the frontier of intelligence and efficiency and are our recommended agentic coding models. Follow this guide closely to ensure you’re getting the best performance possible from this model. This guide is for anyone using the model directly via the API for maximum customizability; we also have the Codex SDK for simpler integrations. In the API, the Codex-tuned model is gpt-5.2-codex
Use agent skills to extend Codex with task-specific capabilities. A skill packages instructions, resources, and optional scripts so Codex can follow a workflow reliably. You can share skills across teams or with the community. Skills build on the open agent skills standard. Skills are available in both the Codex CLI and IDE extensions. Agent skill definition: A skill captures a capability expressed
1. Introduction: GPT-5.2 is our newest flagship model for enterprise and agentic workloads, designed to deliver higher accuracy, stronger instruction following, and more disciplined execution across complex workflows. Building on GPT-5.1, GPT-5.2 improves token efficiency on medium-to-complex tasks, produces cleaner formatting with less unnecessary verbosity, and shows clear gains in structured rea
Codex models advance the frontier of intelligence and efficiency and are our recommended agentic coding models. Follow this guide closely to ensure you’re getting the best performance possible from this model. This guide is for anyone using the model directly via the API for maximum customizability; we also have the Codex SDK for simpler integrations. In the API, the Codex-tuned model is gpt-5.3-codex
Introduction: GPT-5.1, our newest flagship model, is designed to balance intelligence and speed for a variety of agentic and coding tasks, while also introducing a new `none` reasoning mode for low-latency interactions. Building on the strengths of GPT-5, GPT-5.1 is better calibrated to prompt difficulty, consuming far fewer tokens on easy inputs and more efficiently handling challenging ones. Along
Codex is OpenAI’s coding agent for software development. ChatGPT Plus, Pro, Business, Edu, and Enterprise plans include Codex. It can help you: Write code: Describe what you want to build, and Codex generates code that matches your intent, adapting to your existing project structure and conventions. Understand unfamiliar codebases: Codex can read and explain complex or legacy code, helping you gra
If you use Codex through the Codex CLI, the IDE extension, or Codex Web, you can also control it programmatically. Use the SDK when you need to: control Codex as part of your CI/CD pipeline; create your own agent that can engage with Codex to perform complex engineering tasks; build Codex into your own internal tools and workflows; or integrate Codex within your own application. TypeScript library: The Ty