OpenClaw is an open-source AI assistant with persistent memory and multi-platform access. Routing it through Portkey gives you request logs, cost tracking, automatic failovers, and team controls.

Get Connected

Already have OpenClaw set up? Start from step 3.
Already have Portkey configured? Reuse your existing provider slug and API key.
1. Set up OpenClaw (if not already installed)

1.1 Install OpenClaw

Run one of the following:
# macOS or Linux
npm install -g openclaw@latest

# macOS or Linux (curl installer)
curl -fsSL https://openclaw.ai/install.sh | bash

# Windows (PowerShell)
iwr -useb https://openclaw.ai/install.ps1 | iex
1.2 Run the onboarding wizard
openclaw onboard --install-daemon
Follow the onboarding wizard according to your needs.
If you're unsure, you can follow this minimal QuickStart path:
🦞 OpenClaw 2026.2.15 — onboarding (QuickStart)

- Acknowledge the security notice:
  - Choose: “Yes, I understand this is powerful and inherently risky.”

- Onboarding mode:
  - Choose: QuickStart

- Model/auth provider:
  - Choose: OpenAI
  - Auth method: OpenAI API key
  - Paste your OpenAI API key when prompted
  - Keep default model: openai/gpt-5.1-codex (or similar default suggested)

- Channels (QuickStart):
  - Choose: Skip for now (you can add channels later via `openclaw channels add`)

- Skills:
  - Choose: No / Skip for now

- Hooks:
  - Select: Skip for now

- Gateway service:
  - Let it install the Gateway service (LaunchAgent on macOS)

- Hatch your bot:
  - Choose: Do this later

- When you’re ready:
  - Dashboard: run `openclaw dashboard --no-open`
  - Control UI: follow the printed `http://127.0.0.1:18789/...` link
This completes the basic OpenClaw setup.

2. Make sure OpenClaw is running

Once onboarding is complete, ensure the Gateway service is up (or start it via your OS tools or openclaw commands). When OpenClaw is set up and running, continue with the Portkey integration below.

3. Add your provider to Portkey

Open Model Catalog, click Add Provider, enter your provider API key, and create a slug (for example, gemini).

4. Create a Portkey API key

Go to API Keys, click Create, and copy the key.

5. Open your OpenClaw config

Edit the OpenClaw config file directly:
# macOS
open ~/.openclaw/openclaw.json

# Linux
${EDITOR:-nano} ~/.openclaw/openclaw.json
6. Add the Portkey provider and agent model

In openclaw.json, add or merge the following snippet:
{
  "models": {
    "mode": "merge",
    "providers": {
      "portkey": {
        "baseUrl": "https://api.portkey.ai/v1",
        "apiKey": "YOUR_PORTKEY_API_KEY",
        "api": "openai-completions",
        "models": [
          {
            "id": "@gemini/gemini-2.5-flash",
            "name": "Gemini 2.5 Flash"
          }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": {
        "primary": "portkey/@gemini/gemini-2.5-flash"
      }
    }
  }
}
Replace YOUR_PORTKEY_API_KEY with your actual Portkey API key.
Keep "api" set to "openai-completions" or "openai-responses". Portkey transforms requests based on the OpenAI API format, so other api values listed in the OpenClaw docs won't work here.
7. Test the integration

After saving the config, run:
openclaw agent --agent main --message "hi"
If everything is configured correctly, the message is routed through Portkey to your configured @gemini/gemini-2.5-flash model.

For more advanced configuration options, see the OpenClaw docs:
  • https://docs.openclaw.ai/gateway/configuration
  • https://docs.openclaw.ai/gateway/configuration-examples
  • https://docs.openclaw.ai/gateway/configuration-reference
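To isolate problems, you can also bypass OpenClaw and call the Portkey endpoint directly. A sketch, assuming your provider slug is gemini and your Portkey key is exported as PORTKEY_API_KEY:

```shell
# Send a chat completion straight to Portkey (no OpenClaw involved).
# If this works but the openclaw agent command fails, the issue is in
# openclaw.json rather than your Portkey setup.
curl -s https://api.portkey.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "x-portkey-api-key: $PORTKEY_API_KEY" \
  -d '{
    "model": "@gemini/gemini-2.5-flash",
    "messages": [{"role": "user", "content": "hi"}]
  }'
```

A successful response contains a choices array; an error response includes a message explaining what failed (bad key, unknown slug, etc.).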

See Your Requests

Run openclaw and make a request. Open the Portkey dashboard — you should see your request logged with cost, latency, and the full payload.
You can also verify your setup by listing available models:
curl -s https://api.portkey.ai/v1/models \
  -H "x-portkey-api-key: $PORTKEY_API_KEY" | jq '.data[].id'

Add More Models

Configure Models

Add multiple models from any provider you’ve configured in Portkey.
You can then refer to them from your agent config using primary and fallbacks.
If you only want a single model, set it as primary and omit fallbacks.
"models": [
  {
    "id": "@gemini/gemini-2.5-flash",
    "name": "Gemini 2.5 Flash"
  },
  {
    "id": "@mistral/open-mixtral-8x7b",
    "name": "Mixtral 8x7b"
  }
]
Use primary and fallbacks to control routing:
"agents": {
  "defaults": {
    "model": {
      "fallbacks": [
        "portkey/@gemini/gemini-2.5-flash"
      ],
      "primary": "portkey/@mistral/open-mixtral-8x7b"
    }
  }
}
In this example:
  • Primary traffic goes to portkey/@mistral/open-mixtral-8x7b.
  • Fallback traffic uses portkey/@gemini/gemini-2.5-flash when the primary fails.
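Taken together with the provider definition from step 6, the two snippets above merge into a single openclaw.json along these lines (a sketch; your slugs and model IDs will differ):

```json
{
  "models": {
    "mode": "merge",
    "providers": {
      "portkey": {
        "baseUrl": "https://api.portkey.ai/v1",
        "apiKey": "YOUR_PORTKEY_API_KEY",
        "api": "openai-completions",
        "models": [
          { "id": "@gemini/gemini-2.5-flash", "name": "Gemini 2.5 Flash" },
          { "id": "@mistral/open-mixtral-8x7b", "name": "Mixtral 8x7b" }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": {
        "primary": "portkey/@mistral/open-mixtral-8x7b",
        "fallbacks": ["portkey/@gemini/gemini-2.5-flash"]
      }
    }
  }
}
```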

Track with Metadata

Add headers to group requests by session or tag by team/project:
Add this to your portkey provider config:
"headers": {
  "x-portkey-trace-id": "session-auth-refactor",
  "x-portkey-metadata": "{\"team\":\"backend\",\"project\":\"api-v2\"}"
}
These appear in the dashboard for filtering and analytics.
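Note that the x-portkey-metadata value must itself be a JSON-encoded string. If you template your config programmatically, serialize the object rather than hand-escaping quotes; a minimal Python sketch (the team/project values are just the illustrative ones from above):

```python
import json

# Metadata is sent as a JSON string, not a nested object.
metadata = {"team": "backend", "project": "api-v2"}

headers = {
    "x-portkey-trace-id": "session-auth-refactor",
    "x-portkey-metadata": json.dumps(metadata),  # escapes the quotes for you
}

print(headers["x-portkey-metadata"])
# → {"team": "backend", "project": "api-v2"}
```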

Make It Reliable

Create configs in Portkey Configs and attach them to your API key.
From the Portkey dashboard, open Configs, create a config, and copy its Config ID from the config details page.
You can then pass this Config ID from OpenClaw by adding a header in your openclaw.json:
"headers": {
  "x-portkey-config": "pc-config-id"
}
If your Portkey config is responsible for switching between providers or models based on conditions (e.g., latency, cost, availability), you can point OpenClaw at a virtual model and let Portkey handle the routing:
{
  "models": {
    "mode": "merge",
    "providers": {
      "portkey": {
        "baseUrl": "https://api.portkey.ai/v1",
        "apiKey": "YOUR_PORTKEY_API_KEY",
        "headers": {
          "x-portkey-config": "pc-config-id"
        },
        "api": "openai-completions",
        "models": [
          {
            "id": "portkey-dynamic",
            "name": "Portkey Router"
          }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": {
        "primary": "portkey/portkey-dynamic"
      }
    }
  }
}
Here, portkey-dynamic is a placeholder ID that doesn't correspond to any model in Portkey's Model Catalog. Instead, your Portkey Config (referenced by x-portkey-config) decides which real provider/model to call.
Once you’ve wired in the Config ID header, you can iterate on different config JSONs in the Portkey UI (failovers, retries, routing rules, etc.) without changing your OpenClaw setup.
Below are example config payloads you might use inside Portkey:
Automatically switch providers when one is down:
{
  "strategy": { "mode": "fallback" },
  "targets": [
    { "provider": "@anthropic-prod" },
    { "provider": "@openai-backup" }
  ]
}
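Configs can also add retries and caching on top of (or instead of) fallback routing. A sketch of one more config payload, assuming Portkey's config schema for retry and cache; tune the values to your workload:

```json
{
  "retry": { "attempts": 3, "on_status_codes": [429, 500, 502, 503, 504] },
  "cache": { "mode": "simple", "max_age": 3600 }
}
```

Here retries fire on rate limits and transient server errors, and identical prompts within the hour are served from cache instead of hitting the provider again.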

Control Costs

Budget Limits

Set spending limits in Model Catalog → select your provider → Budget & Limits:
  • Cost limit: Maximum spend per period (e.g., $500/month)
  • Token limit: Maximum tokens (e.g., 10M/week)
  • Rate limit: Maximum requests (e.g., 100/minute)
Requests that exceed limits return an error rather than proceeding.

Guardrails

Add input/output checks to filter requests:
{
  "input_guardrails": ["pii-check"],
  "output_guardrails": ["content-moderation"]
}
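Guardrails can live in the same Portkey config as your routing rules, so one attached config covers both. A sketch combining them with the fallback example above (the guardrail IDs are the placeholders from the snippet above):

```json
{
  "strategy": { "mode": "fallback" },
  "targets": [
    { "provider": "@anthropic-prod" },
    { "provider": "@openai-backup" }
  ],
  "input_guardrails": ["pii-check"],
  "output_guardrails": ["content-moderation"]
}
```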
See Guardrails for available checks.

Roll Out to Teams

Attach Configs to Keys

When deploying to a team, attach configs to API keys so developers get reliability and cost controls automatically.
  1. Create a config with fallbacks, caching, retries, and guardrails
  2. Create an API key and attach the config
  3. Distribute the key to developers
Developers use a simple config — all routing and reliability logic is handled by the attached config. When you update the config, changes apply immediately.

Enterprise Options

  • SaaS: Everything on Portkey cloud
  • Hybrid: Gateway on your infra, control plane on Portkey
  • Air-gapped: Everything on your infra
In hybrid mode, the gateway has no runtime dependency on the control plane — routing continues even if the connection drops.
  • JWT: Bring your own JWKS URL for validation
  • Service keys: For production applications
  • User keys: For individual developers with personal budget limits
Create keys via UI, API, or Terraform.
Override default pricing for negotiated rates or custom models in Model Catalog → Edit model.

Troubleshooting

Problem                      Solution
Requests not in dashboard    Verify base URL is https://api.portkey.ai/v1 and API key is correct
401 errors                   Regenerate Portkey key; check provider credentials in Model Catalog
Model not found              Use @provider/model format (e.g., @anthropic/claude-sonnet-4-5)
Rate limited                 Adjust limits in Model Catalog → Budget & Limits
Last modified on February 19, 2026