
Behavry Integration — OpenAI API Proxy

For teams using OpenAI's API programmatically (LangChain, CrewAI, AutoGen, custom scripts), Behavry can intercept all API calls by acting as an OpenAI-compatible proxy.

Point your OpenAI SDK at Behavry instead of OpenAI directly. Behavry authenticates your agent identity, audits request metadata, enforces policy, and forwards the request to OpenAI — streaming the response back transparently.


How It Works

Your Code (openai SDK)
↓ OPENAI_BASE_URL=http://localhost:8000/api/v1/openai
Behavry MCP Proxy
↓ validates JWT | audits metadata | checks OPA policy
OpenAI API (api.openai.com)
↑ response streamed back

The proxy:

  1. Validates your Behavry agent JWT (Authorization: Bearer <behavry-jwt>)
  2. Extracts your OpenAI API key from X-OpenAI-Key header (never logged)
  3. Audits request metadata: model, message count, tool usage, presence of system prompt — not message content
  4. Evaluates OPA policy (can this agent call this model?)
  5. Forwards request to https://api.openai.com/{path} using your key
  6. Streams response back to caller
  7. Audits response metadata: token counts, finish reason
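
The steps above can be sketched as plain request handling. This is a simplified illustration of the contract between caller and proxy — the function, header handling, and response shapes below are assumptions, not Behavry's actual internals:

```python
def handle_request(headers, body, policy_allows):
    # Step 1: the agent JWT arrives in the standard Authorization header.
    jwt = headers.get("Authorization", "").removeprefix("Bearer ")
    if not jwt:
        return {"status": 401, "error": "missing Behavry JWT"}

    # Step 2: the real OpenAI key rides in X-OpenAI-Key and is never logged.
    if not headers.get("X-OpenAI-Key"):
        return {"status": 401, "error": "missing X-OpenAI-Key"}

    # Step 3: audit request metadata only -- no message content.
    audit = {
        "model": body.get("model"),
        "message_count": len(body.get("messages", [])),
    }

    # Step 4: policy decision; steps 5-7 (forwarding, streaming, response
    # audit) are elided in this sketch.
    if not policy_allows(audit):
        return {"status": 403, "audit": audit}
    return {"status": 200, "audit": audit}
```

The key design point: the caller's two credentials travel in two separate headers, so the proxy can reject a request on identity or policy grounds without ever touching the OpenAI key.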

Prerequisites

  • Behavry stack running (make dev or docker compose up)
  • A Behavry agent with web:read and web:write permissions
  • Your OpenAI API key

Step 1 — Provision an Agent

If you already have an agent provisioned for another client, you can reuse its credentials. Otherwise:

cd demos/setup
python create_dev_agent.py --client claude # or any existing client

Load the credentials:

cat demos/data/.creds/claude_creds.json
# { "client_id": "...", "client_secret": "..." }

Step 2 — Get a Behavry JWT

curl -s -X POST http://localhost:8000/api/v1/auth/token \
-H "Content-Type: application/json" \
-d '{"client_id": "YOUR_CLIENT_ID", "client_secret": "YOUR_CLIENT_SECRET", "grant_type": "client_credentials"}' \
| jq -r .access_token

Step 3 — Configure Your Code

Python (openai SDK)

from openai import OpenAI

BEHAVRY_JWT = "eyJhbGci..." # from Step 2
OPENAI_API_KEY = "sk-..." # your real OpenAI key

client = OpenAI(
    base_url="http://localhost:8000/api/v1/openai/v1",
    api_key=BEHAVRY_JWT,  # Behavry validates this as the agent JWT
    default_headers={
        "X-OpenAI-Key": OPENAI_API_KEY,  # forwarded to OpenAI, never logged
    },
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)

Environment Variables (12-factor style)

export OPENAI_BASE_URL=http://localhost:8000/api/v1/openai/v1
export OPENAI_API_KEY=<behavry-jwt>
# Pass OpenAI key separately:
export OPENAI_REAL_KEY=sk-...

Then in code:

import os
from openai import OpenAI

client = OpenAI(
    base_url=os.environ["OPENAI_BASE_URL"],
    api_key=os.environ["OPENAI_API_KEY"],
    default_headers={"X-OpenAI-Key": os.environ["OPENAI_REAL_KEY"]},
)

LangChain

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4o",
    openai_api_base="http://localhost:8000/api/v1/openai/v1",
    openai_api_key=BEHAVRY_JWT,
    model_kwargs={"extra_headers": {"X-OpenAI-Key": OPENAI_API_KEY}},
)

Step 4 — Verify

Make a request and check the Live Activity feed at http://localhost:5173.

You should see an event with:

  • tool_name: openai-api
  • mcp_server: openai-proxy
  • action: POST
  • target: /openai/v1/chat/completions
  • policy_result: allow

Audited Metadata

The proxy audits the following — message content is never stored:

Field                Example
Model                gpt-4o
Message count        3
Has system prompt    true
Has tools            false
Input tokens         142
Output tokens        87
Finish reason        stop
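
As a sketch, these fields can be derived from the request and response JSON bodies alone — no message content is needed. The extraction logic below is an assumption (field names mirror the table; the response shape is OpenAI's standard chat-completions format):

```python
def audit_request(body):
    """Derive request-side audit fields from a chat-completions request body."""
    messages = body.get("messages", [])
    return {
        "model": body.get("model"),
        "message_count": len(messages),
        "has_system_prompt": any(m.get("role") == "system" for m in messages),
        "has_tools": bool(body.get("tools")),
    }

def audit_response(body):
    """Derive response-side audit fields from a chat-completions response body."""
    usage = body.get("usage", {})
    choices = body.get("choices", [])
    return {
        "input_tokens": usage.get("prompt_tokens"),
        "output_tokens": usage.get("completion_tokens"),
        "finish_reason": choices[0].get("finish_reason") if choices else None,
    }
```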

Policy Control

You can write OPA policies to control which models agents can access. Example:

package behavry.authz

# Only allow approved models for autonomous agents
deny if {
    input.tool_name == "openai-api"
    input.agent_type == "autonomous"
    not input.model in {"gpt-4o-mini", "gpt-3.5-turbo"}
}
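
Read as plain logic, the rule denies a call only when all three conditions hold. A Python mirror of the same check, for intuition (the input shape is taken from the policy above):

```python
APPROVED_MODELS = {"gpt-4o-mini", "gpt-3.5-turbo"}

def denied(inp):
    """Mirror of the OPA rule: deny autonomous agents on non-approved models."""
    return (
        inp.get("tool_name") == "openai-api"
        and inp.get("agent_type") == "autonomous"
        and inp.get("model") not in APPROVED_MODELS
    )
```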

Streaming

Streaming responses (stream=True) are fully supported; the proxy passes SSE chunks through transparently. Request metadata is audited before streaming begins. Response metadata (token counts, finish reason) is not available for streamed responses, because OpenAI doesn't include usage data in stream mode by default (callers can opt in with stream_options={"include_usage": true}).
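
"Passes SSE chunks through transparently" means the client sees OpenAI's standard streaming framing unchanged: a series of data: lines carrying JSON chunks with content deltas, terminated by data: [DONE]. A sketch of how a client assembles the streamed text (the chunk payloads in the test are illustrative):

```python
import json

def assemble_stream(sse_lines):
    """Concatenate content deltas from OpenAI-style SSE 'data:' lines."""
    parts = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip comments, blank keep-alives, etc.
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break  # OpenAI's end-of-stream sentinel
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta", {})
        if delta.get("content"):
            parts.append(delta["content"])
    return "".join(parts)
```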


Troubleshooting

401 Unauthorized

The Behavry JWT has expired. Re-fetch it using Step 2.
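
To confirm expiry locally without another round trip, decode the token's payload segment — JWTs are three base64url segments, and the middle one is JSON carrying the standard exp claim:

```python
import base64
import json
import time

def jwt_expired(token, now=None):
    """Return True if the JWT's 'exp' claim is in the past (no signature check)."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims.get("exp", 0) <= (now if now is not None else time.time())
```

Note this only inspects the claim — it does not verify the signature, which is Behavry's job.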

403 from proxy

Your agent's role doesn't include web:write permission. Update the role in the Behavry dashboard or re-run create_dev_agent.py.

OpenAI returning 401

The X-OpenAI-Key header is missing or contains an invalid key. Check your OpenAI API key.