Behavry Integration — OpenAI API Proxy
For teams using OpenAI's API programmatically (LangChain, CrewAI, AutoGen, custom scripts), Behavry can intercept all API calls by acting as an OpenAI-compatible proxy.
Point your OpenAI SDK at Behavry instead of OpenAI directly. Behavry authenticates your agent identity, audits request metadata, enforces policy, and forwards the request to OpenAI — streaming the response back transparently.
How It Works
```
Your Code (openai SDK)
        ↓  OPENAI_BASE_URL=http://localhost:8000/api/v1/openai
Behavry MCP Proxy
        ↓  validates JWT | audits metadata | checks OPA policy
OpenAI API (api.openai.com)
        ↑  response streamed back
```
The proxy:
- Validates your Behavry agent JWT (`Authorization: Bearer <behavry-jwt>`)
- Extracts your OpenAI API key from the `X-OpenAI-Key` header (never logged)
- Audits request metadata: model, message count, tool usage, presence of system prompt (not message content)
- Evaluates OPA policy (can this agent call this model?)
- Forwards the request to `https://api.openai.com/{path}` using your key
- Streams the response back to the caller
- Audits response metadata: token counts, finish reason
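The credential handling in those steps can be sketched as a pure function. This is a minimal illustration of the header swap only; `build_upstream_request` is a hypothetical name, not the proxy's actual code:

```python
def build_upstream_request(path: str, headers: dict) -> tuple[str, dict]:
    """Illustrative sketch: swap the Behavry JWT for the caller's
    OpenAI key when forwarding a request upstream."""
    behavry_jwt = headers.get("Authorization", "").removeprefix("Bearer ").strip()
    openai_key = headers.get("X-OpenAI-Key")
    if not behavry_jwt or not openai_key:
        raise ValueError("missing Behavry JWT or X-OpenAI-Key header")
    upstream_headers = {
        # The caller's OpenAI key becomes the upstream Authorization header;
        # the Behavry JWT and the X-OpenAI-Key header are not forwarded.
        "Authorization": f"Bearer {openai_key}",
        "Content-Type": headers.get("Content-Type", "application/json"),
    }
    return f"https://api.openai.com{path}", upstream_headers

url, hdrs = build_upstream_request(
    "/v1/chat/completions",
    {"Authorization": "Bearer eyJ...", "X-OpenAI-Key": "sk-test"},
)
print(url)                    # https://api.openai.com/v1/chat/completions
print(hdrs["Authorization"])  # Bearer sk-test
```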
Prerequisites
- Behavry stack running (`make dev` or `docker compose up`)
- A Behavry agent with `web:read` and `web:write` permissions
- Your OpenAI API key
Step 1 — Provision an Agent
If you already have an agent provisioned for another client, you can reuse its credentials. Otherwise:
```bash
cd demos/setup
python create_dev_agent.py --client claude  # or any existing client
```

Load the credentials:

```bash
cat demos/data/.creds/claude_creds.json
# { "client_id": "...", "client_secret": "..." }
```
Step 2 — Get a Behavry JWT
```bash
curl -s -X POST http://localhost:8000/api/v1/auth/token \
  -H "Content-Type: application/json" \
  -d '{"client_id": "YOUR_CLIENT_ID", "client_secret": "YOUR_CLIENT_SECRET", "grant_type": "client_credentials"}' \
  | jq -r .access_token
```
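The same exchange can be done from Python. A stdlib-only sketch mirroring the curl call above; `fetch_behavry_jwt` is a name chosen here, not part of any SDK:

```python
import json
import urllib.request

def fetch_behavry_jwt(client_id: str, client_secret: str,
                      base_url: str = "http://localhost:8000") -> str:
    """Exchange agent client credentials for a Behavry JWT
    (equivalent to the curl command above)."""
    payload = json.dumps({
        "client_id": client_id,
        "client_secret": client_secret,
        "grant_type": "client_credentials",
    }).encode()
    req = urllib.request.Request(
        f"{base_url}/api/v1/auth/token",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read())["access_token"]
```

Call it with the `client_id`/`client_secret` from Step 1 while the Behavry stack is running.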
Step 3 — Configure Your Code
Python (openai SDK)
```python
from openai import OpenAI

BEHAVRY_JWT = "eyJhbGci..."  # from Step 2
OPENAI_API_KEY = "sk-..."    # your real OpenAI key

client = OpenAI(
    base_url="http://localhost:8000/api/v1/openai/v1",
    api_key=BEHAVRY_JWT,  # Behavry validates this as the agent JWT
    default_headers={
        "X-OpenAI-Key": OPENAI_API_KEY,  # forwarded to OpenAI, never logged
    },
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```
Environment Variables (12-factor style)
```bash
export OPENAI_BASE_URL=http://localhost:8000/api/v1/openai/v1
export OPENAI_API_KEY=<behavry-jwt>
# Pass the real OpenAI key separately:
export OPENAI_REAL_KEY=sk-...
```
Then in code:
```python
import os
from openai import OpenAI

client = OpenAI(
    base_url=os.environ["OPENAI_BASE_URL"],
    api_key=os.environ["OPENAI_API_KEY"],
    default_headers={"X-OpenAI-Key": os.environ["OPENAI_REAL_KEY"]},
)
```
LangChain
```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    openai_api_base="http://localhost:8000/api/v1/openai/v1",
    openai_api_key=BEHAVRY_JWT,
    model_kwargs={"extra_headers": {"X-OpenAI-Key": OPENAI_API_KEY}},
    model="gpt-4o",
)
```
Step 4 — Verify
Make a request and check the Live Activity feed at http://localhost:5173.
You should see an event with:
- `tool_name`: `openai-api`
- `mcp_server`: `openai-proxy`
- `action`: `POST`
- `target`: `/openai/v1/chat/completions`
- `policy_result`: `allow`
Audited Metadata
The proxy audits the following — message content is never stored:
| Field | Example |
|---|---|
| Model | gpt-4o |
| Message count | 3 |
| Has system prompt | true |
| Has tools | false |
| Input tokens | 142 |
| Output tokens | 87 |
| Finish reason | stop |
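To illustrate the request-side fields, metadata of this shape can be derived from a chat-completions request body without reading message content. This is a sketch of the idea, not the proxy's actual implementation; `request_metadata` is a hypothetical helper:

```python
def request_metadata(body: dict) -> dict:
    """Derive audit metadata from a chat-completions request body.
    Only shape and counts are inspected; message content is never read out."""
    messages = body.get("messages", [])
    return {
        "model": body.get("model"),
        "message_count": len(messages),
        "has_system_prompt": any(m.get("role") == "system" for m in messages),
        "has_tools": bool(body.get("tools")),
    }

meta = request_metadata({
    "model": "gpt-4o",
    "messages": [
        {"role": "system", "content": "You are helpful."},
        {"role": "user", "content": "Hello!"},
    ],
})
print(meta)
# {'model': 'gpt-4o', 'message_count': 2, 'has_system_prompt': True, 'has_tools': False}
```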
Policy Control
You can write OPA policies to control which models agents can access. Example:
```rego
package behavry.authz

import rego.v1

# Only allow approved models for autonomous agents
deny if {
    input.tool_name == "openai-api"
    input.agent_type == "autonomous"
    not input.model in {"gpt-4o-mini", "gpt-3.5-turbo"}
}
```
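For sanity-checking inputs before deploying such a policy, the rule's logic can be restated in plain Python. This is only a mirror of the rule for illustration, not how OPA evaluates it:

```python
APPROVED_MODELS = {"gpt-4o-mini", "gpt-3.5-turbo"}

def denied(inp: dict) -> bool:
    """Python restatement of the Rego rule above: deny autonomous agents
    calling the OpenAI proxy with a model outside the approved set."""
    return (
        inp.get("tool_name") == "openai-api"
        and inp.get("agent_type") == "autonomous"
        and inp.get("model") not in APPROVED_MODELS
    )

print(denied({"tool_name": "openai-api", "agent_type": "autonomous", "model": "gpt-4o"}))       # True
print(denied({"tool_name": "openai-api", "agent_type": "autonomous", "model": "gpt-4o-mini"}))  # False
```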
Streaming
Streaming responses (stream=True) are fully supported. The proxy passes SSE chunks through transparently. Request metadata is audited before streaming begins; response metadata is not available for streamed responses (OpenAI doesn't send token counts in stream mode by default).
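Consuming a stream through the proxy looks the same as consuming one from OpenAI directly: iterate the chunks and join the text deltas. The helper below shows the pattern; the stand-in chunk objects are for illustration only, since real chunks come from `client.chat.completions.create(..., stream=True)`:

```python
from types import SimpleNamespace

def collect_stream(chunks) -> str:
    """Accumulate the text deltas from a chat-completions stream
    (each chunk carries choices[0].delta.content, which may be None)."""
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta
        if delta.content:
            parts.append(delta.content)
    return "".join(parts)

# Stand-in chunks shaped like the SDK's stream objects, for illustration.
fake = [
    SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content=c))])
    for c in ["Hel", "lo", "!", None]
]
print(collect_stream(fake))  # Hello!
```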
Troubleshooting
401 Unauthorized
The Behavry JWT has expired. Re-fetch it using Step 2.
403 from proxy
Your agent's role doesn't include the `web:write` permission. Update the role in the Behavry dashboard or re-run `create_dev_agent.py`.
OpenAI returning 401
The `X-OpenAI-Key` header is missing or contains an invalid key. Check your OpenAI API key.