
common: add byte_order.h to provide htole32 et al in OS X #1372

Merged

mattklein123 merged 1 commit into envoyproxy:master from turbinelabs:osx-3-common-byte-order on Aug 2, 2017

Conversation

@zuercher (Member) commented Aug 2, 2017

Provides htole32, htole64, le32toh, and le64toh for OS X. (Split out from #1348, in support of #128).
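The gist of such a shim, as a minimal sketch: macOS's `<libkern/OSByteOrder.h>` exposes `OSSwap*` primitives that can be mapped onto the Linux `<endian.h>` names. The PR's actual byte_order.h may differ in detail.

```
#pragma once

#ifdef __APPLE__
#include <libkern/OSByteOrder.h>

// Map the Linux <endian.h> names onto macOS's OSSwap* equivalents.
#define htole32(x) OSSwapHostToLittleInt32(x)
#define htole64(x) OSSwapHostToLittleInt64(x)
#define le32toh(x) OSSwapLittleToHostInt32(x)
#define le64toh(x) OSSwapLittleToHostInt64(x)
#else
#include <endian.h> // Linux already provides htole32 et al here.
#endif
```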

@mattklein123 (Member) left a comment


tiny nit: looks good otherwise. Thank you for breaking this out.


```
#ifdef __APPLE__
#include <libkern/OSByteOrder.h>
```

@mattklein123 (Member)

super minor OCD nit: del newline.

@zuercher (Member Author)

Fix format added it for me. Perhaps a newline-after-last-include thing? I can see if it's somehow specific to my env, but I used the CI image to run fix_format to avoid that very problem.

@mattklein123 (Member)

eh, no problem. Fix format is just messing with my OCD. 😉

@mattklein123 mattklein123 merged commit ab0289b into envoyproxy:master Aug 2, 2017
@zuercher zuercher deleted the osx-3-common-byte-order branch August 10, 2017 16:06
rshriram pushed a commit to rshriram/envoy that referenced this pull request Oct 30, 2018
Support Mixer HTTP filter to report sent.bytes and received.bytes attributes (envoyproxy#1372)

Automatic merge from submit-queue.

Support Mixer HTTP filter to report sent.bytes and received.bytes attributes

**What this PR does / why we need it**: Supports the Mixer HTTP filter sending the attributes "sent.bytes" and "received.bytes" in Report() calls. "sent.bytes" records the total response sent, in bytes, including response headers, body, and trailers; "received.bytes" records the total request received, in bytes, including request headers, body, and trailers.
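The accounting behind the two attributes is a running sum over everything the filter sees on the stream. A minimal sketch with hypothetical names; the real filter would hook these increments into its decode/encode callbacks rather than a `main()`:

```
#include <cstdint>
#include <iostream>

// Hypothetical holder for the two totals handed to Report() at stream end.
struct ReportData {
  uint64_t received_bytes = 0; // request headers + body + trailers
  uint64_t sent_bytes = 0;     // response headers + body + trailers
};

int main() {
  ReportData report;
  // Each decode callback adds the size of what just arrived from the client...
  report.received_bytes += 120; // request headers (illustrative size)
  report.received_bytes += 143; // request body
  // ...and each encode callback adds the size of what was sent back.
  report.sent_bytes += 95;  // response headers
  report.sent_bytes += 331; // response body
  std::cout << "received.bytes=" << report.received_bytes
            << " sent.bytes=" << report.sent_bytes << "\n";
}
```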

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes envoyproxy#1370 

**Special notes for your reviewer**:

**Release note**:
```release-note
NONE
```
mathetake pushed a commit that referenced this pull request Mar 3, 2026
…1878)

**Description**

This commit adds a translator that converts requests sent to the
`/anthropic/v1/messages` and `/v1/messages` endpoints into the OpenAI
schema for OpenAI-compatible backends (see the sketch below). It does
not matter whether the OpenAI schema backend natively supports the
endpoint (e.g. vLLM), since the translation should be a light/fast
enough process. This approach is also more versatile and future-proof
than simply passing the Anthropic Messages request through to a backend
that natively supports it, and it follows the already-existing
structure for adding translators, path processor factories, and schema
translation.
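To make the "light/fast" claim concrete, here is a minimal request-side sketch for the plain non-streaming case; every type and function name below is illustrative only, not the PR's actual code:

```
#include <string>
#include <vector>

// Hypothetical types for illustration; the real translator covers
// streaming, tool use, system prompts, and many more fields.
struct Message {
  std::string role;
  std::string content;
};

struct AnthropicMessagesRequest {
  std::string model;
  std::vector<Message> messages;
  int max_tokens = 0;
};

struct OpenAIChatRequest {
  std::string model;
  std::vector<Message> messages;
  int max_tokens = 0;
};

// For plain-text messages the request fields line up one-to-one, which
// is why the translation can be light/fast: rewrite the path to
// /v1/chat/completions and copy the fields across.
OpenAIChatRequest translateRequest(const AnthropicMessagesRequest& in) {
  return OpenAIChatRequest{in.model, in.messages, in.max_tokens};
}

int main() {
  AnthropicMessagesRequest req{"Qwen/Qwen2.5-0.5B-Instruct",
                               {{"user", "Say hello!"}}, 100};
  OpenAIChatRequest out = translateRequest(req);
  return out.max_tokens == 100 ? 0 : 1;
}
```

The response side is where the real mapping happens: OpenAI's `choices[0].message.content`, `finish_reason`, and `usage.prompt_tokens`/`completion_tokens` have to be rewritten into Anthropic's `content` blocks, `stop_reason`, and `usage.input_tokens`/`output_tokens`. That is the shape visible in the test output below, where the `chatcmpl-` response ids betray the OpenAI backend behind the Anthropic-format responses.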

A major example use case would be using AI Gateway to route requests
from Claude Code to several AI backends like locally hosted vLLM models
with LoRA adapters.

NOTE: vLLM is only used for local testing, as I do not have access to
compute. The intended goal of this PR is to support any OpenAI-compatible
backend/service behind an Anthropic interface.

**Related Issues/PRs (if applicable)**

Fixes #1372 
Fixes #1867 

**Special notes for reviewers (if applicable)**
Claude Code was used to write most of the tests, but they were verified. It
would also be nice if the maintainers could review the other PR #1843, as
some of the Anthropic apischema here can be updated once #1843 is
merged.

<details>

<summary>Functional Test Results</summary>

Test for anthropic endpoints for OpenAI schema backends that natively
support it:
```
$ curl -v http://localhost:8080/v1/messages   -H "Content-Type: application/json"   -d '{
    "model": "Qwen/Qwen2.5-0.5B-Instruct",
    "messages": [
      {"role": "user", "content": "Say hello!"}
    ],
    "max_tokens": 100
  }'
* Host localhost:8080 was resolved.
* IPv6: ::1
* IPv4: 127.0.0.1
*   Trying [::1]:8080...
* Connected to localhost (::1) port 8080
> POST /v1/messages HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/8.5.0
> Accept: */*
> Content-Type: application/json
> Content-Length: 143
> 
< HTTP/1.1 200 OK
< date: Fri, 20 Feb 2026 18:46:04 GMT
< server: uvicorn
< content-type: application/json
< content-length: 331
< 
* Connection #0 to host localhost left intact
{"id":"chatcmpl-36ec5b3d-4273-41e9-966b-ed742f7a93d1","type":"message","role":"assistant","content":[{"type":"text","text":"Hello! How can I assist you today?"}],"model":"Qwen/Qwen2.5-0.5B-Instruct","stop_reason":"end_turn","usage":{"cache_creation_input_tokens":0,"cache_read_input_tokens":0,"input_tokens":32,"output_tokens":10}}
```
```
$ curl -v http://localhost:8080/anthropic/v1/messages   -H "Content-Type: application/json"   -d '{
    "model": "Qwen/Qwen2.5-0.5B-Instruct",
    "messages": [
      {"role": "user", "content": "Say hello!"}
    ],
    "max_tokens": 100
  }'
* Host localhost:8080 was resolved.
* IPv6: ::1
* IPv4: 127.0.0.1
*   Trying [::1]:8080...
* Connected to localhost (::1) port 8080
> POST /anthropic/v1/messages HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/8.5.0
> Accept: */*
> Content-Type: application/json
> Content-Length: 143
> 
< HTTP/1.1 200 OK
< date: Fri, 20 Feb 2026 18:46:44 GMT
< server: uvicorn
< content-type: application/json
< content-length: 331
< 
* Connection #0 to host localhost left intact
{"id":"chatcmpl-f639ff32-4f89-48c5-b5b1-56878e641da6","type":"message","role":"assistant","content":[{"type":"text","text":"Hello! How can I assist you today?"}],"model":"Qwen/Qwen2.5-0.5B-Instruct","stop_reason":"end_turn","usage":{"cache_creation_input_tokens":0,"cache_read_input_tokens":0,"input_tokens":32,"output_tokens":10}}
```

Port Forward logs
```
$ kubectl port-forward -n envoy-gateway-system svc/$ENVOY_SERVICE 8080:80
Forwarding from 127.0.0.1:8080 -> 10080
Forwarding from [::1]:8080 -> 10080
Handling connection for 8080
Handling connection for 8080
Handling connection for 8080
```

vLLM Logs (for both requests)
```
(APIServer pid=141923) INFO:     Started server process [141923]
(APIServer pid=141923) INFO:     Waiting for application startup.
(APIServer pid=141923) INFO:     Application startup complete.
(APIServer pid=141923) INFO:     172.18.0.2:46854 - "POST /v1/chat/completions HTTP/1.1" 200 OK
(APIServer pid=141923) INFO 02-20 13:46:05 [loggers.py:257] Engine 000: Avg prompt throughput: 3.2 tokens/s, Avg generation throughput: 1.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
(APIServer pid=141923) INFO 02-20 13:46:15 [loggers.py:257] Engine 000: Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
(APIServer pid=141923) INFO:     172.18.0.2:47216 - "POST /v1/chat/completions HTTP/1.1" 200 OK
(APIServer pid=141923) INFO 02-20 13:46:45 [loggers.py:257] Engine 000: Avg prompt throughput: 3.2 tokens/s, Avg generation throughput: 1.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 25.0%
(APIServer pid=141923) INFO 02-20 13:46:55 [loggers.py:257] Engine 000: Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 25.0%
```

</details>

---------

Signed-off-by: Chang Min <changminbark@gmail.com>
Co-authored-by: Ignasi Barrera <ignasi@tetrate.io>