feat: change Prompt integer variants from u16 to u32 for future compatibility #392
Merged
Conversation
ishandhanani approved these changes on Jun 25, 2025
Hi @64bit - can we get a review on this? Would love to use this for https://github.com/ai-dynamo/dynamo
64bit approved these changes on Jun 29, 2025
Thank you for the detailed context, appreciate it! @paulhendricks
@ishandhanani glad you find it useful!
ifsheldon pushed a commit to ifsheldon/async-openai-wasm that referenced this pull request on Jun 29, 2025
jkoppel pushed a commit to up-to-speed/async-openai that referenced this pull request on Jul 8, 2025
gilljon added a commit to gilljon/async-openai that referenced this pull request on Aug 8, 2025:
* fix: readme example link (64bit#347)
* feat: Gemini openai compatibility (64bit#353)
  * fix: change `id` and `created` fields to Option types in response structs (looser deserialization, which benefits Gemini OpenAI compatibility)
  * fix: change `created` field to Option type in `ImagesResponse` struct for better deserialization
  * feat: add example for Gemini OpenAI compatibility with async_openai integration
  * fix: rolled back type changes in async-openai; added more examples using byot features
* Backoff when OpenAI returns 5xx (64bit#354)
* chore: Release
* Implement vector store search, retrieve file content operations (64bit#360)
  * Implement vector search API and vector file content API
  * Make `ids` in `ListVectorStoreFilesResponse` optional, as they can come back null when there are no files
  * Add `Default` derive to `RankingOptions`; make `CompoundFilter.type` and comparison type non-optional
  * Make compound filter a `Vec` of `VectorStoreSearchFilter`; implement `From` conversions for filters
  * Add vector store retrieval example, update example readme, add attributes to create vector store
* [Completions API] Add web search options (64bit#370), with follow-up review edits to async-openai/src/types/chat.rs and examples/completions-web-search/src/main.rs
* Add instructions option to speech request (64bit#374): add `instructions` field to the speech request; update async-openai/src/types/audio.rs and openapi.yaml
* feat: Add responses API (64bit#373)
  * Adds support for the OpenAI Responses API
  * feat: add custom input item (there are many possible input items in the Responses API; ideally these would have strict types, but for now a custom user-defined JSON value is used)
* chore: update readme; format code (64bit#377): add Responses to feature list; cargo fmt
* chore: Release
* fix web search options; skip serializing if none (64bit#379)
* added copyright material links, resolves 64bit#346 (64bit#380)
* add completed state (64bit#384)
* feat: adds Default to CompletionUsage (64bit#387)
* add flex service tier to chat completions (64bit#385)
* chore: Release
* Enable dyn dispatch by dyn Config objects (64bit#383): enable dynamic dispatch; update README with dyn dispatch example; add doc for dyn dispatch; review fixes to the test, the `Config` bound, and typos; remove the `Rc` impl
* Add missing voice Ballad to enum (64bit#388): add missing voice `Ballad`; update openapi.yaml
* feat: enhance realtime response types and audio transcription options (64bit#391)
  * Added `Cancelled` variant to `ResponseStatusDetail` enum for better handling of cancelled responses
  * Introduced `LogProb` struct to capture log probability information for transcribed tokens
  * Updated `ConversationItemInputAudioTranscriptionCompletedEvent` and `ConversationItemInputAudioTranscriptionDeltaEvent` to include optional `logprobs` for per-token log probability data
  * Enhanced `AudioTranscription` struct with optional fields for `language`, `model`, and `prompt` to improve transcription accuracy and customization
  * Added new `SemanticVAD` option in the `TurnDetection` enum to control model response eagerness
  * Expanded `RealtimeVoice` enum with additional voice options for more variety in audio responses
  * feat: update audio format enum values for consistency: changed `AudioFormat` variants to use underscores instead of hyphens in their serialized names (`G711ULAW` from `g711-ulaw` to `g711_ulaw`, `G711ALAW` from `g711-alaw` to `g711_alaw`)
  * feat: add auto-response options to VAD configurations
* feat: change Prompt integer variants from u16 to u32 for future compatibility (64bit#392)
* task: Add serialize impl for ApiError (64bit#393): adds the `serde::Serialize` derive macro to the `ApiError` type so that this error can be passed along the wire to clients for proxies
* refactor: adding missing fields from Responses API (64bit#394)
* remove .mime_str(application/octet-stream) (64bit#395)
* chore: Release

Co-authored-by: Yiyu Lin <[email protected]>, hzlinyiyu <[email protected]>, DarshanVanol <[email protected]>, Tinco Andringa <[email protected]>, Himanshu Neema <[email protected]>, Christopher Fraser <[email protected]>, Adam Benali <[email protected]>, Eric Christiansen <[email protected]>, Sam Lewis <[email protected]>, Spencer Bartholomew <[email protected]>, Jens Walter <[email protected]>, Paul Hendricks <[email protected]>, ifsheldon <[email protected]>, Jeff Registre <[email protected]>, Chris Raethke <[email protected]>, Thomas Harmon <[email protected]>
Summary

Updates the `Prompt` enum to use `u32` instead of `u16` for integer-based prompt representations. The change affects both the `Prompt::IntegerArray` and `Prompt::ArrayOfIntegerArray` variants, as well as the associated `impl_from_for_*` macro implementations.

Motivation
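As a minimal sketch of the affected variants, assuming the enum shape implied by the variant names in this PR (the real async-openai definition also carries serde derives and conversion macros):

```rust
// Sketch of the Prompt enum after this change. The string-based variants
// are untouched; only the integer variants widen from u16 to u32.
#[derive(Debug, PartialEq)]
enum Prompt {
    String(String),
    StringArray(Vec<String>),
    // Previously Vec<u16>: token IDs above 65,535 now fit.
    IntegerArray(Vec<u32>),
    // Previously Vec<Vec<u16>>, used for batched prompts.
    ArrayOfIntegerArray(Vec<Vec<u32>>),
}

fn main() {
    // A token ID beyond u16::MAX (65,535) is now representable,
    // as required by vocabularies larger than the GPT-2/GPT-3 era.
    let prompt = Prompt::IntegerArray(vec![50_256, 100_000]);
    if let Prompt::IntegerArray(ids) = &prompt {
        assert!(ids.iter().any(|&id| id > u16::MAX as u32));
    }
    println!("{:?}", prompt);
}
```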
The original use of `u16` may have been based on the assumption that token ID values would not exceed the vocabulary size of models like GPT-2 or GPT-3 (i.e. a maximum token ID of 50,256). While that assumption held previously, newer models frequently exceed this range. Allowing `u32` values enables inference libraries to use `u32` token IDs and larger vocabulary sizes without needing to truncate while batching.

I fully recognize this is a breaking change to the public API:

* Users of `Prompt::IntegerArray` or `Prompt::ArrayOfIntegerArray` with `u16` inputs will encounter compile-time errors.
* `u16` variant types must be updated to use `u32`.
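For callers hit by that compile-time error, the migration is mechanical. A hedged sketch (the `widen` helper is illustrative, not part of the crate):

```rust
// Existing u16 token IDs widen into u32 losslessly, so migrating callers
// need only a type change, never a value change.
fn widen(old_ids: Vec<u16>) -> Vec<u32> {
    // u32::from(u16) is an infallible, lossless conversion.
    old_ids.into_iter().map(u32::from).collect()
}

fn main() {
    let old_ids: Vec<u16> = vec![0, 50_256, u16::MAX];
    let new_ids = widen(old_ids);
    // Every value survives unchanged, including u16::MAX.
    assert_eq!(new_ids, vec![0, 50_256, 65_535]);
    println!("{:?}", new_ids);
}
```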
However, I think this is in line with the scope and mission of `async-openai`, as the OpenAI OpenAPI schema does not specify a maximum token ID value. And while users of `Prompt::IntegerArray` or `Prompt::ArrayOfIntegerArray` with `u16` inputs will need to update their code, the conversion from `u16` to `u32` is lossless and won't cause any runtime issues. The benefits of future-proofing and greater compatibility outweigh prioritizing the smaller vocabulary sizes of the GPT-2/GPT-3 era.

Thank you for your consideration!