
feat: change Prompt integer variants from u16 to u32 for future compatibility #392


Merged (1 commit) on Jun 29, 2025

Conversation

@paulhendricks (Contributor) commented on Jun 25, 2025

Summary

Updates the Prompt enum to use u32 instead of u16 for integer-based prompt representations. The change affects both Prompt::IntegerArray and Prompt::ArrayOfIntegerArray variants, as well as associated impl_from_for_* macro implementations.
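For reference, a minimal sketch of the affected variants after the change. Only the variant names `IntegerArray` and `ArrayOfIntegerArray` come from the PR description; the surrounding enum shape and derives are illustrative assumptions, not the crate's exact definition:

```rust
// Sketch of the updated Prompt variants; shape is illustrative.
#[derive(Debug, Clone, PartialEq)]
pub enum Prompt {
    String(String),
    StringArray(Vec<String>),
    IntegerArray(Vec<u32>),             // was Vec<u16>
    ArrayOfIntegerArray(Vec<Vec<u32>>), // was Vec<Vec<u16>>
}

fn main() {
    // Token IDs above u16::MAX (65_535) now fit without truncation.
    let prompt = Prompt::IntegerArray(vec![100_257, 199_999]);
    if let Prompt::IntegerArray(ids) = &prompt {
        assert!(ids.iter().all(|&id| id > u32::from(u16::MAX)));
    }
}
```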

Motivation

The original use of u16 may have been based on the assumption that token ID values would not exceed the vocabulary size of models like GPT-2 or GPT-3 (i.e. 50256 tokens).

While that assumption held previously, newer models frequently exceed this range. Allowing u32 values would enable inference libraries to use u32 token IDs and larger vocabulary sizes without needing to truncate while batching.
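To make the range issue concrete, a quick check. The vocabulary figures below are approximate public numbers used for illustration, not values taken from this PR:

```rust
fn main() {
    let u16_max = u32::from(u16::MAX); // 65_535
    // Approximate vocabulary sizes (illustrative figures):
    let gpt2_vocab: u32 = 50_257;   // GPT-2/GPT-3 era: fits in u16
    let cl100k_vocab: u32 = 100_000; // roughly; exceeds u16
    let o200k_vocab: u32 = 200_000;  // roughly; exceeds u16
    assert!(gpt2_vocab <= u16_max);
    assert!(cl100k_vocab > u16_max);
    assert!(o200k_vocab > u16_max);
    println!("u16::MAX = {u16_max}: too small for newer tokenizers");
}
```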

I fully recognize this is a breaking change to the public API:

  • Users of Prompt::IntegerArray or Prompt::ArrayOfIntegerArray with u16 inputs will encounter compile-time errors
  • Pattern matches and constructors relying on the u16 variant types must be updated to use u32

However, I believe this is in line with the scope and mission of async-openai, as the OpenAI OpenAPI schema does not specify a maximum token ID value.

While users of Prompt::IntegerArray or Prompt::ArrayOfIntegerArray with u16 inputs will need to update, converting u16 to u32 is lossless and won't cause any runtime issues. The benefits of future-proofing and broader compatibility outweigh optimizing for the smaller vocabulary sizes of the GPT-2/GPT-3 era.
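The migration path for existing u16 callers is a single lossless widening conversion; a sketch:

```rust
fn main() {
    // Existing u16 token IDs widen to u32 losslessly via From/Into.
    let old_ids: Vec<u16> = vec![0, 50_256, u16::MAX];
    let new_ids: Vec<u32> = old_ids.iter().map(|&id| u32::from(id)).collect();
    assert_eq!(new_ids, vec![0u32, 50_256, 65_535]);

    // The reverse direction (u32 -> u16) is fallible, which is exactly
    // why u16 can't represent modern token IDs without truncation.
    assert!(u16::try_from(100_000u32).is_err());
}
```

Because `u32: From<u16>` is implemented in the standard library, most callers can migrate with `.into()` or by changing a literal's type annotation.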

Thank you for your consideration!

@ishandhanani commented:

Hi @64bit - can we get a review on this? Would love to use this for https://github.com/ai-dynamo/dynamo

@64bit (Owner) left a comment:


Thank you for detailed context, appreciate it! @paulhendricks

@ishandhanani glad you find it useful!

@64bit 64bit merged commit 7cb57e8 into 64bit:main Jun 29, 2025
ifsheldon pushed a commit to ifsheldon/async-openai-wasm that referenced this pull request Jun 29, 2025
jkoppel pushed a commit to up-to-speed/async-openai that referenced this pull request Jul 8, 2025
gilljon added a commit to gilljon/async-openai that referenced this pull request Aug 8, 2025
* fix: readme example link (64bit#347)

Co-authored-by: hzlinyiyu <[email protected]>

* feat: Gemini openai compatibility (64bit#353)

* fix: change id and created fields to Option types in response structs (allows looser deserialization, which helps Gemini's OpenAI compatibility)

* fix: change created field to Option type in ImagesResponse struct for better deserialization

* feat: add example for Gemini OpenAI compatibility with async_openai integration

* fix: rolled back type changes in async-openai, added more examples using byot features

* Backoff when OpenAI returns 5xx (64bit#354)

* chore: Release

* Implement vector store search, retrieve file content operations (64bit#360)

* Implement vector search api

* Make ids in ListVectorStoreFilesResponse optional, as they can come back null when there are no files

* Implement vector file content api

* Add Default derive to RankingOptions, make CompoundFilter.type non-optional

* Made comparison type non-optional

* Make compound filter a Vec of VectorStoreSearchFilter

* Implement from conversions for filters

* Add vector store retrieval example

* Update example readme

* Add attributes to create vector store

* Update examples/vector-store-retrieval/src/main.rs

* Update examples/vector-store-retrieval/src/main.rs

---------

Co-authored-by: Himanshu Neema <[email protected]>

* [Completions API] Add web search options (64bit#370)

* [Completions API] Add web search options

* Update async-openai/src/types/chat.rs

* Update async-openai/src/types/chat.rs

* Update async-openai/src/types/chat.rs

* Update async-openai/src/types/chat.rs

* Update async-openai/src/types/chat.rs

* Update async-openai/src/types/chat.rs

* Update async-openai/src/types/chat.rs

* Update async-openai/src/types/chat.rs

* Update async-openai/src/types/chat.rs

* Update examples/completions-web-search/src/main.rs

* Update examples/completions-web-search/src/main.rs

---------

Co-authored-by: Himanshu Neema <[email protected]>

* Add instructions option to speech request (64bit#374)

* Add instructions field to speech request

* Update async-openai/src/types/audio.rs

* Update openapi.yaml

---------

Co-authored-by: Himanshu Neema <[email protected]>

* feat: Add responses API (64bit#373)

* feat: Add responses API

Adds support for the OpenAI responses API

* feat: Add custom input item

There's a lot of possible input items in the responses APIs. Ideally
it'd be nice to have strict types, but for now we can use a custom user
defined json value.

* chore: update readme; format code (64bit#377)

* add Responses to feature list

* cargo fmt

* chore: Release

* fix web search options; skip serializing if none (64bit#379)

* added copyright material links, Resolves 64bit#346 (64bit#380)

* add completed state (64bit#384)

* feat: adds Default to CompletionUsage (64bit#387)

* add flex service tier to chat completions (64bit#385)

* chore: Release

* Enable dyn dispatch by dyn Config objects (64bit#383)

* enable dynamic dispatch

* update README with dyn dispatch example

* add doc for dyn dispatch

* Update test

Co-authored-by: Himanshu Neema <[email protected]>

* Update Config bound

Co-authored-by: Himanshu Neema <[email protected]>

* remove Rc impl

Co-authored-by: Himanshu Neema <[email protected]>

* Fix typo

Co-authored-by: Himanshu Neema <[email protected]>

* Fix typo

Co-authored-by: Himanshu Neema <[email protected]>

* Update doc

Co-authored-by: Himanshu Neema <[email protected]>

* Update README

Co-authored-by: Himanshu Neema <[email protected]>

---------

Co-authored-by: Himanshu Neema <[email protected]>

* Add missing voice Ballad to enum (64bit#388)

* Add missing voice Ballad to enum

* Update openapi.yaml

* Update openapi.yaml

---------

Co-authored-by: Himanshu Neema <[email protected]>

* feat: enhance realtime response types and audio transcription options (64bit#391)

* feat: enhance realtime response types and audio transcription options

- Added `Cancelled` variant to `ResponseStatusDetail` enum for better handling of cancelled responses.
- Introduced `LogProb` struct to capture log probability information for transcribed tokens.
- Updated `ConversationItemInputAudioTranscriptionCompletedEvent` and `ConversationItemInputAudioTranscriptionDeltaEvent` to include optional `logprobs` for per-token log probability data.
- Enhanced `AudioTranscription` struct with optional fields for `language`, `model`, and `prompt` to improve transcription accuracy and customization.
- Added new `SemanticVAD` option in the `TurnDetection` enum to control model response eagerness.
- Expanded `RealtimeVoice` enum with additional voice options for more variety in audio responses.

* feat: update audio format enum values for consistency

- Changed enum variants for `AudioFormat` to use underscores instead of hyphens in their serialized names.
- Updated `G711ULAW` from `g711-ulaw` to `g711_ulaw` and `G711ALAW` from `g711-alaw` to `g711_alaw` for improved clarity and adherence to naming conventions.

* feat: add auto-response options to VAD configurations

---------

Co-authored-by: Chris Raethke <[email protected]>

* feat: change Prompt integer variants from u16 to u32 for future compatibility (64bit#392)

* task: Add serialize impl for ApiError (64bit#393)

* task: Add serialize impl for ApiError

- Adds the `serde::Serialize` derive macro to the `ApiError` type so
  that this error can be passed along the wire to clients for proxies

* Update async-openai/Cargo.toml

* Update async-openai/Cargo.toml

---------

Co-authored-by: Himanshu Neema <[email protected]>

* refactor: adding missing fields from Responses API (64bit#394)

* remove .mime_str(application/octet-stream) (64bit#395)

* chore: Release

---------

Co-authored-by: Yiyu Lin <[email protected]>
Co-authored-by: hzlinyiyu <[email protected]>
Co-authored-by: DarshanVanol <[email protected]>
Co-authored-by: Tinco Andringa <[email protected]>
Co-authored-by: Himanshu Neema <[email protected]>
Co-authored-by: Christopher Fraser <[email protected]>
Co-authored-by: Adam Benali <[email protected]>
Co-authored-by: Eric Christiansen <[email protected]>
Co-authored-by: Sam Lewis <[email protected]>
Co-authored-by: Spencer Bartholomew <[email protected]>
Co-authored-by: Jens Walter <[email protected]>
Co-authored-by: Paul Hendricks <[email protected]>
Co-authored-by: ifsheldon <[email protected]>
Co-authored-by: Jeff Registre <[email protected]>
Co-authored-by: Chris Raethke <[email protected]>
Co-authored-by: Chris Raethke <[email protected]>
Co-authored-by: Paul Hendricks <[email protected]>
Co-authored-by: Thomas Harmon <[email protected]>