
Commit ba0ba88

Authored by gilljonattila, linhzlinyiyu-netease, DarshanVanol and tinco
Sync Upstream 483c84f (#4)
* fix: readme example link (64bit#347) (Co-authored-by: hzlinyiyu <[email protected]>)
* feat: Gemini openai compatibility (64bit#353)
  * fix: change id and created fields to Option types in response structs (makes deserialization looser, which benefits Gemini OpenAI compatibility)
  * fix: change created field to Option type in ImagesResponse struct for better deserialization
  * feat: add example for Gemini OpenAI compatibility with async_openai integration
  * fix: rolled back type changes in async-openai, added more examples using byot features
* Backoff when OpenAI returns 5xx (64bit#354)
* chore: Release
* Implement vector store search, retrieve file content operations (64bit#360)
  * Implement vector search api
  * Make ids in ListVectorStoreFilesResponse optional, as they can come back null when there are no files
  * Implement vector file content api
  * Add Default derive to RankingOptions, make CompoundFilter.type non-optional
  * Made comparison type non-optional
  * Make compound filter a Vec of VectorStoreSearchFilter
  * Implement From conversions for filters
  * Add vector store retrieval example and update example readme
  * Add attributes to create vector store
* [Completions API] Add web search options (64bit#370), with review updates to async-openai/src/types/chat.rs and examples/completions-web-search/src/main.rs
* Add instructions option to speech request (64bit#374): add instructions field to speech request; update async-openai/src/types/audio.rs and openapi.yaml
* feat: Add responses API (64bit#373)
  * Adds support for the OpenAI Responses API
  * feat: add custom input item. There are many possible input items in the Responses API; ideally these would have strict types, but for now a custom user-defined JSON value is used.
* chore: update readme; format code (64bit#377): add Responses to feature list; cargo fmt
* chore: Release
* fix web search options; skip serializing if none (64bit#379)
* added copyright material links, resolves 64bit#346 (64bit#380)
* add completed state (64bit#384)
* feat: adds Default to CompletionUsage (64bit#387)
* add flex service tier to chat completions (64bit#385)
* chore: Release
* Enable dyn dispatch by dyn Config objects (64bit#383)
  * enable dynamic dispatch
  * update README with dyn dispatch example and add doc for dyn dispatch
  * update test and Config bound, remove Rc impl, fix typos (review suggestions co-authored by Himanshu Neema)
* Add missing voice Ballad to enum (64bit#388); update openapi.yaml
* feat: enhance realtime response types and audio transcription options (64bit#391)
  * Added `Cancelled` variant to `ResponseStatusDetail` enum for better handling of cancelled responses
  * Introduced `LogProb` struct to capture log probability information for transcribed tokens
  * Updated `ConversationItemInputAudioTranscriptionCompletedEvent` and `ConversationItemInputAudioTranscriptionDeltaEvent` to include optional `logprobs` for per-token log probability data
  * Enhanced `AudioTranscription` struct with optional fields for `language`, `model`, and `prompt` to improve transcription accuracy and customization
  * Added new `SemanticVAD` option in the `TurnDetection` enum to control model response eagerness
  * Expanded `RealtimeVoice` enum with additional voice options for more variety in audio responses
  * feat: update audio format enum values for consistency. Changed `AudioFormat` variants to use underscores instead of hyphens in their serialized names: `G711ULAW` from `g711-ulaw` to `g711_ulaw` and `G711ALAW` from `g711-alaw` to `g711_alaw`.
  * feat: add auto-response options to VAD configurations
* feat: change Prompt integer variants from u16 to u32 for future compatibility (64bit#392)
* task: Add serialize impl for ApiError (64bit#393). Adds the `serde::Serialize` derive macro to the `ApiError` type so that this error can be passed along the wire to clients by proxies.
* refactor: adding missing fields from Responses API (64bit#394)
* remove .mime_str(application/octet-stream) (64bit#395)
* chore: Release

Co-authored-by: Yiyu Lin <[email protected]>
Co-authored-by: hzlinyiyu <[email protected]>
Co-authored-by: DarshanVanol <[email protected]>
Co-authored-by: Tinco Andringa <[email protected]>
Co-authored-by: Himanshu Neema <[email protected]>
Co-authored-by: Christopher Fraser <[email protected]>
Co-authored-by: Adam Benali <[email protected]>
Co-authored-by: Eric Christiansen <[email protected]>
Co-authored-by: Sam Lewis <[email protected]>
Co-authored-by: Spencer Bartholomew <[email protected]>
Co-authored-by: Jens Walter <[email protected]>
Co-authored-by: Paul Hendricks <[email protected]>
Co-authored-by: ifsheldon <[email protected]>
Co-authored-by: Jeff Registre <[email protected]>
Co-authored-by: Chris Raethke <[email protected]>
Co-authored-by: Thomas Harmon <[email protected]>
1 parent 0b9c9ff commit ba0ba88

39 files changed (+3990 −1061 lines)

Cargo.toml

Lines changed: 1 addition & 1 deletion

```diff
@@ -5,4 +5,4 @@ default-members = ["async-openai", "async-openai-*"]
 resolver = "2"
 
 [workspace.package]
-rust-version = "1.75"
+rust-version = "1.75"
```

async-openai-macros/Cargo.toml

Lines changed: 1 addition & 1 deletion

```diff
@@ -16,4 +16,4 @@ proc-macro = true
 [dependencies]
 syn = { version = "2.0", features = ["full"] }
 quote = "1.0"
-proc-macro2 = "1.0"
+proc-macro2 = "1.0"
```

async-openai/Cargo.toml

Lines changed: 1 addition & 1 deletion

```diff
@@ -1,6 +1,6 @@
 [package]
 name = "async-openai"
-version = "0.28.0"
+version = "0.29.0"
 authors = ["Himanshu Neema"]
 categories = ["api-bindings", "web-programming", "asynchronous"]
 keywords = ["openai", "async", "openapi", "ai"]
```

async-openai/README.md

Lines changed: 20 additions & 2 deletions

````diff
@@ -36,6 +36,7 @@
 - [x] Moderations
 - [x] Organizations | Administration (partially implemented)
 - [x] Realtime (Beta) (partially implemented)
+- [x] Responses (partially implemented)
 - [x] Uploads
 - Bring your own custom types for Request or Response objects.
 - SSE streaming on available APIs
@@ -140,13 +141,30 @@ This can be useful in many scenarios:
 - To avoid verbose types.
 - To escape deserialization errors.
 
-Visit [examples/bring-your-own-type](https://github.com/64bit/async-openai/tree/main/examples/bring-your-own-type) directory to learn more.
+Visit [examples/bring-your-own-type](https://github.com/64bit/async-openai/tree/main/examples/bring-your-own-type)
+directory to learn more.
+
+## Dynamic Dispatch for Different Providers
+
+For any struct that implements `Config` trait, you can wrap it in a smart pointer and cast the pointer to `dyn Config`
+trait object, then your client can accept any wrapped configuration type.
+
+For example,
+
+```rust
+use async_openai::{Client, config::Config, config::OpenAIConfig};
+
+let openai_config = OpenAIConfig::default();
+// You can use `std::sync::Arc` to wrap the config as well
+let config = Box::new(openai_config) as Box<dyn Config>;
+let client: Client<Box<dyn Config> > = Client::with_config(config);
+```
 
 ## Contributing
 
 Thank you for taking the time to contribute and improve the project. I'd be happy to have you!
 
-All forms of contributions, such as new features requests, bug fixes, issues, documentation, testing, comments, [examples](../examples) etc. are welcome.
+All forms of contributions, such as new features requests, bug fixes, issues, documentation, testing, comments, [examples](https://github.com/64bit/async-openai/tree/main/examples) etc. are welcome.
 
 A good starting point would be to look at existing [open issues](https://github.com/64bit/async-openai/issues).
````
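The casting step in the README snippet above is the whole trick: once the concrete config is erased behind a trait object, one client type can serve any provider. The following self-contained sketch reproduces the pattern without the async-openai crate; `ProviderConfig`, `OpenAILikeConfig`, and `FakeClient` are illustrative stand-ins for `Config`, `OpenAIConfig`, and `Client`.

```rust
// A stand-in trait mirroring the role of async-openai's `Config`.
trait ProviderConfig {
    fn api_base(&self) -> String;
}

#[derive(Clone)]
struct OpenAILikeConfig {
    api_base: String,
}

impl ProviderConfig for OpenAILikeConfig {
    fn api_base(&self) -> String {
        self.api_base.clone()
    }
}

// Delegate through the smart pointer so Box<dyn ProviderConfig> itself
// satisfies the trait bound, mirroring async-openai's approach.
impl ProviderConfig for Box<dyn ProviderConfig> {
    fn api_base(&self) -> String {
        self.as_ref().api_base()
    }
}

// A toy client generic over its config, like Client<C: Config>.
struct FakeClient<C: ProviderConfig> {
    config: C,
}

impl<C: ProviderConfig> FakeClient<C> {
    fn with_config(config: C) -> Self {
        Self { config }
    }
    fn url(&self, path: &str) -> String {
        format!("{}{}", self.config.api_base(), path)
    }
}

fn boxed_client_url() -> String {
    let config = OpenAILikeConfig { api_base: "https://api.openai.com/v1".to_string() };
    // Cast to a trait object: the client type no longer names the concrete config.
    let config = Box::new(config) as Box<dyn ProviderConfig>;
    let client: FakeClient<Box<dyn ProviderConfig>> = FakeClient::with_config(config);
    client.url("/chat/completions")
}

fn main() {
    println!("{}", boxed_client_url());
}
```

The same delegation works for `Arc<dyn ProviderConfig>`, which is what makes the trait-object client cheaply cloneable across tasks.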

async-openai/src/client.rs

Lines changed: 21 additions & 1 deletion

```diff
@@ -8,7 +8,7 @@ use serde::{de::DeserializeOwned, Serialize};
 
 use crate::{
     config::{Config, OpenAIConfig},
-    error::{map_deserialization_error, OpenAIError, WrappedError},
+    error::{map_deserialization_error, ApiError, OpenAIError, WrappedError},
     file::Files,
     image::Images,
     moderation::Moderations,
@@ -167,6 +167,11 @@ impl<C: Config> Client<C> {
         Projects::new(self)
     }
 
+    /// To call [Responses] group related APIs using this client.
+    pub fn responses(&self) -> Responses<C> {
+        Responses::new(self)
+    }
+
     pub fn config(&self) -> &C {
         &self.config
     }
@@ -341,6 +346,21 @@ impl<C: Config> Client<C> {
             .map_err(OpenAIError::Reqwest)
             .map_err(backoff::Error::Permanent)?;
 
+        if status.is_server_error() {
+            // OpenAI does not guarantee server errors are returned as JSON so we cannot deserialize them.
+            let message: String = String::from_utf8_lossy(&bytes).into_owned();
+            tracing::warn!("Server error: {status} - {message}");
+            return Err(backoff::Error::Transient {
+                err: OpenAIError::ApiError(ApiError {
+                    message,
+                    r#type: None,
+                    param: None,
+                    code: None,
+                }),
+                retry_after: None,
+            });
+        }
+
         // Deserialize response body from either error object or actual response object
         if !status.is_success() {
             let wrapped_error: WrappedError = serde_json::from_slice(bytes.as_ref())
```
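The hunk above encodes a retry policy: 5xx responses become `backoff::Error::Transient` (retried with exponential backoff), while other failures stay permanent. A minimal sketch of that classification, with stdlib only; `RetryDecision` and `classify_status` are illustrative names, not part of async-openai's API.

```rust
#[derive(Debug, PartialEq)]
enum RetryDecision {
    Transient, // server error: worth retrying with backoff
    Permanent, // client error: retrying will not help
    Success,
}

fn classify_status(status: u16) -> RetryDecision {
    match status {
        200..=299 => RetryDecision::Success,
        // OpenAI does not guarantee a JSON body on 5xx, which is why the
        // real code keeps the raw bytes as the error message instead of
        // deserializing into WrappedError.
        500..=599 => RetryDecision::Transient,
        _ => RetryDecision::Permanent,
    }
}

fn main() {
    println!("503 -> {:?}", classify_status(503));
    println!("401 -> {:?}", classify_status(401));
}
```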

async-openai/src/config.rs

Lines changed: 79 additions & 1 deletion

```diff
@@ -15,7 +15,7 @@ pub const OPENAI_BETA_HEADER: &str = "OpenAI-Beta";
 
 /// [crate::Client] relies on this for every API call on OpenAI
 /// or Azure OpenAI service
-pub trait Config: Clone {
+pub trait Config: Send + Sync {
     fn headers(&self) -> HeaderMap;
     fn url(&self, path: &str) -> String;
     fn query(&self) -> Vec<(&str, &str)>;
@@ -25,6 +25,32 @@ pub trait Config: Clone {
     fn api_key(&self) -> &SecretString;
 }
 
+/// Macro to implement Config trait for pointer types with dyn objects
+macro_rules! impl_config_for_ptr {
+    ($t:ty) => {
+        impl Config for $t {
+            fn headers(&self) -> HeaderMap {
+                self.as_ref().headers()
+            }
+            fn url(&self, path: &str) -> String {
+                self.as_ref().url(path)
+            }
+            fn query(&self) -> Vec<(&str, &str)> {
+                self.as_ref().query()
+            }
+            fn api_base(&self) -> &str {
+                self.as_ref().api_base()
+            }
+            fn api_key(&self) -> &SecretString {
+                self.as_ref().api_key()
+            }
+        }
+    };
+}
+
+impl_config_for_ptr!(Box<dyn Config>);
+impl_config_for_ptr!(std::sync::Arc<dyn Config>);
+
 /// Configuration for OpenAI API
 #[derive(Clone, Debug, Deserialize)]
 #[serde(default)]
@@ -211,3 +237,55 @@ impl Config for AzureConfig {
         vec![("api-version", &self.api_version)]
     }
 }
+
+#[cfg(test)]
+mod test {
+    use super::*;
+    use crate::types::{
+        ChatCompletionRequestMessage, ChatCompletionRequestUserMessage, CreateChatCompletionRequest,
+    };
+    use crate::Client;
+    use std::sync::Arc;
+    #[test]
+    fn test_client_creation() {
+        unsafe { std::env::set_var("OPENAI_API_KEY", "test") }
+        let openai_config = OpenAIConfig::default();
+        let config = Box::new(openai_config.clone()) as Box<dyn Config>;
+        let client = Client::with_config(config);
+        assert!(client.config().url("").ends_with("/v1"));
+
+        let config = Arc::new(openai_config) as Arc<dyn Config>;
+        let client = Client::with_config(config);
+        assert!(client.config().url("").ends_with("/v1"));
+        let cloned_client = client.clone();
+        assert!(cloned_client.config().url("").ends_with("/v1"));
+    }
+
+    async fn dynamic_dispatch_compiles(client: &Client<Box<dyn Config>>) {
+        let _ = client.chat().create(CreateChatCompletionRequest {
+            model: "gpt-4o".to_string(),
+            messages: vec![ChatCompletionRequestMessage::User(
+                ChatCompletionRequestUserMessage {
+                    content: "Hello, world!".into(),
+                    ..Default::default()
+                },
+            )],
+            ..Default::default()
+        });
+    }
+
+    #[tokio::test]
+    async fn test_dynamic_dispatch() {
+        let openai_config = OpenAIConfig::default();
+        let azure_config = AzureConfig::default();
+
+        let azure_client = Client::with_config(Box::new(azure_config.clone()) as Box<dyn Config>);
+        let oai_client = Client::with_config(Box::new(openai_config.clone()) as Box<dyn Config>);
+
+        let _ = dynamic_dispatch_compiles(&azure_client).await;
+        let _ = dynamic_dispatch_compiles(&oai_client).await;
+
+        let _ = tokio::spawn(async move { dynamic_dispatch_compiles(&azure_client).await });
+        let _ = tokio::spawn(async move { dynamic_dispatch_compiles(&oai_client).await });
+    }
+}
```
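The `impl_config_for_ptr!` macro above stamps out one delegating trait impl per smart-pointer type, so `Box<dyn Config>` and `Arc<dyn Config>` satisfy the `Config` bound themselves. Here is a self-contained sketch of the same technique with a toy trait; `Greet`, `English`, and `impl_greet_for_ptr!` are illustrative stand-ins, not async-openai names.

```rust
use std::sync::Arc;

// Toy trait standing in for `Config`.
trait Greet {
    fn hello(&self) -> String;
}

struct English;
impl Greet for English {
    fn hello(&self) -> String {
        "hello".to_string()
    }
}

// One macro generates a delegating impl for each pointer type,
// mirroring impl_config_for_ptr! in the hunk above.
macro_rules! impl_greet_for_ptr {
    ($t:ty) => {
        impl Greet for $t {
            fn hello(&self) -> String {
                // Forward through the pointer to the inner trait object.
                self.as_ref().hello()
            }
        }
    };
}

impl_greet_for_ptr!(Box<dyn Greet>);
impl_greet_for_ptr!(Arc<dyn Greet>);

// Generic code bounded on the trait now accepts the pointer types directly.
fn greet<G: Greet>(g: &G) -> String {
    g.hello()
}

fn greet_all() -> (String, String) {
    let boxed = Box::new(English) as Box<dyn Greet>;
    let shared = Arc::new(English) as Arc<dyn Greet>;
    (greet(&boxed), greet(&shared))
}

fn main() {
    let (b, a) = greet_all();
    println!("{b} {a}");
}
```

Without those delegating impls, `greet(&boxed)` would not compile, because `Box<dyn Greet>` is a distinct type that does not automatically implement `Greet`.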

async-openai/src/error.rs

Lines changed: 5 additions & 5 deletions

```diff
@@ -1,5 +1,5 @@
 //! Errors originating from API calls, parsing responses, and reading-or-writing to the file system.
-use serde::Deserialize;
+use serde::{Deserialize, Serialize};
 
 #[derive(Debug, thiserror::Error)]
 pub enum OpenAIError {
@@ -28,7 +28,7 @@ pub enum OpenAIError {
 }
 
 /// OpenAI API returns error object on failure
-#[derive(Debug, Deserialize, Clone)]
+#[derive(Debug, Serialize, Deserialize, Clone)]
 pub struct ApiError {
     pub message: String,
     pub r#type: Option<String>,
@@ -62,9 +62,9 @@ impl std::fmt::Display for ApiError {
 }
 
 /// Wrapper to deserialize the error object nested in "error" JSON key
-#[derive(Debug, Deserialize)]
-pub(crate) struct WrappedError {
-    pub(crate) error: ApiError,
+#[derive(Debug, Deserialize, Serialize)]
+pub struct WrappedError {
+    pub error: ApiError,
 }
 
 /// Flat error object returned by vLLM API (not wrapped in "error")
```
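Making `WrappedError` public and serializable lets a proxy re-emit an upstream error in the same `{"error": {...}}` envelope clients already parse. A toy sketch of that envelope, built by hand so it runs without serde; `ToyApiError` and `to_wrapped_json` are illustrative, and escaping here handles only quotes and backslashes.

```rust
// Minimal stand-in for ApiError with a required message and optional type.
struct ToyApiError {
    message: String,
    r#type: Option<String>,
}

fn escape(s: &str) -> String {
    s.replace('\\', "\\\\").replace('"', "\\\"")
}

// Emit the nested {"error": {...}} shape that WrappedError models.
fn to_wrapped_json(err: &ToyApiError) -> String {
    let type_json = match &err.r#type {
        Some(t) => format!("\"{}\"", escape(t)),
        None => "null".to_string(),
    };
    format!(
        "{{\"error\":{{\"message\":\"{}\",\"type\":{}}}}}",
        escape(&err.message),
        type_json
    )
}

fn main() {
    let err = ToyApiError { message: "rate limit".to_string(), r#type: None };
    println!("{}", to_wrapped_json(&err));
}
```

In the real crate this shape comes for free from `#[derive(Serialize)]` on `WrappedError` and `ApiError`.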

async-openai/src/lib.rs

Lines changed: 18 additions & 0 deletions

```diff
@@ -94,6 +94,22 @@
 //! # });
 //!```
 //!
+//! ## Dynamic Dispatch for Different Providers
+//!
+//! For any struct that implements `Config` trait, you can wrap it in a smart pointer and cast the pointer to `dyn Config`
+//! trait object, then your client can accept any wrapped configuration type.
+//!
+//! For example,
+//! ```
+//! use async_openai::{Client, config::Config, config::OpenAIConfig};
+//! unsafe { std::env::set_var("OPENAI_API_KEY", "only for doc test") }
+//!
+//! let openai_config = OpenAIConfig::default();
+//! // You can use `std::sync::Arc` to wrap the config as well
+//! let config = Box::new(openai_config) as Box<dyn Config>;
+//! let client: Client<Box<dyn Config> > = Client::with_config(config);
+//! ```
+//!
 //! ## Microsoft Azure
 //!
 //! ```
@@ -146,6 +162,7 @@ mod project_api_keys;
 mod project_service_accounts;
 mod project_users;
 mod projects;
+mod responses;
 mod runs;
 mod steps;
 mod threads;
@@ -178,6 +195,7 @@ pub use project_api_keys::ProjectAPIKeys;
 pub use project_service_accounts::ProjectServiceAccounts;
 pub use project_users::ProjectUsers;
 pub use projects::Projects;
+pub use responses::Responses;
 pub use runs::Runs;
 pub use steps::Steps;
 pub use threads::Threads;
```

async-openai/src/responses.rs

Lines changed: 29 additions & 0 deletions

```diff
@@ -0,0 +1,29 @@
+use crate::{
+    config::Config,
+    error::OpenAIError,
+    types::responses::{CreateResponse, Response},
+    Client,
+};
+
+/// Given text input or a list of context items, the model will generate a response.
+///
+/// Related guide: [Responses API](https://platform.openai.com/docs/guides/responses)
+pub struct Responses<'c, C: Config> {
+    client: &'c Client<C>,
+}
+
+impl<'c, C: Config> Responses<'c, C> {
+    /// Constructs a new Responses client.
+    pub fn new(client: &'c Client<C>) -> Self {
+        Self { client }
+    }
+
+    /// Creates a model response for the given input.
+    #[crate::byot(
+        T0 = serde::Serialize,
+        R = serde::de::DeserializeOwned
+    )]
+    pub async fn create(&self, request: CreateResponse) -> Result<Response, OpenAIError> {
+        self.client.post("/responses", request).await
+    }
+}
```
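The new file follows the crate's API-group pattern: a struct borrowing the client, reached via an accessor (`client.responses()`), with endpoint methods on it. A self-contained, synchronous sketch of that shape; `MiniClient` and `Echoes` are illustrative stand-ins, and the real `create()` is async and POSTs to `/responses`.

```rust
struct MiniClient {
    api_base: String,
}

// API group struct: borrows the client for the duration of the call chain,
// like Responses<'c, C> above.
struct Echoes<'c> {
    client: &'c MiniClient,
}

impl MiniClient {
    // Group accessor, mirroring Client::responses().
    fn echoes(&self) -> Echoes<'_> {
        Echoes { client: self }
    }
}

impl<'c> Echoes<'c> {
    // Stand-in for `self.client.post("/responses", request).await`.
    fn create(&self, input: &str) -> String {
        format!("POST {}/responses: {input}", self.client.api_base)
    }
}

fn main() {
    let client = MiniClient { api_base: "https://api.openai.com/v1".to_string() };
    let out = client.echoes().create("hi");
    println!("{out}");
}
```

Because the group struct only holds a shared borrow, calls compose as `client.echoes().create(...)` without any cloning or ownership transfer.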

async-openai/src/types/audio.rs

Lines changed: 8 additions & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -40,6 +40,7 @@ pub enum Voice {
4040
#[default]
4141
Alloy,
4242
Ash,
43+
Ballad,
4344
Coral,
4445
Echo,
4546
Fable,
@@ -188,10 +189,16 @@ pub struct CreateSpeechRequest {
188189
/// One of the available [TTS models](https://platform.openai.com/docs/models/tts): `tts-1` or `tts-1-hd`
189190
pub model: SpeechModel,
190191

191-
/// The voice to use when generating the audio. Supported voices are `alloy`, `ash`, `coral`, `echo`, `fable`, `onyx`, `nova`, `sage` and `shimmer`.
192+
/// The voice to use when generating the audio. Supported voices are `alloy`, `ash`, `coral`, `echo`, `fable`, `onyx`, `nova`, `sage`, `shimmer` and `verse`.
193+
192194
/// Previews of the voices are available in the [Text to speech guide](https://platform.openai.com/docs/guides/text-to-speech#voice-options).
193195
pub voice: Voice,
194196

197+
/// Control the voice of your generated audio with additional instructions.
198+
/// Does not work with `tts-1` or `tts-1-hd`.
199+
#[serde(skip_serializing_if = "Option::is_none")]
200+
pub instructions: Option<String>,
201+
195202
/// The format to audio in. Supported formats are `mp3`, `opus`, `aac`, `flac`, `wav`, and `pcm`.
196203
#[serde(skip_serializing_if = "Option::is_none")]
197204
pub response_format: Option<SpeechResponseFormat>,
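The `#[serde(skip_serializing_if = "Option::is_none")]` attribute on the new `instructions` field means the key is omitted from the request body entirely when unset, rather than sent as `null`. A hand-rolled sketch of that behavior, runnable without serde; `ToySpeechRequest` and `to_json` are illustrative, and the model string is only an example value.

```rust
struct ToySpeechRequest {
    model: String,
    voice: String,
    instructions: Option<String>,
}

fn to_json(req: &ToySpeechRequest) -> String {
    let mut fields = vec![
        format!("\"model\":\"{}\"", req.model),
        format!("\"voice\":\"{}\"", req.voice),
    ];
    // Mirror skip_serializing_if: only emit the key when a value is present.
    if let Some(instructions) = &req.instructions {
        fields.push(format!("\"instructions\":\"{}\"", instructions));
    }
    format!("{{{}}}", fields.join(","))
}

fn main() {
    let plain = ToySpeechRequest {
        model: "example-tts-model".into(),
        voice: "ballad".into(),
        instructions: None,
    };
    // No "instructions" key at all when the option is None.
    println!("{}", to_json(&plain));

    let styled = ToySpeechRequest {
        model: "example-tts-model".into(),
        voice: "ballad".into(),
        instructions: Some("Speak slowly and calmly.".into()),
    };
    println!("{}", to_json(&styled));
}
```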
