Raise exceptions #344


Merged: 11 commits, Nov 5, 2023
24 changes: 18 additions & 6 deletions README.md
@@ -161,9 +161,9 @@ client.models.retrieve(id: "text-ada-001")
- text-babbage-001
- text-curie-001

### ChatGPT
### Chat

ChatGPT is a model that can be used to generate text in a conversational style. You can use it to [generate a response](https://platform.openai.com/docs/api-reference/chat/create) to a sequence of [messages](https://platform.openai.com/docs/guides/chat/introduction):
GPT is a model that can be used to generate text in a conversational style. You can use it to [generate a response](https://platform.openai.com/docs/api-reference/chat/create) to a sequence of [messages](https://platform.openai.com/docs/guides/chat/introduction):

```ruby
response = client.chat(
    parameters: {
        model: "gpt-3.5-turbo", # Required.
        messages: [{ role: "user", content: "Hello!" }], # Required.
        temperature: 0.7,
    })
puts response.dig("choices", 0, "message", "content")
# => "Hello! How may I assist you today?"
```

### Streaming ChatGPT
### Streaming Chat

[Quick guide to streaming ChatGPT with Rails 7 and Hotwire](https://gist.github.com/alexrudall/cb5ee1e109353ef358adb4e66631799d)
[Quick guide to streaming Chat with Rails 7 and Hotwire](https://gist.github.com/alexrudall/cb5ee1e109353ef358adb4e66631799d)

You can stream from the API in real time, which can be much faster and can create a more engaging user experience. Pass a [Proc](https://ruby-doc.org/core-2.6/Proc.html) (or any object with a `#call` method) to the `stream` parameter to receive the stream of text chunks as they are generated. Each time one or more chunks are received, the proc will be called once with each chunk, parsed as a Hash. If OpenAI returns an error, `ruby-openai` will pass that to your proc as a Hash.
You can stream from the API in real time, which can be much faster and can create a more engaging user experience. Pass a [Proc](https://ruby-doc.org/core-2.6/Proc.html) (or any object with a `#call` method) to the `stream` parameter to receive the stream of completion chunks as they are generated. Each time one or more chunks are received, the proc will be called once with each chunk, parsed as a Hash. If OpenAI returns an error, `ruby-openai` will raise a Faraday error.

```ruby
client.chat(
    parameters: {
        model: "gpt-3.5-turbo", # Required.
        messages: [{ role: "user", content: "Describe a character called Anna!" }], # Required.
        temperature: 0.7,
        stream: proc do |chunk, _bytesize|
            print chunk.dig("choices", 0, "delta", "content")
        end
    })
# => "Anna is a young woman in her mid-twenties, with wavy chestnut hair that falls to her shoulders..."
```

Note: the API docs state that token usage is included in the streamed chat chunk objects, but this doesn't currently appear to be the case. To count tokens while streaming, try `OpenAI.rough_token_count` or [tiktoken_ruby](https://github.com/IAPark/tiktoken_ruby).
Note: the OpenAI API currently does not report token usage for streaming responses. To count tokens while streaming, try `OpenAI.rough_token_count` or [tiktoken_ruby](https://github.com/IAPark/tiktoken_ruby). Each call to the stream proc appears to correspond to a single token, so you can also count the number of proc calls to estimate the completion token count.
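
For illustration, a minimal sketch combining both rough approaches; the client setup is assumed from earlier in the README, and the one-token-per-call correspondence is only believed to hold, not guaranteed:

```ruby
client = OpenAI::Client.new # assumes the access token is configured as shown above

prompt = "Hello!"
completion_calls = 0

client.chat(
  parameters: {
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: prompt }],
    stream: proc do |chunk, _bytesize|
      completion_calls += 1 # assumption: roughly one token per proc call
      print chunk.dig("choices", 0, "delta", "content")
    end
  }
)

puts "\nPrompt tokens (approx.): #{OpenAI.rough_token_count(prompt)}"
puts "Completion tokens (approx.): #{completion_calls}"
```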

### Functions

@@ -455,6 +455,18 @@ puts response["text"]
# => "Transcription of the text"
```

#### Errors

HTTP errors can be caught like this:

```ruby
begin
OpenAI::Client.new.models.retrieve(id: "text-ada-001")
rescue Faraday::Error => e
raise "Got a Faraday error: #{e}"
end
```
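
Because these errors come from Faraday's `raise_error` middleware, you can also rescue the status-specific subclasses and read the HTTP details off the exception. A sketch, assuming a recent Faraday version (the subclass names are Faraday's, not this gem's):

```ruby
begin
  OpenAI::Client.new.chat(
    parameters: {
      model: "gpt-3.5-turbo", # placeholder request for illustration
      messages: [{ role: "user", content: "Hello!" }]
    }
  )
rescue Faraday::UnauthorizedError => e
  # 401: bad or missing API key; the parsed error body is on the exception.
  puts "Unauthorized (#{e.response[:status]}): #{e.response[:body]}"
rescue Faraday::BadRequestError => e
  # 400: the request was malformed or rejected.
  puts "Bad request: #{e.response[:body]}"
rescue Faraday::Error => e
  # Catch-all for other HTTP failures (rate limits, 5xx, timeouts...).
  puts "OpenAI API error: #{e.message}"
end
```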

## Development

After checking out the repo, run `bin/setup` to install dependencies. You can run `bin/console` for an interactive prompt that will allow you to experiment.
20 changes: 7 additions & 13 deletions lib/openai/http.rb
@@ -50,34 +50,28 @@ def to_json(string)
  # For each chunk, the inner user_proc is called giving it the JSON object. The JSON object could
  # be a data object or an error object as described in the OpenAI API documentation.
  #
- # If the JSON object for a given data or error message is invalid, it is ignored.
- #
  # @param user_proc [Proc] The inner proc to call for each JSON object in the chunk.
  # @return [Proc] An outer proc that iterates over a raw stream, converting it to JSON.
  def to_json_stream(user_proc:)
    parser = EventStreamParser::Parser.new

    proc do |chunk, _bytes, env|
      if env && env.status != 200
-       emit_json(json: chunk, user_proc: user_proc)
-     else
-       parser.feed(chunk) do |_type, data|
-         emit_json(json: data, user_proc: user_proc) unless data == "[DONE]"
-       end
+       raise_error = Faraday::Response::RaiseError.new
+       raise_error.on_complete(env.merge(body: JSON.parse(chunk)))
      end
-     end
-   end
-
-   def emit_json(json:, user_proc:)
-     user_proc.call(JSON.parse(json))
-   rescue JSON::ParserError
-     # Ignore invalid JSON.
+
+     parser.feed(chunk) do |_type, data|
+       user_proc.call(JSON.parse(data)) unless data == "[DONE]"
+     end
    end
  end

  def conn(multipart: false)
    Faraday.new do |f|
      f.options[:timeout] = @request_timeout
      f.request(:multipart) if multipart
+     f.response :raise_error
    end
  end
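
Taken together, these changes mean a failed request now raises instead of being passed to your code as an error Hash, including mid-stream. A minimal caller-side sketch (the key and model are placeholders):

```ruby
client = OpenAI::Client.new(access_token: "not-a-real-key") # placeholder token

begin
  client.chat(
    parameters: {
      model: "gpt-3.5-turbo",
      messages: [{ role: "user", content: "Hi!" }],
      stream: proc do |chunk, _bytesize|
        print chunk.dig("choices", 0, "delta", "content")
      end
    }
  )
rescue Faraday::UnauthorizedError => e
  # Raised by the RaiseError middleware that to_json_stream builds for non-200 responses.
  puts "Stream failed with status #{e.response[:status]}"
end
```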

175 changes: 0 additions & 175 deletions spec/fixtures/cassettes/finetune_completions_i_love_mondays.yml

This file was deleted.

125 changes: 0 additions & 125 deletions spec/fixtures/cassettes/finetune_completions_i_love_mondays_create.yml

This file was deleted.
