# Update README.md with More Thorough Documentation #46

Open
wants to merge 1 commit into
base: main
Choose a base branch
from
Open
Changes from all commits
Commits
File filter

Filter by extension

Filter by extension

Conversations
Failed to load comments.
Loading
Jump to
Jump to file
Failed to load files.
Loading
Diff view
Diff view
94 changes: 77 additions & 17 deletions README.md
Original file line number Diff line number Diff line change
Expand Up @@ -31,48 +31,108 @@ OPENAI_ORGANIZATION="YOUR-ORGANIZATION"
Create an `OpenAIKit.Client` by passing a configuration.

~~~~swift
import AsyncHTTPClient
import NIO
import OpenAIKit

// Marked ObservableObject so the service can be used with the @EnvironmentObject property wrapper
final class OpenAiService: ObservableObject {
    let openAIClient: OpenAIKit.Client

    // Stored properties so they can be read in init before the client is assigned
    let apiKey = ProcessInfo.processInfo.environment["OPENAI_API_KEY"]!
    let organization = ProcessInfo.processInfo.environment["OPENAI_ORGANIZATION"]!

    init() {
        let eventLoopGroup = MultiThreadedEventLoopGroup(numberOfThreads: 1)
        // Generally we advise creating a single HTTPClient for the lifecycle of your application and shutting it down on application close.
        let httpClient = HTTPClient(eventLoopGroupProvider: .shared(eventLoopGroup))
        let configuration = Configuration(apiKey: apiKey, organization: organization)
        openAIClient = OpenAIKit.Client(httpClient: httpClient, configuration: configuration)
    }
}
~~~~
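The example above never shuts its `HTTPClient` down. async-http-client expects an explicit shutdown once the client is no longer needed, so one option is to do it when the service is deallocated. A sketch, assuming the service also stores the `HTTPClient` it creates (e.g. in a `private let httpClient: HTTPClient` property set in `init`):

~~~~swift
// Sketch: shut the HTTPClient down when the service goes away.
// Assumes OpenAiService keeps its own reference to the HTTPClient it created.
deinit {
    // It's important to shut down the httpClient after all requests are done,
    // even if one failed. See: https://github.com/swift-server/async-http-client
    try? httpClient.syncShutdown()
}
~~~~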


## Using the API

The `OpenAIKit.Client` implements a handful of methods for interacting with the OpenAI API.

### Basic Chat

Here's a simple function that sends the prompt "Hi!" to the API and returns the model's response as a String:

~~~~swift
import OpenAIKit

func getResponse() async throws -> String {
    let completion = try await openAIClient.completions.create(
        model: Model.GPT3.davinci,
        prompts: ["Hi!"]
    )

    // OpenAiError is a custom error type created for this example; it is not included with OpenAIKit
    guard let newMessage = completion.choices.first?.text else { throw OpenAiError.noResponseFound }

    return newMessage
}
~~~~

The code above uses the davinci GPT-3 model. While it works for basic use cases, it is not the best option for full conversations, because the model has no awareness of previous messages once a new one is sent. For full conversations, a good option is the gpt-3.5-turbo model shown below. For more information on the models offered by OpenAI, many of which are available through OpenAIKit, see: https://platform.openai.com/docs/models/overview
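If you want to see which models your API key can access before picking one, OpenAIKit also exposes the Models endpoint (listed under "What's Implemented" below). A minimal sketch, assuming the `openAIClient` from the setup section and that `list()` returns the available models with an `id` property:

~~~~swift
// Sketch: list the models available to your API key.
let models = try await openAIClient.models.list()
for model in models {
    print(model.id)
}
~~~~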

### Advanced Chat

Here's an example of functionality in a SwiftUI view model that sends a message to ChatGPT and fetches a response. Both the user's message and ChatGPT's response are stored in an array of type `[Chat.Message]`, which is passed to the `create` method's `messages` parameter so that ChatGPT can consider the context of the conversation before responding:

~~~~swift
import OpenAIKit

@Published var messages = [Chat.Message]()

func sendMessage(withText text: String) async {
    do {
        messages.append(Chat.Message.user(content: text))
        let response = try await getResponse()
        messages.append(response)
    } catch {
        print(error)
    }
}

func getResponse() async throws -> Chat.Message {
    let completion = try await openAIClient.chats.create(
        model: Model.GPT3.gpt3_5Turbo,
        messages: messages
    )

    // OpenAiError is a custom error type created for this example; it is not included with OpenAIKit
    guard let newMessage = completion.choices.first?.message else { throw OpenAiError.noResponseFound }
    // A successful response uses the assistant(content:) case of Chat.Message, where content is the message text

    return newMessage
}
~~~~
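To drive the view model above from UI, a view can call `sendMessage(withText:)` from a button. A sketch (the view and view-model names are illustrative, not part of OpenAIKit, and it assumes `Chat.Message` exposes its text via a `content` property):

~~~~swift
import SwiftUI

// Sketch: a minimal chat screen driving the view model above.
// ChatViewModel is assumed to be the ObservableObject containing
// the messages array and sendMessage(withText:) shown above.
struct ChatView: View {
    @StateObject private var viewModel = ChatViewModel()
    @State private var draft = ""

    var body: some View {
        VStack {
            List(viewModel.messages.indices, id: \.self) { index in
                Text(viewModel.messages[index].content)
            }
            HStack {
                TextField("Message", text: $draft)
                Button("Send") {
                    let text = draft
                    draft = ""
                    Task { await viewModel.sendMessage(withText: text) }
                }
            }
        }
    }
}
~~~~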

### Tokens

Every interaction with the OpenAI API consumes a certain number of tokens. By default, the maximum number of tokens allowed per interaction is 16, which only permits very short responses (by default, responses that require more than 16 tokens will be truncated). To raise this limit, use the `maxTokens` parameter of the `create` method like this:

~~~~swift
import OpenAIKit

let completion = try await openAIClient.completions.create(
    model: Model.GPT3.davinci,
    prompts: ["Write a haiku"],
    maxTokens: 300
)
~~~~

Your ideal `maxTokens` value depends on your use case. To find a value that keeps most responses from being truncated, call the `create` method a few times, steadily increasing `maxTokens` and printing out `completion.usage.totalTokens`. While doing this, keep in mind that you are charged for API usage based on how many tokens you consume. For a thorough explanation of rate limits and how they work in conjunction with tokens, see: https://platform.openai.com/docs/guides/rate-limits/overview
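The tuning loop described above can be sketched like this, building on the completion call from the previous example:

~~~~swift
// Sketch: print token usage while experimenting with maxTokens values.
let completion = try await openAIClient.completions.create(
    model: Model.GPT3.davinci,
    prompts: ["Write a haiku"],
    maxTokens: 300
)

// usage reports the token counts consumed by this call
print("Total tokens used: \(completion.usage.totalTokens)")
~~~~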

### What's Implemented
* [x] [Chat](https://platform.openai.com/docs/api-reference/chat)
* [x] [Models](https://beta.openai.com/docs/api-reference/models)