```r
if (!require(usethis)) install.packages("usethis")

usethis::edit_r_environ()                    # global
# OR #
usethis::edit_r_environ(scope = "project")   # project specific
```
Summary
The OpenAI Playground is a tool that provides an environment for experimenting with reinforcement learning (RL) algorithms. With the OpenAI Playground, users can design an AI agent that learns to play classic Atari titles as well as custom game environments, build a chatbot, and more. The interface is not particularly streamlined, but it is an easy and much faster way to try things out than the ChatGPT site.
The package documentation can be found here: https://irudnyts.github.io/openai/index.html.
Setting things up:
To get access to the OpenAI API, you first need to obtain an API key. Please create a new account here or, if applicable, log in with your existing one or an alternative provider (e.g. Google).
Once you have signed up and are logged in, please open this page, which is where your API keys are managed. If you do not have one yet, create a new one, make the whole key visible via the drop-down arrow, and copy it to your clipboard.
Next, you need to decide at which level you want to store this API key:
Environment level | usethis call | description |
---|---|---|
project | usethis::edit_r_environ(scope = "project") | The specific RStudio project needs to be open to query the key, as the environment variable is stored per project. |
global | usethis::edit_r_environ() | With this setting, the environment variable is available in all your RStudio sessions (for the respective user). |
Add the OPENAI_API_KEY to your environment.
In the file that opens, add your token with the following syntax, replacing the x-es with your personal key:
OPENAI_API_KEY = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
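After saving the file, restart R (or re-read the file) so the new variable is picked up. A quick sanity check using only base R, assuming the global ~/.Renviron location (adjust the path if you chose the project scope):

```r
# Re-read the environment file without restarting R
readRenviron("~/.Renviron")

# Should return TRUE once the key is set; avoids printing the key itself
nchar(Sys.getenv("OPENAI_API_KEY")) > 0
```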
Application
Available models
First we load the library and then list all available models; for better readability, only part of the returned data is shown here.
```r
library(openai)
list_models()
```
id | object | created (Unix timestamp) | owned_by |
---|---|---|---|
babbage | model | 1649358449 | openai |
davinci | model | 1649359874 | openai |
gpt-3.5-turbo-0301 | model | 1677649963 | openai |
text-davinci-003 | model | 1669599635 | openai-internal |
babbage-code-search-code | model | 1651172509 | openai-dev |
text-similarity-babbage-001 | model | 1651172505 | openai-dev |
text-davinci-001 | model | 1649364042 | openai |
ada | model | 1649357491 | openai |
curie-instruct-beta | model | 1649364042 | openai |
babbage-code-search-text | model | 1651172509 | openai-dev |
babbage-similarity | model | 1651172505 | openai-dev |
gpt-3.5-turbo | model | 1677610602 | openai |
code-davinci-002 | model | 1649880485 | openai |
code-search-babbage-text-001 | model | 1651172507 | openai-dev |
text-embedding-ada-002 | model | 1671217299 | openai-internal |
code-cushman-001 | model | 1656081837 | openai |
whisper-1 | model | 1677532384 | openai-internal |
code-search-babbage-code-001 | model | 1651172507 | openai-dev |
audio-transcribe-deprecated | model | 1674776185 | openai-internal |
text-ada-001 | model | 1649364042 | openai |
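If you are only interested in a subset (say, the GPT-3.5 family), you can filter the returned table. The sketch below assumes the result behaves like a data frame with an id column, as in the output above; depending on the package version the table may be nested under a data element:

```r
models <- list_models()

# Some versions return the table nested under $data
if (!is.data.frame(models) && !is.null(models$data)) models <- models$data

# Keep only the GPT-3.5 models
subset(as.data.frame(models), grepl("gpt-3.5", id))
```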
Ask a model for a text completion
This can easily be fine-tuned beforehand on the OpenAI website.
```r
text <- openai::create_completion(
  model = "text-davinci-003",
  prompt = "Write a little story about a lucky little programmer who was at the right time at the right spot to discover something novel",
  max_tokens = 4000
)

knitr::raw_html(text$choices$text)
```
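The call returns a list mirroring the API response: the generated text sits in text$choices$text (as used above) and, assuming the package passes the response through unchanged, token counts should be available under text$usage. A minimal sketch for inspecting the result:

```r
# Inspect the returned choices (generated text plus metadata)
str(text$choices)

# cat() preserves the \n line breaks when printing to the console
cat(text$choices$text)
```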
Thanks for reading this far!
Regarding prompt design, I can only recommend working through the following link yourself, as I haven't gotten any further yet: the OpenAI Guide on text completion.
To test how authentic your text is, you could try out the OpenAI text classifier.
And if you happen to know a way to get the returned HTML text nicely formatted according to the line breaks, please feel free to leave a comment.