Command to create a table of contents (writes a markdown link for every `.md` file in the repo to `links.md`):

```bash
find . -name "*.md" -type f | sed 's/^\.\///' | sed 's/\.md$//' | awk -F/ '{print "- [" $NF "](./" $0 ".md)"}' > links.md
```
A prompt is text input that describes your problem or request.

Some examples of prompts:
- Who is Michael Jordan?

Example of answering a question with context:
- Please give me the recipe? (this prompt only makes sense together with context, e.g. a dish mentioned earlier in the conversation)

ChatGPT was trained on data up to September 2021.
Expedia plugin example prompt:
I live in New York and I want to fly to Lisbon on September 29th. I want to stay for 7 days, the flight should be a direct flight, I want to arrive in the afternoon or later, the flight back should be in the morning.
Bing Chat button conversation styles:
- Creative
- Balanced (default)
- Precise
Bing page context: Bing can use the content of the currently open page as context for the prompt.
Bing Compose feature:
- Drafts (shows multiple versions of the response to the given prompt)
- Google it (searches the web for the prompt)
Skip a few sections…
Installing the OpenAI package:

```bash
pip install openai
```
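Note: the examples in these notes use the pre-1.0 `openai` Python SDK; the `openai.ChatCompletion` interface was removed in openai 1.0+, so with a newer version the snippets below will not run as written. One way to pin an older version:

```bash
pip install "openai<1.0"
```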
Activating the virtual environment in VS Code:

Open the Command Palette (Ctrl+Shift+P), then select the Python: Select Interpreter command. From the list, select the virtual environment in your project folder that starts with .env.

Run Terminal: Create New Integrated Terminal (Ctrl+Shift+` or from the Command Palette). This creates a terminal and automatically activates the virtual environment by running its activation script.
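If you prefer doing this from a plain terminal instead of VS Code, a typical sequence looks like the sketch below (the folder name `.venv` is just an assumption, not taken from the notes above):

```bash
# Create the virtual environment in the project folder
python3 -m venv .venv

# Activate it (macOS/Linux)
source .venv/bin/activate

# Activate it (Windows PowerShell)
# .venv\Scripts\Activate.ps1
```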
Basic Setup for Chat Completions
```python
import os
import openai
from dotenv import load_dotenv

# Load environment variables from .env file
load_dotenv()
openai.api_key = os.getenv("OPENAI_API_KEY")

completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
)

print(completion.choices[0].message.content)
```
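The `load_dotenv()` call above assumes the `python-dotenv` package is installed and that a `.env` file exists in the project folder; a minimal sketch of both (the key value is a placeholder, use your own API key):

```bash
pip install python-dotenv
```

```
# .env
OPENAI_API_KEY=your-api-key-here
```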
Temperature

The temperature parameter (0 to 2, default 1) controls randomness: lower values give more focused, deterministic output, higher values give more varied, creative output.
```python
import os
import openai
from dotenv import load_dotenv

# Load environment variables from .env file
load_dotenv()
openai.api_key = os.getenv("OPENAI_API_KEY")

userInput = input("Enter your message: ")

completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": userInput},
    ],
    temperature=0.9,
)

print(completion.choices[0].message.content)
```
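A small sketch (reusing the setup above) that sends the same prompt at a low and a high temperature to make the effect visible; the prompt text is just an example:

```python
# Compare the same prompt at two different temperatures.
for temp in (0.0, 0.9):
    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Suggest a name for a coffee shop."}],
        temperature=temp,
    )
    print(temp, "->", completion.choices[0].message.content)
```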
Max Tokens

The max_tokens parameter limits how many tokens the model may generate for the response; when the limit is reached the output is cut off and finish_reason is set to "length".
```python
import os
import openai
from dotenv import load_dotenv

# Load environment variables from .env file
load_dotenv()
openai.api_key = os.getenv("OPENAI_API_KEY")

userInput = input("Enter your message: ")

completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": userInput},
    ],
    temperature=0.9,
    max_tokens=10,
)

print(completion.choices[0].message.content)
```
Example of output with 10 max tokens:

```
$ python3 app.py
Enter your message: what is a banana?
A banana is a tropical fruit that comes in a
```
Completions Object

The ChatCompletion.create call returns more than just the message text; printing the whole completion object shows its id, model, choices, and token usage:
```python
completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": userInput},
    ],
    temperature=0.9,
    max_tokens=100,
)

print(completion)
```
```json
{
  "id": "chatcmpl-84GS5hZFuorURgfvkaaIdSTg2q7Y4",
  "object": "chat.completion",
  "created": 1696027245,
  "model": "gpt-3.5-turbo-0613",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Python lists are a type of data structure that allow you to store and organize multiple items in a single variable. Lists in Python are ordered and mutable, which means you can change, add, or remove elements from them.\n\nHere is an example of how to create a list in Python:\n\n```\nmy_list = [1, 2, 3, 4, 5]\n```\n\nIn this example, `my_list` is a list that contains the numbers 1, 2,"
      },
      "finish_reason": "length"
    }
  ],
  "usage": {
    "prompt_tokens": 22,
    "completion_tokens": 100,
    "total_tokens": 122
  }
}
```
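Besides the message text, the other fields of the completion object can be read directly; a short sketch using the same pre-1.0 SDK object as above:

```python
# finish_reason is "length" here because max_tokens cut the reply off
print(completion.choices[0].finish_reason)

# Token usage for the request (prompt + completion)
print(completion.usage.prompt_tokens, completion.usage.completion_tokens, completion.usage.total_tokens)
```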
Roles
By defining roles, the API allows for more structured and dynamic interactions, enabling developers to shape the behavior of the model in various conversational scenarios. The three roles are system (sets the assistant's overall behavior), user (the end user's messages), and assistant (the model's previous replies, which can be fed back in to keep conversation history).
```python
messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": userInput},
],
```
A simple chat loop that keeps the whole conversation history in the messages list, so the model sees the previous turns:

```python
import os
import openai
from dotenv import load_dotenv

# Load environment variables from .env file
load_dotenv()

# Set the OpenAI API key from the environment variable
openai.api_key = os.getenv("OPENAI_API_KEY")

# Initialize an empty list to store the conversation messages
messages = []

# Infinite loop to keep the chat session active
while True:
    # Get input from the user
    userInput = input("You: ")

    # Append the user's input to the messages list
    messages.append({"role": "user", "content": userInput})

    # Send a request to OpenAI's GPT-3.5 model with the conversation history
    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
        temperature=0,
    )

    # Extract the assistant's response from the API's response
    response = completion.choices[0].message.content

    # Append the assistant's response to the messages list
    messages.append({"role": "assistant", "content": response})

    # Print the assistant's response
    print('RESPONSE:', response)
```
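One thing the loop above does not handle is the conversation history growing past the model's context window. A simple, hypothetical mitigation is to keep only the most recent messages (real applications usually count tokens instead):

```python
MAX_MESSAGES = 20  # hypothetical cap, not taken from the notes above

def trim_history(messages, max_messages=MAX_MESSAGES):
    # Keep only the most recent messages so requests stay small.
    return messages[-max_messages:]

# Inside the loop, before calling openai.ChatCompletion.create:
# messages = trim_history(messages)
```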
Example of using the system role to turn the model into a text-summarization chatbot:

```python
import os
import openai
from dotenv import load_dotenv

# Load environment variables from .env file
load_dotenv()
openai.api_key = os.getenv("OPENAI_API_KEY")

userInput = input("You: ")

completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a text summarization chatbot. Your goal is to summarize the text that is given to you by the user."},
        {"role": "user", "content": userInput},
    ],
    temperature=0,
)

response = completion.choices[0].message.content
print('Summarized Text: ', response)
```
Output:

```
$ python3 app.py
You: A large language model is a sophisticated type of artificial intelligence model specifically designed to handle tasks related to human language. Trained on vast amounts of text data, often encompassing billions of words from diverse sources such as books, articles, and websites, these models absorb the intricacies of language, including grammar, semantics, and context. Through this extensive training, they learn to recognize patterns, nuances, and even cultural references, enabling them to generate, understand, and respond to natural language queries with a high degree of accuracy. Their deep understanding of language allows them to produce text that is coherent, contextually relevant, and often indistinguishable from human-written content. As a result, they are increasingly used in a variety of applications, from chatbots and virtual assistants to content generation and language translation.
Summarized Text: A large language model is an advanced AI model designed to handle tasks related to human language. These models are trained on vast amounts of text data and can understand grammar, semantics, and context. They can generate, understand, and respond to natural language queries accurately. Their deep understanding of language allows them to produce coherent and contextually relevant text that is often indistinguishable from human-written content. As a result, they are used in various applications such as chatbots, virtual assistants, content generation, and language translation.
```