AI

AI Chat-GPT & LLMs

Command to create table of contents: find . -name "*.md" -type f | sed 's/^\.\///' | sed 's/\.md$//' | awk -F/ '{print "- [" $NF "](./" $0 ".md)"}' > links.md

Table of Contents

Chat-GPT (Chat Generative Pre-trained Transformer) and Large Language Models (LLMs) like GPT-3 are artificial intelligence models designed for natural language processing. Here’s a summary of how they work:

  1. Pre-training:
  1. Transformer Architecture:
  1. Tokenization:
  1. Attention Mechanism:
  1. Fine-tuning:
  1. Inference:
  1. Contextual Understanding:
  1. Limitations:
  1. Ethical and Bias Concerns:
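The attention mechanism from the list above can be sketched in plain Python: each token's output is a weighted average of all value vectors, with the weights coming from a softmax over scaled query-key dot products. This is a toy illustration of the idea, not any library's actual implementation.

```python
import math

def softmax(xs):
    # Numerically stable softmax: subtract the max before exponentiating
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention over lists of equal-length vectors."""
    d = len(keys[0])
    outputs = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        weights = softmax(scores)
        # The output is the attention-weighted average of the value vectors
        outputs.append([sum(w * v[i] for w, v in zip(weights, values))
                        for i in range(len(values[0]))])
    return outputs

# Toy example: 2 tokens with 2-dimensional embeddings
q = k = v = [[1.0, 0.0], [0.0, 1.0]]
out = attention(q, k, v)
print(out)
```

Each output row is a convex combination of the value rows, which is why attention lets every token "look at" every other token in the sequence.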

Prompts:

Example of answering a question with context:

Chat-GPT was trained on data up to September 2021


Chat-GPT Plugins:

Expedia Plugin:

I live in New York and I want to fly to Lisbon on September 29th. I want to stay for 7 days, the flight should be a direct flight, I want to arrive in the afternoon or later, the flight back should be in the morning.

Expedia Plugin



Google Bard & Microsoft Bing Chat:

Bing GPT

Bing page context: Bing Page Context

Bing Compose feature: Bing Compose


Google Bard:

Google Bard

Features:
- Drafts (shows multiple versions of the response to the given prompt)
- google it (searches the web for the prompt)

Comparison



Prompt Engineering:

Prompt Engineering



Skip a few sections…



Chat GPT API:

Chat GPT API

Chat Reference

Installing the openai package

pip install openai

Activating virtual environment

Open the Command Palette (Ctrl+Shift+P), then select Python: Select Interpreter. From the list, select the virtual environment in your project folder that starts with .env.

Run Terminal: Create New Integrated Terminal (Ctrl+Shift+` or from the Command Palette), which creates a terminal and automatically activates the virtual environment by running its activation script.

Basic Setup for Chat Completions

import os
import openai
from dotenv import load_dotenv

# Load environment variables from .env file
load_dotenv()

openai.api_key = os.getenv("OPENAI_API_KEY")

completion = openai.ChatCompletion.create(
  model="gpt-3.5-turbo",
  messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
  ]
)

print(completion.choices[0].message.content)
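The load_dotenv() call above (which requires the python-dotenv package, installed with pip install python-dotenv) reads variables from a .env file in the project root. A minimal example, with a placeholder value rather than a real key:

```
OPENAI_API_KEY=sk-your-key-here
```

Keeping the key in .env (and out of version control) avoids hard-coding credentials in the script.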

Temperature

import os
import openai
from dotenv import load_dotenv

# Load environment variables from .env file
load_dotenv()

openai.api_key = os.getenv("OPENAI_API_KEY")

userInput = input("Enter your message: ")

completion = openai.ChatCompletion.create(
  model="gpt-3.5-turbo",
  messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": userInput},
  ],
  temperature=0.9,
)

print(completion.choices[0].message.content)
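Temperature rescales the model's token probabilities before sampling: values near 0 make the most likely token dominate (near-deterministic output), while values near 1 and above flatten the distribution (more varied output). A toy pure-Python illustration of that rescaling, not OpenAI's actual sampling code:

```python
import math

def apply_temperature(logits, temperature):
    """Turn logits into probabilities after dividing by the temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    s = sum(exps)
    return [e / s for e in exps]

logits = [2.0, 1.0, 0.5]

low = apply_temperature(logits, 0.1)   # near-deterministic: first token dominates
high = apply_temperature(logits, 2.0)  # flatter: tokens are much closer in probability

print(low)
print(high)
```

With temperature 0.1 the first token gets essentially all the probability mass; with temperature 2.0 the three tokens end up much closer together, which is why high temperatures produce more "creative" (and less predictable) responses.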

Max Tokens


import os
import openai
from dotenv import load_dotenv

# Load environment variables from .env file
load_dotenv()

openai.api_key = os.getenv("OPENAI_API_KEY")

userInput = input("Enter your message: ")

completion = openai.ChatCompletion.create(
  model="gpt-3.5-turbo",
  messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": userInput},
  ],
  temperature=0.9,
  max_tokens=10,
)

print(completion.choices[0].message.content)

Example of output with max_tokens=10:

$ python3 app.py
Enter your message: what is a bananna?
A banana is a tropical fruit that comes in a

Completions Object

completion = openai.ChatCompletion.create(
  model="gpt-3.5-turbo",
  messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": userInput},
  ],
  temperature=0.9,
  max_tokens=100,
)

print(completion)
{
  "id": "chatcmpl-84GS5hZFuorURgfvkaaIdSTg2q7Y4",
  "object": "chat.completion",
  "created": 1696027245,
  "model": "gpt-3.5-turbo-0613",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Python lists are a type of data structure that allow you to store and organize multiple items in a single variable. Lists in Python are ordered and mutable, which means you can change, add, or remove elements from them.\n\nHere is an example of how to create a list in Python:\n\n```\nmy_list = [1, 2, 3, 4, 5]\n```\n\nIn this example, `my_list` is a list that contains the numbers 1, 2,"
      },
      "finish_reason": "length"
    }
  ],
  "usage": {
    "prompt_tokens": 22,
    "completion_tokens": 100,
    "total_tokens": 122
  }
}
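Two fields of the completion object are worth checking programmatically: usage (which determines billing) and finish_reason ("length" means the reply was cut off by max_tokens, "stop" means the model finished on its own). A small sketch over a hand-written dict shaped like the response printed above, not a live API call:

```python
# Hand-written dict mirroring the response shape shown above
completion = {
    "choices": [{"index": 0,
                 "message": {"role": "assistant", "content": "..."},
                 "finish_reason": "length"}],
    "usage": {"prompt_tokens": 22, "completion_tokens": 100, "total_tokens": 122},
}

usage = completion["usage"]
# total_tokens is the sum of prompt and completion tokens
billed = usage["total_tokens"]

# finish_reason "length" means the reply hit the max_tokens cap mid-sentence
truncated = completion["choices"][0]["finish_reason"] == "length"

print("tokens billed:", billed)
print("truncated:", truncated)
```

Checking truncated this way lets a program detect a cut-off answer and retry with a larger max_tokens instead of showing the user an incomplete sentence.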

Roles

  messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": userInput},
  ],

How to reference previous messages:

import os
import openai
from dotenv import load_dotenv

# Load environment variables from .env file
load_dotenv()

# Set the OpenAI API key from the environment variable
openai.api_key = os.getenv("OPENAI_API_KEY")

# Initialize an empty list to store the conversation messages
messages = []

# Infinite loop to keep the chat session active
while True:

    # Get input from the user
    userInput = input("You: ")

    # Append the user's input to the messages list
    messages.append({"role": "user", "content": userInput})

    # Send a request to OpenAI's GPT-3.5 model with the conversation history
    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
        temperature=0,
    )

    # Extract the assistant's response from the API's response
    response = completion.choices[0].message.content

    # Append the assistant's response to the messages list
    messages.append({"role": "assistant", "content": response})

    # Print the assistant's response
    print('RESPONSE:', response)
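Because the loop above resends the entire messages list on every turn, a long conversation will eventually exceed the model's context window (and each request gets more expensive). One common mitigation is to keep only the most recent turns; a minimal sketch, where the 10-message cap is an arbitrary choice for illustration, not an API requirement:

```python
def trim_history(messages, max_messages=10):
    """Keep the most recent messages, preserving a leading system message if present."""
    if messages and messages[0]["role"] == "system":
        system, rest = messages[:1], messages[1:]
    else:
        system, rest = [], messages
    return system + rest[-max_messages:]

# Simulated long conversation: one system message plus 30 user/assistant turns
history = [{"role": "system", "content": "You are a helpful assistant."}]
for i in range(30):
    history.append({"role": "user", "content": f"question {i}"})
    history.append({"role": "assistant", "content": f"answer {i}"})

trimmed = trim_history(history)
print(len(trimmed), trimmed[0]["role"])
```

In the chat loop, you would pass trim_history(messages) to openai.ChatCompletion.create instead of the full messages list. The trade-off is that the model forgets anything older than the kept window.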

Text Summarization tool:

import os
import openai
from dotenv import load_dotenv

# Load environment variables from .env file
load_dotenv()

openai.api_key = os.getenv("OPENAI_API_KEY")

userInput = input("You: ")

completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a text summarization chatbot. Your goal is to summarize the text that is given to you by the user."},
        {"role": "user", "content": userInput},

    ],
    temperature=0
)

response = completion.choices[0].message.content

print('Summarized Text: ', response)
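The same pattern generalizes to other single-purpose tools just by swapping the system prompt. A small helper for assembling the messages list (a hypothetical convenience function, not part of the openai package):

```python
def build_messages(system_prompt, user_text):
    """Assemble the messages list expected by openai.ChatCompletion.create."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_text},
    ]

summarizer = build_messages(
    "You are a text summarization chatbot. Your goal is to summarize the text "
    "that is given to you by the user.",
    "Some long text...",
)
translator = build_messages("You translate English to French.", "Hello!")

print(summarizer[0]["role"], translator[0]["content"])
```

Each call produces a fresh two-message conversation, so the system prompt alone decides whether the same model behaves as a summarizer, a translator, or any other specialized tool.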

Output:

$ python3 app.py
You: A large language model is a sophisticated type of artificial intelligence model specifically designed to handle tasks related to human language. Trained on vast amounts of text data, often encompassing billions of words from diverse sources such as books, articles, and websites, these models absorb the intricacies of language, including grammar, semantics, and context. Through this extensive training, they learn to recognize patterns, nuances, and even cultural references, enabling them to generate, understand, and respond to natural language queries with a high degree of accuracy. Their deep understanding of language allows them to produce text that is coherent, contextually relevant, and often indistinguishable from human-written content. As a result, they are increasingly used in a variety of applications, from chatbots and virtual assistants to content generation and language translation.
Summarized Text:  A large language model is an advanced AI model designed to handle tasks related to human language. These models are trained on vast amounts of text data and can understand grammar, semantics, and context. They can generate, understand, and respond to natural language queries accurately. Their deep understanding of language allows them to produce coherent and contextually relevant text that is often indistinguishable from human-written content. As a result, they are used in various applications such as chatbots, virtual assistants, content generation, and language translation.