
A Beginner’s Guide to Integrating ChatGPT with Python (and It’s Completely Free)

Rishabh Bothra
3 min read · Jan 15, 2023


OpenAI created ChatGPT, a powerful language model that can produce human-like text. It can be used for a variety of tasks such as text completion, code completion, text summarization, question answering, translation, and many more. You need an OpenAI account to get started (if you don’t have one yet, make one using this link).

Step 1: Install the OpenAI library.

The first step is to install the OpenAI Python library, which you can do with pip using the following command:

pip install openai
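
If you want to make sure the install worked before going further, a quick sanity check like this (using Python’s built-in importlib.metadata, nothing specific to this guide) prints the installed version:

# Quick sanity check that the openai package is importable
import importlib.metadata

import openai  # raises ImportError if the install failed

print("openai version:", importlib.metadata.version("openai"))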

Step 2: Generate your unique API Token.

To use the OpenAI library you need an API Token. Go to the OpenAI website to generate one and keep it handy. It’s completely free to get started: you get $18 of credit and you don’t need to provide any payment information. You can check your usage and balance here.

Disclaimer: Never share your API token with anyone.

Step 3: Securing your API Token

This is an optional step to keep your API Token out of your code. If you aren’t sharing your code with anyone or publishing it anywhere, you can skip ahead to the next step.

Fire up your favorite IDE. A simple way to keep the token out of your script is to store it in an environment variable (for example, export OPENAI_API_KEY="your-token" in your shell) and read it at runtime:

import os

# Read the token from the OPENAI_API_KEY environment variable
# instead of hard-coding it in the script
api_key = os.environ["OPENAI_API_KEY"]
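
If you would rather keep the token in a file than in your shell configuration, the python-dotenv package (installed separately with pip install python-dotenv) can load a local .env file into the environment. This is just one common alternative, not something the steps above require:

# Load variables from a .env file containing a line like: OPENAI_API_KEY=your-token
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current directory into os.environ
api_key = os.environ["OPENAI_API_KEY"]

If you go this route, remember to add .env to your .gitignore so the token never ends up in version control.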

Step 4: Using OpenAI to generate text

Now that you have installed the OpenAI library and have your API Token ready, it’s time to code…

import openai

# Paste your API Token in place of "YOUR_API_TOKEN"
openai.api_key = "YOUR_API_TOKEN"

# ...or, if you followed Step 3, use the variable from there instead:
# openai.api_key = api_key

# Define the prompt
prompt = 'What is ChatGPT'

# Send the request to the API
response = openai.Completion.create(
    model="text-davinci-002",
    prompt=prompt,
    temperature=0.5,
    max_tokens=100,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0
)

# Print the response
print(response["choices"][0]["text"])

Code explanation

First we import the openai library, then we pass our API Token to openai.api_key. The prompt is, as its name suggests, the text to generate a completion for; you can change it to anything you want (here I ask “What is ChatGPT”). The response captures what the API sends back, and the remaining parameters let us control a few aspects of how GPT responds (a short sketch that experiments with them follows the list below):

  • model: ID of the model to use. You can use the List models API to see all of your available models.
  • temperature: Higher values mean the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer.
  • max_tokens: The maximum number of tokens to generate in the completion.
  • top_p: An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
  • frequency_penalty: Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim.
  • presence_penalty: Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics.
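
To get a feel for what temperature does in practice, here is a small sketch (reusing the same model and request style as above, with a prompt of my own choosing) that sends the same request at temperature 0 and at 0.9 so you can compare how deterministic or creative the answers are:

import openai

openai.api_key = "YOUR_API_TOKEN"  # or api_key from Step 3

prompt = "Write a one-line tagline for a coffee shop."

# Same request at two temperatures: 0 is nearly deterministic, 0.9 is more creative
for temperature in (0, 0.9):
    response = openai.Completion.create(
        model="text-davinci-002",
        prompt=prompt,
        temperature=temperature,
        max_tokens=50,
    )
    print(f"temperature={temperature}: {response['choices'][0]['text'].strip()}")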

Play with the parameters and you will learn more; the full checklist of parameters is here. Do check out the OpenAI API docs here, and do share your experience here. A small helper that makes this kind of experimentation easier is sketched below.
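
Here is that helper, just a sketch of my own (the function name ask and its defaults are not part of the OpenAI library): it keeps the boilerplate in one place and lets you override any parameter per call.

import openai

openai.api_key = "YOUR_API_TOKEN"  # or api_key from Step 3

def ask(prompt, **overrides):
    """Send a prompt to the Completion API; keyword args override the defaults."""
    params = {
        "model": "text-davinci-002",
        "temperature": 0.5,
        "max_tokens": 100,
    }
    params.update(overrides)
    try:
        response = openai.Completion.create(prompt=prompt, **params)
        return response["choices"][0]["text"].strip()
    except openai.error.OpenAIError as err:
        # Covers rate limits, invalid tokens, and other API errors
        return f"Request failed: {err}"

print(ask("What is ChatGPT"))
print(ask("Write a haiku about Python", temperature=0.9))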

I hope you enjoyed it.

Have Fun!
