A Summary of Ways to Call the ChatGPT API

2024-03-13 12:31:55

The OpenAI ChatGPT API can be called in several ways: through the official SDK or through plain HTTP requests, and in either streaming or non-streaming mode. The sections below give an example of each combination.

The completion model used in the examples is text-davinci-003 and the chat model is gpt-3.5-turbo; more models are described on the OpenAI site: https://platform.openai.com/docs/models/overview.

The temperature parameter used in the examples controls how random the answer is; it takes values from 0 to 2, and the larger the value, the more the output varies from call to call.

SDK-based calls

To call the API through the SDK, either set the OPENAI_API_KEY environment variable or set openai.api_key = your_api_key in code. A minimal sketch of the two options is shown below (it assumes the pre-1.0 openai Python SDK that the later examples are written against).
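
import os
import openai

# Option 1: rely on the OPENAI_API_KEY environment variable
# (the SDK reads it automatically; shown here explicitly for clarity).
openai.api_key = os.getenv("OPENAI_API_KEY")

# Option 2: set the key directly in code; your_api_key is a placeholder.
# openai.api_key = "your_api_key"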

Completion model - API call

model = "text-davinci-003"

def openai_sdk_http_api(content):
    response = openai.Completion.create(
        model=model,
        prompt=content,
        temperature=0.8,
    )
    answer = response.choices[0].text.strip()
    return answer
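
A possible usage example (it assumes the key has been configured as above and that the account still has access to text-davinci-003):

# The prompt text is only an illustration.
answer = openai_sdk_http_api("Introduce OpenAI in one sentence.")
print(answer)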

Completion model - streaming API call

model = "text-davinci-003"

def openai_sdk_stream_http_api(prompt):
    response = openai.Completion.create(
        model=model,
        prompt=prompt,
        stream=True
    )
    for message in response:
        print(message.choices[0].text.strip(), end='', flush=True)
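
If the streamed text also needs to be kept after printing, the fragments can be collected and joined; a hedged sketch with a hypothetical helper name:

def openai_sdk_stream_collect(prompt):
    response = openai.Completion.create(
        model=model,
        prompt=prompt,
        stream=True
    )
    pieces = []
    for message in response:
        # Print each fragment as it arrives and remember it for later.
        print(message.choices[0].text, end='', flush=True)
        pieces.append(message.choices[0].text)
    # The joined fragments form the complete completion text.
    return ''.join(pieces)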

Chat model - API call

model = "gpt-3.5-turbo"

def openai_sdk_chat_http_api(message):
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": message}],
        temperature=0.8,
        top_p=1,
        presence_penalty=1,
    )
    answer = response.choices[0].message.content.strip()
    return answer

Chat model - streaming API call

model = "gpt-3.5-turbo"

def openai_sdk_stream_chat_http_api(content):
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": content}],
        temperature=0.8,
        stream=True
    )
    i = 0
    for chunk in response:
        if i > 0 and chunk.choices[0].delta['content']:
            print(chunk.choices[0].delta.content, end='', flush=True)
        i  = 1
    return
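
When the full reply is needed afterwards (for example to append it to the history), the deltas can be accumulated while they are printed; a hedged sketch with a hypothetical helper name:

def openai_sdk_stream_chat_collect(content):
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": content}],
        temperature=0.8,
        stream=True
    )
    pieces = []
    for chunk in response:
        delta = chunk.choices[0].delta
        if delta.get('content'):
            print(delta['content'], end='', flush=True)
            pieces.append(delta['content'])
    # The joined deltas are the complete assistant reply.
    return ''.join(pieces)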

HTTP-based calls

To call the API over plain HTTP, the OpenAI API key must be set explicitly in the HTTP headers.

Completion model - API call

import json
import requests

api_key = "your api key"
model = "text-davinci-003"

def openai_http_api(message):
    url = 'https://api.openai.com/v1/completions'
    headers = {
        # The API key goes into the Authorization header as a Bearer token.
        "Authorization": "Bearer " + api_key,
        "Content-Type": "application/json"
    }
    data = {
        "model": model,
        "prompt": message,
        "temperature": 0.8,
    }
    response = requests.post(url, headers=headers, data=json.dumps(data))
    return response.json()['choices'][0]['text']
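
The function above assumes the request succeeds; a hedged variant with basic error handling and a timeout (hypothetical name; requests sets the JSON Content-Type automatically when json= is used) might look like this:

def openai_http_api_safe(message):
    url = 'https://api.openai.com/v1/completions'
    headers = {"Authorization": "Bearer " + api_key}
    data = {
        "model": model,
        "prompt": message,
        "temperature": 0.8,
    }
    response = requests.post(url, headers=headers, json=data, timeout=60)
    # Turn HTTP errors (e.g. 401 invalid key, 429 rate limit) into exceptions
    # instead of failing later on a missing 'choices' key.
    response.raise_for_status()
    return response.json()['choices'][0]['text']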

Completion model - streaming API call

api_key = "your api key"
model = "text-davinci-003"

def openai_stream_http_api(message):
    url = 'https://api.openai.com/v1/completions'
    headers = {
        "Authorization": "Bearer " + api_key,
        'Accept': 'text/event-stream',
    }
    data = {
        "model": model,
        "prompt": message,
        "temperature": 0.8,
        "stream": True
    }
    response = requests.post(url, headers=headers, json=data, stream=True)
    for chunk in response.iter_lines():
        response_data = chunk.decode("utf-8").strip()
        if not response_data:
            continue
        try:
            # Each server-sent event line looks like "data: {...}";
            # "data: [DONE]" marks the end of the stream.
            if response_data.endswith("data: [DONE]"):
                break
            json_data = json.loads(response_data.split("data: ")[1])
            msg = json_data["choices"][0]["text"]
            print(msg, end='', flush=True)
        except Exception:
            print('json load error, data:', response_data)
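
A possible usage example (assuming api_key above holds a valid key):

# Prints the completion fragment by fragment as the server streams it back.
openai_stream_http_api("Count from 1 to 5.")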

Chat model - API call

api_key = "your api key"
model = "gpt-3.5-turbo"

def openai_chat_http_api(message):
    url = 'https://api.openai.com/v1/chat/completions'
    headers = {
        "Authorization": "Bearer " + api_key,
        "Content-Type": "application/json"
    }
    data = {
        "model": model,
        "messages": [{"role": "user", "content": message}],
        "temperature": 0.8,
    }
    response = requests.post(url, headers=headers, data=json.dumps(data))
    return response.json()['choices'][0]['message']['content']
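
The chat completions response body also carries a usage block with token counts, which can be read alongside the answer; a small hedged sketch with a hypothetical wrapper name:

def openai_chat_http_api_with_usage(message):
    url = 'https://api.openai.com/v1/chat/completions'
    headers = {
        "Authorization": "Bearer " + api_key,
        "Content-Type": "application/json"
    }
    data = {
        "model": model,
        "messages": [{"role": "user", "content": message}],
        "temperature": 0.8,
    }
    body = requests.post(url, headers=headers, json=data).json()
    usage = body.get('usage', {})
    # usage reports prompt_tokens, completion_tokens and total_tokens.
    print('total tokens used:', usage.get('total_tokens'))
    return body['choices'][0]['message']['content']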

Chat model - streaming API call

api_key = "your api key"
model = "gpt-3.5-turbo"

def openai_stream_chat_http_api(message):
    url = 'https://api.openai.com/v1/chat/completions'
    headers = {
        "Authorization": "Bearer " + api_key,
        "Content-Type": "application/json"
    }
    data = {
        "model": model,
        "messages": [{"role": "user", "content": message}],
        "temperature": 0.8,
        "stream": True
    }
    response = requests.post(url, headers=headers, json=data, stream=True)
    for chunk in response.iter_lines():
        response_data = chunk.decode("utf-8").strip()
        if not response_data:
            continue
        try:
            if response_data.endswith("data: [DONE]"):
                break
            # Handle the rare case where two "data: ..." events land on one line.
            data_list = response_data.split("data: ")
            if len(data_list) > 2:
                json_data = json.loads(data_list[2])
            else:
                json_data = json.loads(data_list[1])
            # Only chunks whose delta carries content contribute to the answer.
            if 'content' in json_data["choices"][0]["delta"]:
                msg = json_data["choices"][0]["delta"]['content']
                print(msg, end='', flush=True)
        except Exception:
            print('json load error:', response_data)
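
A possible usage example, wrapped so that network-level failures are reported rather than raised (the prompt is only an illustration):

try:
    openai_stream_chat_http_api("Explain server-sent events in one sentence.")
except requests.RequestException as exc:
    # Timeouts, DNS failures and connection resets surface here.
    print('request failed:', exc)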
