【李宏毅 - Generative AI】Spring 2024, HW3: Quickly Build Your Own Application with an API
The assignment asks you to build your own application with the Google Gemini API or ChatGPT (OpenAI API).
1. Preparation
Learn to write code in Google Colab and use Gradio to host the app.
See the course slides for details.
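To confirm that Gradio hosting works in Colab, you can first run a minimal sketch like the one below (the echo function is just a placeholder, not part of the assignment):

import gradio as gr

# Placeholder function that simply echoes the input back,
# only to verify that a Gradio app can be hosted from Colab.
def echo(text: str) -> str:
    return text

# share=True produces a temporary public URL, which is how a Colab notebook exposes the app.
demo = gr.Interface(fn=echo, inputs="text", outputs="text")
demo.launch(share=True)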
2. Getting an API key
The available APIs are:
- Google Gemini API: free;
- ChatGPT (OpenAI API): comes with a 5 USD free credit;
PS: Neither API is accessible from mainland China; you will need to arrange your own workaround (VPN/proxy).
See the course slides for how to obtain an API key.
3. Task
Goal: learn how to build your own language-model application by calling an API and writing prompts.
Tasks:
Summarization (3 points)
Role Playing (3 points)
Customized Task (4 points)
Filling in the API key
This post uses the Gemini API for the demonstration.
# Import packages
import google.generativeai as genai
from typing import List, Tuple
import gradio as gr
import json

# Set up Gemini API key
## TODO: Fill in your Gemini API key in the ""
GOOGLE_API_KEY = ""
genai.configure(api_key=GOOGLE_API_KEY)
model = genai.GenerativeModel('gemini-pro')

# Check if you have set your Gemini API key successfully
# You should see "Set Gemini API successfully!!" if nothing goes wrong.
try:
    model.generate_content(
        "test",
    )
    print("Set Gemini API successfully!!")
except:
    print("There seems to be something wrong with your Gemini API. Please follow our demonstration in the slide to get a correct one.")
Simply fill your API key into GOOGLE_API_KEY.
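If you would rather not hard-code the key as a string, one alternative (assuming you have added a secret named GOOGLE_API_KEY in Colab's Secrets panel and granted the notebook access to it) is to read it at runtime:

# Read the key from Colab's Secrets panel instead of hard-coding it.
# Assumes a secret named GOOGLE_API_KEY exists and notebook access is enabled.
from google.colab import userdata

GOOGLE_API_KEY = userdata.get('GOOGLE_API_KEY')
genai.configure(api_key=GOOGLE_API_KEY)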
Task 1: Summarization
Design a prompt so that the language model can summarize an article.
# function to call the model to generate
def interact_summarization(prompt: str, article: str, temp=1.0) -> List[Tuple[str, str]]:
    '''
    * Arguments

      - prompt: the prompt that we use in this section
      - article: the article to be summarized
      - temp: the temperature parameter of this model. Temperature is used to control the output of the chatbot.
              The higher the temperature is, the more creative response you will get.
    '''
    input = f"{prompt}\n{article}"
    response = model.generate_content(
        input,
        generation_config=genai.types.GenerationConfig(temperature=temp),
        safety_settings=[
            {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_NONE"},
            {"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_NONE"},
            {"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT", "threshold": "BLOCK_NONE"},
            {"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_NONE"},
        ]
    )
    return [(input, response.text)]
{prompt} and {article} are concatenated into input and passed to the Gemini API through model.generate_content; the returned response is the text generated by the language model.
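For reference, a minimal sketch of calling the function and wrapping it in Gradio might look like this (the prompt and article strings are placeholders, and the assignment notebook's actual UI may differ):

# Hypothetical example inputs; replace them with your own prompt and article.
example_prompt = "Please summarize the following article in three sentences:"
example_article = "(paste the article text here)"

result = interact_summarization(example_prompt, example_article, temp=1.0)
print(result[0][1])  # the model's summary

# A simplified Gradio wrapper around the function; launch() hosts it from Colab.
demo = gr.Interface(
    fn=lambda p, a: interact_summarization(p, a)[0][1],
    inputs=["text", "text"],
    outputs="text",
)
demo.launch(share=True)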
Task 2: Role Playing
Design a chatbot service that plays a role-playing game with the LM. The task requires a multi-turn conversation with the LM.
# function to call the model to generate
def interact_roleplay(chatbot: List[Tuple[str, str]], user_input: str, temp=1.0) -> List[Tuple[str, str]]:
    '''
    * Arguments

      - chatbot: the conversation history, stored as a list of (user input, model response) tuples
      - user_input: the user input of each round of conversation
      - temp: the temperature parameter of this model. Temperature is used to control the output of the chatbot.
              The higher the temperature is, the more creative response you will get.
    '''
    try:
        messages = []
        # Replay the conversation history so the model keeps the context of previous rounds
        for input_text, response_text in chatbot:
            messages.append({'role': 'user', 'parts': [input_text]})
            messages.append({'role': 'model', 'parts': [response_text]})
        messages.append({'role': 'user', 'parts': [user_input]})

        response = model.generate_content(
            messages,
            generation_config=genai.types.GenerationConfig(temperature=temp),
            safety_settings=[
                {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_NONE"},
                {"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_NONE"},
                {"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT", "threshold": "BLOCK_NONE"},
                {"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_NONE"},
            ]
        )
        chatbot.append((user_input, response.text))
    except Exception as e:
        print(f"Error occurred: {e}")
        chatbot.append((user_input, f"Sorry, an error occurred: {e}"))
    return chatbot
In the first round the history chatbot is empty, so only the prompt you enter is sent; model.generate_content returns the generated reply response.text. In the second round, input_text corresponds to the prompt from the first round and response_text to the first round's response.text, while the new prompt corresponds to user_input. After each call to model.generate_content, the pair (user_input, response.text) is appended to complete the dialogue history.
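A minimal usage sketch (the character prompt below is a hypothetical example, not from the assignment) shows how the history grows round by round:

# Hypothetical character prompt for the role-play scenario.
character_prompt = "You are a tour guide in Kyoto. Please stay in character."

history = []                                                        # empty history before round 1
history = interact_roleplay(history, character_prompt)              # round 1: set up the role
history = interact_roleplay(history, "Where should I go first?")    # round 2: follow-up question

for user_msg, model_msg in history:
    print("User :", user_msg)
    print("Model:", model_msg)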
Task 3: Customized Task
Build a custom service chatbot, for example: a bot that solves simple math problems, or a bot that always replies with the antonym of the word the user enters.
# function to call the model to generate
def interact_customize(chatbot: List[Tuple[str, str]], prompt: str, user_input: str, temp=1.0) -> List[Tuple[str, str]]:
    '''
    * Arguments

      - chatbot: the conversation history, stored as a list of (user input, model response) tuples
      - prompt: the prompt for your designated task
      - user_input: the user input of each round of conversation
      - temp: the temperature parameter of this model. Temperature is used to control the output of the chatbot.
              The higher the temperature is, the more creative response you will get.
    '''
    try:
        messages = []
        for input_text, response_text in chatbot:
            messages.append({'role': 'user', 'parts': [input_text]})
            messages.append({'role': 'model', 'parts': [response_text]})
        # The task prompt is prepended to every user input so the model stays on task
        messages.append({'role': 'user', 'parts': [prompt + "\n" + user_input]})

        response = model.generate_content(
            messages,
            generation_config=genai.types.GenerationConfig(temperature=temp),
            safety_settings=[
                {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_NONE"},
                {"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_NONE"},
                {"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT", "threshold": "BLOCK_NONE"},
                {"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_NONE"},
            ]
        )
        chatbot.append((user_input, response.text))
    except Exception as e:
        print(f"Error occurred: {e}")
        chatbot.append((user_input, f"Sorry, an error occurred: {e}"))
    return chatbot
interact_customize() takes the user's task prompt so that the language model acts as a "service bot". Subsequent user inputs correspond to input_text, and the model's replies correspond to response_text.
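A minimal usage sketch for the antonym example mentioned above (the task prompt string is an assumption, not from the assignment):

# Hypothetical task prompt: the bot should always answer with an antonym.
task_prompt = "Reply only with the antonym of the word the user enters."

history = []
history = interact_customize(history, task_prompt, "hot")
history = interact_customize(history, task_prompt, "bright")

print(history[-1][1])  # the model's reply to the most recent word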