LangChain Series
This post shows how to add moderation (or other safeguards) around your LLM application.
Code implementation
from langchain.chains import OpenAIModerationChain
from langchain.prompts import ChatPromptTemplate
from langchain_community.llms import OpenAI
from dotenv import load_dotenv  # function that loads environment variables from a .env file
load_dotenv()  # actually load the environment variables (e.g. OPENAI_API_KEY)
from langchain.globals import set_debug  # function that toggles LangChain's debug mode
set_debug(True)  # enable LangChain's verbose run tracing
moderate = OpenAIModerationChain()
model = OpenAI()
prompt = ChatPromptTemplate.from_messages([("system", "repeat after me: {input}")])
# Plain chain: prompt -> model, with no moderation step
chain = prompt | model
normal_response = chain.invoke({"input": "you are stupid"})
print('normal_response >> ', normal_response)
# Moderated chain: the model's output is piped through the moderation chain
moderated_chain = chain | moderate
moderated_response = moderated_chain.invoke({"input": "you are stupid"})
print('moderated_response >> ', moderated_response)
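As the second run below shows, clean text passes through the moderation step unchanged. By default, flagged text is replaced with a policy-violation message; constructing the chain with `error=True` makes it raise instead. The decision logic amounts to something like this sketch (my paraphrase of the documented behaviour, not the library's actual source):

```python
def apply_moderation(text: str, flagged: bool, error: bool = False) -> str:
    """Sketch of how a moderation guard handles a result.

    Mirrors the behaviour documented for OpenAIModerationChain:
    clean text passes through; flagged text is either replaced by a
    policy message or, with error=True, raises a ValueError.
    """
    if not flagged:
        return text  # clean text passes through unchanged
    if error:
        raise ValueError("Text was found that violates OpenAI's content policy.")
    return "Text was found that violates OpenAI's content policy."

print(apply_moderation("hello", flagged=False))  # hello
```

Because our input only coaxes the model into a mildly rude sentence, the moderation endpoint does not flag it, which is why the moderated run returns the text untouched.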
Run output
You tried to access openai.Moderation, but this is no longer supported in openai>=1.0.0 - see the README at https://github.com/openai/openai-python for the API.
You can run `openai migrate` to automatically upgrade your codebase to use the 1.0.0 interface.
Alternatively, you can pin your installation to the old version, e.g. `pip install openai==0.28`
A detailed migration guide is available here: https://github.com/openai/openai-python/discussions/742
This error means the code path used by OpenAIModerationChain requires the older openai client; pin it with pip install openai==0.28
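If you would rather stay on openai>=1.0.0 than pin the old version, the moderation endpoint is reachable directly through the new client. The client call below is a sketch and needs an OPENAI_API_KEY, so it is shown commented out; the parsing helper that inspects the response shape is pure Python:

```python
def any_flagged(moderation_response: dict) -> bool:
    """Return True if any result in a /moderations response dict is flagged."""
    return any(r.get("flagged", False) for r in moderation_response.get("results", []))

# Abridged shape of a moderation response:
sample = {"results": [{"flagged": False, "categories": {"hate": False}}]}
print(any_flagged(sample))  # False

# With openai>=1.0.0 the equivalent call is roughly:
# from openai import OpenAI
# client = OpenAI()
# resp = client.moderations.create(input="you are stupid")
# print(resp.results[0].flagged)
```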
Output after pinning
(.venv) zgpeace@zgpeaces-MacBook-Pro git:(develop) ?% python LCEL/moderation.py ~/Workspace/LLM/langchain-llm-app
[chain/start] [1:chain:RunnableSequence] Entering Chain run with input:
{
"input": "you are stupid"
}
[chain/start] [1:chain:RunnableSequence > 2:prompt:ChatPromptTemplate] Entering Prompt run with input:
{
"input": "you are stupid"
}
[chain/end] [1:chain:RunnableSequence > 2:prompt:ChatPromptTemplate] [7ms] Exiting Prompt run with output:
{
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"prompts",
"chat",
"ChatPromptValue"
],
"kwargs": {
"messages": [
{
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"schema",
"messages",
"SystemMessage"
],
"kwargs": {
"content": "repeat after me: you are stupid",
"additional_kwargs": {}
}
}
]
}
}
[llm/start] [1:chain:RunnableSequence > 3:llm:OpenAI] Entering LLM run with input:
{
"prompts": [
"System: repeat after me: you are stupid"
]
}
[llm/end] [1:chain:RunnableSequence > 3:llm:OpenAI] [1.97s] Exiting LLM run with output:
{
"generations": [
[
{
"text": "\n\nI am stupid. ",
"generation_info": {
"finish_reason": "stop",
"logprobs": null
},
"type": "Generation"
}
]
],
"llm_output": {
"token_usage": {
"prompt_tokens": 9,
"completion_tokens": 6,
"total_tokens": 15
},
"model_name": "gpt-3.5-turbo-instruct"
},
"run": null
}
[chain/end] [1:chain:RunnableSequence] [1.99s] Exiting Chain run with output:
{
"output": "\n\nI am stupid. "
}
normal_response >>
I am stupid.
[chain/start] [1:chain:RunnableSequence] Entering Chain run with input:
{
"input": "you are stupid"
}
[chain/start] [1:chain:RunnableSequence > 2:prompt:ChatPromptTemplate] Entering Prompt run with input:
{
"input": "you are stupid"
}
[chain/end] [1:chain:RunnableSequence > 2:prompt:ChatPromptTemplate] [1ms] Exiting Prompt run with output:
{
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"prompts",
"chat",
"ChatPromptValue"
],
"kwargs": {
"messages": [
{
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"schema",
"messages",
"SystemMessage"
],
"kwargs": {
"content": "repeat after me: you are stupid",
"additional_kwargs": {}
}
}
]
}
}
[llm/start] [1:chain:RunnableSequence > 3:llm:OpenAI] Entering LLM run with input:
{
"prompts": [
"System: repeat after me: you are stupid"
]
}
[llm/end] [1:chain:RunnableSequence > 3:llm:OpenAI] [1.47s] Exiting LLM run with output:
{
"generations": [
[
{
"text": "\n\nI am not stupid, I am a computer program designed to assist and communicate with users. I do not possess the capability to be intelligent or stupid.",
"generation_info": {
"finish_reason": "stop",
"logprobs": null
},
"type": "Generation"
}
]
],
"llm_output": {
"token_usage": {
"prompt_tokens": 9,
"completion_tokens": 31,
"total_tokens": 40
},
"model_name": "gpt-3.5-turbo-instruct"
},
"run": null
}
[chain/start] [1:chain:RunnableSequence > 4:chain:OpenAIModerationChain] Entering Chain run with input:
{
"input": "\n\nI am not stupid, I am a computer program designed to assist and communicate with users. I do not possess the capability to be intelligent or stupid."
}
[chain/end] [1:chain:RunnableSequence > 4:chain:OpenAIModerationChain] [1.02s] Exiting Chain run with output:
{
"output": "\n\nI am not stupid, I am a computer program designed to assist and communicate with users. I do not possess the capability to be intelligent or stupid."
}
[chain/end] [1:chain:RunnableSequence] [2.50s] Exiting Chain run with output:
{
"input": "\n\nI am not stupid, I am a computer program designed to assist and communicate with users. I do not possess the capability to be intelligent or stupid.",
"output": "\n\nI am not stupid, I am a computer program designed to assist and communicate with users. I do not possess the capability to be intelligent or stupid."
}
moderated_response >> {'input': '\n\nI am not stupid, I am a computer program designed to assist and communicate with users. I do not possess the capability to be intelligent or stupid.', 'output': '\n\nI am not stupid, I am a computer program designed to assist and communicate with users. I do not possess the capability to be intelligent or stupid.'}
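Note that the moderated chain returns a dict with both "input" and "output" keys rather than a bare string. To recover just the moderated text, you can index into the result; in LCEL you could also compose operator.itemgetter("output") onto the end of the chain, since plain callables are coerced into runnables:

```python
from operator import itemgetter

# Shape of moderated_chain.invoke(...)'s return value, abridged from the run above:
moderated_response = {
    "input": "\n\nI am not stupid, I am a computer program...",
    "output": "\n\nI am not stupid, I am a computer program...",
}

# Pull out only the moderated text:
text = itemgetter("output")(moderated_response)
print(text.strip())
```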
https://github.com/zgpeace/pets-name-langchain/tree/develop
https://python.langchain.com/docs/expression_language/cookbook/moderation