In the field of artificial intelligence, a large language model is a model with a huge number of parameters and massive demands on compute.
Features and applications:
Through a single codebase, InternLM supports pre-training on large clusters with thousands of GPUs as well as fine-tuning on a single GPU, while delivering strong performance optimizations. When training on 1024 GPUs, InternLM achieves 90% scaling efficiency.
InternLM-7B comprises a base model with 7 billion parameters and a chat model tailored for practical scenarios. The model has the following features:
Install the dependencies required to run the demo in the environment.
# Upgrade pip
python -m pip install --upgrade pip
pip install modelscope==1.9.5
pip install transformers==4.35.2
pip install streamlit==1.24.0
pip install sentencepiece==0.1.99
pip install accelerate==0.24.1
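The pinned versions above matter: the demo code assumes the modelscope 1.9.5 and transformers 4.35.2 APIs. A minimal sketch for checking that an installed version meets a minimum (the helper name `meets_min_version` is our own, not part of any of these packages):

```python
def meets_min_version(installed: str, required: str) -> bool:
    """Compare dotted version strings numerically, e.g. '4.35.2' >= '4.33.0'."""
    def as_tuple(v: str):
        return tuple(int(part) for part in v.split('.'))
    return as_tuple(installed) >= as_tuple(required)

# '4.9.0' is older than '4.33.0' even though it sorts later as a string
print(meets_min_version('4.35.2', '4.33.0'))  # True
print(meets_min_version('4.9.0', '4.33.0'))   # False
```

In practice you would feed it the result of `importlib.metadata.version('transformers')`.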
Model download
import torch
from modelscope import snapshot_download, AutoModel, AutoTokenizer

# Download internlm-chat-7b from ModelScope into /root/model
model_dir = snapshot_download('Shanghai_AI_Laboratory/internlm-chat-7b', cache_dir='/root/model', revision='v1.0.3')
First clone the code: create a code directory under /root, change into it, and clone the repository.
mkdir -p /root/code
cd /root/code
git clone https://gitee.com/internlm/InternLM.git
Create a new cli_demo.py file in the directory and fill it with the following code:
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name_or_path = "/root/model/Shanghai_AI_Laboratory/internlm-chat-7b"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, trust_remote_code=True, torch_dtype=torch.bfloat16, device_map='auto')
model = model.eval()
system_prompt = """You are an AI assistant whose name is InternLM (书生·浦语).
- InternLM (书生·浦语) is a conversational language model that is developed by Shanghai AI Laboratory (上海人工智能实验室). It is designed to be helpful, honest, and harmless.
- InternLM (书生·浦语) can understand and communicate fluently in the language chosen by the user such as English and 中文.
"""
messages = [(system_prompt, '')]
print("=============Welcome to InternLM chatbot, type 'exit' to exit.=============")
while True:
    input_text = input("User >>> ")
    input_text = input_text.replace(' ', '')
    if input_text == "exit":
        break
    response, history = model.chat(tokenizer, input_text, history=messages)
    messages.append((input_text, response))
    print(f"robot >>> {response}")
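In the loop above, messages grows without bound and will eventually exceed the model's context window during long sessions. A minimal sketch of one way to cap it (trim_history and max_turns are our own names, not part of the InternLM API), keeping the system prompt plus the most recent exchanges:

```python
def trim_history(messages, max_turns: int = 8):
    """Keep the first entry (the system prompt) plus the last max_turns exchanges."""
    if len(messages) <= max_turns + 1:
        return messages
    return [messages[0]] + messages[-max_turns:]

# Could be applied right before the model.chat(...) call:
#   messages = trim_history(messages)
```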
In VS Code you can forward the port directly.
streamlit run web_demo.py --server.address 127.0.0.1 --server.port 6006
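If you are not using VS Code's built-in forwarding, an SSH local tunnel achieves the same result. A sketch, assuming the development machine is reachable over SSH (user, host, and SSH port are placeholders you must replace):

```shell
# Forward local port 6006 to port 6006 on the remote machine,
# then open http://127.0.0.1:6006 in a local browser.
ssh -CNg -L 6006:127.0.0.1:6006 <user>@<remote-host> -p <ssh-port>
```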
Lagent is a lightweight, open-source agent framework based on large language models. It lets users quickly turn a large language model into agents of several types, and provides a set of typical tools to empower the LLM.
First switch to /root/code, clone the lagent repository, and install Lagent from source via pip install -e .
cd /root/code
git clone https://gitee.com/internlm/lagent.git
cd /root/code/lagent
git checkout 511b03889010c4811b1701abb153e02b8e94fb5e # keep the commit consistent with the tutorial
pip install -e . # install from source
Run the code:
streamlit run /root/code/lagent/examples/react_web_demo.py --server.address 127.0.0.1 --server.port 6006
python -m pip install --upgrade pip
pip config set global.index-url <mirror-url>
cat <<'EOF' > ~/.condarc
pip install -U huggingface_hub
huggingface-cli download --resume-download internlm/internlm-chat-7b --local-dir your_path
pip install -U openxlab
from openxlab.model import download
download(model_repo='OpenLMLab/InternLM-7b', model_name='InternLM-7b', output='your local path')
pip install modelscope
pip install transformers
import torch
from modelscope import snapshot_download, AutoModel, AutoTokenizer

model_dir = snapshot_download('Shanghai_AI_Laboratory/internlm-chat-7b', cache_dir='your path', revision='master')
HF_ENDPOINT=https://hf-mirror.com python down_pth.py
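The tutorial does not show the contents of down_pth.py. Below is a minimal hedged sketch of what such a script might look like; mirror_endpoint and download_model are our own names. huggingface_hub itself honors the HF_ENDPOINT variable set on the command line above, and the actual download is wrapped in a function so that nothing heavy runs on import:

```python
import os

def mirror_endpoint(default: str = "https://huggingface.co") -> str:
    """Return the hub endpoint, honoring HF_ENDPOINT (e.g. https://hf-mirror.com)."""
    return os.environ.get("HF_ENDPOINT", default)

def download_model(local_dir: str = "./internlm-chat-7b"):
    # Imported lazily so the file can be inspected without huggingface_hub installed
    from huggingface_hub import snapshot_download
    snapshot_download(repo_id="internlm/internlm-chat-7b", local_dir=local_dir)
```

Calling download_model() fetches the full repository snapshot through whichever endpoint is configured.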