https://github.com/HamaWhiteGG/autogen4j
Java version of Microsoft AutoGen, enabling next-generation large language model applications.
AutoGen is a framework that enables the development of LLM applications using multiple agents that can converse with each other to solve tasks. AutoGen agents are customizable, conversable, and seamlessly allow human participation. They can operate in various modes that employ combinations of LLMs, human inputs, and tools.
The following examples can be found in the autogen4j-example module.
To get started, add the following Maven dependency to your project:
```xml
<dependency>
    <groupId>io.github.hamawhitegg</groupId>
    <artifactId>autogen4j-core</artifactId>
    <version>0.1.0</version>
</dependency>
```
Autogen4j uses the OpenAI API, so you need to set the following environment variable:
```shell
export OPENAI_API_KEY=xxx
```
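Before starting a chat, it can help to fail fast if the key is missing rather than hitting an authentication error mid-conversation. A minimal sketch (the `ApiKeyCheck` class and `requireKey` helper are illustrative, not part of autogen4j):

```java
// Illustrative helper: validate the OpenAI key up front.
public class ApiKeyCheck {

    // Throws if the key is null or blank, so misconfiguration
    // is reported immediately at startup.
    static String requireKey(String value) {
        if (value == null || value.isBlank()) {
            throw new IllegalStateException("OPENAI_API_KEY is not set");
        }
        return value;
    }

    public static void main(String[] args) {
        // In a real run this would be System.getenv("OPENAI_API_KEY");
        // a dummy value is used here so the example is self-contained.
        String key = requireKey("sk-dummy-value");
        System.out.println("API key present, length=" + key.length()); // length=14
    }
}
```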
Autogen enables the next-gen LLM applications with a generic multi-agent conversation framework. It offers customizable and conversable agents that integrate LLMs, tools, and humans.
By automating chat among multiple capable agents, one can easily make them collectively perform tasks autonomously or with human feedback, including tasks that require using tools via code.
Auto Feedback From Code Execution Example
```java
// create an AssistantAgent named "assistant"
var assistant = AssistantAgent.builder()
        .name("assistant")
        .build();

var codeExecutionConfig = CodeExecutionConfig.builder()
        .workDir("data/coding")
        .build();

// create a UserProxyAgent instance named "user_proxy"
var userProxy = UserProxyAgent.builder()
        .name("user_proxy")
        .humanInputMode(NEVER)
        .maxConsecutiveAutoReply(10)
        .isTerminationMsg(e -> e.getContent().strip().endsWith("TERMINATE"))
        .codeExecutionConfig(codeExecutionConfig)
        .build();

// the assistant receives a message from the user_proxy, which contains the task description
userProxy.initiateChat(assistant,
        "What date is today? Compare the year-to-date gain for META and TESLA.");

// follow-up to the previous question
userProxy.send(assistant,
        "Plot a chart of their stock price change YTD and save to stock_price_ytd.png.");
```
The figure below shows an example conversation flow with Autogen4j.
After running, you can check the file coding_output.log for the output logs.
The final output is as shown in the following picture.
Group Chat Example
```java
var codeExecutionConfig = CodeExecutionConfig.builder()
        .workDir("data/group_chat")
        .lastMessagesNumber(2)
        .build();

// create a UserProxyAgent instance named "user_proxy"
var userProxy = UserProxyAgent.builder()
        .name("user_proxy")
        .systemMessage("A human admin.")
        .humanInputMode(TERMINATE)
        .codeExecutionConfig(codeExecutionConfig)
        .build();

// create an AssistantAgent named "coder"
var coder = AssistantAgent.builder()
        .name("coder")
        .build();

// create an AssistantAgent named "product_manager"
var pm = AssistantAgent.builder()
        .name("product_manager")
        .systemMessage("Creative in software product ideas.")
        .build();

var groupChat = GroupChat.builder()
        .agents(List.of(userProxy, coder, pm))
        .maxRound(12)
        .build();

// create a GroupChatManager named "manager"
var manager = GroupChatManager.builder()
        .groupChat(groupChat)
        .build();

userProxy.initiateChat(manager,
        "Find a latest paper about gpt-4 on arxiv and find its potential applications in software.");
```
After running, you can check the file group_chat_output.log for the output logs.
```shell
git clone https://github.com/HamaWhiteGG/autogen4j.git
cd autogen4j

# export JAVA_HOME=JDK17_INSTALL_HOME && mvn clean test
mvn clean test
```
This project uses Spotless to format the code.
If you make any modifications, please remember to format the code with the following command before committing.
```shell
# export JAVA_HOME=JDK17_INSTALL_HOME && mvn spotless:apply
mvn spotless:apply
```
Don’t hesitate to ask!
Open an issue if you find a bug or need any help.