Autogen is a Python-based framework, released by Microsoft Research, for building Large Language Model applications from autonomous agents. Autogen agents operate as a conversational community, collaborating in surprisingly lucid group discussions to solve problems. Individual agents can be specialized to encapsulate very specific behavior of the underlying LLM, or endowed with special capabilities such as function calling and external tool use.

In this post we describe the communication and collaboration mechanisms used by Autogen and illustrate its capabilities with two examples. In the first example, we show how one Autogen agent can generate the Python code to read an external file while another agent uses the file's content, together with the LLM's own knowledge, to do basic analysis and question answering. The second example stresses two points. As we showed in a previous blog post, Large Language Models are not very good at advanced algebra or non-trivial computation. Fortunately, Autogen allows us to invoke external tools, and in this example we show how to use an agent that invokes Wolfram Alpha to do the “hard math”. While GPT-4 is very good at generating Python code, it is far from perfect when formulating Wolfram Alpha queries. To help with the query generation we incorporate a “Critic” agent, which inspects the code generated by a “Coder” agent, looking for errors. These activities are coordinated with Autogen's Group Chat feature.

We do not attempt any quantitative analysis of Autogen here; this post only illustrates these ideas.
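To make the coordination idea concrete before diving in: a group chat amounts to a manager that keeps a shared transcript, picks the next speaker, and broadcasts each reply. The following is a minimal, library-free sketch of that pattern, not the actual Autogen API; the agent names ("Coder", "Critic"), the round-robin speaker selection, and the "APPROVED" termination rule are all illustrative stand-ins.

```python
# Minimal sketch of the group-chat coordination pattern: a manager holds a
# shared transcript, selects speakers in round-robin order, and stops when a
# termination marker appears. The canned reply functions stand in for LLM or
# tool calls; nothing here uses the real Autogen library.

class Agent:
    def __init__(self, name, reply_fn):
        self.name = name
        self.reply_fn = reply_fn  # stand-in for an LLM or external tool call

    def generate_reply(self, transcript):
        # A real agent would condition an LLM on the full transcript.
        return self.reply_fn(transcript)

class GroupChatManager:
    def __init__(self, agents, max_rounds=4):
        self.agents = agents
        self.max_rounds = max_rounds
        self.transcript = []  # shared history, visible to every agent

    def run(self, task):
        self.transcript.append(("user", task))
        for round_no in range(self.max_rounds):
            speaker = self.agents[round_no % len(self.agents)]  # round-robin
            message = speaker.generate_reply(self.transcript)
            self.transcript.append((speaker.name, message))
            if "APPROVED" in message:  # illustrative termination condition
                break
        return self.transcript

# Toy "Coder" proposes a Wolfram Alpha query; toy "Critic" signs off on it.
coder = Agent("Coder", lambda t: "query: solve x^2 - 2 = 0")
critic = Agent("Critic", lambda t: "APPROVED: query looks well-formed")

chat = GroupChatManager([coder, critic]).run("Find the roots of x^2 - 2.")
```

In the real framework the manager itself uses the LLM to choose the next speaker, rather than the fixed round-robin shown here; the sketch only captures the shared-transcript and turn-taking structure.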
A full PDF of the paper is here: https://www.researchgate.net/publication/377411718_A_Brief_Look_at_Autogen_a_Multiagent_System_to_Build_Applications_Based_on_Large_Language_Models