Building a Dynamic User Interface for CrewAI Applications
In our previous tutorial, we explored app development using the CrewAI framework with Streamlit, focusing on creating a fundamental application that showcased a CrewAI workflow visualization triggered by an initial user input. While this initial setup provided a solid foundation, it was relatively static and did not incorporate interactive elements between users and the agent group once the workflow was underway.
To address this limitation, we are embarking on a new approach to enhance the CrewAI application. This updated version will facilitate a more interactive experience by integrating a feature where agents can solicit human input directly within the visualized chat interface. Users will then have the ability to offer real-time feedback via a chat input widget embedded in the web application.
In this redesign, I’ve chosen to transition from Streamlit to Panel, a framework I have previously employed for AutoGen projects. This shift is driven by the need for a more flexible and robust user interface that better supports dynamic interactions and provides a richer experience for managing the flow of information between AI agents and human users. I’ll delve into the reasons for this framework switch and how Panel’s capabilities will be leveraged to enhance our CrewAI visualization and user interaction.
Why Not Streamlit?
Initially, I explored extending the CrewAI + Streamlit development to incorporate human interaction, but encountered a fundamental limitation due to Streamlit’s inherent design principles.
Streamlit relies on a “refresh” model for updating content. This is evident in chatbot demos, where a common pattern involves iterating over chat messages like this:
for msg in st.session_state.messages:
    st.chat_message(msg["role"]).write(msg["content"])
This loop displays the historical chat messages. However, every interaction, such as a message submission or a menu selection, triggers a full application refresh: Streamlit reruns the script from the top. This refresh mechanism forces you to store interim data in a cache object, session_state, to preserve it across reruns.
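For illustration, the usual pattern looks roughly like this (a minimal sketch with placeholder labels, not code from the original demo):

import streamlit as st

# Initialize the chat history once; session_state survives Streamlit's reruns.
if "messages" not in st.session_state:
    st.session_state.messages = []

# Each submitted message triggers a rerun, so it must be appended to
# session_state in order to be redrawn by the display loop above.
if prompt := st.chat_input("Say something"):
    st.session_state.messages.append({"role": "user", "content": prompt})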
While this approach is effective for simpler LLM-powered chatbots where app state management is straightforward, it becomes problematic for more complex LLM-driven applications like CrewAI, which involves multi-agent systems.
In CrewAI, the workflow relies on a collaborative internal process where agents and humans interact dynamically. When an agent requires human input, it pauses the workflow and waits for user input in a terminal environment using a standard input function. Replacing this with Streamlit’s chat input widget results in a workflow restart with each user interaction, causing the loss of ongoing context and data.
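Conceptually, that pause behaves like a blocking call to Python's built-in input() function (a simplified illustration, not CrewAI's actual source):

# Simplified illustration only: the agent blocks until a reply is typed in the terminal.
def ask_human_input(prompt: str) -> str:
    return input(prompt)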
For complex, multi-agent applications that require a persistent runtime session and seamless interaction between agents and users, a more robust framework is needed.
This brings us to Panel—a framework that offers enhanced flexibility and support for maintaining a continuous and interactive user experience. Let’s explore how Panel can address these challenges and improve the CrewAI application’s UI design.
Integrating CrewAI with Panel: A Closer Look
Understanding Panel
Before we delve into the specifics of the CrewAI integration, it’s essential to understand the Panel framework and how it facilitates interactive web development.
Panel is a versatile web development framework designed to simplify the creation of data-driven applications. It provides a suite of powerful, user-friendly widgets that allow developers to visualize their data, experiments, or final projects without needing extensive HTML knowledge. For LLM applications like CrewAI, Panel has expanded its offerings with specialized widgets, including the ChatInterface template, which will be central to our CrewAI demo.
Unlike Streamlit, which operates on a “refresh” model that resets the app with each interaction, Panel runs as a continuous server in the background. This allows it to maintain the state and flow of the application seamlessly, similar to a traditional Python program. This design is particularly advantageous for applications requiring ongoing interaction between users and agents.
Getting Started with Panel
To set up a basic Panel server, follow these steps:
- Import the Panel package:
import panel as pn
- Configure the overall UI appearance by selecting a design style:
pn.extension(design="material")
- Create and initialize the ChatInterface widget:
chat_interface = pn.chat.ChatInterface(callback=callback)
chat_interface.send("Send a message!", user="System", respond=False)
chat_interface.servable()
In this setup, the ChatInterface widget is initialized with a callback function that will manage the core functionality of the CrewAI application. Messages can be added to the chat interface using the .send() method.
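The CrewAI callback is defined later in this tutorial; to try the interface on its own first, a minimal placeholder callback could simply echo the user's message (an illustrative sketch, not part of the final app):

def callback(contents: str, user: str, instance: pn.chat.ChatInterface):
    # Returning a string posts it to the chat window as the response.
    return f"You said: {contents}"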
- Run the Panel server from the terminal:
panel serve app.py
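During development, it can also be handy to have the app reload whenever the source file changes; recent Panel versions support an optional --autoreload flag for this:

panel serve app.py --autoreload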
Once the server starts, you’ll see output in the terminal indicating that the app is running, such as:
2024-04-21 19:12:46,169 Starting Bokeh server version 3.4.0 (running on Tornado 6.4)
2024-04-21 19:12:46,171 User authentication hooks NOT provided (default user enabled)
2024-04-21 19:12:46,176 Bokeh app running at: http://localhost:5006/crewai_panel
2024-04-21 19:12:46,176 Starting Bokeh server with process id: 5308
This output confirms that your Panel server is up and running, and you can access the CrewAI application via the provided URL. Panel’s ability to maintain a persistent server state makes it an excellent choice for applications requiring dynamic, ongoing interactions, which is crucial for effectively managing complex workflows in CrewAI.
Developing a Dynamic CrewAI User Interface with Panel
With an understanding of how Panel operates, let’s integrate the CrewAI workflow into a Panel-based interface.
Designing the CrewAI Workflow
Our demonstration will create a copywriting studio where users input a topic for a writing task. The interface will showcase how the orchestrator agent coordinates with the writer and reviewer agents sequentially. Users will then interact through the interface to confirm or provide feedback before the final result is generated.
To achieve this, we’ll utilize the CrewAI framework’s capabilities to define Agents, Tasks, Crew, and Process. Here’s how to set up and integrate these components:
- Import Dependencies:
from crewai import Crew, Process, Agent, Task
from langchain_openai import ChatOpenAI
- Define the Language Model:
llm = ChatOpenAI(model="gpt-3.5-turbo-0125")
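ChatOpenAI reads the OpenAI API key from the environment, so make sure it is available before starting the Panel server (a setup sketch; the value shown is a placeholder, and you may prefer a .env file or another secrets mechanism):

import os

# Placeholder value; supply your real key via the environment or a secrets manager.
os.environ["OPENAI_API_KEY"] = "sk-..."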
- Create Agents for Writing and Reviewing: Define the agents with their respective roles, backstories, and goals. Ensure each agent has a well-defined role and purpose for effective interaction.
# MyCustomHandler is the custom callback handler defined in the next step.
writer = Agent(
    role='Blog Post Writer',
    backstory='''You are a blog post writer capable of crafting travel blogs. You generate one iteration of an article at a time and are open to feedback from the reviewer.''',
    goal="Write and iterate a compelling blog post.",
    llm=llm,
    callbacks=[MyCustomHandler("Writer")],
)

reviewer = Agent(
    role='Blog Post Reviewer',
    backstory='''You are a professional reviewer who provides detailed feedback to enhance articles. You review the entire article and offer improvement recommendations but do not generate content.''',
    goal="Provide actionable feedback to improve the blog post for better engagement.",
    llm=llm,
    callbacks=[MyCustomHandler("Reviewer")],
)
- Define a Custom Callback Handler: Create a callback handler to manage and display output in the Panel interface.
from langchain_core.callbacks import BaseCallbackHandler
from typing import Any, Dict
avatars = {
    "Writer": "https://cdn-icons-png.flaticon.com/512/320/320336.png",
    "Reviewer": "https://cdn-icons-png.freepik.com/512/9408/9408201.png",
}

class MyCustomHandler(BaseCallbackHandler):
    def __init__(self, agent_name: str) -> None:
        self.agent_name = agent_name

    def on_chain_start(self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) -> None:
        """Display the prompt sent to the agent in the chat window."""
        chat_interface.send(inputs['input'], user="assistant", respond=False)

    def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:
        """Display the agent's response in the chat window, with its avatar."""
        chat_interface.send(outputs['output'], user=self.agent_name, avatar=avatars[self.agent_name], respond=False)
- Define the CrewAI Workflow Function:
def StartCrew(prompt):
    task1 = Task(
        description=f"Write a blog post about {prompt}.",
        agent=writer,
        expected_output="An article of under 100 words."
    )

    task2 = Task(
        description="List review comments to improve the blog post for better social media engagement. Ensure comments are confirmed with a human before finalizing.",
        agent=reviewer,
        expected_output="Points for improvement.",
        human_input=True,  # pause the crew and ask a human for feedback
    )

    # With a hierarchical process, the manager LLM orchestrates how the tasks
    # are delegated between the writer and reviewer agents.
    project_crew = Crew(
        tasks=[task1, task2],
        agents=[writer, reviewer],
        manager_llm=llm,
        process=Process.hierarchical
    )
    result = project_crew.kickoff()

    chat_interface.send("## Final Result\n" + result, user="assistant", respond=False)
- Integrate with the Panel Callback: Define a callback to route user inputs to the CrewAI workflow. The first message kicks off the crew on a background thread so the Panel server stays responsive; any later message is treated as human feedback for the agent that is waiting on it.
import threading
import time

user_input = None
initiate_chat_task_created = False

def initiate_chat(message):
    global initiate_chat_task_created
    initiate_chat_task_created = True
    StartCrew(message)

def callback(contents: str, user: str, instance: pn.chat.ChatInterface):
    global initiate_chat_task_created
    global user_input

    if not initiate_chat_task_created:
        # First message: start the CrewAI workflow on a background thread so
        # the blocking crew run does not freeze the Panel server.
        thread = threading.Thread(target=initiate_chat, args=(contents,))
        thread.start()
    else:
        # Subsequent messages: hand them to the agent waiting for human input.
        user_input = contents
- Handle Human Input with a Custom Method: Override the _ask_human_input method to manage human interactions via the Panel interface.
from crewai.agents import CrewAgentExecutor
import time

def custom_ask_human_input(self, final_answer: dict) -> str:
    global user_input

    # Post the agent's request for feedback to the chat window.
    prompt = self._i18n.slice("getting_input").format(final_answer=final_answer)
    chat_interface.send(prompt, user="assistant", respond=False)

    # Block until the Panel callback receives the user's reply.
    while user_input is None:
        time.sleep(1)

    human_comments = user_input
    user_input = None
    return human_comments

CrewAgentExecutor._ask_human_input = custom_ask_human_input
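Finally, wire everything to the ChatInterface shown in the Getting Started section. The chat_interface object referenced by the custom handler, StartCrew, and the human-input override is that same widget; creating it after the callback is defined keeps the assembly straightforward. A sketch of the closing lines of app.py (the welcome message is just an example):

chat_interface = pn.chat.ChatInterface(callback=callback)
chat_interface.send("Send a topic for your blog post to get started!", user="System", respond=False)
chat_interface.servable()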
Conclusion
By following these steps, you can build a robust CrewAI application with a dynamic Panel-based UI. This setup facilitates an interactive environment where agents and users collaborate effectively, ensuring a smooth and engaging workflow. Happy coding!