
WebSurfer#

Turbo AI allows you to quickly create workflows that rely on live browsing, automatic data retrieval, and other tasks requiring up-to-date web information, making it easy to integrate web functionality into your agents.

Adding Web Surfing Capabilities to Agents#

Turbo AI provides two ways to add web surfing capabilities to agents. You can either:

  1. Use a WebSurferAgent, which comes with built-in web surfing capabilities (recommended)
  2. Enhance an existing agent with web surfing capability

In this guide, we'll demonstrate both methods with a real-world example. We’ll create a workflow where agents search the web for real-time data.

We’ll build agents and assign them the task “Search for information about Microsoft AutoGen and summarize the results” to showcase their ability to browse the web and gather real-time data.

Installation & Setup#

We strongly recommend using Cookiecutter for setting up the project. Cookiecutter creates the project folder structure and a default workflow, automatically installs all the necessary requirements, and generates a devcontainer that can be used with Visual Studio Code.

You can set up the project using Cookiecutter by following the guide.

Alternatively, you can use pip + venv. Before getting started, make sure you have installed Turbo AI with support for the AutoGen runtime by running the following command:

pip install "Turbo AI[autogen]"

This command installs Turbo AI with support for the Console interface and AutoGen framework.

Create Bing Web Search API Key#

To create a Bing Web Search API key, follow the guide provided.

Note

You will need to create a Microsoft Azure account.

Set Up Your API Key in the Environment#

You can set the Bing API key in your terminal as an environment variable.

On macOS/Linux:

export BING_API_KEY="your_bing_api_key"

On Windows:

set BING_API_KEY="your_bing_api_key"
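
If you want to confirm that the keys are actually visible to your Python process, a quick check like the one below can help. This is just a convenience sketch using the standard library, not part of the workflow; it covers both BING_API_KEY and the OPENAI_API_KEY used later by the LLM configuration.

import os

# Sanity check: both keys are read later via os.getenv, so fail early
# if either one is missing from the environment.
for key in ("OPENAI_API_KEY", "BING_API_KEY"):
    if not os.getenv(key):
        raise RuntimeError(f"{key} is not set; export it before running the workflow.")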

Example: Search for information about Microsoft AutoGen and summarize the results#

Step-by-Step Breakdown#

1. Import Required Modules#

The example starts by importing the necessary modules from AutoGen and Turbo AI. These imports lay the foundation for building and running multi-agent workflows.

import os
from typing import Any

from autogen import UserProxyAgent

from Turbo AI import UI, Turbo AI
from Turbo AI.runtimes.autogen import AutoGenWorkflows
from Turbo AI.runtimes.autogen.agents.websurfer import WebSurferAgent
from Turbo AI.ui.console import ConsoleUI

The first import block above is for the WebSurferAgent approach: simply import WebSurferAgent, which comes with built-in web surfing capabilities, and use it as needed.

import os
from typing import Any

from autogen import UserProxyAgent
from autogen.agentchat import ConversableAgent

from Turbo AI import UI, Turbo AI
from Turbo AI.runtimes.autogen import AutoGenWorkflows
from Turbo AI.runtimes.autogen.tools import WebSurferTool
from Turbo AI.ui.console import ConsoleUI

The second block is for the WebSurferTool approach: to enhance an existing agent with web surfing capability, import WebSurferTool from Turbo AI and ConversableAgent from AutoGen.

2. Configure the Language Model (LLM)#

Here, the large language model is configured to use the gpt-4o-mini model, and the API key is retrieved from the environment. This setup ensures that both the user and websurfer agents can interact effectively.

llm_config = {
    "config_list": [
        {
            "model": "gpt-4o-mini",
            "api_key": os.getenv("OPENAI_API_KEY"),
        }
    ],
    "temperature": 0.8,
}

3. Define the Workflow and Agents#

For the WebSurferAgent approach, we create two agents and specify the initial message that will be displayed to users when the workflow starts:

  • UserProxyAgent: This agent simulates the user interacting with the system.

  • WebSurferAgent: This agent functions as a web surfer, with built-in capability to browse the web and fetch real-time data as required.

wf = AutoGenWorkflows()

@wf.register(name="simple_websurfer", description="WebSurfer chat")  # type: ignore[type-var]
def websurfer_workflow(
    ui: UI, params: dict[str, Any]
) -> str:
    initial_message = ui.text_input(
        sender="Workflow",
        recipient="User",
        prompt="I can help you with your web search. What would you like to know?",
    )
    user_agent = UserProxyAgent(
        name="User_Agent",
        system_message="You are a user agent",
        llm_config=llm_config,
        human_input_mode="NEVER",
    )
    web_surfer = WebSurferAgent(
        name="Assistant_Agent",
        llm_config=llm_config,
        summarizer_llm_config=llm_config,
        human_input_mode="NEVER",
        executor=user_agent,
        bing_api_key=os.getenv("BING_API_KEY"),
    )

When creating the WebSurferAgent, the executor parameter must be provided. This can be either a single instance of ConversableAgent or a list of ConversableAgent instances.

The WebSurferAgent relies on the executor agent(s) to execute the web surfing tasks. In this example, the web_surfer agent will call the user_agent agent with the necessary instructions when web surfing is required, and the user_agent will execute those instructions.
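
As a minimal sketch of the list form, the WebSurferAgent could be constructed as below. Here second_executor is a hypothetical additional agent, shown only to illustrate the parameter; the single-executor form used in the workflow above works the same way.

web_surfer = WebSurferAgent(
    name="Assistant_Agent",
    llm_config=llm_config,
    summarizer_llm_config=llm_config,
    human_input_mode="NEVER",
    # A list of ConversableAgent instances; second_executor is hypothetical.
    executor=[user_agent, second_executor],
    bing_api_key=os.getenv("BING_API_KEY"),
)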

For the WebSurferTool approach, we create two agents and a web surfer tool, and set the initial message that will be displayed to users when the workflow starts:

  • UserProxyAgent: This agent simulates the user interacting with the system.

  • ConversableAgent: This is the conversable agent to which we will be adding web surfing capabilities.

  • WebSurferTool: The tool that gives the ConversableAgent the ability to browse the web after it has been registered.

wf = AutoGenWorkflows()


@wf.register(name="simple_websurfer", description="WebSurfer chat")  # type: ignore[type-var]
def websurfer_workflow(
    ui: UI, params: dict[str, Any]
) -> str:
    initial_message = ui.text_input(
        sender="Workflow",
        recipient="User",
        prompt="I can help you with your web search. What would you like to know?",
    )

    user_agent = UserProxyAgent(
        name="User_Agent",
        system_message="You are a user agent",
        llm_config=llm_config,
        human_input_mode="NEVER",
    )
    assistant_agent = ConversableAgent(
        name="Assistant_Agent",
        system_message="You are a useful assistant",
        llm_config=llm_config,
        human_input_mode="NEVER",
    )

    web_surfer = WebSurferTool(
        name_prefix="Web_Surfer",
        llm_config=llm_config,
        summarizer_llm_config=llm_config,
        bing_api_key=os.getenv("BING_API_KEY"),
    )

Now, we need to register the WebSurferTool with a caller and an executor. This setup allows the caller to use the WebSurferTool for performing real-time web interactions.

    web_surfer.register(
        caller=assistant_agent,
        executor=user_agent,
    )

The executor can be either a single instance of ConversableAgent or a list of ConversableAgent instances.

The caller relies on the executor agent(s) to execute the web surfing tasks. In this example, the assistant_agent agent will call the user_agent agent with the necessary instructions when web surfing is required, and the user_agent will execute those instructions.
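
For example, registering the tool with more than one executor would look roughly like this; another_executor is hypothetical and only illustrates the list form:

# Sketch only: the executor argument also accepts a list of agents.
web_surfer.register(
    caller=assistant_agent,
    executor=[user_agent, another_executor],  # another_executor is hypothetical
)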

4. Enable Agent Interaction and Chat#

Here, the user agent starts a conversation with the websurfer agent (or, in the tool-based approach, with the assistant agent), which performs a web search and returns summarized information. The conversation is then summarized using the reflection_with_llm method provided by the LLM. The first call below belongs to the WebSurferAgent workflow and the second to the WebSurferTool workflow:

chat_result = user_agent.initiate_chat(
    web_surfer,
    message=initial_message,
    summary_method="reflection_with_llm",
    max_turns=3,
)
chat_result = user_agent.initiate_chat(
    assistant_agent,
    message=initial_message,
    summary_method="reflection_with_llm",
    max_turns=3,
)

return chat_result.summary  # type: ignore[no-any-return]

5. Create and Run the Application#

Finally, we create the Turbo AI application with the console interface. The application itself is launched with the Turbo AI CLI, as shown in the Running the Application section below.

app = Turbo AI(provider=wf, ui=ConsoleUI())

Complete Application Code#

websurfer_agent.py
import os
from typing import Any

from autogen import UserProxyAgent

from Turbo AI import UI, Turbo AI
from Turbo AI.runtimes.autogen import AutoGenWorkflows
from Turbo AI.runtimes.autogen.agents.websurfer import WebSurferAgent
from Turbo AI.ui.console import ConsoleUI

llm_config = {
    "config_list": [
        {
            "model": "gpt-4o-mini",
            "api_key": os.getenv("OPENAI_API_KEY"),
        }
    ],
    "temperature": 0.8,
}

wf = AutoGenWorkflows()

@wf.register(name="simple_websurfer", description="WebSurfer chat")  # type: ignore[type-var]
def websurfer_workflow(
    ui: UI, params: dict[str, Any]
) -> str:
    initial_message = ui.text_input(
        sender="Workflow",
        recipient="User",
        prompt="I can help you with your web search. What would you like to know?",
    )
    user_agent = UserProxyAgent(
        name="User_Agent",
        system_message="You are a user agent",
        llm_config=llm_config,
        human_input_mode="NEVER",
    )
    web_surfer = WebSurferAgent(
        name="Assistant_Agent",
        llm_config=llm_config,
        summarizer_llm_config=llm_config,
        human_input_mode="NEVER",
        executor=user_agent,
        bing_api_key=os.getenv("BING_API_KEY"),
    )

    chat_result = user_agent.initiate_chat(
        web_surfer,
        message=initial_message,
        summary_method="reflection_with_llm",
        max_turns=3,
    )

    return chat_result.summary  # type: ignore[no-any-return]


app = Turbo AI(provider=wf, ui=ConsoleUI())

websurfer_tool.py
import os
from typing import Any

from autogen import UserProxyAgent
from autogen.agentchat import ConversableAgent

from Turbo AI import UI, Turbo AI
from Turbo AI.runtimes.autogen import AutoGenWorkflows
from Turbo AI.runtimes.autogen.tools import WebSurferTool
from Turbo AI.ui.console import ConsoleUI

llm_config = {
    "config_list": [
        {
            "model": "gpt-4o-mini",
            "api_key": os.getenv("OPENAI_API_KEY"),
        }
    ],
    "temperature": 0.8,
}

wf = AutoGenWorkflows()


@wf.register(name="simple_websurfer", description="WebSurfer chat")  # type: ignore[type-var]
def websurfer_workflow(
    ui: UI, params: dict[str, Any]
) -> str:
    initial_message = ui.text_input(
        sender="Workflow",
        recipient="User",
        prompt="I can help you with your web search. What would you like to know?",
    )

    user_agent = UserProxyAgent(
        name="User_Agent",
        system_message="You are a user agent",
        llm_config=llm_config,
        human_input_mode="NEVER",
    )
    assistant_agent = ConversableAgent(
        name="Assistant_Agent",
        system_message="You are a useful assistant",
        llm_config=llm_config,
        human_input_mode="NEVER",
    )

    web_surfer = WebSurferTool(
        name_prefix="Web_Surfer",
        llm_config=llm_config,
        summarizer_llm_config=llm_config,
        bing_api_key=os.getenv("BING_API_KEY"),
    )

    web_surfer.register(
        caller=assistant_agent,
        executor=user_agent,
    )

    chat_result = user_agent.initiate_chat(
        assistant_agent,
        message=initial_message,
        summary_method="reflection_with_llm",
        max_turns=3,
    )

    return chat_result.summary  # type: ignore[no-any-return]


app = Turbo AI(provider=wf, ui=ConsoleUI())

Running the Application#

Run the file that matches the approach you followed:

Turbo AI run websurfer_agent.py
Turbo AI run websurfer_tool.py

Ensure you have set your OpenAI and Bing API keys in the environment. The command will launch a console interface where users can input their requests and interact with the websurfer agent.

Output#

Once you run it, Turbo AI automatically detects the appropriate app to execute and runs it. The application will then prompt you with: "I can help you with your web search. What would you like to know?:"

╭── Python module file ───╮
│                         │
│  🐍 websurfer_agent.py  │
│                         │
╰─────────────────────────╯

[INFO] Importing autogen.base.py
[INFO] Initializing Turbo AI <Turbo AI title=Turbo AI application> with workflows: <Turbo AI.runtimes.autogen.autogen.AutoGenWorkflows object at 0x109a51610> and UI: <Turbo AI.ui.console.console.ConsoleUI object at 0x109adced0>
[INFO] Initialized Turbo AI: <Turbo AI title=Turbo AI application>

╭──── Importable Turbo AI app ────╮
│                                   │
│  from websurfer_agent import app  │
│                                   │
╰───────────────────────────────────╯

╭─ Turbo AI -> user [workflow_started] ──────────────────────────────────────╮
│                                                                              │
│ {                                                                            │
│   "name": "simple_websurfer",                                                │
│   "description": "WebSurfer chat",                                           │
│                                                                              │
│ "params": {}                                                                 │
│ }                                                                            │
╰──────────────────────────────────────────────────────────────────────────────╯

╭─ Workflow -> User [text_input] ──────────────────────────────────────────────╮
│                                                                              │
│ I can help you with your web search. What would you like to know?:           │
╰──────────────────────────────────────────────────────────────────────────────╯
╭── Python module file ──╮
│                        │
│  🐍 websurfer_tool.py  │
│                        │
╰────────────────────────╯

[INFO] Importing autogen.base.py
[INFO] Initializing Turbo AI <Turbo AI title=Turbo AI application> with workflows: <Turbo AI.runtimes.autogen.autogen.AutoGenWorkflows object at 0x11368cbd0> and UI: <Turbo AI.ui.console.console.ConsoleUI object at 0x13441c510>
[INFO] Initialized Turbo AI: <Turbo AI title=Turbo AI application>

╭─── Importable Turbo AI app ────╮
│                                  │
│  from websurfer_tool import app  │
│                                  │
╰──────────────────────────────────╯

╭─ Turbo AI -> user [workflow_started] ──────────────────────────────────────╮
│                                                                              │
│ {                                                                            │
│   "name": "simple_websurfer",                                                │
│   "description": "WebSurfer chat",                                           │
│                                                                              │
│ "params": {}                                                                 │
│ }                                                                            │
╰──────────────────────────────────────────────────────────────────────────────╯

╭─ Workflow -> User [text_input] ──────────────────────────────────────────────╮
│                                                                              │
│ I can help you with your web search. What would you like to know?:           │
╰──────────────────────────────────────────────────────────────────────────────╯

In the prompt, type "Search for information about Microsoft AutoGen and summarize the results" and press Enter.

This will initiate the task, allowing you to see the real-time conversation between the agents as they collaborate to complete it. Once the task is finished, you’ll see an output similar to the one below.

╭─ workflow -> user [workflow_completed] ──────────────────────────────────────╮
│                                                                              │
│ {                                                                            │
│   "result": "Microsoft AutoGen is an open-source framework designed          │
│ to simplify the orchestration, optimization, and automation of large         │
│ language model (LLM) workflows. It features customizable agents,             │
│ multi-agent conversations, tool integration, and human involvement,          │
│ making it suitable for complex AI applications. Key resources include        │
│ the Microsoft Research Blog and the GitHub repository for AutoGen."          │
│ }                                                                            │
╰──────────────────────────────────────────────────────────────────────────────╯

╭─ Turbo AI -> user [workflow_started] ──────────────────────────────────────╮
│                                                                              │
│ {                                                                            │
│   "name": "simple_websurfer",                                                │
│   "description": "WebSurfer chat",                                           │
│                                                                              │
│ "params": {}                                                                 │
│ }                                                                            │
╰──────────────────────────────────────────────────────────────────────────────╯

╭─ Workflow -> User [text_input] ──────────────────────────────────────────────╮
│                                                                              │
│ I can help you with your web search. What would you like to know?:           │
╰──────────────────────────────────────────────────────────────────────────────╯

The agent will summarize its findings and then prompt you again with "I can help you with your web search. What would you like to know?:", allowing you to continue the conversation with the web surfer agent.


This example demonstrates the power of the AutoGen runtime within Turbo AI, showcasing how easily LLM-powered agents can be integrated with browsing capabilities to fetch and process real-time information. By leveraging Turbo AI, developers can quickly build interactive, scalable applications that work with live data sources.