As has been widely established by now, Chain-of-Thought (CoT) prompting is a highly effective method for querying LLMs with a single zero-shot or few-shot prompt.
It excels at tasks requiring multi-step reasoning, where the model is guided through step-by-step demonstrations, or simply prompted with the instruction "Let's think step by step", before addressing the problem.
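As a minimal sketch (assuming a LangChain-style chat model; the example question is arbitrary and only illustrative), zero-shot CoT amounts to appending the trigger phrase to the question:
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model_name="gpt-4o-mini", temperature=0)

question = "A shop sells pens at 3 for $5. How much do 12 pens cost?"
cot_prompt = f"{question}\nLet's think step by step."

# The trigger phrase nudges the model to produce intermediate reasoning before the final answer
print(llm.invoke(cot_prompt).content)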
However, recent studies have identified three main limitations of CoT prompting:
- Calculation errors: a 7% failure rate in test examples.
- Missing steps: a 12% failure rate in sequential events.
- Semantic misunderstanding: a 27% failure rate in test examples.
To address these issues, Plan-and-Solve (PS) prompting and its enhanced version, Plan-and-Solve with Detailed Instructions (PS+), have been introduced.
PS involves two key steps:
- Creating a plan to break the task into smaller subtasks, and then
- Executing these subtasks according to the plan (a prompt-level sketch follows this list).
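As a rough sketch (the wording below only approximates the PS trigger prompt from the original paper, and the example question is arbitrary), a PS-style prompt asks the model to plan first and then execute:
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model_name="gpt-4o-mini", temperature=0)

question = "A train travels 120 km in 1.5 hours. What is its average speed in km/h?"
ps_prompt = (
    f"Q: {question}\n"
    "A: Let's first understand the problem and devise a plan to solve it. "
    "Then, let's carry out the plan and solve the problem step by step."
)

print(llm.invoke(ps_prompt).content)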
This simple architecture represents the planning agent framework. It has two main components:
- Planner: Prompts an LLM to create a multi-step plan for a large task.
- Executors: Receive the user query and a step in the plan, then invoke one or more tools to complete that task.
After execution, the agent is prompted to re-plan, deciding whether to provide a final response or generate a follow-up plan if the initial plan was insufficient.
This design minimises the need to call the large planner LLM for every tool invocation.
However, it remains limited to serial tool calling and requires an LLM call for each task, since the framework does not support variable assignment.
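Conceptually, the loop looks roughly like the sketch below. This is illustrative pseudologic rather than LangChain's actual implementation; planner, executor and replanner are assumed stand-in callables passed in by the caller:
def plan_and_execute(query, planner, executor, replanner):
    """Illustrative sketch of the plan -> execute -> re-plan loop."""
    plan = planner(query)                        # one call to the large planner LLM
    past_steps = []
    while plan:
        step = plan.pop(0)                       # next step of the current plan
        result = executor(query, step)           # the executor invokes tools for this step
        past_steps.append((step, result))
        decision = replanner(query, past_steps)  # decide: final answer or revised plan
        if decision.is_final:
            return decision.response
        plan = decision.new_plan
    return past_steps[-1][1] if past_steps else ""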
The LLM is assigned in the following way (gpt-4o-mini is a chat model, so the ChatOpenAI wrapper is used):
llm = ChatOpenAI(temperature=0, model_name="gpt-4o-mini")
Below is the complete Python code for the AI agent. The only changes you need to make are adding your OpenAI API key and your LangSmith project variables.
### Install Required Packages:
pip install -qU langchain langchain-openai langchain_community langchain_experimental
pip install -qU duckduckgo-search
### Import Required Modules and Set Environment Variables:
import os
from uuid import uuid4
### Set Up the LangSmith Environment Variables
unique_id = uuid4().hex[0:8]
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_PROJECT"] = f"OpenAI_SM_1"
os.environ["LANGCHAIN_ENDPOINT"] = "https://api.smith.langchain.com"
os.environ["LANGCHAIN_API_KEY"] = ""
### Import LangChain Components and OpenAI API Key
from langchain.chains import LLMMathChain
from langchain_community.utilities import DuckDuckGoSearchAPIWrapper
from langchain_core.tools import Tool
from langchain_experimental.plan_and_execute import (
    PlanAndExecute,
    load_agent_executor,
    load_chat_planner,
)
from langchain_openai import ChatOpenAI, OpenAI
### Set the OpenAI API Key and Instantiate the LLM
os.environ["OPENAI_API_KEY"] = ""  # add your OpenAI API key here
llm = ChatOpenAI(temperature=0, model_name="gpt-4o-mini")  # gpt-4o-mini is a chat model
### Set Up Search and Math Chain Tools
search = DuckDuckGoSearchAPIWrapper()
llm_math_chain = LLMMathChain.from_llm(llm=llm, verbose=True)  # reuse the llm defined above
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events",
    ),
    Tool(
        name="Calculator",
        func=llm_math_chain.run,
        description="useful for when you need to answer questions about math",
    ),
]
### Initialize Planner and Executor
model = ChatOpenAI(model_name='gpt-4o-mini', temperature=0)
planner = load_chat_planner(model)
executor = load_agent_executor(model, tools, verbose=True)
agent = PlanAndExecute(planner=planner, executor=executor)
### Invoke the Agent
agent.invoke(
    "Who is the founder of SpaceX and what is the square root of his year of birth?"
)
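Optionally, capture the return value instead of discarding it; assuming the usual LangChain chain convention, the final answer should sit under the "output" key of the returned dict:
result = agent.invoke(
    "Who is the founder of SpaceX and what is the square root of his year of birth?"
)
print(result["output"])  # final answer (the "output" key is assumed here)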