

Having established the ReAct System Prompt and defined the necessary functions, we can now integrate these elements to construct our AI agent.

Let’s return to our main.py script to complete the setup.

Define Available Functions

First, list the functions the agent can utilize. For this example, we only have one:

available_actions = {
    "get_response_time": get_response_time
}

This mapping will enable the agent to look up and call the correct function by name.
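
For reference, the get_response_time function itself was defined earlier in the tutorial. A minimal dummy version, purely for illustration (hard-coded values rather than real network timing), could look like this:

def get_response_time(url):
    # Dummy lookup table for illustration only; a real implementation
    # might time an actual HTTP request instead.
    dummy_response_times = {
        "learnwithhasan.com": 0.5,
        "google.com": 0.3,
    }
    return dummy_response_times.get(url, 1.0)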

Set Up User and System Prompts

Define the user prompt and the messages that will be passed to the generate_text_with_conversation function we previously created:

user_prompt = "What is the response time for learnwithhasan.com?"

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": user_prompt},
]

The system prompt, structured as a ReAct loop directive, is provided as a system message to the OpenAI LLM.

Now the OpenAI LLM will be instructed to act in a loop of Thought, Action, and Action Response!
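
If you skipped the earlier section, the system prompt generally follows this shape. Here is a simplified sketch (not the exact prompt from the course) showing the loop structure and the JSON action format the agent is expected to emit:

system_prompt = """
You run in a loop of Thought, Action, PAUSE, Action_Response.
Use Thought to reason about the user's question.
Use Action to run one of the available actions, then return PAUSE.
Action_Response will be the result of running that action.

Your available actions are:
get_response_time: returns the response time of a website

Example:
Question: What is the response time for example.com?
Thought: I should check the response time for the web page.
Action: {"function_name": "get_response_time", "function_parms": {"url": "example.com"}}
PAUSE
"""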

Create the Agentic Loop

Implement the loop that processes user inputs and handles AI responses:

turn_count = 1
max_turns = 5

while turn_count < max_turns:
    print(f"Loop: {turn_count}")
    print("----------------------")
    turn_count += 1

    response = generate_text_with_conversation(messages, model="gpt-4")

    print(response)

    json_function = extract_json(response)

    if json_function:
        function_name = json_function[0]['function_name']
        function_parms = json_function[0]['function_parms']
        if function_name not in available_actions:
            raise Exception(f"Unknown action: {function_name}: {function_parms}")
        print(f" -- running {function_name} {function_parms}")
        action_function = available_actions[function_name]
        # Call the matched function with the extracted parameters
        result = action_function(**function_parms)
        function_result_message = f"Action_Response: {result}"
        messages.append({"role": "user", "content": function_result_message})
        print(function_result_message)
    else:
        # No function call in the response: the agent has its final answer
        break

This loop reflects the ReAct cycle, generating responses, extracting JSON-formatted function calls, and executing the appropriate actions.

So we generate the response, and we check if the LLM returned a function.
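
As a reminder, generate_text_with_conversation was created in an earlier section. A minimal sketch, assuming the official openai Python SDK, might look like this:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_text_with_conversation(messages, model="gpt-4"):
    # Send the full conversation history and return the assistant's reply
    response = client.chat.completions.create(
        model=model,
        messages=messages,
    )
    return response.choices[0].message.content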

I created the extract_json function to make it easy for you to extract any function calls from the LLM response.

In the following line:

json_function = extract_json(response)

We check whether the LLM returned a function to execute. If it did, the loop executes the function and appends the result to the messages, so in the next turn the LLM can use the Action_Response to answer the user's query.
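
The actual extract_json implementation ships with the downloadable codebase; a simplified sketch of the idea, assuming the LLM emits a single JSON object on its Action line as shown in the prompt above, could be:

import json
import re

def extract_json(text):
    # Grab the JSON object from the LLM output (first '{' to last '}')
    # and parse it; return a list of dicts, or None if nothing parses.
    match = re.search(r'\{.*\}', text, re.DOTALL)
    if not match:
        return None
    try:
        return [json.loads(match.group())]
    except json.JSONDecodeError:
        return None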

Test the Agent!

To see this agent in action, you can download the complete codebase using the link provided below:

Basic AI Agent Code

And if you'd like to see all this in action and see another real-world agent example, you can check out this free video:

For more in-depth exploration and additional examples, consider checking out my full course, “Build AI Agents From Scratch With Python.”

And remember, if you have any questions or encounter any issues, I'm available nearly every day on the forum to assist you, for free!


