So we've got some function-calling working, but it's not fair to call our program an "agent" yet for one simple reason:
It has no feedback loop.
A key part of an "agent," as defined by AI-influencer-hype-bros, is that it can continuously use its tools to iterate on its own results. So we're going to build two things:
This is a big step; take your time. (But you're also really close to the finish line now!)
If you've refactored your `client.models.generate_content` request into a dedicated function, you'll be calling that function in the loop. If you're doing everything in `main()`, that's also fine – just take care that the loop is scoped correctly. A simple bounded `for` loop works well for this:
```python
for _ in range(20):
    # call the model, handle responses, etc.
    messages.append(types.Content(role="user", parts=function_responses))
```
There are two separate appends each loop: first the model's candidate content, then the collected tool responses.
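Putting those pieces together, the loop body might look something like this sketch. It's a minimal illustration, not the one required implementation: `Part`, `Content`, and `call_model` here are hypothetical stand-ins for the google-genai SDK's `types.Part`, `types.Content`, and your own model-calling helper, so the control flow can be shown without an API key.

```python
from dataclasses import dataclass

# Hypothetical stand-ins for the SDK types, so the sketch is self-contained.
# In the real agent you'd use types.Part and types.Content instead.
@dataclass
class Part:
    text: str

@dataclass
class Content:
    role: str
    parts: list

def agent_loop(call_model, messages, max_iters=20):
    """Call the model up to max_iters times, feeding tool results back in.

    call_model is assumed to return (candidate_content, function_responses,
    final_text), where final_text is None until the model stops calling tools.
    """
    for _ in range(max_iters):
        candidate, function_responses, final_text = call_model(messages)
        # First append: the model's own candidate content.
        messages.append(candidate)
        if final_text is not None:
            return final_text  # plain-text answer; the loop is done
        # Second append: the collected tool responses, sent back as "user".
        messages.append(Content(role="user", parts=function_responses))
    return None  # hit the iteration cap without a final answer
```

The cap (`max_iters`) matters: without it, a model that keeps calling tools would burn tokens forever.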
```
(aiagent) wagslane@MacBook-Pro-2 aiagent % uv run main.py "how does the calculator render results to the console?"
- Calling function: get_files_info
- Calling function: get_file_content
Final response:
```
Alright, I've examined the code in `main.py`. Here's how the calculator renders results to the console:
- **`print(to_print)`:** The core of the output is done using the `print()` function.
- **`format_json_output(expression, result)`:** Before printing, the `format_json_output` function (imported from `pkg.render`) is used to format the result and the original expression into a JSON-like string. This formatted string is then stored in the `to_print` variable.
- **Error handling:** The code includes error handling with `try...except` blocks. If there's an error during the calculation (e.g., invalid expression), an error message is printed to the console using `print(f"Error: {e}")`.
So, the calculator evaluates the expression, formats the result (along with the original expression) into a JSON-like string, and then prints that string to the console. It also prints error messages to the console if any errors occur.
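For reference, a formatter like the `format_json_output` the agent found could be sketched as below. This is a hypothetical reimplementation using `json.dumps`, not the actual code in `pkg/render.py`:

```python
import json

def format_json_output(expression, result, indent=2):
    # Hypothetical sketch: bundle the original expression and its result
    # into a JSON-formatted string for printing.
    if isinstance(result, float) and result.is_integer():
        result = int(result)  # render 8.0 as 8 for cleaner output
    return json.dumps({"expression": expression, "result": result}, indent=indent)

print(format_json_output("3 + 5", 8.0))
```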
You may or may not need to make adjustments to your system prompt to get the LLM to behave the way you want. You're a prompt engineer now, so act like one!
Submit the CLI tests.