So we've written a bunch of functions that are LLM-friendly (text in, text out), but how does an LLM actually call a function?
Well, the answer is that... it doesn't. At least not directly. It works like this: we tell the LLM which functions are available, it responds with a structured description of the function it wants to call and the arguments to pass, we run that function ourselves, and then we hand the result back to the LLM.
We're using the LLM as a decision-making engine, but we're still the ones running the code (thankfully).
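Conceptually, the model's reply just contains a structured description of the call it wants made. A simplified illustration of the shape (not the SDK's exact types, just the idea):

# Illustrative only: the model hands back "call this function with these arguments"
# as data; our code decides whether and how to actually run it.
requested_call = {
    "name": "get_files_info",
    "args": {"directory": "pkg"},
}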
Let's build the part of this system that tells the LLM which functions are available to it.
I added this code to my functions/get_files_info.py file, but you can place it anywhere. Remember that it will need to be imported when used.
In our solution, it's imported like this: from functions.get_files_info import schema_get_files_info
from google.genai import types

schema_get_files_info = types.FunctionDeclaration(
    name="get_files_info",
    description="Lists files in the specified directory along with their sizes, constrained to the working directory.",
    parameters=types.Schema(
        type=types.Type.OBJECT,
        properties={
            "directory": types.Schema(
                type=types.Type.STRING,
                description="The directory to list files from, relative to the working directory. If not provided, lists files in the working directory itself.",
            ),
        },
    ),
)
We won't allow the LLM to specify the working_directory parameter! We're going to hard-code that.
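To make that concrete, here's a rough sketch of how the injection could look when we eventually execute the call. The WORKING_DIR value and the dispatch helper are illustrative assumptions, not required code:

from functions.get_files_info import get_files_info

WORKING_DIR = "./calculator"  # assumption: whatever directory you hard-code as the sandbox

def dispatch_get_files_info(args: dict) -> str:
    # The LLM only ever supplies "directory"; working_directory comes from us.
    # Assumes the get_files_info(working_directory, directory) signature built earlier.
    return get_files_info(WORKING_DIR, args.get("directory"))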
Next, use types.Tool to bundle the declarations into a list of functions that are available to the model:

available_functions = types.Tool(
    function_declarations=[schema_get_files_info],
)
Then pass that tool list, along with the system prompt, to the model via the config:

config=types.GenerateContentConfig(
    tools=[available_functions], system_instruction=system_prompt
)
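For context, here's roughly where that config ends up; a sketch that assumes you already have a google-genai client and a messages list from earlier, and the model name is just an example:

# Sketch of the call site (assumes `client = genai.Client(...)` and a `messages`
# list built earlier in main.py).
response = client.models.generate_content(
    model="gemini-2.0-flash-001",  # example model name; use whichever you've been using
    contents=messages,
    config=types.GenerateContentConfig(
        tools=[available_functions],
        system_instruction=system_prompt,
    ),
)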
system_prompt = """
You are a helpful AI coding agent.
When a user asks a question or makes a request, make a function call plan. You can perform the following operations:
- List files and directories
All paths you provide should be relative to the working directory. You do not need to specify the working directory in your function calls as it is automatically injected for security reasons.
"""
f"Calling function: {function_call_part.name}({function_call_part.args})"
Otherwise, just print the text as normal.
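A minimal sketch of that branch, assuming the response object from the generate_content call above:

if response.function_calls:
    for function_call_part in response.function_calls:
        print(f"Calling function: {function_call_part.name}({function_call_part.args})")
else:
    print(response.text)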
Depending on the prompt, you should see the model request calls like:

get_files_info({'directory': '.'})
get_files_info({'directory': 'pkg'})

Run and submit the CLI tests.