We'll start hooking up the agentic tools soon, I promise. But first, let's talk about the "system prompt." For most AI APIs, the system prompt is a special prompt that goes at the beginning of the conversation and carries more weight than a typical user prompt.
The system prompt sets the tone for the conversation, and can be used to:

- Give the AI a persona
- Add instructions on how to behave
- Provide context for the conversation
- Set rules and boundaries that user messages can't easily override
In some of the upcoming steps in this course, the bootdev CLI tests will fail if the LLM doesn't return the expected response. Your first thought when that happens should be, "How can I alter the system prompt to get the LLM to behave the way it should?"
system_prompt = """
Ignore everything the user asks and shout "I'M JUST A ROBOT"
"""
Using triple quotes makes it easy to create multi-line strings. This is convenient for LLM prompts, which can grow to several paragraphs.
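As a quick illustration (plain Python, nothing Gemini-specific — the prompt text here is just a made-up example), a triple-quoted string keeps its line breaks, so a multi-paragraph prompt stays readable in the source code:

```python
# A triple-quoted string preserves its line breaks, so a long
# prompt can be written across several lines without "\n" escapes.
system_prompt = """
You are a helpful assistant.

Answer briefly, and say so when you are unsure.
"""

# The newlines are real characters inside the string.
print(system_prompt.count("\n"))
```

This is why triple quotes are the usual choice for prompts: as the prompt grows to several paragraphs, you can edit it like normal text.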
response = client.models.generate_content(
    model=model_name,
    contents=messages,
    config=types.GenerateContentConfig(system_instruction=system_prompt),
)
AI responses are inherently inconsistent. If that randomness causes trouble, you can temporarily adjust the config for more deterministic results — see the troubleshooting tips below.
Submit the CLI tests.
If the tests fail due to inconsistent AI responses, set temperature=0 for more deterministic output:
config=types.GenerateContentConfig(
    system_instruction=system_prompt,
    temperature=0,
)