Large Language Models (LLMs) are the fancy-schmancy AI technology that has been making waves in the AI world recently. Products like:
... are all powered by LLMs. For the purposes of this course, you can think of an LLM as a smart text generator. It works just like ChatGPT: you give it a prompt, and it gives you back some text that it believes answers your prompt. We're going to use Google's Gemini API to power our agent in this course. It's reasonably smart, but more importantly for us, it has a free tier.
You can think of tokens as the currency of LLMs. They're how LLMs measure how much text they have to process. A token is roughly 4 characters for most models. When working with LLM APIs, it's important to understand how many tokens you're using.
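As a rough illustration of that 4-characters-per-token heuristic (this is just an approximation, not how the API actually tokenizes text):

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for most models.
    # Real tokenizers split on subword units, so this is only an estimate.
    return max(1, len(text) // 4)

prompt = "Why is Boot.dev such a great place to learn backend development?"
print(estimate_tokens(prompt))  # ~16 tokens for this 64-character prompt
```

A real token count comes back from the API itself, which we'll look at shortly.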
We'll be staying well within the free tier limits of the Gemini API, but we'll still monitor our token usage!
Be aware that all API calls, including those made during local testing, consume tokens from your free tier quota. If you exhaust your quota, you may need to wait for it to reset (typically 24 hours) to continue the lesson. Regenerating your API key will not reset your quota.
If you already have a GCP account and a project, you can create the API key in that project. If you don't, Google AI Studio will automatically create one for you.
Create a new `.env` file in your project and paste your key in:

```
GEMINI_API_KEY="your_api_key_here"
```
We never want to commit API keys, passwords, or other sensitive information to git.
```python
import os

from dotenv import load_dotenv
from google import genai

# Load the API key from the .env file into the environment
load_dotenv()
api_key = os.environ.get("GEMINI_API_KEY")

# Create a new Gemini client with the key
client = genai.Client(api_key=api_key)
```
Use the client's `generate_content` method with two named arguments:

- `model`: The model name: `gemini-2.0-flash-001` (this one has a generous free tier)
- `contents`: The prompt to send to the model (a string). For now, hardcode this prompt: "Why is Boot.dev such a great place to learn backend development? Use one paragraph maximum."
The `generate_content` method returns a `GenerateContentResponse` object. Print the `.text` property of the response to see the model's answer.
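Putting the pieces together, a minimal sketch of the script might look like this (it makes a real API call, so it needs a valid key in your `.env` and will consume a few tokens from your quota):

```python
import os

from dotenv import load_dotenv  # assumes the python-dotenv package is installed
from google import genai  # assumes the google-genai package is installed


def main():
    # Load GEMINI_API_KEY from the .env file into the environment
    load_dotenv()
    client = genai.Client(api_key=os.environ.get("GEMINI_API_KEY"))

    # Ask the model a question and print its answer
    response = client.models.generate_content(
        model="gemini-2.0-flash-001",
        contents="Why is Boot.dev such a great place to learn backend development? Use one paragraph maximum.",
    )
    print(response.text)


if __name__ == "__main__":
    main()
```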
If everything is working as intended, you should be able to run your code and see the model's response in your terminal!
In addition to the model's answer, print the token usage in this format:

```
Prompt tokens: X
Response tokens: Y
```

The response has a `.usage_metadata` property that has both:

- a `prompt_token_count` property (tokens in the prompt)
- a `candidates_token_count` property (tokens in the response)

Run and submit the CLI tests.
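The token-usage printing can be sketched like this. To keep the example runnable without an API call, it uses a stand-in object with hypothetical counts; a real `GenerateContentResponse` exposes the same attribute names:

```python
from types import SimpleNamespace


def print_usage(response):
    # usage_metadata carries the token counts for the request and the reply
    usage = response.usage_metadata
    print(f"Prompt tokens: {usage.prompt_token_count}")
    print(f"Response tokens: {usage.candidates_token_count}")


# Stand-in for a real GenerateContentResponse (hypothetical values)
fake_response = SimpleNamespace(
    usage_metadata=SimpleNamespace(prompt_token_count=12, candidates_token_count=87)
)
print_usage(fake_response)
```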
The Gemini API is an external web service, and on occasion it's slow or unreliable. It's possible in this course for you to lose armor because of an API outage on Google's end... just be sure to always use the run command before submitting to minimize the risk of that happening.
The Boot.dev CLI requires you to be signed in to submit your solution!
Copy/paste one of the following commands into your terminal:

Run:

```
bootdev run 3d695968-98c9-4a91-b1e2-0ca53e8826b7
```

Submit:

```
bootdev run 3d695968-98c9-4a91-b1e2-0ca53e8826b7 -s
```
Run the CLI commands to test your solution.
Using the Bootdev CLI
The Bootdev CLI is the only way to submit your solution for this type of lesson. We need to be able to run commands in your environment to verify your solution.
It's a Go program hosted on GitHub, so you'll need Go installed as well. Installation instructions are on the GitHub page.
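If you already have Go set up, installation is typically a single `go install` command along these lines (check the CLI's GitHub README for the current module path, as it may have changed):

```shell
# Installs the bootdev binary into $GOPATH/bin (or $HOME/go/bin)
go install github.com/bootdotdev/bootdev@latest
```

Make sure that directory is on your `PATH` so the `bootdev` command is available in your terminal.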