How to Get a Gemini API Key (Free)
Step by step: get a free Gemini API key from Google AI Studio, make your first call, and avoid the three mistakes most people make on day one.
TL;DR
- Go to aistudio.google.com/app/apikey, sign in with a Google account, click "Create API key", pick or create a project, copy the key. Five minutes.
- The key starts with `AIza` and is about 39 characters long. Treat it like a password. Never commit it to git.
- Gemini's free tier in 2026 gives you real usage limits on Flash (generous) and Pro (smaller). Check the current quotas on the Google AI pricing page before building anything serious.
- Three day-one mistakes to avoid: committing the key to git, hardcoding it in frontend JavaScript, and using one key for everything you'll ever build.
- Once you have the key, you can use it in TinkerLLM, curl, Python, Node.js, and tools like Cursor or Claude Code. We cover all of them below.
You’ve used ChatGPT 500 times. You’ve never made an API call. That’s two different skills, and the gap between them is one free API key and about fifteen minutes of setup.
This post is that setup. We’ll get you a Gemini API key, make your first successful call, and cover the three mistakes I see developers make on day one that cost them either time or money later.
If you just want to use TinkerLLM’s paid exercises, you need this key. If you want to build anything with AI beyond pasting into a chat box, you need this key. Either way, the steps are the same.
What Google AI Studio Actually Is
Google AI Studio is the free developer surface for Gemini. It’s at aistudio.google.com, and it does two things: it’s a web-based prompt playground (similar to what TinkerLLM is, except without a curriculum), and it’s where you generate API keys to use Gemini from code.
Two terms that confuse people:
- Google AI Studio vs the Gemini app. gemini.google.com is the consumer chat product, like ChatGPT. AI Studio is for developers. Different product, different URL, different account settings.
- Google AI Studio vs Vertex AI. Vertex AI is Google Cloud’s enterprise AI platform. It has Gemini too, plus identity management, IAM roles, billing alerts, VPC integration, and other enterprise features. AI Studio is simpler and free-tier friendly. For learning or small projects, AI Studio is what you want. For production at a company, Vertex AI is usually the answer.
Good news: you don’t have to decide right now. The API key from AI Studio works immediately. You can migrate to Vertex AI later if you need to.
The Five-Step Walkthrough
Here’s exactly what to do. I’ve done this fresh on a new Google account twice this month. It takes about five minutes if nothing goes wrong.
Step 1: Go to the API key page.
Navigate to aistudio.google.com/app/apikey directly. This URL bypasses the landing page and takes you straight to the API key section. If you go to aistudio.google.com first, you’ll have to click through a welcome screen and accept terms before the API key page is accessible.
Step 2: Sign in with a Google account.
Any Google account works: Gmail, Workspace, personal, work. If you have multiple accounts, pick the one you’ll use for billing and recovery. I recommend a personal Gmail for learning projects, a work Google Workspace account for anything touching client work. Separating these early saves you from awkward account-moving later.
Step 3: Click “Create API key”.
You’ll see an orange or blue button. Click it. A dialog will appear asking about the Google Cloud project.
Step 4: Pick or create a project. This is the step that matters.
Google groups all API usage under a “project” (a Google Cloud concept). You have two options:
- Use an existing project you already have in Google Cloud. If you’ve used any Google Cloud service before, you might have one. It’ll be listed.
- Create a new project. Name it something recognizable: `tinkerllm-learning` or `my-ai-experiments`. Avoid generic names like `test` or `project-1`, because a year from now you'll have three of them and no idea which is which.
For learning purposes, create a new project. Keep your learning usage separate from anything else you’re doing in Google Cloud. This matters for billing alerts, for quota management, and for the day you want to revoke one key without affecting others.
Try it yourself: If you haven’t already, open aistudio.google.com/app/apikey in a new tab right now. Sign in. Click “Create API key”. Name the project `tinkerllm-learning`. You’ll have a working key before you finish reading this post. It’s easier to follow the rest of this guide with the key already in your clipboard.
Step 5: Copy the key.
The key appears on screen. It starts with AIza and is about 39 characters long. Copy it immediately. You can see it again later, but most people find it easier to save it now than to hunt for it later.
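If you're scripting your setup, a quick shape check can catch copy-paste truncation early. This is a heuristic sketch only: the `AIza` prefix and rough length come from what the keys look like today, and the exact format isn't guaranteed by Google, so treat a failed match as "re-copy the key", not a hard error.

```python
import re

def looks_like_gemini_key(key: str) -> bool:
    """Heuristic sanity check on a freshly copied key.

    Google API keys currently start with "AIza" and run roughly
    39 characters. The format is not a documented contract, so
    this only guards against obvious copy-paste accidents.
    """
    return re.fullmatch(r"AIza[0-9A-Za-z_\-]{30,}", key) is not None
```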
That’s the whole flow. Five minutes if your Google account is already signed in, closer to ten if you’re doing it fresh.
Where to Store the Key
This is the part nobody walks you through, and also the part where most mistakes happen.
For learning (TinkerLLM, quick scripts):
Paste the key into the application when it asks. TinkerLLM stores it in your browser’s localStorage only. It never goes to a server. That’s the BYOK (Bring Your Own Key) pattern, and we wrote about why we built TinkerLLM that way in the build story.
For quick personal scripts on your own machine, an environment variable is standard:
```shell
export GEMINI_API_KEY="AIzaSy..."
```
Add this to your shell profile (`~/.zshrc`, `~/.bashrc`, or equivalent). Then reference it in code as `process.env.GEMINI_API_KEY` or `os.environ['GEMINI_API_KEY']`.
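In Python, a small helper that fails fast when the variable is missing saves you a confusing auth error later. A minimal sketch (the helper name is ours, not part of any SDK):

```python
import os

def require_api_key(env=os.environ) -> str:
    """Fetch GEMINI_API_KEY, failing loudly if it was never exported."""
    key = env.get("GEMINI_API_KEY")
    if not key:
        raise RuntimeError(
            "GEMINI_API_KEY is not set. Add it to your shell profile "
            "and restart your shell."
        )
    return key
```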
For a real application:
Environment variables loaded from a .env file that is not committed to git. Use a library like dotenv (Node.js) or python-dotenv (Python) to read them. Every production framework (Next.js, FastAPI, Rails) has a standard way to handle this. Use the standard way.
For anything deployed:
Your hosting platform has a secrets interface. Vercel, Netlify, Cloudflare Workers, Firebase, AWS Lambda: all of them support environment variables set at the platform level, separate from your git repository. Never put the key in your code directly.
I’ll come back to security rules below, because a few of them are non-obvious.
Your First API Call
Four ways to call the API, in rough order of how most people start.
Using curl (works anywhere):
```shell
curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent" \
  -H "Content-Type: application/json" \
  -H "x-goog-api-key: $GEMINI_API_KEY" \
  -d '{
    "contents": [{
      "parts": [{"text": "Say hi in one sentence."}]
    }]
  }'
```
You’ll get a JSON response with the model’s answer in `candidates[0].content.parts[0].text`. If you see that, your key works.
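To pull the text out programmatically, walk that path in the parsed JSON. A sketch against a trimmed-down response (the `sample` payload below is illustrative, not a full API response):

```python
import json

# A trimmed-down stand-in for a generateContent response body.
sample = json.loads("""
{
  "candidates": [
    {"content": {"role": "model",
                 "parts": [{"text": "Hi there! Nice to meet you."}]}}
  ]
}
""")

# The answer lives at candidates[0].content.parts[0].text.
answer = sample["candidates"][0]["content"]["parts"][0]["text"]
print(answer)
```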
Using the Node.js SDK (@google/genai):
```javascript
import { GoogleGenAI } from "@google/genai";

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

const response = await ai.models.generateContent({
  model: "gemini-2.5-flash",
  contents: "Say hi in one sentence.",
});
console.log(response.text);
```
Install with `npm install @google/genai`. This is what TinkerLLM uses internally. It’s the current official SDK and the one Google updates most actively.
Using the Python SDK (google-genai):
```python
import os
from google import genai

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Say hi in one sentence."
)
print(response.text)
```
Install with `pip install google-genai`. That’s the package that provides `from google import genai`; the older `google-generativeai` package is a legacy SDK with a different API.
Using AI Studio itself:
The AI Studio web interface is also a prompt playground. You can test prompts without writing any code, tweak temperature and other parameters, and export the code once you have a prompt that works. For prototyping, this is often the fastest path.
If you’ve used the TinkerLLM playground, the AI Studio interface will feel familiar. The difference is TinkerLLM adds an exercise curriculum on top, while AI Studio is just the raw playground.
Try it yourself: Copy one of the code snippets above, paste your key, and run it. Print the response. If you see a coherent sentence come back, your key is wired up correctly. If you see an error, it’s almost always one of three things: a key typo, a revoked key, or a misspelled model name.
Free Tier: What You Actually Get
The free tier is generous enough for learning and small projects, and it’s the single biggest reason Gemini is the default for most of our getting-started content.
As of early 2026, the free tier gives you real usage limits on Gemini 2.5 Flash and smaller limits on Gemini 2.5 Pro. The exact quotas change periodically as Google adjusts pricing tiers, so the Gemini pricing page is always the source of truth. Check it before building anything that depends on a specific limit.
What generally holds:
- Flash free tier is generous enough for a full course. Going through all 68 TinkerLLM exercises won’t come close to the daily quota.
- Pro free tier is smaller. You’ll hit it faster if you’re using Pro heavily, which is why most lessons default to Flash.
- Rate limits matter. Free tier has per-minute rate limits. Five or six calls in quick succession is fine. Fifty in ten seconds will get throttled.
- Free tier is not for commercial use. If you’re building something you’ll charge money for, you need to be on a paid plan, which requires enabling billing on your project.
- Data is used for training on free tier. This is the most important clause. On the free tier, Google may use your prompts and responses to improve models. On paid tiers, they don’t. If you’re testing with sensitive data, use a paid project.
The pattern we recommend for most learners: start on free, build until you bump against a limit or need Pro-level quality or data privacy, then enable billing. If you’re curious why the same prompt in Hindi uses more quota than the same prompt in English, that’s covered in Tokens Explained, and it’s worth knowing before you estimate your own free-tier headroom.
Three Mistakes to Avoid on Day One
I’ve seen all three of these happen to engineers who should know better. The patterns repeat because the platforms make it easy to do the wrong thing.
Mistake 1: Committing the key to git.
You test a quick script, it works, you commit everything in a single git add . to the repo. The key is in a .env file or a config file or accidentally hardcoded in a line of code. You push. Google’s secret scanner finds it within minutes (they scan public GitHub) and automatically revokes the key.
That’s the good outcome. The bad outcome is the repo is private but someone else on your team has their computer compromised later, or you make the repo public without remembering, or the key leaks through a CI log.
The fix: .gitignore your .env files from day one. Never type the literal key characters into any file that git can see. If it slips through anyway, revoke the key immediately, generate a new one, and check if GitGuardian or similar scanners caught it.
Mistake 2: Hardcoding the key in frontend JavaScript.
This looks fine when you’re testing locally. It ships to production and now your API key is visible to anyone who opens browser dev tools on your site. Someone will find it. The key gets used for random things. Your bill goes up.
The fix: never put API keys in client-side code. Either use BYOK (user provides their own key, stored in their browser only, never sent to your server) or proxy the API call through your own backend that holds the key server-side.
TinkerLLM uses BYOK because we shipped a B2C learning product, and BYOK means zero marginal cost per student. For most other applications, the proxy pattern is what you want: client calls your server, server calls Gemini with the shared key, server returns the response to the client. This also lets you add auth, rate limits, logging, and cost controls.
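As a sketch of the server side of that proxy, here's the request-building step using only Python's standard library. The endpoint URL and `x-goog-api-key` header match the curl example above; `build_upstream_request` is our own illustrative helper, not an SDK function.

```python
import json
import urllib.request

GEMINI_URL = ("https://generativelanguage.googleapis.com/v1beta/"
              "models/gemini-2.5-flash:generateContent")

def build_upstream_request(user_prompt: str, api_key: str) -> urllib.request.Request:
    """Build the server-side call to Gemini.

    The key is injected here, on the server, so it never appears
    in any JavaScript bundle the browser can inspect.
    """
    body = json.dumps({"contents": [{"parts": [{"text": user_prompt}]}]})
    return urllib.request.Request(
        GEMINI_URL,
        data=body.encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "x-goog-api-key": api_key,  # the server-side secret, e.g. from an env var
        },
    )
```

Your real handler would send this with `urllib.request.urlopen` (or your HTTP client of choice) and relay the response body back to the browser; that same hop is where auth checks, rate limits, and logging slot in.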
Mistake 3: One key for everything.
You create one key for TinkerLLM. Then you use the same key for a side project. Then for a work prototype. Then for a quick test in Cursor. Six months later, you need to rotate it because of a suspected leak, and suddenly four different things stop working and you have no idea why.
The fix: one key per project. Creating a new key is free and takes thirty seconds. Name them. I keep a plain text file with a single line per key: project name, date created, where it’s used. When I need to rotate, it’s a five-minute job, not an investigation.
Using the Key in TinkerLLM
If you’re going through the TinkerLLM course, here’s the flow:
- Sign in at app.tinkerllm.com with Google. (Same Google account or different, doesn’t matter. The Google sign-in is for progress tracking, not for API access.)
- Open Settings. There’s a field for your Gemini API key.
- Paste the key.
- Start or continue any exercise. The key is used from your browser directly to Google. Our servers never see it.
Lessons 1 and 2 (the 26 free exercises) don’t require a key. Lesson 3 onwards does. If you paste a key and the first exercise still shows a “need API key” error, refresh the page. Browser storage needs a full load to pick up the new value on first paste.
Once the key is in, the first paid exercise I’d run is 3-1 (Deterministic Logic) from Lesson 3 in the curriculum. It’s the classic temperature-zero test: same prompt, same answer, three times in a row. Five seconds to run, and it confirms your key works on a lesson that actually tests the model behavior. The full breakdown of what that exercise demonstrates is in What Temperature Actually Does in LLMs.
Using the Key Elsewhere
The same API key works in every tool that supports Gemini:
- Cursor / Claude Code / GitHub Copilot CLI. Most developer-facing AI tools let you bring your own model. In Cursor settings, add Gemini 2.5 Pro as a custom model with your key.
- Zapier / Make / n8n. Workflow automation tools have Gemini blocks that accept a key directly.
- Custom scripts. The SDK examples above work standalone. Useful for batch processing, scheduled summaries, or anything you’d otherwise do in a Jupyter notebook.
- Chat UIs you build yourself. A weekend “chatbot for my documents” project is a good starter. Google’s Gemini API docs walk through the chat pattern.
Try it yourself: Once you have the key working in curl, try it in one other tool. Paste it into Cursor’s custom model settings, or into a Zapier Gemini block, or into a one-off Python script. The skill of “my key works everywhere because it’s just an HTTP header” is worth internalizing early. After that, every Gemini-compatible tool you encounter is a five-minute setup.
A Minimum Sanity Check
Before you build anything serious, run this three-step check:
- The key works. A curl or SDK test returns a coherent response.
- The key is in env, not code. Search your codebase for the literal characters `AIza`. If you find them anywhere outside comments or docs, move them to an env var.
- You know the quota. Open the AI Studio dashboard and look at your usage for the day. You should know roughly what’s free and what isn’t before you accidentally blow through it.
Thirty seconds. Saves you a weekend of debugging later.
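The second step of that check can be a ten-line script instead of a manual grep. A sketch (the function name is ours; `errors="ignore"` is a crude way to skip binary files):

```python
from pathlib import Path

def find_key_leaks(root: str, marker: str = "AIza") -> list:
    """Return paths under `root` whose contents include the key prefix."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip it
        if marker in text:
            hits.append(str(path))
    return sorted(hits)
```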
FAQ
Is the Gemini API really free?
The free tier is genuinely free to use, with quotas. You don’t need to enter a credit card to get an API key or make calls. The paid tier starts when you hit a free tier limit or when you enable billing to unlock higher quotas or commercial usage rights. For learning, prototyping, or small personal projects, most people stay free indefinitely. The Gemini pricing page has the current breakdown, which shifts periodically.
What’s the difference between the API key from AI Studio and a Vertex AI setup?
The AI Studio API key is simpler: one key, one HTTP header, ready in minutes. Vertex AI is Google Cloud’s enterprise platform with IAM roles, service accounts, VPC integration, audit logs, and SLAs. For learning and small projects, use AI Studio. For company-scale production or regulated industries, use Vertex AI. The same Gemini models are available on both. You can also start on AI Studio and migrate to Vertex AI later; your prompts and code are portable with small changes.
Can I use the same key for Gemini Flash and Gemini Pro?
Yes. One key, multiple models. You pick which model by name in your API call (gemini-2.5-flash, gemini-2.5-pro, gemini-2.5-flash-lite, etc.). The free tier has different quotas per model, so using Pro more aggressively will hit the Pro limit sooner than Flash. No separate key is needed for Pro; it’s the same key against a different model endpoint.
How do I know if my key is being used by someone else?
Check the usage dashboard in AI Studio. It shows request counts per day and per model. If you see usage you didn’t make, your key is probably leaked. Revoke the key immediately from the API keys page, create a new one, and replace it everywhere you use it. Then audit: check git history for accidental commits, check frontend bundles if you have any web app, check anyone you sent the key to. Google’s also fairly aggressive about auto-revoking keys they detect on public GitHub, which can be your first signal that a key leaked.
How fast is Gemini compared to GPT-4 or Claude?
Gemini 2.5 Flash is generally among the fastest mainstream models, often 30-40% faster than comparable models at similar quality tiers. Gemini Pro is slower but higher quality. Exact numbers change as each provider updates models, so benchmarks go stale fast. For a rough sense: Flash feels near-instant for short prompts, Pro has a noticeable pause. If latency is critical, test on your actual prompt, not synthetic benchmarks.
Can I use my Gemini API key in production?
Yes, with two cautions. First, the free tier usually doesn’t allow commercial use; you need billing enabled for production workloads. Second, don’t use the key in frontend code where users can see it. Proxy through your backend. Production-ready patterns: enable billing, set up budget alerts in Google Cloud Console, rotate the key quarterly, use a separate key per environment (dev, staging, prod), and monitor usage. All of this is standard and documented in the Google AI API docs.
Does Gemini have a rate limit per minute?
Yes. Free tier rate limits are lower than paid. As of early 2026, Flash is around 15 requests per minute on free tier; paid tiers scale up. If you hit the limit, the API returns a 429 status code. Handle it with exponential backoff (wait a second, retry; if it fails, wait two seconds, retry; and so on). Most SDKs have retry logic built in or available as an option. Check the current limits on the pricing page before building anything that depends on specific numbers.
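The retry loop described above can be sketched like this. `RateLimitError` is a stand-in for whatever 429 exception your HTTP client or SDK actually raises; check your library's docs for the real type before copying the pattern.

```python
import time

class RateLimitError(Exception):
    """Stand-in for an HTTP 429 response from the API."""

def with_backoff(call, max_retries=4, base_delay=1.0, sleep=time.sleep):
    """Run `call()`, retrying on rate limits with exponential backoff."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the 429 to the caller
            sleep(base_delay * (2 ** attempt))  # waits 1s, 2s, 4s, ...
```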
What happens if I lose my API key?
Nothing, as long as you haven’t also lost access to the Google account. Go back to aistudio.google.com/app/apikey, see the key listed (or click to reveal it), and copy it again. If you want to invalidate a key that might be compromised, revoke it from the same page and generate a new one. The key itself isn’t special; the access is tied to the Google account, and you can rotate keys as often as you like.
Do I need a credit card to get started?
No. The free tier doesn’t require a credit card. You only add billing when you need paid-tier quotas or commercial usage rights. This is part of why Gemini’s the default recommendation for anyone starting out. Compare to some other providers where even a $5 test requires billing setup on day one.
Delivery lead at Kalvium Labs with a background in instructional design. Writes concept explainers and process posts. Thinks about how people actually learn before jumping to solutions.
Want to try this yourself?
Open the TinkerLLM playground and experiment with real models. 26 exercises free.
Start Tinkering