
Agentic AI · 9 min read

Gemini CLI Hooks: Automating True Agentic Workflows

Implement Gemini CLI Hooks to automate ticket resolution, inject database schemas, and enforce security guardrails. Scale your agentic workflows beyond chat.


TL;DR: Stop treating AI as a passive chatbot. By leveraging Gemini CLI Hooks, you can automate context injection, enforce security guardrails, and create self-correcting development loops. This guide demonstrates how to transition from manual copy-pasting to fully autonomous agentic workflows that understand your codebase, database schemas, and issue trackers without human intervention.

The “Context Switch” Tax

Most developers treat AI as a smart chatbot. You have a bug, so you perform the standard developer dance. You switch windows to Jira or Linear, copy the reproduction steps, switch back to your terminal or IDE, and paste the context with a prompt like “Fix this bug: [Paste]”.

This manual context transfer is a severe bottleneck in modern software development. It turns highly skilled engineers into expensive data routers. You are manually moving state from one system to another just so the model can understand what you are working on.

In my work with enterprise engineering teams scaling AI adoption, I have found that the friction of manual context gathering often eats up the time saved by AI code generation. We need a way to make the AI proactive rather than reactive. We need it to gather its own context.

The Gemini CLI Hooks API solves this problem by letting you script the inputs and outputs of the agent’s cognition loop. It allows you to attach executable scripts to specific lifecycle events of the agent, effectively bridging the gap between your static tools and the non-deterministic reasoning of the model.

The Hook Lifecycle

Hooks are simple scripts (written in Shell, Python, or JavaScript) that trigger on specific events in the agent’s lifecycle. You define them in a central configuration file, typically web/hooks.json, and the runtime handles the orchestration.

To understand how to use them, you must first understand the lifecycle of an agentic turn. Here are the supported events that you can tap into:

1. BeforeSession (The Initialization)

This hook runs once when the agentic session starts up. It is the perfect place to perform environment checks and setup tasks.

  • Usage: Check for required API keys, load environment variables, or pull the latest changes from your git repository to ensure the agent is working on fresh code.
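As a sketch, a BeforeSession entry in the same hooks.json format used later in this guide might look like the following. The `GEMINI_API_KEY` variable name and the `--ff-only` pull strategy are illustrative assumptions, not part of the official API:

```json
{
  "hooks": {
    "BeforeSession": [
      {
        "name": "Preflight Checks",
        "description": "Verifies the API key is set and pulls fresh code",
        "run": "test -n \"$GEMINI_API_KEY\" && git pull --ff-only"
      }
    ]
  }
}
```

If the `test` fails because the key is missing, the hook exits non-zero and the session can surface the problem before any tokens are spent.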

2. BeforeModel (Context Injection)

This is perhaps the most powerful hook for improving accuracy. It runs after you submit a prompt but before that message reaches the Large Language Model.

  • Usage: Silently append file contents, database schemas, or issue tracker details to the system prompt based on what you typed.

3. AfterModel (The Output Interceptor)

This hook runs after the model generates a response but before it is shown to you in the terminal.

  • Usage: Log token usage for cost tracking, or audit the raw response for sensitive data before it renders.
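As a sketch, an AfterModel hook could pipe the raw response into a small logging script, wired up with something like `"run": "python3 scripts/log_usage.py \"$RESPONSE\""`. The `$RESPONSE` variable is an assumption here, mirroring the `$PROMPT` and `$COMMAND` conventions used later in this guide; the token estimate is a rough character-based heuristic, not a real tokenizer:

```python
# log_usage.py -- hypothetical AfterModel logger; variable and path names are illustrative.
import json
import os
import sys
from datetime import datetime, timezone

def make_entry(response_text):
    """Build a log entry with a rough token estimate (~4 chars per token)."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "chars": len(response_text),
        "approx_tokens": max(1, len(response_text) // 4),
    }

if __name__ == "__main__":
    # Only log when the runtime actually passed a response
    if len(sys.argv) > 1:
        os.makedirs("logs", exist_ok=True)
        with open("logs/usage.jsonl", "a") as f:
            f.write(json.dumps(make_entry(sys.argv[1])) + "\n")
```

Appending JSON Lines keeps the log trivially parseable for later cost dashboards.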

4. BeforeTool (The Guardrail)

This hook runs before the agent executes any command or tool on your system. It is the critical safety layer for autonomous agents.

  • Usage: Block dangerous commands like rm -rf, prevent unauthorized network calls, or require human approval for operations targeting production environments.

5. AfterTool (Verification and Self-Correction)

This hook runs after a tool execution completes and returns its output.

  • Usage: Run a linter or a test suite on code the agent just wrote. If the tests fail, the hook can automatically feed the error output back to the model, triggering a self-correction loop without you typing a word.

Narrative Implementation: Building the Hooks

Let us look at how to implement these hooks in a real project. We will avoid complex frameworks and stick to simple, readable scripts that demonstrate the mechanics.

Example 1: The “Ticket Resolver” (Context Injection)

Imagine you are working on a bug fix. Instead of copying the ticket details, you want to type Fix issue PROJ-123 and let the system do the rest.

First, we create a Python script called fetch_ticket.py that talks to your project management API (like Jira or Linear).

import os
import re
import sys

import requests

def get_ticket(ticket_id):
    # In a real scenario, use environment variables for tokens
    url = f"https://api.linear.app/issues/{ticket_id}"
    headers = {"Authorization": os.getenv("LINEAR_TOKEN", "")}

    try:
        response = requests.get(url, headers=headers, timeout=10)
        if response.status_code == 200:
            data = response.json()
            return (
                f"\n\nTicket Context for {ticket_id}:\n"
                f"Title: {data['title']}\n"
                f"Description: {data['description']}"
            )
    except Exception as e:
        return f"\n\nError fetching ticket {ticket_id}: {e}"
    return ""

if __name__ == "__main__":
    # The agent passes the user prompt as the first argument
    prompt = sys.argv[1] if len(sys.argv) > 1 else ""

    # Simple regex check for a ticket pattern like PROJ-123
    match = re.search(r"(PROJ-\d+)", prompt)
    if match:
        ticket_id = match.group(1)
        print(get_ticket(ticket_id))

Now, we configure the hook in web/hooks.json. We use the BeforeModel event so the fetched text gets appended to your prompt before it goes to Gemini.

{
  "hooks": {
    "BeforeModel": [
      {
        "name": "Fetch Ticket Details",
        "description": "Automatically injects Jira/Linear ticket context",
        "match": ".*PROJ-\\d+.*",
        "run": "python3 scripts/fetch_ticket.py \"$PROMPT\""
      }
    ]
  }
}

When you type “Check the status of PROJ-123 and fix the failing test”, the script runs, fetches the description, and the model receives the full context. You saved several context switches.

Example 2: The “Schema Injector” (Preventing Hallucination)

Large Language Models are notorious for hallucinating database column names when writing SQL or ORM queries. They assume standard naming conventions that your legacy database might not follow.

We can fix this by injecting the exact schema whenever the user mentions database operations.

Here is the configuration in web/hooks.json:

{
  "hooks": {
    "BeforeModel": [
      {
        "name": "Inject Database Schema",
        "description": "Injects database schema when SQL or database is mentioned",
        "match": ".*(SQL|database|prisma|schema).*",
        "run": "echo \"\n\nHere is the current database schema for reference:\n\" && cat src/db/schema.prisma"
      }
    ]
  }
}

By using a simple cat command (or a similar command reading your SQL structure file), we ensure the model always has the ground truth in its context window. If you ask it to “Write a query to find active users”, it looks at the injected Prisma file and sees that the column is actually named is_active_flag instead of isActive, preventing a syntax error before it happens.
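To make this concrete, imagine the injected schema.prisma contains a legacy column name. This fragment is purely illustrative:

```prisma
model User {
  id             Int     @id @default(autoincrement())
  email          String  @unique
  // Legacy naming the model would otherwise guess wrong
  is_active_flag Boolean @default(true)
}
```

With this text in context, the model has no reason to invent `isActive`.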

Example 3: The “Dependency Sentinel” (Security Guardrails)

Giving an autonomous agent the ability to run terminal commands is powerful, but it carries risks. What if it decides to install a malicious package or a package with known security vulnerabilities?

We can use the BeforeTool hook to act as a sentinel, inspecting commands before they execute.

Let us write a Bash script called guardrail.sh:

#!/bin/bash

COMMAND=$1

# Check if the command is trying to install something
if [[ $COMMAND == *"npm install"* ]] || [[ $COMMAND == *"pip install"* ]]; then
    echo "Security Audit: Inspecting package installation..."
    
    # Example: Block a specific known bad package or pattern
    if [[ $COMMAND == *"vulnerable-package"* ]]; then
        echo "ERROR: Installation of 'vulnerable-package' is blocked by security policy."
        exit 1
    fi
    
    # In a real scenario, you could call a vulnerability API here
fi

# Allow the command to proceed
exit 0

And we wire it up in web/hooks.json:

{
  "hooks": {
    "BeforeTool": [
      {
        "name": "Security Sentinel",
        "description": "Blocks installation of unauthorized packages",
        "run": "bash scripts/guardrail.sh \"$COMMAND\""
      }
    ]
  }
}

If the agent attempts to run npm install vulnerable-package, the hook returns an exit code of 1. The agentic runtime intercepts this failure, blocks the execution, and returns the error message to the model. The model learns that the action was blocked and can try an alternative approach or report the issue to the user.

Example 4: The “Linter Guard” (Self-Correcting Loops)

The true power of agentic workflows appears when you create closed feedback loops. Instead of you checking the agent’s work, let the agent check its own work using standard development tools.

We can use the AfterTool hook to run a linter after the agent writes a file.

Configuration in web/hooks.json:

{
  "hooks": {
    "AfterTool": [
      {
        "name": "Lint Check",
        "description": "Runs ESLint after file modifications",
        "match": ".*write_file.*",
        "run": "npx eslint \"$LAST_MODIFIED_FILE\" || true"
      }
    ]
  }
}

Notice the || true at the end. ESLint exits with a non-zero code when it finds problems, which the runtime would otherwise treat as a hook failure. With || true, the hook itself always succeeds, and the linter output is passed back so the model can see the errors.

If the model writes a JavaScript file with a missing semicolon or an unused variable, the AfterTool hook runs ESLint. The output (e.g., “Line 12: ‘x’ is defined but never used”) is captured and fed back into the conversation as the result of the tool execution. The model sees the linter error and immediately generates a new turn to fix the file. You never see the broken code; you only see the final, linted result.


Advanced Hook Patterns

As you build more complex agentic systems, you will need to move beyond static scripts. Here are two patterns we use in production environments.

The State Ledger Pattern

Hooks are typically stateless. They run, complete, and exit. However, sometimes you need to maintain state across different hooks or different turns.

Since you are operating in a local environment via the CLI, the simplest state ledger is the file system itself. You can have a BeforeModel hook write data to a temporary JSON file, and an AfterTool hook read that file to verify outcomes.

For example, a BeforeModel hook could record the current CPU utilization or active memory. After the agent runs a complex optimization task, the AfterTool hook compares the new resource usage against the stored ledger and reports the delta back to the model. This gives the model empirical evidence of its performance impact.
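A minimal sketch of this pattern, assuming two hook scripts import the same helper module. The ledger path, metric names, and function names are all illustrative:

```python
# ledger.py -- hypothetical shared state for hooks; path and keys are illustrative.
import json
import os
import tempfile

LEDGER_PATH = os.path.join(tempfile.gettempdir(), "agent_ledger.json")

def record_baseline(metrics):
    """Called from a BeforeModel hook: persist metrics before the agent acts."""
    with open(LEDGER_PATH, "w") as f:
        json.dump(metrics, f)

def report_delta(current):
    """Called from an AfterTool hook: compare current metrics to the baseline."""
    if not os.path.exists(LEDGER_PATH):
        return {}
    with open(LEDGER_PATH) as f:
        baseline = json.load(f)
    # Positive values mean usage went up after the agent's change
    return {key: current[key] - baseline.get(key, 0) for key in current}
```

The AfterTool hook can print the returned delta, which the runtime feeds back to the model as evidence of the change's impact.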

Chaining and Fallbacks

You are not limited to a single hook per event. You can define an array of hooks that execute sequentially.

If you have multiple BeforeModel hooks, they will run in the order defined, each appending its output to the growing context. This allows you to build modular context injectors: one for the issue tracker, one for the database schema, and one for project-specific style guides.
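Sketched in the same hooks.json format used above, a modular chain might look like this (the style-guide path is an illustrative assumption):

```json
{
  "hooks": {
    "BeforeModel": [
      {
        "name": "Fetch Ticket Details",
        "match": ".*PROJ-\\d+.*",
        "run": "python3 scripts/fetch_ticket.py \"$PROMPT\""
      },
      {
        "name": "Inject Database Schema",
        "match": ".*(SQL|database|prisma|schema).*",
        "run": "cat src/db/schema.prisma"
      },
      {
        "name": "Inject Style Guide",
        "run": "cat docs/style-guide.md"
      }
    ]
  }
}
```

Each entry runs only when its match pattern fires (or always, if no match is given), so the context grows only with what the current prompt actually needs.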


From Chat to Workflow

The shift from treating AI as a chat interface to treating it as an automated workflow engine requires a change in developer mindset. You stop asking the AI to do things, and start scripting the environment so the AI knows what to do.

By investing time in setting up robust Gemini CLI Hooks, you create a personalized AI colleague that:

  • Knows your tickets without being told.
  • Understands your database constraints without being reminded.
  • Operates within the safety guardrails your team defined.
  • Fixes its own syntax and style errors before presenting the solution.

Stop chatting. Start scripting the interaction. The future of software development is not better prompts; it is better infrastructure for your agents.
