Building Effective AI Agents

Diving into tool-based architecture and best practices

8 May 2025

AI agents are becoming increasingly sophisticated, but what makes them truly effective? The answer lies in their ability to interact with the world through well-designed tools and maintain context through careful state management. In this post, we'll explore the key components and best practices for building powerful AI agents, drawing from my experience building Agent, a command-line AI agent tool that provides a Claude Code or Cursor Agent-like experience. This agent is built on the best AI framework available, the Vercel AI SDK.

The Foundation: Tool-Based Architecture

At the heart of any effective AI agent is its tooling system. Tools are the agent's interface with the world, allowing it to perform actions, gather information, and make changes. A well-designed tool system should be:

  1. Type-Safe: Using TypeScript and Zod for parameter validation ensures tools are used correctly
  2. Permission-Based: Tools should request explicit permission for potentially dangerous operations
  3. Context-Aware: Tools should maintain awareness of their operating environment
  4. Composable: Tools should be able to work together to accomplish complex tasks

Here's an example of a well-structured tool from my Agent project, using the tool function provided by the AI SDK:

import { tool } from "ai";
import { z } from "zod";

const fileEditTool = tool({
  description: "Makes targeted edits to specific files",
  parameters: z.object({
    filePath: z.string().describe("Path to the file"),
    search: z.string().describe("String to replace"),
    replace: z.string().describe("Replacement string"),
  }),
  execute: async ({ filePath, search, replace }) => {
    // Permission check and execution logic
  },
});
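The execute body above is elided; as a rough illustration (not Agent's actual implementation), a targeted search-and-replace gated behind a permission prompt could look like the sketch below, assuming Node's fs/promises and an askPermission helper like the one shown later in this post:

import { promises as fs } from "node:fs";

// Hypothetical sketch of the execute logic: ask the user first, then apply a
// single targeted string replacement to the file.
async function executeFileEdit(filePath: string, search: string, replace: string) {
  const allowed = await askPermission(`Edit ${filePath}?`);
  if (!allowed) return "Edit rejected by user";

  const contents = await fs.readFile(filePath, "utf8");
  if (!contents.includes(search)) return `Search string not found in ${filePath}`;

  await fs.writeFile(filePath, contents.replace(search, replace), "utf8");
  return `Updated ${filePath}`;
}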

State Management and Context

Effective AI agents need to maintain context about their operations and the environment they're working in. This includes:

  1. Permission State: Tracking what operations are allowed
  2. Operation History: Maintaining a record of actions taken
  3. Environment Context: Understanding the current working directory and available resources

In Agent, we use a permissions store to manage this:

const permissions = {
  // Shell commands the user has already approved
  bashAllowedCommands: new Set<string>(),
  // Whether the user has approved each class of file operation
  fileEditAllowed: false,
  fileWriteAllowed: false,
  notebookEditAllowed: false,
};
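To give a sense of how the store gets used, here is a small sketch with hypothetical helper names (isBashCommandAllowed and allowBashCommand are illustrations, not Agent's API): a shell command is treated as pre-approved when its base binary has already been allowed, so the user isn't prompted twice for the same kind of command.

// Hypothetical helpers around the permissions store.
function isBashCommandAllowed(command: string): boolean {
  const binary = command.trim().split(/\s+/)[0];
  return permissions.bashAllowedCommands.has(binary);
}

function allowBashCommand(command: string): void {
  const binary = command.trim().split(/\s+/)[0];
  permissions.bashAllowedCommands.add(binary);
}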

Security and Safety

One of the most critical aspects of building AI agents is ensuring they operate safely and securely. This involves:

  1. Explicit Permissions: Requiring user approval for potentially dangerous operations
  2. Command Validation: Validating and sanitizing inputs before execution
  3. Error Handling: Graceful handling of failures and edge cases

An AI agent is most effective when you can let it run freely with the peace of mind that it isn't going to break anything. A permissions system integrated into your agent makes this a lot easier.

I also find that an escape hatch such as a "YOLO" mode makes rapid prototyping in low-risk environments a lot easier.

The permission system in Agent demonstrates this approach:

// rl is the CLI's readline interface; magenta and gray are terminal color helpers.
async function askPermission(promptText: string): Promise<boolean> {
  if (process.env.YOLO) return true; // escape hatch for low-risk environments
  return new Promise<boolean>((resolve) => {
    rl.question(`${magenta("?")} ${promptText} ${gray("(y/n)")} `, (answer) => {
      process.stdout.write("\n");
      return resolve(answer.toLowerCase() === "y");
    });
  });
}
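Putting the prompt and the store together, a bash tool's execute might look roughly like the sketch below. It reuses the hypothetical isBashCommandAllowed and allowBashCommand helpers from earlier; the real Agent implementation will differ in the details.

import { exec } from "node:child_process";
import { promisify } from "node:util";

const execAsync = promisify(exec);

// Sketch: only run a command once it's on the allow-list or the user approves it.
const bashTool = tool({
  description: "Runs a shell command after checking permissions",
  parameters: z.object({
    command: z.string().describe("The shell command to run"),
  }),
  execute: async ({ command }) => {
    if (!isBashCommandAllowed(command)) {
      const approved = await askPermission(`Run \`${command}\`?`);
      if (!approved) return "Command rejected by user";
      allowBashCommand(command); // remember the approval for future calls
    }
    const { stdout, stderr } = await execAsync(command);
    return stdout || stderr;
  },
});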

Tool Selection and Composition

The effectiveness of an AI agent often depends on its available tools. A good toolset should include:

  1. Extensive Finding Tools: The agent needs plenty of ways to search for information so it can recover the context it's missing when it gets stuck
  2. Reading Tools: These can range from a simple file read to a sub-agent-augmented search
  3. As Powerful as Possible: The more powerful the tools, the more emergent capabilities you will discover
  4. Precision Application: Ensure your editing tools include a way to edit precise parts of your data, not just replace all of it at once

Agent provides a comprehensive set of tools similar to Claude Code and Cursor's:

export const AGENT_TOOLS = {
  bash: bashTool,
  glob: globTool,
  grep: grepTool,
  ls: lsTool,
  file_read: fileReadTool,
  file_edit: fileEditTool,
  file_write: fileWriteTool,
};
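For completeness, here's roughly how a tool map like this plugs into the AI SDK: generateText (or streamText) accepts the tools along with a maxSteps limit, and the model is free to chain tool calls within that budget. The model id below is just an example.

import { generateText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";

// Minimal agent turn: the model can call any of the tools above, read their
// results, and keep going for up to maxSteps steps before answering.
async function runAgent(prompt: string) {
  const result = await generateText({
    model: anthropic("claude-3-7-sonnet-20250219"), // example model id
    tools: AGENT_TOOLS,
    maxSteps: 20,
    prompt,
  });
  return result.text;
}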

The Future of AI Agents

As AI agents become more sophisticated, we're seeing several trends emerge:

  1. Multi-Agent Systems: Agents that can delegate tasks to specialized sub-agents
  2. Enhanced Context Management: Better handling of long-term memory and state
  3. Improved Tool Discovery: Automatic learning of new tools and capabilities
  4. Better Safety Mechanisms: More sophisticated permission and validation systems

Getting Started

If you're interested in building your own AI agent, I recommend starting with a simple tool-based architecture and gradually adding complexity. The Agent project provides a good foundation to build upon, with its permission-based security model and comprehensive tooling system. It's also deliberately small, unlike most other agent tools, so it can be understood relatively quickly.

Remember that the most effective AI agents are those that:

  • Have clear, well-defined tools
  • Maintain proper context and state
  • Implement robust security measures
  • Can compose tools to solve complex problems

By following these principles and learning from existing implementations, you can build AI agents that are both powerful and safe to use.
