Reimagining AI Agents in Appsmith

New platform to build agents for different use cases

Role: Senior Product Designer

Team: 2 Backend + 2 Frontend + 1 PM

Timeline: Feb 2025 - Mar 2025

Company: Appsmith (AI Agents Pod)

TL;DR

Our AI agent gamble paid off. Appsmith didn't just ship new features; we rapidly evolved our core low-code product into an agent-native platform.

We embedded "observe, reason, and act" capabilities into the IDE our users already knew, delivering AI with immediate, real-world impact by building on existing strengths rather than starting from scratch.

Shift Towards AI Agents in Low-Code Platforms

The rise of AI agents has triggered a fundamental shift in how users expect to build and interact with software. With tools like OpenAI’s chat completions becoming standard building blocks, we already had users building GenAI-powered apps inside Appsmith — from chatbots to summarization tools.
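For context, the building block most of those apps relied on looks roughly like this. A minimal sketch using OpenAI's Node SDK; the model name and prompts are illustrative, not from any specific user app:

```js
// Minimal chat completions call, the kind users were already wiring into
// Appsmith apps. Assumes the official "openai" Node package; the model name
// and prompts are illustrative.
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

const response = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [
    { role: "system", content: "Summarize the user's text in two sentences." },
    { role: "user", content: "..." }, // text captured from the app's UI
  ],
});

console.log(response.choices[0].message.content);
```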


But as agents — systems that can reason, observe, and act — began to gain traction, we saw a bigger opportunity emerging.

Build GenAI apps in Appsmith

Appsmith was already well-positioned for this shift. Many of the core primitives for agentic workflows were built into the IDE: APIs, queries, user input handling, dynamic state, and a visual UI builder. We didn’t need to start from scratch. Instead, we had the chance to evolve Appsmith into a powerful agent-building platform, grounded in the workflows our users already loved.

Our Initial Architecture for Agents

To move fast, we built the first AI agents inside the existing Appsmith app IDE, using the chat widget and the query editor as the agent's primary input and output surfaces.

Main AI setup represented as a query
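In practice, the glue between the chat widget and that query was a small amount of binding code. A sketch of the shape it took, as an Appsmith JSObject; the widget and query names (ChatInput, AskAgent) are hypothetical, while storeValue, appsmith.store, and Query.run() are standard Appsmith APIs:

```js
// Hypothetical Appsmith JSObject gluing a chat widget to an AI query.
// ChatInput and AskAgent are illustrative names, not from the actual app.
export default {
  sendMessage: async () => {
    const userText = ChatInput.text; // the user's latest message
    const reply = await AskAgent.run({ prompt: userText }); // run the AI query
    // Append the exchange to a store-backed history that the chat UI renders;
    // assumes the query returns a raw chat completions response.
    await storeValue("history", [
      ...(appsmith.store.history ?? []),
      { role: "user", content: userText },
      { role: "assistant", content: reply.choices[0].message.content },
    ]);
  },
};
```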

While this was functional, it was far from optimized. The interface was cluttered, the agent lacked a dedicated space, and we weren’t fully harnessing the potential of an independent AI-first environment.

The Challenge

We made a deliberate choice: instead of building a separate agent IDE from scratch, we decided to stay within the current Appsmith IDE as much as possible.

This would help us validate whether agentic workflows were big enough to justify deeper investments — while incrementally adding AI-specific controls and abstractions along the way.

How both apps and agents share a similar IA

Building Like a 0→1 Startup

The agent space is evolving at breakneck speed. To keep up, we intentionally approached this project like a 0→1 startup inside Appsmith — minimal PRDs, lightweight decision-making, and a strong bias toward fast execution and iterative learning.


Instead of over-planning, we shipped early, validated quickly, and let real user behavior shape the direction. This mindset helped us stay agile in a space where the rules are still being written.

Defining Agents — Getting it Right

Before going deeper, we needed to align on what an "agent" actually means. Many tools and teams use the term loosely. We referred to Anthropic’s definition of agents as systems that “can observe, reason, and act in a loop,” driven by both memory and goals. This helped us frame the design space clearly and build the right expectations for the interface and architecture.

In Anthropic's framing, workflows follow predefined code paths; agents, on the other hand, are systems where LLMs dynamically direct their own processes and tool usage, maintaining control over how they accomplish tasks.
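That definition maps to a small control loop. A conceptual sketch, where runLLM and runTool are placeholders for the model call and tool execution rather than real APIs:

```js
// Conceptual observe-reason-act loop; runLLM and runTool are placeholders.
async function agentLoop(goal, tools) {
  const memory = []; // observations and tool results the agent accumulates
  while (true) {
    // Reason: the model picks the next step from the goal, memory, and tools
    const decision = await runLLM({ goal, memory, tools });
    if (decision.done) return decision.answer; // goal reached, exit the loop
    // Act: execute the tool the model selected, with its arguments
    const result = await runTool(decision.tool, decision.args);
    // Observe: feed the result back so the next iteration can reason over it
    memory.push({ tool: decision.tool, result });
  }
}
```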

Digging deeper into the IA

We also mapped out how the OpenAI API could orchestrate this loop, from memory and tool selection to user feedback, enabling agents to take coherent actions inside the Appsmith environment.

Agent BTS
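Concretely, the chat completions API drives that loop through tool calls. A hedged sketch of the orchestration we mapped out; the tool definitions and the executeTool helper are assumptions, while the request and response shapes follow OpenAI's Node SDK:

```js
// Orchestrating the loop with OpenAI tool calling. "tools" holds JSON-schema
// tool definitions and executeTool is an assumed helper; the request and
// response shapes are the real chat completions API.
import OpenAI from "openai";

const client = new OpenAI();

async function runAgent(messages, tools) {
  while (true) {
    const res = await client.chat.completions.create({
      model: "gpt-4o", // illustrative model choice
      messages,
      tools,
    });
    const msg = res.choices[0].message;
    messages.push(msg); // the transcript doubles as the agent's memory
    if (!msg.tool_calls) return msg.content; // no tool requested: final answer
    for (const call of msg.tool_calls) {
      // Act on the model's chosen tool, then feed the result back
      const result = await executeTool(
        call.function.name,
        JSON.parse(call.function.arguments)
      );
      messages.push({
        role: "tool",
        tool_call_id: call.id,
        content: JSON.stringify(result),
      });
    }
  }
}
```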

How Others Are Approaching It

Given how new and fast-moving the agent space is, most tools are still actively experimenting. Platforms like Stack AI, Flowise, Relevance AI, VectorShift, and AgentHub are building dedicated agent IDEs, often centred around visual node-based builders, vector stores, and memory tools.


These approaches offer a fresh take but often assume a blank-slate, standalone experience — which can come with learning curves and limited flexibility for real-world business apps.

At Appsmith, we took a different path: instead of isolating agents into a new IDE, we chose to embed them inside the existing low-code fabric — so developers can bring agents into live apps that already connect to APIs, databases, and UI elements. This lets us focus on fast validation, deeper integration, and minimising user friction.

Explorations with Bolt.new (an LLM-based full-stack platform)

I started with some sketches, aiming to integrate the design into the existing Appsmith IDE. To quickly prototype and get early feedback, we used Bolt.new, which helped us align on a direction with PMs and stakeholders.

Brainstorming sketches

Next, we experimented with the Appsmith design system to explore a range of UI possibilities — from minimal overlays to more complex interfaces. This allowed us to iterate quickly, test simple and advanced variations, and gradually converge on a direction that balanced usability, autonomy, and clarity for users working with AI agents.

Agent configuration with and without tabs

Different UI variations of the configurations

Refined Designs

After a couple of rounds of exploration, we landed on a single-state layout instead of using multiple tabs. This simplified the experience and made delegation and task visibility more intuitive.

From there, we were able to define the remaining tabs and interactions more clearly, aligning them with how users naturally think about working with agents.

Agent tab

Tools tab

Overall prototype

Early Validation With Existing Infrastructure

Here’s the funny part — all of this validation happened with our existing Appsmith infrastructure, not the fancy new agent designs.


Even with a less-than-ideal setup, users pushed through. They built real, production workflows using raw AI components.


That was our lightbulb moment: If people can do this with janky tools, imagine what they’ll build once we make the experience seamless.

Building AI agents with the existing infra

Launching It to the World

We launched this work recently across Product Hunt, our community, and other channels — not just to showcase what we built, but to start a conversation around AI-native interfaces in low-code environments.


The response has been strong: we onboarded our first paying customer last week, who is actively using agent-driven workflows in production.


We’re continuing to share our learnings through blog posts, internal docs, and demos — helping other teams navigate the rapidly evolving agent space with a more grounded and system-aware approach.

First paying customer


© Roop Krrish 2025