A Practical Guide to Choosing Prompts, Workflows, or Agents

Sean Falconer
8 min read · Jan 21, 2025


Note: I recently collaborated with my colleague Jack Vanlightly on an article exploring when it makes sense to use an AI agent. The topic sparked so much interest that I felt it deserved a deeper dive — hence this blog post.

AI agents are everywhere, and I’ll admit, I’ve been caught up in the hype.

And why not? Agents are reshaping industries by automating multi-step workflows, like customer service triaging, and improving decision-making, such as recommending treatments in healthcare.

However, just because agents can do amazing things doesn’t mean they’re the right solution for every problem.

For businesses new to GenAI, jumping straight into agents can feel like running a marathon without training. Sometimes, a simple prompt or workflow delivers value without the complexity.

This post will help you figure out what makes sense for your AI journey, whether that’s sticking to prompts, designing workflows, or exploring agents. Because while agents are transformative, rushing into them without a clear plan is a great way to end up with a lot of flash and very little substance.

Understanding the Differences: Prompts, Workflows, and Agents

When it comes to AI systems, prompts, workflows, and agents represent distinct but related approaches, each suited to different levels of complexity and automation. More complex systems will likely combine all three techniques.

Let’s break down what they mean and where they fit.

Prompt-Based AI Systems

Prompt-based systems are the simplest form of AI interaction. You give the system a specific input, or “prompt,” and it generates a single response.

[Figure: Prompt-Based AI System]

The effectiveness of these systems hinges on the quality of the prompt. Think of prompt engineering like giving instructions to a chef. If you say, “Make something delicious,” you might get a great dish, but it’s unpredictable. If you say, “Make a spicy chicken curry with jasmine rice,” you’re far more likely to get exactly what you want.

  • Best for: Single, isolated tasks where the AI doesn’t need additional context, or where all the context it needs can be supplied directly in the prompt.
  • Example use case: Generating marketing copy, summarizing a document, or translating a paragraph.

Prompt engineering lets users shape input to guide output effectively, using techniques like few-shot learning to enhance results.
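To make the few-shot idea concrete, here is a minimal sketch of assembling a few-shot prompt as a string. There is no model call; the template, task, and examples are illustrative inventions, not from any particular library.

```python
# A minimal sketch of few-shot prompt construction. No model is called;
# the task, examples, and template format here are illustrative.

def build_few_shot_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a prompt from task instructions, worked examples, and a query."""
    lines = [task, ""]
    for source, target in examples:
        lines.append(f"Input: {source}")
        lines.append(f"Output: {target}")
        lines.append("")
    # End with the real query, leaving the final Output for the model to fill.
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    task="Summarize each product review in one short sentence.",
    examples=[
        ("The battery died after two days and support never replied.",
         "Negative: short battery life and unresponsive support."),
        ("Setup took five minutes and it just works.",
         "Positive: easy setup, works reliably."),
    ],
    query="Great screen, but the speakers are tinny.",
)
print(prompt)
```

The worked examples anchor the output format far more reliably than instructions alone, which is the whole point of few-shot prompting.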

Workflows

Workflows take the simplicity of prompts and extend it into a sequence of predefined steps, orchestrated through code. Workflows follow a structured, predetermined path. They’re excellent for automating processes where the flow of tasks is well-understood in advance.

Workflows follow rigid, pre-coded sequences but can incorporate external data using techniques like RAG, which retrieves context from a knowledge base, or GraphRAG, which maps relationships between pieces of data for deeper understanding.

  • Best for: Predictable, repeatable tasks where the sequence of actions is clear and doesn’t require dynamic decision-making.
  • Example use case: Automating a customer onboarding process by pulling user data, generating a welcome email, and scheduling a follow-up.

[Figure: Simple RAG-based Workflow]

While workflows can retrieve and process external data, they lack the adaptability of agents. The AI doesn’t make decisions or iterate; it simply executes tasks in a pre-defined order.
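The fixed retrieve-then-generate shape of such a workflow can be sketched in a few lines. The in-memory keyword “retriever” and the stubbed `call_llm` below are stand-ins for a real vector store and LLM client; the documents and function names are illustrative.

```python
# A toy sketch of a fixed RAG workflow: retrieve context, assemble a
# prompt, generate. The keyword retriever and call_llm stub stand in
# for a real vector store and model client.

KNOWLEDGE_BASE = [
    "Onboarding: new customers receive a welcome email within 24 hours.",
    "Billing: invoices are issued on the first business day of each month.",
    "Support: follow-up calls are scheduled one week after signup.",
]

def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    words = set(query.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def call_llm(prompt: str) -> str:
    """Stub for a real model call."""
    return f"[draft answer based on a {len(prompt)}-character prompt]"

def onboarding_workflow(question: str) -> str:
    # Step 1: retrieve relevant context (always, in this fixed order).
    context = retrieve(question)
    # Step 2: assemble the prompt around the retrieved context.
    prompt = ("Answer using only this context:\n" + "\n".join(context)
              + f"\n\nQuestion: {question}")
    # Step 3: generate. No branching, no iteration, no decisions.
    return call_llm(prompt)

print(onboarding_workflow("When do new customers get a welcome email?"))
```

Notice that every run takes exactly the same path through the three steps; that rigidity is what separates a workflow from an agent.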

Agents

Agents introduce adaptability, dynamically determining actions based on context, integrating with systems, and refining their outputs.

[Figure: Anatomy of an Agent]

This flexibility makes agents powerful for complex, open-ended tasks. Unlike workflows, agents can adjust to unexpected inputs or changes in the environment without requiring predefined code paths.

  • Best for: Dynamic, multi-step tasks that require reasoning, decision-making, and adaptability.
  • Example use case: A virtual assistant that researches a topic, drafts a report, refines it based on feedback, and automates scheduling meetings.

Agents excel in scenarios where workflows are too rigid to accommodate the unpredictability of real-world problems.

However, building agents presents significant challenges, requiring advanced frameworks and tools to handle their non-deterministic behavior. Testing and debugging become particularly complex due to the stochastic nature of their outputs, and scaling agents introduces the same intricate dependencies and reliability concerns as scaling a distributed system.

Now that we’ve explored prompts, workflows, and agents, the question becomes: which one is the right fit for your problem? Let’s break down how to make that decision based on your specific needs.

Which Approach is Right for Me?

Deciding between a simple prompt, a workflow, or a fully agentic system boils down to asking a fundamental question: What’s the right tool for the problem I’m trying to solve?

Too often, we overcomplicate things, building solutions that are flashy but wildly over-engineered for the task at hand.

Start by focusing on business value:

  • What do you want to achieve?
  • How will you measure if it’s working?

The simplest solution is often the best place to begin.

When I built a tool to help me draft LinkedIn posts (see details here), I started with simple prompt engineering. It wasn’t perfect (sometimes the results sounded like they were written by an overly enthusiastic intern), but it worked well enough as a first draft. It saved me time, took minimal effort to set up, and didn’t break the bank on token costs.

Could I make it more complex? Sure.

I could build a workflow that integrates insights from all my past posts, injects personal anecdotes, and uses advanced heuristics to fine-tune every word. Or I could go all-in with an agent that iteratively refines the content until it’s polished to perfection. But do I really need all that just to share updates about my work? Probably not.

Imagine you’re building a product MVP. When you’re just testing an idea, especially when your only users are your mom and dad, do you really need a Kubernetes cluster running in multiple availability zones, with horizontal sharding and the capacity to scale to billions of users? Or can you start with a simple monolithic three-tier architecture? The latter gets the job done without unnecessary complexity, leaving you room to grow once you know there’s demand.

Before diving headfirst into agents — or even workflows — it’s worth taking a step back to ask: Is generative AI even the right tool for this problem? Not every challenge calls for the newest, flashiest tech. Sometimes, older approaches like predictive machine learning or automation scripts are faster, simpler, and more cost-effective.

The key is to match the solution to the problem.

Start small, iterate, and let your needs drive the complexity, not the other way around. Just because you can build something elaborate doesn’t mean you should. It’s like designing a skyscraper for a lemonade stand. Sure, it’s impressive, but it’s overkill when all you need is a sturdy table and some shade.

Assuming agents are the right choice for your needs, the next question is: what challenges might you face when building and deploying them?

Challenges Companies Face with Building and Deploying Agents

While AI agents are full of potential, building and deploying them effectively comes with significant challenges. These obstacles range from over-ambition to technical complexity, and they highlight why many companies struggle to move from flashy demos to real-world impact.

1. Over-Ambition: Trying to Do It All

The first trap is diving in headfirst, attempting to build agents that can plan, reason, and — why not — make coffee. The result? Over-engineered, bloated systems that deliver underwhelming results.

It’s like designing a rocket to deliver pizzas: technically impressive, but probably not worth the cost.

Companies need to resist the urge to overbuild and instead focus on specific, measurable goals. Starting small and proving value early is far less glamorous but infinitely more effective.

2. Engineering Complexity: A New Mindset

Programming agents isn’t like traditional coding. With non-deterministic workflows, you can’t rely on the AI doing exactly what you tell it to every time. This requires a shift in mindset for engineering teams.

Building and testing agents demands patience, flexibility, and a tolerance for unexpected outcomes. If teams aren’t prepared to deal with unpredictability, they risk frustration, stalled projects, and suboptimal results. For now, focusing on non-customer-facing workflows can help mitigate these challenges.
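One practical adjustment is to test properties of the output rather than exact strings, since two runs rarely produce identical text. The sketch below illustrates the idea; `fake_summarize` is an invented stand-in for a real model call, and the checks are examples, not an exhaustive suite.

```python
# Testing non-deterministic output: assert on properties of the
# response (length, structure, constraints) instead of exact text.
# fake_summarize is a stand-in for a real model whose wording varies.

import random

def fake_summarize(text: str) -> str:
    """Stand-in for an LLM: the opener changes from run to run."""
    opener = random.choice(["In short,", "Summary:", "Briefly,"])
    return f"{opener} {text.split('.')[0]}."

def check_summary(source: str, summary: str) -> list[str]:
    """Property checks that hold for any acceptable summary."""
    failures = []
    if not summary.strip():
        failures.append("empty summary")
    if len(summary) > len(source):
        failures.append("summary longer than source")
    if not summary.rstrip().endswith("."):
        failures.append("missing terminal punctuation")
    return failures

source = "Agents automate multi-step work. They need careful testing."
summary = fake_summarize(source)
assert check_summary(source, summary) == [], check_summary(source, summary)
print("properties hold for:", summary)
```

An exact-match assertion here would flake on every run; the property checks pass regardless of which phrasing the model picks.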

3. Data Dysfunction: The Achilles’ Heel

Many companies still struggle with the basics of data management. They lack visibility into what data they have, where it resides, or how to extract value from it. Data silos and expensive data movement costs make it even harder to operationalize data for AI. Engineers often spend their time fighting with pipelines instead of solving meaningful problems.

Agents rely heavily on clean, accessible data, and if your company’s data foundation is shaky, your agent initiatives will be too. See my post on the data liberation problem for more on this.

4. Managing Agents: A Distributed Systems Problem

Deploying and maintaining agents is remarkably similar to managing a distributed system.

Agents rely on interconnected dependencies: external APIs, knowledge bases, orchestration frameworks, and decision-making loops. Without proper management, these dependencies can become a nightmare to scale and maintain. Small issues in one component can cascade into larger failures, making reliability and performance difficult to ensure. Scaling agents effectively requires robust monitoring, fault tolerance, and architectural discipline, much like designing resilient distributed systems.
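One concrete distributed-systems habit that transfers directly: wrap every external dependency in retries with backoff so a transient failure doesn’t cascade through the agent. The sketch below uses an invented `flaky_api` to simulate an outage; a production system would add jitter, timeouts, and circuit breaking on top.

```python
# Retrying a flaky external dependency with exponential backoff, so one
# transient failure doesn't cascade. flaky_api simulates an outage that
# clears after a couple of attempts.

import time

def flaky_api(attempts_until_ok: list[int]) -> str:
    """Fails until its counter runs out; models a transient outage."""
    if attempts_until_ok[0] > 0:
        attempts_until_ok[0] -= 1
        raise ConnectionError("transient failure")
    return "ok"

def with_retries(fn, max_attempts: int = 4, base_delay: float = 0.01):
    for attempt in range(max_attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # retries exhausted: surface the failure
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff

counter = [2]  # fail twice, then succeed
result = with_retries(lambda: flaky_api(counter))
print(result)
```

The same wrapper applies to API calls, knowledge-base lookups, and tool invocations alike, and is exactly the kind of fault-tolerance discipline the distributed-systems comparison calls for.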

Despite these challenges, agents are already demonstrating their value in certain domains. Let’s take a look at where they’re succeeding.

Where Agents Are Succeeding Today

We’re still in the early days of AI agents. Think of it like the dawn of the automobile. They’re useful, exciting, and full of promise, but we’re a long way from “self-driving” perfection. The abstraction frameworks for building agents are still finding their footing, and the developer tools for deployment, testing, and monitoring remain immature. Building an effective agent today is often a mix of trial, error, and intuition.

Despite these growing pains, the potential for agents is enormous. Even achieving a fraction of their promise is compelling.

Right now, the most effective use cases focus on augmenting human effort rather than replacing it entirely. Agents excel in repetitive, resource-intensive tasks or workflows too intricate for traditional automation. Think of agents as interns who can do the grunt work, sifting through endless spreadsheets, drafting reports, or even writing emails, while you focus on the big picture. They’re particularly useful for processes that are frustrating for humans but not catastrophic if they occasionally miss the mark.

For example:

  • Sales and Marketing: Agents can research prospects, identify decision-makers, and even draft personalized outreach emails.
  • Drug Discovery: They semi-automate regulatory paperwork, generating initial responses that humans can verify and refine.

The key to their success lies in reducing grunt work and enhancing productivity, freeing up humans to focus on higher-value tasks. While it may take years to perfect the last 20% needed for full automation, today’s agents already demonstrate their value by lightening the load in areas where precision isn’t mission-critical.

Agents, workflows, and prompts each have their place in the AI toolkit. By understanding where each excels, and starting small, you can unlock AI’s potential while avoiding unnecessary complexity.

Remember, the real magic happens when smart people collaborate with smart AI. By letting agents handle the repetitive, complex groundwork, businesses can harness their potential to unlock new levels of efficiency and creativity — even in these early days.
