When everyone is still learning how to "raise lobsters", AI Platforms are learning how to "replace lobsters".
Regarding this wave of "Lobsters" (AI Agents), if you are still arguing about "whether it's useful or not", you are already half a beat behind.
A more accurate perspective is that it's experiencing a three-wave market—
👉 First wave: Selling Dreams
👉 Second wave: Selling Thresholds
👉 Third wave: Currently forming
First Wave: Selling Dreams
The early narrative was very clean:
- AI helps you get things done
- Automated operations
- One-person company
- Passive income
The core message was: 👉 "Even those who don't know tech can use it."
But after everyone jumped in, they realized they still had to tweak prompts, fix workflows, debug, and maintain the environment. The first batch of people started leaving the market, leading to a backlash: 👉 "I'm busy taking care of the lobsters all day, but the lobsters aren't doing any work."
Second Wave: Selling Thresholds
When "accessible to everyone" could no longer hold up, the narrative upgraded:
- It's not that the tool doesn't work, it's that you don't know how to use it.
- You aren't using AI to manage AI.
- Your model is too weak.
- You lack system thinking.
The keywords became: workflow / agent design / orchestration. This wave looks more professional and is also closer to reality. But it simultaneously did one thing: 👉 It shifted the responsibility for failure from the product to the user.
The first wave sold illusions; the second sells capability. At this point, most discussions get stuck at two extremes: "Lobsters are a scam" vs "Only experts deserve to use them." Neither conclusion is precise.
Engineering Reality: This thing is still a framework, not a product.
If you've used systems like OpenClaw, the experience is remarkably consistent: 👉 You are not using a tool; you are building a system.
What you actually have to handle is workflow orchestration, state/memory management, tool chaining, and failure recovery. The nature of these problems is closer to distributed systems + automation pipelines, rather than an out-of-the-box AI assistant.
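The four concerns above can be made concrete with a minimal agent loop. Everything here is a hypothetical stand-in (`call_model`, `TOOLS`, and the step format are invented for illustration, not any real SDK); the point is that the user, not the "product", owns orchestration, state, tool wiring, and error handling.

```python
# A minimal sketch of what operating an agent system actually involves.
# All names (call_model, TOOLS) are hypothetical stand-ins, not a real API.

TOOLS = {
    "calc": lambda expr: str(eval(expr)),  # toy tool; never eval untrusted input
    "search": lambda q: f"results for {q}",
}

def call_model(state):
    """Stand-in for an LLM call: picks the next action from accumulated state."""
    if not state["history"]:
        return {"action": "calc", "input": "6 * 7"}
    return {"action": "finish", "input": state["history"][-1]["result"]}

def run_agent(task, max_steps=5):
    state = {"task": task, "history": []}      # state/memory management
    for _ in range(max_steps):                 # workflow orchestration
        step = call_model(state)
        if step["action"] == "finish":
            return step["input"]
        tool = TOOLS.get(step["action"])
        if tool is None:                       # failure recovery
            state["history"].append(
                {"action": step["action"], "result": "error: unknown tool"})
            continue
        result = tool(step["input"])           # tool chaining
        state["history"].append({"action": step["action"], "result": result})
    return "error: step budget exhausted"

print(run_agent("what is 6 * 7?"))  # → 42
```

Even in this toy, the step budget, the history format, and the unknown-tool branch are decisions the builder has to make and maintain, which is exactly the "distributed systems + automation pipeline" flavor described above.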
Third Wave: Selling Platforms
What we will see next is not just tutorials or courses anymore, but—
- Platforms that bundle workflows for you
- Pre-configured agent templates
- Execution layers with controllable costs
- Standardized tool ecosystems
Simply put: 👉 Productizing "engineering problems"
Typical characteristics:
1️⃣ Hiding the orchestration
2️⃣ Restricting degrees of freedom in exchange for stability
3️⃣ Replacing personal capability with platform rules
4️⃣ Charging subscriptions instead of selling courses
But note—the third wave doesn't mean the problem is solved; it just centralizes the problem handling.
However, the third wave actually has two possible outcomes.
Understanding the third wave as "platform rent collection" is from a business model perspective. But if we switch to a technical evolution perspective, we see another path: 👉 The third wave might also be "the model directly consuming the agent layer."
The reason is very realistic. Right now, what most agents are doing essentially boils down to three things: task decomposition, tool invocation, and workflow chaining. And these three things are rapidly becoming built-in capabilities of the models.
A direct signal is the Claude Mythos Preview released by Anthropic in April.
This model isn't packaged as an agent; it simply connects directly to a sandbox environment, and it can—

- Deconstruct vulnerability analysis tasks on its own
- Decide which tools to use on its own
- Run proof-of-concept attacks on its own
A TCP SACK vulnerability in OpenBSD dormant for 27 years, an H.264 vulnerability in FFmpeg undiscovered for 16 years, and a 17-year-old remote-execution vulnerability in FreeBSD: all were uncovered just like that. In the past, work like this usually required 👉 a complete security agent framework + manual tuning + multi-layer prompt chains.
In other words: When reasoning capabilities reach a critical point, many things that seem to "rely on an agent to achieve" will turn directly into things a model can do with a standard prompt.
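The shift can be sketched in a few lines. This is a hedged illustration with invented names (`toy_model`, `TOOLS`, `agent_layer`, `model_native` are not any real SDK): today the external agent layer owns decomposition, invocation, and chaining; a model-native design collapses all of that into one call.

```python
# Hypothetical sketch: the same three jobs (task decomposition, tool
# invocation, workflow chaining) done outside the model vs inside it.

TOOLS = {"grep": lambda s: f"matches for {s}", "run": lambda s: f"output of {s}"}

def toy_model(prompt, tools=None):
    """Stand-in for an LLM. Given `tools`, it decomposes and invokes internally."""
    if tools is not None:
        steps = [("grep", prompt), ("run", prompt)]   # decomposition, inside
        return "; ".join(tools[name](arg) for name, arg in steps)
    return prompt  # trivial text-only behavior

# Today: an external agent layer owns decomposition, tool choice, chaining.
def agent_layer(task):
    steps = [("grep", task), ("run", task)]           # decomposition, outside
    results = [TOOLS[name](arg) for name, arg in steps]  # invocation + chaining
    return "; ".join(results)

# Tomorrow: the same capability folded into the model itself.
def model_native(task):
    return toy_model(task, tools=TOOLS)
```

Both paths return the same result; the only difference is where the orchestration lives, which is exactly what makes the external layer absorbable.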
A very realistic problem arises: 👉 The agent workflow you spent half a year optimizing today might be swallowed whole by the next version of the model.
Who Do Agents Bring the Most Value To? The Answer Is Not the User
If agents really will be absorbed by models, then why is the entire industry still pushing agents so hard? A perspective that few people talk about: 👉 Because agents are the best channel for model companies to collect real-world operational data.
Including—how users actually break down tasks, how tools are actually chained together, which workflows fail, and why they fail. For training the next generation of models, this data is more valuable than the revenue generated by the agents themselves.
So the three-wave structure can be interpreted again:
- First Wave: Selling Dreams (getting people to enter the market)
- Second Wave: Selling Thresholds (filtering the users)
- Third Wave: Collecting Platform Rent + Collecting Operational Data (happening simultaneously)
And after the third wave is collected, the next step might be— 👉 Folding this layer directly back inside the model.
By then, what most intermediate-layer agent companies will face is not competition, but extinction.
Returning to the Core Judgment
Are AI Agents (Lobsters) the future? 👉 Yes, it's almost certain.
But the problem most people face right now is: 👉 Using an "unfinished product" while evaluating it with the expectations of a "mature product".
An engineering rule of thumb: if an AI system has an unpredictable success rate, uncontrollable costs, and requires heavy manual intervention, then it is still stuck at 👉 the framework / infra layer, not the product layer.
The conclusion isn't pessimistic, nor is it optimistic, but stratified: 👉 Technology is growing. 👉 The market is harvesting. 👉 Platforms are forming. 👉 And models are likely preparing to fold all of this back into themselves.
The question has never been: ❌ "Are lobsters worth building?" But rather: ✅ "Which layer are you standing on right now?"