Some thoughts on AI anxiety

It’s a tool. Not an agent.

Recently there has been growing interest among Chinese people in AI agents, specifically OpenClaw.

Chinese netizens call the tool “lobster,” or use a lobster emoji, in reference to a lobster’s claws.

Municipalities and large tech companies in China are rolling out initiatives to encourage people to install the AI agent, sometimes with hands-on help. People have been lining up at events where Tencent provides direct instructions and assistance for installing OpenClaw on their machines.

The anxiety seems to go like this: AI agents are highly capable now, so if I don’t join in the fad and get myself a lobster, I will fall permanently behind the curve and eventually be replaced by the AI agents. Besides, AI agents seem to be making a lot of money for the people who use them.

It’s easy to understand such anxiety, especially for someone like me who was born and raised in China. We grew up being taught, and knowing, that competition is fierce and that you must keep learning to avoid falling behind. The growing news coverage and discussion of how AI will replace humans, particularly in the white-collar jobs most people deem slightly more desirable, certainly has not helped.


But I want to make the case that this anxiety is really not warranted, and that AI agents cannot magically improve your productivity unless you know how to use them effectively. By the same logic, AI, or really the LLM, is not a huge threat that will readily replace humans, because it is incapable of acting on its own volition.

I want to be clear that I’m not saying the current AI tools are useless. In fact, I’ve been an avid user of AI tools over the past few months.

AI has been very effective in helping me learn quickly about topics I knew little about. It vastly improves my ability to search for information online, then filter and integrate it into a more cohesive learning experience. It also serves as a sounding board for testing a theory or idea for logical inconsistencies, and as something that helps me understand and analyze who I am through descriptions of my behavior and thought process. I also use AI coding assistants extensively at work, for help understanding large and complex code bases and project structures, and for guidance on implementing new features in an unfamiliar programming language.


However, to me it’s not yet a hands-off agent I can fully delegate important things to.

Sure, the AI can understand the context and the code base and produce syntactically perfect code, but it cannot decide on its own between several competing design approaches. A code review by the AI can be helpful in flagging all potential issues, but it’s up to the human to decide whether they are issues of real concern or consciously weighed trade-offs.

It also takes a certain amount of hand-holding and back-and-forth to get a coding assistant to produce code and features that are ready for production. One has to be very precise about what exactly the feature should and should not do, and allow the assistant to ask clarifying questions, especially about intent and corner-case handling. This is particularly true for large code bases with more complex features.

All this is to say that AI cannot fully replace humans in much of the white-collar work where design choices and trade-offs matter. If you simply delegate your tasks to a so-called AI agent, what you get back will most likely be disappointing and not something that readily works. And even when the agent fails to produce a working product, you still have to pay the model provider for the tokens and the subscription fee.

If you got an AI agent thinking you could simply ask it to solve open-ended problems, without thinking them through yourself and defining what a solution might look like, you will likely be disappointed by the results, and surprised by the cost of the tokens the AI burns on exploratory work and trial and error.


In fact, one could even argue that with AI tools the opposite will happen: some white-collar workers will end up with more work to do, because a lot of previously cost-ineffective work can now be done with the help of AI tools.

Take myself as an example. I’m not from a computer science background. I only took some introductory lessons in Python and C++ during my undergraduate and master’s years, and over my few years of employment I had only used Python for simple tasks. But now I’ve been tasked with implementing new features in a language I have zero previous experience with.

If one’s job is purely execution, doing simple yet tedious things, it is reasonable to say that those kinds of jobs will be replaced and displaced. But a counter-argument can be made: these white-collar workers can be freed from the tedious grunt work and moved up into higher-level work that requires critical thinking, defining the scope of a problem, and outlining what the solution might look like, so that they can harness the AI tools’ abilities and do more.

It could even mean that the demand for white-collar workers will increase, because more work will need to be done and getting into white-collar work will become easier. This also supports the growth of one-person companies. But the critical point is that you cannot simply ask the AI agent for money and expect it to manufacture it out of thin air.


Essentially, what I’m trying to say is that as long as AI agents, or AI in general, remain tools, humans will not be replaced, and it would be unreasonable to expect the AI to magically solve your problems of its own volition.

If it one day becomes real intelligence, which I doubt it will, then we will have bigger problems. Because at that point we would be facing another species far stronger than we are as mere carbon-based mortals.

The Ephemeral Tourist
March 15th. 2026 @ 2:19pm CDT