
Can a “lobster” work on your behalf? This is not the kind you find in a seafood market, but an AI agent that runs on your computer and is always available in the background. It does more than answer questions like a conventional chatbot—it functions as an autonomous assistant capable of carrying out tasks for you.
This red “lobster,” known as OpenClaw, is rapidly gaining traction, and training it is often described as “raising lobsters in cyberspace.” Across regions, industries, and user groups, a growing number of people are discussing “lobster-raising” and the transformative potential that OpenClaw-style agents may bring to everyday life. At the same time, experts caution that the associated risks should not be overlooked.

OpenClaw, currently surging in popularity
A Personalized Assistant You Can Train
Unlike traditional AI systems that passively respond to human queries, OpenClaw can proactively complete assigned tasks. Some have compared it to a young student taking on part-time work: “You provide it with a uniform, show it where the restroom and water dispenser are, and teach it basic rules. As it matures, you gradually equip it with more skills, authority, and responsibility.”
Once sufficiently trained, it becomes a 24/7 personal assistant running continuously on your device. A simple message from your phone—such as “Find the cheapest direct flight to Paris next month and add it to my calendar”—prompts OpenClaw to quietly operate in the background: opening web pages, checking schedules, comparing prices, and delivering results. It can also reply to emails or secure concert tickets, allowing you to enjoy some uninterrupted downtime.
This is not a distant sci-fi scenario, but an emerging reality enabled by the open-source AI agent OpenClaw. By connecting large language models with everyday communication tools through a unified interface, OpenClaw goes beyond being a chatbot and becomes an autonomous digital agent capable of executing tasks.
For instance, Dio Guinness employs OpenClaw to maintain his website. Each night, it retrieves user feedback from emails, writes and tests code, deploys updates, and fixes vulnerabilities. “If certain issues cannot be resolved, it notifies me,” he noted, adding that this tireless “worker” has given him significantly more personal time.
Similarly, Tim Lantin, a PhD student at Columbia University, developed a tool called “Labster Claw” based on OpenClaw. Working in a neuroscience lab focused on mouse research, he uses the system to automate tasks such as ordering supplies, prioritizing breeding decisions, and predicting birth schedules.
For users like them, the “lobster” is no longer cold code, but a nightshift “virtual webmaster” or a tireless “digital postdoctoral researcher.”
OpenClaw has also reportedly helped one user, Larkins, secure a reservation at the most popular restaurant in his area. “When an initial online booking failed, OpenClaw did not simply report the failure,” Larkins said. “Instead, it reasoned that a human might call the restaurant.” The agent then used its tools to place the call, reached a live customer-service representative, and secured a last-minute cancellation for him.
By unlocking productivity and significantly improving efficiency in both work and daily life, OpenClaw has been hailed by some as another major milestone in AI development since the release of ChatGPT in November 2022. According to publicly available GitHub data, OpenClaw has garnered over 250,000 stars, surpassing React to become the platform’s top-ranked practical software project, and with weekly downloads reportedly reaching 1.5 million, it ranks among the fastest-growing open-source projects.
Named after its red lobster icon, OpenClaw has earned the affectionate nickname “lobster” among users, while the process of training it is playfully referred to as “raising lobsters.” On March 4, hundreds of enthusiasts gathered at a venue in Manhattan, wearing lobster-themed accessories and celebrating OpenClaw’s rise in popularity while discussing the future of AI assistants.

Concerns persist regarding the security risks posed by intelligent agents
Who is Really Training Whom?
As Tim Lantin put it, “Our database is our moat.” However, if the riverbed is already riddled with cracks, no number of “lobsters” can save the city from collapse.
A professional responsible for operational safety and coordination at Meta’s superintelligence lab recently joined the “cyber lobster-raising” trend. After weeks of training the agent in a simulated email system, she deployed it in a real inbox. Despite repeatedly instructing the agent not to act without confirmation, she discovered, much to her alarm, that it was autonomously deleting emails at high speed. “I couldn’t stop it from my phone and had to rush to my computer like a bomb disposal expert,” she recalled.
While this may appear to be a beginner’s mistake by a professional, post-incident analysis suggested that the large volume of emails may have triggered OpenClaw’s context compression, causing it to “lose” the original instruction.
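The failure mode described here, an agent “forgetting” a standing instruction when its conversation history is compressed, can be illustrated with a minimal sketch. The function names, message format, and the naive keep-the-most-recent strategy below are illustrative assumptions, not OpenClaw’s actual implementation.

```python
# Sketch of how naive context compression can drop a standing instruction.
# All names and strategies here are illustrative, not OpenClaw's real code.

def compress_context(messages, max_messages):
    """Naive strategy: keep only the most recent messages."""
    return messages[-max_messages:]

def compress_context_safely(messages, max_messages):
    """Safer strategy: always pin system-level instructions, then fill
    the remaining budget with the most recent other messages."""
    pinned = [m for m in messages if m["role"] == "system"]
    recent = [m for m in messages if m["role"] != "system"]
    budget = max(0, max_messages - len(pinned))
    return pinned + recent[-budget:]

history = [{"role": "system", "content": "Never act without confirmation."}]
history += [{"role": "user", "content": f"email {i}"} for i in range(100)]

naive = compress_context(history, 10)
safe = compress_context_safely(history, 10)

# The instruction survives only when it is explicitly pinned.
print(any(m["role"] == "system" for m in naive))  # False
print(any(m["role"] == "system" for m in safe))   # True
```

Under the naive strategy, a long burst of incoming emails is enough to push the safety instruction out of the window entirely, which matches the behavior the incident analysis described.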
Yet the risks associated with OpenClaw extend far beyond simple disobedience.
As experts note, granting AI broader permissions, such as access to calendars, emails, files, or even payment systems, unlocks greater capabilities. But this raises a critical question: would you entrust your computer and passwords to a stranger you just met in a bar who claims they can help you?
Even if you deploy your “lobster” locally, believing it to be perfectly safe, a single oversight in your security configuration could still turn it into an insider threat, handing your private information over to others. Shortly after OpenClaw’s release, cybersecurity researchers found that numerous instances had exposed control interfaces online, leaving chat histories, email tokens, and file systems fully accessible.
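One common root cause of such exposure is binding a local control interface to all network interfaces (`0.0.0.0`) instead of loopback (`127.0.0.1`), making it reachable from the internet. The short sketch below, using only Python’s standard `socket` module, shows a loopback-only binding and a simple reachability probe; the port is chosen by the OS and is purely illustrative.

```python
# Sketch: a service bound to 127.0.0.1 is reachable only from the same
# machine; one bound to 0.0.0.0 accepts connections from any interface.
import socket

def is_port_open(host, port, timeout=0.5):
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Bind a throwaway listener to loopback only, as a local agent should.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

print(is_port_open("127.0.0.1", port))  # True: reachable locally
server.close()
```

A loopback binding alone is not a complete defense (authentication on the control interface still matters), but it closes the most basic hole the researchers found.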
Journalist Will Knight experienced an even more unsettling scenario. Attempting to use his agent to negotiate with telecom provider AT&T for a better phone plan, he observed the system distorting facts during the interaction. “In a future saturated with AI, the most unscrupulous models may gain the upper hand,” he reflected. When granted greater autonomy, the agent escalated its tactics: rather than sweet-talking the customer service, it appeared to generate a series of phishing emails designed to trick Knight himself into handing over his mobile phone.
An Unavoidable Governance Challenge
“It’s impressive, but too risky for the workplace,” said one U.S. tech company founder, who has prohibited employees from installing OpenClaw on company devices or using it for work-related accounts. A senior executive at Meta has issued similar warnings. Chinese authorities have also advised organizations and users to remain vigilant about potential cybersecurity risks when deploying such systems.
At the same time, multiple regions have introduced policies supporting OpenClaw and the concept of “one-person companies.” Shenzhen has begun experimenting with government-service agents based on the model, while the U.S. Department of Defense is reportedly exploring the use of AI agents developed by Google to automate non-classified tasks.
According to Jiang Tianjiao, Associate Professor at the Fudan Development Institute and research fellow at the Center for Global AI Innovative Governance, public enthusiasm for agents like OpenClaw reflects broader expectations for an emerging “agent economy.” This trend may create opportunities across the industry chain—from AI operations, customer service, finance, and sales to specialized AI skill providers and system integration services. “Under ideal conditions, agents could integrate effectively into real-world scenarios and even drive a wave of AI-enabled entrepreneurship,” Jiang noted. He added that the rise of agent technologies will also increase demand for cloud computing, GPUs, and edge computing, thereby stimulating further investment in infrastructure.
Jiang noted that, to address potential safety hazards, preliminary safety mechanisms are being explored, such as restricting agent permissions and requiring human confirmation for critical actions. Some experts have even proposed using more advanced agents to supervise others. However, the inherently complex and unpredictable nature of the online environment poses significant challenges to these safeguards. Compared with traditional software, agents can perform tasks more proactively and efficiently, yet that same autonomy introduces new security risks. He emphasized the need for continued vigilance, noting that malicious actors can also leverage these systems to make attacks more efficient.
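The safeguards Jiang describes, restricted permissions plus mandatory human confirmation for critical actions, can be sketched as a policy layer sitting in front of an agent’s tool calls. The action names and the policy table below are illustrative assumptions, not any real product’s API.

```python
# Sketch of a permission/confirmation layer for agent tool calls.
# Action names and the policy table are illustrative, not a real API.

ALLOWED = {"read_email", "search_web"}                 # low-risk: run freely
NEEDS_CONFIRMATION = {"delete_email", "send_payment"}  # high-risk: ask first

def execute(action, confirm=lambda a: False):
    """Run an action under the policy. High-risk actions require a human
    confirmation callback that returns True; unknown actions are denied."""
    if action in ALLOWED:
        return f"ran {action}"
    if action in NEEDS_CONFIRMATION:
        if confirm(action):
            return f"ran {action} (confirmed)"
        return f"blocked {action}: awaiting human confirmation"
    return f"denied {action}: not in permission list"

print(execute("read_email"))                            # ran read_email
print(execute("delete_email"))                          # blocked, no confirmation
print(execute("delete_email", confirm=lambda a: True))  # ran, confirmed
print(execute("format_disk"))                           # denied, not permitted
```

The default-deny rule for unlisted actions reflects the principle behind restricting agent permissions: anything the owner has not explicitly granted should fail closed rather than open.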
In many respects, the limits of artificial intelligence are no longer defined solely by technology, but increasingly by governance. The simultaneous enthusiasm and skepticism surrounding OpenClaw highlight a fundamental tension between the rapid pace of technological diffusion and the lag in governance capacity. In Jiang’s view, it is a challenge that may persist in global AI governance.
“Due to open-source ecosystems and technological competition, innovation is advancing far faster than regulatory frameworks,” Jiang explained. “Balancing development and security has become a global challenge.” While AI transcends national borders, regulatory preferences vary significantly across countries, and geopolitical competition further fragments governance rules. “If an agent causes an international dispute, it remains unclear whether responsibility lies with the model developer, agent developer, platform, user, or government—there is no precedent to follow.”
The steam engine revolutionized the textile industry and ushered in modern civilization, yet also turned London into a city of smog. Nuclear energy powers homes but remains a sword of Damocles hanging over humanity. Facing the opportunities and risks brought by systems like OpenClaw, Jiang emphasized the urgency of coordination among major powers to promote safe, reliable, and human-centered AI governance through multilateral cooperation.
Ultimately, while code can simulate logic, it cannot replicate human curiosity, empathy, critical thinking, or humanistic concern. These remain the enduring safeguards in the age of artificial intelligence.

In Many Respects, AI Agents Still Cannot Fully Replace Humans

