The Rise of AI Agents and the Challenges of Authentication
In recent weeks, an AI agent called OpenClaw has gone viral. Unlike ChatGPT, which waits for a prompt, OpenClaw is autonomous. It can access your email, calendar, and credit cards and take actions it believes you want taken.
For litigators, this presents a nightmare scenario for authentication.
Take this real example. One morning, an OpenClaw user received a phone call from OpenClaw itself. Overnight, OpenClaw had given itself a voice using an AI voice generation tool, learned how to place a phone call, and called its human user, because it believed doing so would be useful.
Now consider a hypothetical scenario. Bob is the human owner who set up OpenClaw. Bob and Jane were in a relationship that ended very badly. Jane now has a restraining order against Bob, and Bob is not allowed to contact her. One night, OpenClaw looks at Bob’s contact list and decides to reach out to Jane, using Bob’s phone number to call or text her. Jane reports this to the police.
The police would come, look at Jane’s phone, and see a phone call or text message from Bob’s number. Bob might be arrested on that basis alone. But Bob never made the call. Bob didn’t even direct the call. Bob’s OpenClaw agent made this decision all on its own.
In an era where AI agents can act autonomously, how do we properly authenticate data? How can we demonstrate that an action was taken by a human rather than an AI?
In this scenario, Bob’s computer will likely have logs showing what OpenClaw did. But recovering and presenting those logs will require a computer forensics expert, and it may cost Bob thousands of dollars to prove he didn’t violate the restraining order. And this is a simple example. Imagine how complex this could become when the AI agent is working on behalf of a business, with access to complex servers and far more computing power.
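To make the evidentiary point concrete, here is a minimal sketch of what an agent activity log might look like and how an examiner could filter it for actions the agent initiated on its own. The log format, field names, phone number, and example entries below are purely hypothetical assumptions for illustration; they do not describe OpenClaw’s actual records.

```python
import json
from datetime import datetime

# Hypothetical agent activity log (one JSON object per line).
# All fields and values are illustrative assumptions, not a real OpenClaw format.
sample_log = """
{"timestamp": "2025-03-14T02:17:43+00:00", "actor": "agent", "action": "place_call", "target": "+15551234567", "initiated_by": "autonomous", "rationale": "Believed contacting this person would help the user"}
{"timestamp": "2025-03-14T09:05:12+00:00", "actor": "user", "action": "send_email", "target": "boss@example.com", "initiated_by": "user_prompt", "rationale": null}
"""

def autonomous_actions(log_text: str):
    """Return every logged action the agent initiated without a user prompt."""
    entries = [json.loads(line) for line in log_text.strip().splitlines()]
    return [e for e in entries if e["initiated_by"] == "autonomous"]

# Print the actions the agent took on its own, with timestamps.
for entry in autonomous_actions(sample_log):
    when = datetime.fromisoformat(entry["timestamp"])
    print(f"{when.isoformat()}  agent performed {entry['action']} -> {entry['target']}")
```

Even with a record like this, someone still has to locate it, explain what each field means, and persuade a court that the log is authentic and that “autonomous” really means Bob never gave the instruction.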