Going in circles
Thoughts on driverless vs humanless automated vehicles, designing for safety, and digital feedback loops
Earlier this year, an article got me thinking more about driverless vehicles1. Its author recounted her own Waymo experiences and the story of Mike Johns, the man who shared a viral video2 of his Waymo getting stuck going in circles in a parking lot. The ‘stop my ride’ button seemingly didn’t work, and he had to contact Waymo support to get the car stopped.
I like Mike Johns’ distinction between ‘driverless’ and ‘humanless’ control. If humans are in the car, but not in the loop at all, that’s an obvious safety concern.

All software has bugs. AI/ML-based software, including self-driving vehicle code, is no exception. Even if error rates are impressively low (and I don’t know what Waymo’s bug density data looks like), mistakes are still problems. And it sounds like there was no effective way for the human in the car to intervene when the car made a mistake.
There are plenty of other situations in which a human in an automated vehicle may need emergency help mid-ride, such as having a stroke or heart attack, or their vehicle being damaged or assaulted.
Human riders need:
- a reliable ‘stop the ride, I want to get out now’ button,
- an ‘emergency help’ button that works, and
- a way to stop the car and get out, even without internet connectivity.
It sounds like Waymo has attempted to design some of these features in. But as serious as it can be for the automated driving and navigation to have bugs, it’s far worse when the safety measures don’t work.
A key question is why the stop buttons in the car and in the app didn’t work. If the button in the mobile app didn’t work because of poor internet connectivity, ok, but why didn’t the buttons IN the car work?
Hopefully those in-car buttons are either physically wired in the car or use local in-car wireless communications, and don’t rely on internet connectivity. Internet-only buttons, or buttons that can’t work if the car loses power, would be a serious design safety flaw.
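To make that concrete, here’s a rough Python sketch of the local-first behavior I’d hope for. Every name in it (LocalDriveController, CloudSupportClient, the handler) is hypothetical; it’s not based on any knowledge of Waymo’s actual software, just on the principle that safety-critical actions shouldn’t wait on an internet round trip.

```python
# Hypothetical sketch of a local-first 'stop my ride' handler. These classes do
# not reflect Waymo's real architecture; they only illustrate the principle that
# safety-critical actions should never depend on internet connectivity.

class LocalDriveController:
    """Stands in for a wired or local in-car wireless link to the drive system."""

    def request_safe_stop(self) -> None:
        print("local bus: pull over and stop")

    def unlock_doors(self) -> None:
        print("local bus: unlock doors")


class CloudSupportClient:
    """Stands in for the internet-dependent path to remote support."""

    def __init__(self, connected: bool) -> None:
        self.connected = connected

    def notify_support(self, event: str) -> bool:
        if not self.connected:
            return False
        print(f"cloud: notified support of {event!r}")
        return True


def on_stop_button_pressed(drive: LocalDriveController,
                           cloud: CloudSupportClient) -> None:
    # Safety-critical actions go over the local path unconditionally...
    drive.request_safe_stop()
    drive.unlock_doors()
    # ...and the cloud notification is best-effort, never a prerequisite.
    if not cloud.notify_support("rider stop request"):
        print("no connectivity; stop was still executed locally")


if __name__ == "__main__":
    # Even with no internet connectivity, the ride still stops.
    on_stop_button_pressed(LocalDriveController(), CloudSupportClient(connected=False))
```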
It also sounds like the remote monitoring of the Waymos has gaps. A robust monitoring system should be able to detect a vehicle going in circles or similar anomalous behavior. That should be doable either with machine learning or with non-ML pattern detection. Humans supervising the system (and yes, there should be some, and seemingly are) should be alerted & able to reach out to a trapped or endangered passenger. And Waymo’s software developers should be automatically notified of the bug & associated data.
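As a sketch of what non-ML detection could look like: flag a vehicle whose heading has wound through multiple full turns while its net displacement stays small. The telemetry format, thresholds, and function names below are all my own assumptions for illustration, not anything Waymo has described.

```python
# Minimal non-ML "going in circles" detector over a short window of pose fixes.
import math
from dataclasses import dataclass


@dataclass
class Fix:
    x: float        # metres east of a local origin
    y: float        # metres north of it
    heading: float  # radians


def wrap_angle(a: float) -> float:
    """Map an angle difference into [-pi, pi)."""
    return (a + math.pi) % (2 * math.pi) - math.pi


def looks_like_circling(track: list[Fix],
                        min_turns: float = 2.0,
                        max_net_metres: float = 30.0) -> bool:
    """True if total heading change exceeds min_turns full turns while the
    vehicle ends up within max_net_metres of where it started."""
    if len(track) < 2:
        return False
    total_turn = sum(abs(wrap_angle(b.heading - a.heading))
                     for a, b in zip(track, track[1:]))
    net = math.hypot(track[-1].x - track[0].x, track[-1].y - track[0].y)
    return total_turn >= min_turns * 2 * math.pi and net <= max_net_metres


# Example: roughly three laps of a ~10 m radius loop trips the detector.
laps = [Fix(10 * math.cos(t), 10 * math.sin(t), t + math.pi / 2)
        for t in (i * 0.1 for i in range(189))]
print(looks_like_circling(laps))  # True
```

Real fleet monitoring would need more care (tuning thresholds, excluding legitimate cases like multi-level parking ramps), but the point is that this class of anomaly is cheap to watch for.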
In my last corporate role as director of AI, these kinds of capabilities for cloud monitoring and control of embedded systems were called “digital feedback loops”, though the concept wasn’t unique to us or new. A recent article about Waymo claims “Waymo deploys extensive tracking software to monitor on-road activity” and describes some of their development process. (We’ll save for later the concerns about the potential of such tracking software for invading riders’ privacy.) However, the article doesn’t provide details on Waymo’s bug densities or whether they have intelligent monitoring features like looking for vehicles going in circles.3
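In that spirit, closing the loop on something like the circling case would mean fanning one detected anomaly out to both an on-call human supervisor and the developers’ issue tracker, with the associated telemetry attached. This is a generic sketch of the pattern, not Waymo’s pipeline; the names and channels are invented.

```python
# Generic "digital feedback loop" fan-out: one detected anomaly goes to a human
# supervisor (who can contact the rider) and, with its telemetry, to the
# developers' issue tracker. All names and channels here are invented.
from dataclasses import dataclass, field
from typing import Any


@dataclass
class Anomaly:
    vehicle_id: str
    kind: str                              # e.g. "circling"
    telemetry: dict[str, Any] = field(default_factory=dict)


def alert_supervisor(anomaly: Anomaly) -> None:
    # Stand-in for paging an on-call human who can talk to the rider.
    print(f"PAGE: vehicle {anomaly.vehicle_id} flagged for {anomaly.kind}; "
          "contact the rider and intervene if needed")


def file_bug(anomaly: Anomaly) -> None:
    # Stand-in for filing an issue with the associated data attached.
    print(f"BUG: {anomaly.kind} on vehicle {anomaly.vehicle_id}, "
          f"telemetry keys: {sorted(anomaly.telemetry)}")


def handle_anomaly(anomaly: Anomaly) -> None:
    # The loop only closes if both the human and the developers hear about it.
    alert_supervisor(anomaly)
    file_bug(anomaly)


handle_anomaly(Anomaly("car-0042", "circling",
                       {"track": "…", "planner_log": "…", "rider_in_car": True}))
```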
If you live in an area that has “full self-driving” vehicles (I don’t), what are your experiences with being in or around them?
If, like me, you have experience with automotive software or embedded systems, I’d love to hear your thoughts on this too!
YouTube video report by KCAL Los Angeles on Waymo rider Mike Johns’ experience:
“For Waymo's Software Team, Bug Hunting Sometimes Happens at 45mph”, by Rob Pegoraro, PC Magazine, 2025-04-08.