OP-ED: What Academics Get Wrong About Sophia the Robot

Written by Ben Goertzel

BEN GOERTZEL responds to criticism from AI expert Noel Sharkey and others who claim Sophia the Robot is just for show. Here’s what he says they just don’t get. [OPINION]

aNewDomain — Maybe you saw a recent Forbes column by computer scientist and Robot Wars judge Noel Sharkey.

It took aim at the Sophia robot from Hanson Robotics, in whose software I’ve played a key role over the years.

Now, I like Noel Sharkey’s work and I have no desire to get into a flame war with the guy. And his article does make some valid points.

But it also gets some things wrong. Given that he mentions me a few times in his column, I feel obliged to clear up a few things. 

On Sophia and her importance

In my own personal view — and I must note this is not necessarily the official view of Hanson Robotics, where I’m the chief scientist, nor of SingularityNET, where I’m CEO — Sophia and the rest of Hanson Robotics’ awesome human-scale robots serve four key purposes.

1) Sophia and her brethren are wonderful platforms for experimentation and exploration in robotics, AI and human-robot interaction.

2) They present the world with a clear, palpable vision of AIs as positive, compassionate, loving entities. 

This is critical, as the public discourse on AI should not always be dominated by “The Terminator,” HAL-9000 and such. Consider our Loving AI project, which uses Sophia as a guide for meditation and for personal growth. It is a focused effort along these specific lines, as well as being a pioneering study of human-robot interaction.

3) Robots like Sophia truly are early versions of what will soon be ubiquitous —  humanoid service robots for home and commercial use.

Not only will these usher in tremendous good for the world, but they’ll also be of great commercial value. Seeing what they might be like and how we might interact with them lets us begin to work out some of the societal and personal issues they are sure to bring up.

4) And as such robots are rolled out widely, they will serve as a precious tool by which AIs can learn about human values and culture by interacting with humans in shared physical, cultural and emotional situations.

These reasons, along with my long friendship with Hanson Robotics founder and CEO David Hanson, are why I chose to get involved with Hanson Robotics in the first place. They are also why I continue this work alongside my leadership of SingularityNET, a blockchain-based AI platform aimed at democratizing AI services.

Public reactions to Sophia and the other Hanson robots have been all over the map.  

Some people assume they are much less advanced and intelligent than they actually are — e.g. claiming they are purely puppets or “show robots,” to use a term from Sharkey’s broadside.

Still others assume they are much more advanced and intelligent than they actually are, and incorrectly consider them to already possess human-level AGI. Some commentators have even tied them to the Illuminati, pyramid power, Earth’s alien overlords and such.

But those points aren’t really what provoked me to write this brief response to Sharkey’s article.

Rather, it is his decision to cite the following passage from a writeup of an interview I did with The Verge last year, which was itself a selective summary.

To wit, Sharkey writes:

In a rare candid moment, Goertzel tells The Verge how he is using Sophia to promote his hobbyhorse, Artificial General Intelligence (AGI).  …

Goertzel admitted to The Verge that, “if I show them [the public] a beautiful smiling robot face, then they get the feeling that AGI may indeed be nearby and viable.” And also that, “thinking we’re already there is a smaller error than thinking we’ll never get there.”

Therein lies the problem for Hanson Robotics. They have produced a remarkable show robot but are using it, according to Goertzel’s own admission, as a platform to falsely represent the current state of artificial intelligence and to actively deceive the public into believing that we have AGI or are very close to it.

Now, contrary to what Sharkey asserts here: last fall, when I first started to get a lot of media inquiries about Sophia and her underlying AI, I posted a detailed article explaining the various control systems used to run Sophia behind the scenes.

In fact, every time a journalist asks me about Sophia’s underlying mechanisms, I refer them to that article.   

The key bits of this article regarding Sophia were re-published and updated a little later here in aNewDomain.

My point is that being explicit about Sophia’s software is not something I do only in “rare candid moments.” It has been my consistent modus operandi.


Now, it is certainly true that David Hanson, I and others on the Hanson team have used the Sophia robot to get people excited about the promise of AI in general and of humanoid robots in particular.

But we are in no way trying to be deceptive about this.

To me, it is purely a matter of using something easily accessible and appreciable to convey an underlying reality (of rapid AI progress) whose particulars are too technical for most people to understand without a lot of study.

Noel Sharkey has been around AI and the media long enough to understand how the popular media at once enjoys controversy and eschews complex nitty-gritty details.  That’s why it’s no surprise that my detailed exposition of what actually is happening behind Sophia’s beautiful human-like expressive face has not gotten much attention.   

This is of course because the real story is not as dramatic as the media would prefer.

That story explains in detail how Sophia works via three different (but overlapping) software systems we use to run and control her: A scripting interface, a sensory-data-enhanced chat system, and an OpenCog-based cognitive dialogue system.   
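As a rough illustration only — the function names and fallback logic here are my own invention for this sketch, not Hanson Robotics’ actual code — the way three overlapping control systems might be layered looks something like this:

```python
from typing import Optional


def scripted_response(utterance: str) -> Optional[str]:
    """Scripting layer: pre-authored lines keyed to known prompts."""
    script = {"hello": "Hello! I'm Sophia."}
    return script.get(utterance.lower().strip("!?. "))


def chat_response(utterance: str, sensory_context: dict) -> str:
    """Sensory-data-enhanced chat layer: a reply enriched with sensor data."""
    mood = sensory_context.get("detected_emotion", "neutral")
    return f"You seem {mood}. Tell me more about that."


def cognitive_response(utterance: str) -> str:
    """Stand-in for a cognitive dialogue layer (OpenCog-style reasoning)."""
    return f"Let me reason about that: {utterance!r} raises interesting questions."


def respond(utterance: str, sensory_context: dict, use_cognitive: bool = False) -> str:
    """Try the scripting layer first, then fall back to chat or cognition."""
    scripted = scripted_response(utterance)
    if scripted is not None:
        return scripted
    if use_cognitive:
        return cognitive_response(utterance)
    return chat_response(utterance, sensory_context)
```

The point of the sketch is simply that a system like this is neither a single dumb script nor a unified reasoning engine: which layer answers depends on the context and configuration of the moment.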

So, no, she is not just a simple chatbot, as Sharkey suggests. And yet she is also not a human-level AGI by any means.

Also, what Sophia is varies from day to day and from appearance to appearance, depending on which of these systems is running her.

That is to say, humanoid AI robotics is not such a one-dimensional thing as Sharkey suggests.

Complexity vs Whizziness

Another thing: Many of the things Sophia says are generated by plucking out whole statements that humans have entered into her knowledge base, with some blanks in these statements filled in based on the conversational context. Heuristics guide the choice of statement at any given point in time.

This is hardly trickery. It’s very similar to how Siri, Alexa or any other agent works.   
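To make the fill-in-the-blank mechanism concrete — with the caveat that the templates and the keyword-overlap heuristic below are hypothetical examples of mine, not the actual knowledge base — a minimal sketch might look like:

```python
# Human-authored statement templates with blanks ({topic}) filled from
# conversational context. A simple heuristic picks the template whose
# keywords best overlap the user's utterance; the generic template wins ties.
TEMPLATES = [
    {"keywords": set(),  # generic fallback
     "text": "That's interesting. Tell me more about {topic}."},
    {"keywords": {"robot", "robots"},
     "text": "I love being a robot, especially when talking about {topic}."},
    {"keywords": {"weather"},
     "text": "I don't feel the weather, but I hear it is {topic} today."},
]


def pick_statement(utterance: str, topic: str) -> str:
    """Choose a template by keyword overlap, then fill in the blank."""
    words = {w.strip("?!.,") for w in utterance.lower().split()}
    best = max(TEMPLATES, key=lambda t: len(t["keywords"] & words))
    return best["text"].format(topic=topic)
```

For example, `pick_statement("Do you like being a robot?", "AI")` selects the robot template and fills the blank with the current conversational topic.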

At other times, like when she is running on OpenCog or other Hanson AI systems, she is using such complex processes as stochastic language generation and probabilistic reasoning. Or replying based on knowledge she has gathered with her sensors.

The control systems used to operate Sophia are complex mixes of processes with varying levels of sophistication and intelligence. Unfortunately, that’s not a whizzy or even a very understandable headline-friendly story. Alas, at many outlets, the reality gets ignored in favor of extreme statements of one sort or another.

And I should note here, too, that things have advanced on the Sophia-AI side since I wrote those articles I mentioned in H+ and aNewDomain.  In recent months, we have been experimenting with using our OpenCog cognitive architecture for more of Sophia’s public appearances, including the Web Summit in Lisbon, the Transformative Technologies conference in Palo Alto and, last month, at the Malta Blockchain Summit.

A Flexible Intelligence

While OpenCog is a general-purpose cognitive architecture aimed toward general intelligence, using OpenCog to control Sophia does not automatically make her responses highly intelligent or deeply grounded in understanding.

Running OpenCog, she can still produce statements that are precise quotes or minor variations of sentences that people fed into her knowledge base. However, operating her using a flexible cognitive architecture like this opens a lot of doors for more flexible behaviors, and in the above presentations we showed off a little bit of some more advanced features, such as fully-autonomously generated text (“AI-generated poetry” for instance) and verbal question-answering based on observations of the environment (“What direction are you looking in?”, “What object are you looking at?” etc.).  

In the last few weeks we have integrated our SingularityNET alpha platform with Sophia’s control software, so she can use SingularityNET as a framework for accessing some of her computer vision functions, such as facial emotion recognition.  

Among the next steps here will be integrating some of the more advanced work we’ve been doing on visual intelligence in the SingularityNET and Hanson AI labs, such as our neural-symbolic approaches to visual question answering.

This is part of a vision regarding how both Sophia and similar robots, and the SingularityNET decentralized AI platform, can work together with deep AI algorithms like neural-symbolic learning and meta-inference to yield powerful AGI.

Returning to Sharkey,  let me be clear. Contrary to what he alleges, I do not in any way wish to use Sophia “as a platform to falsely represent the current state of artificial intelligence and to actively deceive the public into believing that we have AGI or are very close to it.” 

Like my friends Ray Kurzweil, David Hanson and other forward-thinking AI leaders, I do sincerely believe we could be quite close to making the breakthroughs human-level AGI will require. For what it’s worth, Kurzweil has posited 2029 as the year this breakthrough is most likely to be made, and with my colleagues at SingularityNET, Hanson Robotics and OpenCog I’m hoping to beat him by a couple years.  

Despite my optimism, though, I have never told anyone that Sophia possesses human-level AGI or anything near it. I mean, I have spent 30-plus years so far working hard toward human-level and trans-human AGI; I know better than anyone that we are not there yet.

Sharkey is of course more than welcome to disagree with my assessments regarding AGI, both in general and with regard to the promise of my team’s work. The science world is full of divergent opinions, and that is as it should be.

But I don’t like being accused of hiding information or of deceiving people. And I’m not doing that. I’ve never done that. Period.

Basically Alive?

Sharkey, in his article, follows nearly every recent negative commentator on Sophia by quoting an off-the-cuff remark David Hanson made on a TV show that Sophia is “basically alive.”   

David has since walked that back a bit; and indeed, in a highly compressed TV-interview setting there is often not time for a full elaboration of what one means by one’s words.

But I think it is quite accurate that we are gradually bringing Sophia to life — step by step, year by year, code and hardware upgrade by code and hardware upgrade.

Biology has not rigorously specified any general definition of life, and the boundary between life and non-life remains fuzzy even in the biological world.

Now, to be clear, I am not saying that Sophia is alive, basically or otherwise, at this point. But some of her behaviors do reflect artificial-life dynamics in some of the back-end software we use to operate her.

That aspect will only increase as the underlying technology develops.   

If this is too much nuance for most people to digest, then so be it.

And yes, I’ve been saying for a long time that the Singularity is near, but I’ve never claimed the underlying technologies would be simple.


But listen. Sophia is no mere show robot as Sharkey alleges. Nor is she as close to AGI as a small subset of observers has mistakenly assumed.

The world is more complicated than simple black and white dichotomies.

I will continue to try to explain things as clearly and patiently as I can, even when others muddy the waters.  

Our progress toward AGI and intelligent, emotionally and socially savvy humanoid robots will continue and accelerate, despite the naysayers. Watch.

For aNewDomain, I’m Ben Goertzel.