Humans are obsessed with robots. Leonardo da Vinci designed a mechanical knight in the late 15th century, and the Jetsons were served by Rosie the robot maid. Today's pop culture robots are indistinguishable from living, breathing humans (some examples: Blade Runner, Westworld, Ex Machina, and Black Mirror).
We're obsessed with the pursuit of replicating or replacing ourselves. But strangely, the same obsession hasn't really been applied to pets.
aibo (stylized in all lowercase letters, as opposed to its all-caps predecessor AIBO) might change that. The company's iconic robotic dog was originally introduced in the early 2000s. At the time, Sony timed AIBO's release with two research papers, among the only work by computer scientists that dived into how A.I. could simulate animal intelligence, which is to say our understanding here is pretty scant. The two papers detailed how the company used studies on animal behavior (ethology) to program the bots. One paper described how the team essentially broke basic animal behavior down into a series of modules that the robo-pup could simulate, like whining for attention, and the other described how the team modeled AIBO's complex emotional system to match predictable, relatable dog behavior that humans could form a connection to.
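The general idea in those papers — small behavior modules gated by an internal emotional state — can be sketched in a few lines. Everything below (the class name, the drives, the numbers) is invented for illustration, not Sony's actual architecture:

```python
# Hypothetical sketch of the "behavior module" idea: a robot pet's actions
# are broken into small modules, and a simple internal emotional state
# decides which module fires. Names and numbers are illustrative only.

class RoboPet:
    def __init__(self):
        # Internal "emotion" variables, each in [0.0, 1.0].
        self.state = {"loneliness": 0.8, "hunger": 0.3, "fatigue": 0.1}
        # Each behavior module is gated by the drive it responds to.
        self.modules = {
            "loneliness": lambda: "whine for attention",
            "hunger": lambda: "beg at food bowl",
            "fatigue": lambda: "curl up and sleep",
        }

    def step(self):
        # Fire the module tied to the strongest current drive.
        drive = max(self.state, key=self.state.get)
        return self.modules[drive]()

pet = RoboPet()
print(pet.step())  # loneliness (0.8) dominates, so the pet whines
```

The point of the design is that each module is simple and dog-like on its own; the illusion of personality comes from the emotional state shifting which module wins.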
AIBO wasn't the only robo-dog of the early aughts: the far less expensive Poo-Chi toys were wildly popular in the same time frame, and the instinct to raise a robotic creature fed the popularity of digital critters from Neopets to Pokémon.
Despite their introduction in the early 2000s, tangible robotic pets remain a novelty. Until now, that is. Sony discontinued production on AIBO in 2006, but on November 1, the company announced that it would be reviving the robotic dog. The new aibo, available exclusively in Japan in January, will be packed with A.I., including software that allows it to learn in a rudimentary fashion by repeating behavior that gets positive feedback from its owners, according to the New York Times. aibo's novelty is that it's a device that actually needs your input: it's specifically made to be interacted with, played with, and talked to, unlike other now-ubiquitous connected devices.
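"Repeating behavior that gets positive feedback" is one of the oldest ideas in machine learning, and a minimal sketch makes it concrete. This is an illustration of the general mechanism, not Sony's actual software; all names here are made up:

```python
import random

# A minimal sketch of learning by positive feedback: behaviors that get
# praised become more likely to be chosen again. Illustrative only.

class Learner:
    def __init__(self, behaviors):
        self.weights = {b: 1.0 for b in behaviors}

    def act(self, rng=random):
        # Pick a behavior with probability proportional to its weight.
        behaviors = list(self.weights)
        return rng.choices(behaviors,
                           weights=[self.weights[b] for b in behaviors])[0]

    def feedback(self, behavior, positive):
        # Praised behaviors are reinforced; ignored ones slowly fade.
        self.weights[behavior] *= 1.5 if positive else 0.9

dog = Learner(["sit", "bark", "roll over"])
for _ in range(10):
    dog.feedback("sit", positive=True)  # owner praises sitting
# "sit" now carries by far the largest weight, so act() favors it.
```

Real robots layer sensors, state, and far richer reward signals on top, but the loop is the same: act, observe the owner's reaction, adjust.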
That need for human care frankly scares the crap out of experts like Sherry Turkle, a psychologist at MIT who has written extensively about human beings' interactions with sociable computers. The danger in forming a bond with a robot or nurturing it like a living creature, Turkle said, is in assuming that the bond goes both ways.
"When a computer or robot seems to ask for our help, we treat it as though it cares about us," Turkle told The Daily Beast via email. "We are vulnerable here. We are vulnerable to feeling that objects that have no care for us do have care for us."
Turkle said that synthetic pets still wouldn't be capable of feeling emotion. Our living, breathing pets today do, albeit in slightly different ways (a 2017 study, for example, found that dogs have strong brain responses to the smell of familiar humans and to emotional cues in verbal speech, a testament to the two species' 30,000-year bond). Turkle said that when people turn to a synthetic pet, "which has no capacity for a relationship with us," for the emotional gratification we typically reserve for something that can love us back, "it puts fake emotion into our lives. Developmentally, I can see only harm," she said.
But as A.I. advances, it may get harder and harder to tell the difference between real and synthetic. Turkle's opinion is that A.I. will always remain artificial, and any emotions it presents are simulated. In humanoid A.I., of course, we wrestle with this definition: If a simulation of consciousness, emotion, and humanity becomes indistinguishable from the real thing, who's to say it's not real?
A.I. researchers have proposed a number of well-defined processes or tests for determining whether or not a robot is conscious. One of the oldest and most rudimentary is the Turing test, a procedure designed to figure out whether an A.I. can simulate consciousness and intelligence well enough to fool a human being into thinking it's one of them.
But there isn't any such Turing test for pets. In fact, we still aren't sure what makes an animal conscious or not; performing the same tests on computers is even more difficult. Dr. Manuel Blum, a professor of computer science at Carnegie Mellon University who originally studied under Marvin Minsky, one of the godfathers of A.I., told The Daily Beast that he's still trying to formulate a good set of qualifications that would test for consciousness in a machine.
In animals, Blum explained, researchers can perform a very rudimentary test to determine whether or not a creature is self-aware. In the mirror test, an unsuspecting animal is marked with some sort of paint on a part of its body it cannot see, like its forehead. The animal is then shown a mirror. If it sees its reflection, with the paint on its forehead, and attempts to wipe the paint off, it passes the test: it can recognize itself in the mirror, and connect that the paint it sees in the mirror's reflection is on its own body. (Dogs, interestingly, don't pass the test; elephants and some other animals do.)
But Blum said trying to apply a similar test of consciousness to A.I. quickly falls apart. It's very easy to code a program to pass the mirror test, and consciousness has to require more than that: some form of inner thought process that can choose actions beyond knee-jerk reactions to stimuli, for one. Still, he said we're probably approaching the time when these conversations become necessary.
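Blum's point is easy to demonstrate: a few lines of code can "pass" the mirror test by comparing what a camera sees against a stored self-model, with no inner life whatsoever. The function and data below are invented for illustration:

```python
# A trivial "robot" that passes the mirror test: it compares the mirror
# view against its stored self-model and wipes any mismatch. Passing this
# says nothing about consciousness, which is exactly Blum's objection.

def mirror_test(self_image, mirror_image):
    # If what the robot sees on itself differs from its self-model,
    # "wipe" the offending body part.
    for part, appearance in self_image.items():
        if mirror_image.get(part) != appearance:
            return f"wipe {part}"
    return "do nothing"

robot_self = {"forehead": "clean", "chest": "logo"}
seen_in_mirror = {"forehead": "red paint", "chest": "logo"}
print(mirror_test(robot_self, seen_in_mirror))  # prints "wipe forehead"
```

A knee-jerk lookup like this is exactly the kind of stimulus-response behavior Blum says consciousness must go beyond.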
"I'm very optimistic about what computers can do," Blum said in an interview. "I'm very optimistic about A.I." This barrier, when simulated intelligence becomes nigh-indistinguishable from the real thing, either a dog or a human, is close. "I think that these machines are very close to achieving it," he said.
Blum is optimistic, and seems to regard the coming singularity, when a computer can simulate your pet or your fellow man, with curiosity. For Turkle, it's more of an existential threat. "The simulation of thinking," she said, in reference to a Turing test, "may be enough for us to be content to take it as thinking. But the simulation of feeling is not feeling, the simulation of love never love."
A robotic dog may be able to simulate love. It may even be able to simulate waking you up at 5 a.m., whining for food that it does not need. But ultimately, it's up to us to decide if that makes it real or not.