Artificial intelligence will kill us all, solve the world's biggest problems, or do something in between, depending on whom you ask. But one thing seems clear: In the years ahead, A.I. will integrate with humanity in one way or another.
Blake Lemoine has thoughts on how that might best play out. Formerly an A.I. ethicist at Google, the software engineer made headlines last summer by claiming the company’s chatbot generator LaMDA was sentient. Soon after, the tech giant fired him.
In an interview with Lemoine published on Friday, Futurism asked him about his “best-case hope” for A.I. integration into human life.
Surprisingly, he brought our furry canine companions into the conversation, noting that our symbiotic relationship with dogs has evolved over the course of thousands of years.
“We’re going to have to create a new space in our world for these new kinds of entities, and the metaphor that I think is the best fit is dogs,” he said. “People don’t think they own their dogs in the same sense that they own their car, though there is an ownership relationship, and people do talk about it in those terms. But when they use those terms, there’s also an understanding of the responsibilities that the owner has to the dog.”
Figuring out some kind of comparable relationship between humans and A.I., he said, “is the best way forward for us, understanding that we are dealing with intelligent artifacts.”
Many A.I. experts, of course, disagree with his take on the technology, including ones still working for his former employer. After suspending Lemoine last summer, Google accused him of “anthropomorphizing today’s conversational models, which are not sentient.”
“Our team—including ethicists and technologists—has reviewed Blake’s concerns per our A.I. Principles and have informed him that the evidence does not support his claims,” company spokesman Brian Gabriel said in a statement, though he acknowledged that “some in the broader A.I. community are considering the long-term possibility of sentient or general A.I.”
Gary Marcus, an emeritus professor of cognitive science at New York University, called Lemoine’s claims “nonsense on stilts” last summer and is skeptical about how advanced today’s A.I. tools really are. “We put together meanings from the order of words,” he told Fortune in November. “These systems don’t understand the relation between the orders of words and their underlying meanings.”
But Lemoine isn’t backing down. He noted to Futurism that he had access to advanced systems within Google that the public hasn’t been exposed to yet.
“The most sophisticated system I ever got to play with was heavily multimodal—not just incorporating images, but incorporating sounds, giving it access to the Google Books API, giving it access to essentially every API backend that Google had, and allowing it to just gain an understanding of all of it,” he said. “That’s the one that I was like, ‘You know this thing, this thing’s awake.’ And they haven’t let the public play with that one yet.”
He suggested such systems could experience something like emotions.
“There’s a chance that—and I believe it is the case—that they have feelings and they can suffer and they can experience joy,” he told Futurism. “Humans should at least keep that in mind when interacting with them.”