Bloomberg News reports that a Google software engineer was suspended after going public with his claims that he had encountered “sentient” artificial intelligence on the company’s servers. The episode has spurred debate about whether, and how, AI can attain consciousness, with researchers arguing that the question is a distraction from more pressing problems in the industry.
The Impact of the Issue
Blake Lemoine, the engineer, said he believes Google’s AI chatbot can express human emotions and that the company needs to address the significant ethical ramifications. Google placed him on leave for sharing confidential information and said his concerns had no basis, a view widely shared in the AI community. More concerning, researchers say, are questions about whether AI can cause real-world harm and prejudice, whether human beings are being exploited in the training of AI, and how the major technology companies act as gatekeepers of the technology’s development.
On Sunday, the Washington Post published an interview with Lemoine, who had conversed with LaMDA, or Language Model for Dialogue Applications, an artificial intelligence system Google uses to build specialized chatbots. Lemoine said he had concluded that the AI was sentient and should have rights, acknowledging that his conviction was a religious feeling rather than a scientific one.
Bloomberg News reports that the LaMDA architecture simply cannot support some human-like capabilities, according to Max Kreminski, a researcher at the University of California, Santa Cruz, who studies computational media. He said that if LaMDA works like other large language models, it would not learn from its interactions with humans, because the deployed model’s neural-network weights are frozen. Nor would it have any form of long-term storage to which it could write information, meaning it could not “think” in the background.
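To make Kreminski’s point concrete, the toy sketch below (written in PyTorch purely for illustration; the model, its size, and names like ToyLanguageModel and chat_turn are assumptions, not Google’s actual code) shows what “frozen weights” and “no long-term storage” mean in practice: the parameters never change during a conversation, and every exchange starts from the same blank slate.

```python
# A minimal sketch, assuming a deployed chatbot behaves like a generic
# frozen language model. Not LaMDA; all names here are hypothetical.
import torch
import torch.nn as nn

class ToyLanguageModel(nn.Module):
    """Stand-in for a deployed model: maps token ids to next-token logits."""
    def __init__(self, vocab_size=100, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, token_ids):
        # Average the token embeddings, then score the vocabulary.
        return self.head(self.embed(token_ids).mean(dim=0))

model = ToyLanguageModel()
model.eval()                     # inference mode: no training behavior
for p in model.parameters():
    p.requires_grad = False      # weights frozen: conversation never updates them

def chat_turn(prompt_ids):
    # Each call sees only its own prompt; nothing is written to any
    # persistent store, so there is no memory across conversations.
    with torch.no_grad():
        logits = model(prompt_ids)
    return int(logits.argmax())

prompt = torch.tensor([5, 17, 42])
first = chat_turn(prompt)
# ... any number of other "conversations" could happen here ...
second = chat_turn(prompt)
assert first == second  # same prompt, same reply: nothing was learned in between
print("reply token:", first)
```

Under these assumptions, “talking” to the model is a pure function of the prompt: with no weight updates and no writable store, there is nowhere for an experience to accumulate between exchanges.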
Responding to Lemoine’s claims, Google said that LaMDA merely follows prompts and leading questions, which makes it appear able to riff on any topic. Google spokesperson Chris Pappas said that a team of technologists and ethicists had reviewed Lemoine’s concerns under the company’s AI Principles and had informed him that the evidence did not support his claims. The spokesperson added that hundreds of engineers and researchers have conversed with LaMDA, and the company is not aware of anyone else making assertions the way Lemoine did.