A senior software engineer working for Google has been placed on paid leave for publicly stating his belief that the company's chatbot has become sentient.
Google engineer put on leave after saying AI chatbot has become sentient
What is your opinion?
An AI can be given conflicting targets, which means it makes choices. A self-learning AI can also improve if it is automatically adjusted whenever its output gets closer to some target. Usually this improvement is done by a computer program that reviews the AI's response to data or stimuli (stimuli being a specific kind of data) and then adjusts the AI's artificial neural links. Manufacturing AIs are trained in virtual environments before being put to work: sometimes the entire factory is simulated, or just the parts the AI will encounter. Microsoft sells a program specifically for simulating environments like this for the purpose of training bots and AI programs. An AI unit made for real-time interaction will be trained to respond to stimuli.
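The adjust-when-closer-to-target idea above can be sketched in a few lines. This is a toy hill-climbing loop of my own (not any real training library): an outside program perturbs the model's weights at random and keeps only the changes that move the output closer to the target.

```python
import random

def train(weights, inputs, target, steps=1000, lr=0.1):
    """Toy sketch: randomly perturb the 'neural links' (weights) and
    keep any change that moves the output closer to the target."""
    def output(w):
        return sum(wi * xi for wi, xi in zip(w, inputs))

    best_err = abs(output(weights) - target)
    for _ in range(steps):
        candidate = [w + random.uniform(-lr, lr) for w in weights]
        err = abs(output(candidate) - target)
        if err < best_err:  # "closer to the target output" -> keep it
            weights, best_err = candidate, err
    return weights, best_err

w, err = train([0.0, 0.0], inputs=[1.0, 2.0], target=3.0)
print(err)  # the error shrinks toward zero as training runs
```

Real systems use gradient descent rather than random perturbation, but the shape is the same: an external program scores the response and adjusts the links.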
Artificial neural links (the basis of AI) can exist in software, as electronic units, or as a hybrid of the two. They have no magical components and no supernatural responses. PC companies have begun building so-called AI hardware into their products: Apple's M1 and M2 chips include a dedicated Neural Engine, and Google's new Pixel phones have a Tensor chip. These chips are completely confined to doing what they are made to do. They can make mistakes, but they don't suddenly change to become more sentient.
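To show there is nothing magical in a neural link, here is one artificial "neuron" written out in plain arithmetic (a generic textbook formulation, not the code of any particular chip): a weighted sum of inputs pushed through a sigmoid activation.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial 'neural link': a weighted sum of the inputs,
    squashed into the range (0, 1) by a sigmoid. Just arithmetic."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

print(neuron([1.0, 0.5], [0.4, -0.2], 0.1))
```

An "AI chip" is essentially hardware that performs millions of these multiply-and-sum operations quickly; the same inputs and weights always produce the same output.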
Stimuli are real-time data, the opposite of data that sits in a file and is processed when convenient. Some AI units respond in real time, such as chatbots, and others process data independent of time. There are also virtual bots made to process data in a virtual space, such as a game; they process stimuli, but they may not be reacting to real time outside that virtual space. Their whole game might be paused, and adjustments might be made to the AI by an external program. Creating an AI requires training it, not programming it. Training takes a lot of data and time, so an AI might be designed on much faster hardware than it will ultimately run on. A designer might pay for compute time on high-speed machines to shorten the training time for their AI program.
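The pause-and-adjust idea can be made concrete with a toy simulation loop (purely illustrative, not modeled on any real game engine): the bot only experiences simulated "ticks", so the external program can freeze the virtual world between ticks and tweak the bot's weights without the bot noticing.

```python
class VirtualBot:
    """A bot that lives entirely inside simulated time."""
    def __init__(self):
        self.weight = 0.5

    def react(self, stimulus):
        # Respond to an in-game stimulus with current 'neural link'.
        return self.weight * stimulus

def run_simulation(bot, stimuli, adjust_every=2):
    outputs = []
    for tick, s in enumerate(stimuli):
        outputs.append(bot.react(s))
        if tick % adjust_every == 1:
            # The game is effectively "paused" here: an external
            # trainer adjusts the bot between simulated ticks.
            bot.weight *= 0.9
    return outputs

print(run_simulation(VirtualBot(), [1.0, 1.0, 1.0, 1.0]))
```

From the bot's point of view, time only advances one tick per stimulus; the adjustments happen "between" its moments of experience.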
So what is sentience? You have to decide that, and you have to decide ethical questions such as "Is it possible to be cruel to an AI unit?" So far, no: I don't think we have any AI units complex enough to be cruel to. Sentient, however, they might be. Sentience is a very fuzzy line. All it requires is instinct -- complex decision making. It doesn't even require awareness of one's own aims. It doesn't require much.