Sorry, I think I overlooked your reply.
This is where it gets interesting.
Let's assume the AI is something like the one in the new Blade Runner, if you've seen it, where the main character has an AI girlfriend. If the AI can make you feel something, even if it's just acting so much like a human that you can't tell the difference, would it matter? Could we even tell whether it was sentient or not? How would we even test it?
I've seen the old Blade Runner. I didn't know there was a new one. But I saw Blade Runner as another variation on Pinocchio, albeit with a lot of cool visual and special effects.
Another AI-related sci-fi was the more comedic Short Circuit, in which a robot is struck by lightning and somehow gains a sense of awareness and sentience which it is able to articulate and assert. "Number Five is alive," as the line went. As it was a robot, it certainly didn't look human at all, but there's a scene where his creator is talking with him and questioning him in order to test whether he's really sentient. The clincher, apparently, was when the creator told a joke and Number Five burst out laughing, which was viewed as confirmation that he was, in fact, sentient and alive.
So, perhaps a concept like humor might be beyond AI's capabilities.
It's very difficult to speculate about, I think, because we are aware that it is an AI when we interact with it, and in general it doesn't act like it cares for us personally, if you know what I mean. You can have some very interesting chats with ChatGPT, but not to the point where you think it is human.
But it will be interesting to see how the average person reacts when these get into support tasks and you speak with them over chat or on the phone. I think the majority of us will be unable to tell the difference, but I still think we would say things like "Thanks for the help" or "Have a good day", even though it would be meaningless to the AI in a greater sense.
I find most computerized attempts at customer service to be woefully lacking. I've tried communicating with organizations and companies through their customer service or support lines, and unless you're calling about something very basic and easy, it's a befuddling and frustrating experience. It still becomes necessary to speak to a human, because the AI interface over the phone just isn't there yet - not even close. I know this from my own experience.
They are copying or duplicating the human way of thinking. To explain it simply, the normal way we as humans do things is that we problem-solve: we have an issue, big or small, and we arrive at some solution for how best to deal with it, whether that's walking or driving when we have to buy groceries. A lot of considerations go into it, even though it might sound like a trivial task: how is the weather? How much can you carry? How much time does it take? And based on all of these things, we decide to do something. Essentially, that is what they are trying to make the AI do as well, whereas in traditional programming we would write something like "If weather is bad then take car" and the computer does that without questioning it.
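Just to illustrate what I mean, here's a toy sketch in Python (the factors, weights, and function names are ones I made up for the example, not how any real AI actually works): the traditional program follows one fixed rule, while the more "human-like" version weighs several considerations and can land on a different answer depending on what it treats as important.

```python
# Hypothetical sketch only - the factors and weights are invented for illustration.

# Traditional programming: one fixed rule, followed without question.
def choose_transport_rule(weather_is_bad: bool) -> str:
    return "car" if weather_is_bad else "walk"

# A more "human-like" approach: weigh several considerations (each scored 0..1,
# where 1 favours taking the car) and pick whichever option scores best.
# Change what the system treats as important (the weights) and it may reach
# a different conclusion than we would.
def choose_transport_weighted(weather: float, load: float, time_needed: float,
                              weights: dict) -> str:
    car_preference = (
        weights["weather"] * weather
        + weights["load"] * load
        + weights["time"] * time_needed
    )
    return "car" if car_preference > 0.5 else "walk"

print(choose_transport_rule(weather_is_bad=True))  # always "car" in bad weather
print(choose_transport_weighted(0.8, 0.3, 0.6,
                                {"weather": 0.5, "load": 0.3, "time": 0.2}))
```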
So when you are suddenly faced with a computer that thinks as we do, it might arrive at different conclusions than we would, depending on what information it thinks is important. If that makes sense?
The big question is: are we going to look at these AIs the way we look at a GPS, or not? And if not, wouldn't we consider them to be more than just tools?
The human way of thinking is not necessarily the most efficient. We're not machines, and sometimes, humans can be distracted from a given task or start daydreaming. Sometimes, physical conditions can affect our thinking processes, such as hunger, thirst, sleeplessness, pain - things that AI would have no knowledge or experience with.
Human minds can also be cluttered with a lot of random memories. I find I end up remembering things I would rather forget, and forgetting things I would like to remember. I wish my brain had the ability to "save" and "delete" files like we have with computers.