But the cue ball has a very simplistic response to its stimuli and has no internal representation of its environment. And, from what I can tell, complexity of the internal representation of the environment is a crucial aspect of sentience.
Pushing the notion of sentience to a place where it cannot be tested is a dangerous thing. It reminds me of the debates about whether women had souls or whether slaves were people.
Yes, use the Turing test and *surprise* the subject. See how it responds to novel situations and whether it seems to question itself and others. See if it responds in ways typical of people, whom we know to be sentient. See if it interrupts, stops and rethinks, etc. All good tests.
Once again, I don't know if the current case qualifies as sentient. I haven't seen the evidence, only the claims of one of the workers, who is probably not trained in the issues involved. But I don't see any reason to think sentience in a constructed machine is impossible, either.
It's that "internal representation" thing that really troubles me -- even about human sentience. Let me try an example.
We know that our brains are wired in a way possibly vaguely analogous to neural network circuitry. (The analogy probably works better the other way round, but never mind.) So there is reason to suppose that silicon circuits, properly connected and networked, may well be able to mimic what our brains can do. But I can think of examples in which I know (by the result) what my brain has done, yet I do not think that I (or it) was even conscious of doing it. Here's the example:
I like to do cryptic crosswords, but sometimes the clue is so cryptic that I just cannot crack it, no matter what I try. But I have learned that if I turn away, move on and do something else, and come back -- as often as not I (the conscious I) will be "presented" with an answer. I know that some unconscious part of my brain continued to somehow work the problem, and then notified the conscious me when it was done.
This suggests to me that there is another layer of neural net in my brain, sitting metaphorically "on top" of the lower parts, and that this upper layer is where true sentience happens. What I mean is, I'm not sure that the means by which my brain stores and retrieves information operates consciously at all.
So, in looking for sentience in AI, I would be looking for that "upper" layer.