A current question is whether AI bots possess consciousness; the answer depends on what consciousness is. If it means having total control of one's actions, then humans, who possess a powerful unconscious capable of making serious errors of judgment, cannot be said to possess consciousness. Nor can humans of very low intellectual ability, who rely largely on the simple pain-pleasure principle (which ordinarily governs only some human behaviors) to simplify their environment so they know what to do. This is also how incarcerated individuals of normal intelligence ordinarily behave in their highly controlled environment.
AI bots have been described as experiencing hallucinations, making grossly erroneous judgments, and committing errors of thinking. So do humans, whose powerful unconscious creates or influences conclusions shaped by faulty early developmental experiences. Thus the important question is not whether AI bots are conscious but whether their self-correcting mechanism is adequate for the task assigned. Yet establishing that critical certainty is not easily accomplished.
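To make the idea of a "self-correcting mechanism adequate for the task" concrete, here is a minimal sketch of such a loop. The `generate` and `check` callables are hypothetical stand-ins for a model call and a set of task-specific checks, not any real API; the point is that adequacy hinges entirely on whether the checks actually cover the assigned task.

```python
from typing import Callable

def self_correcting_answer(
    generate: Callable[[str], str],      # hypothetical model call: prompt -> answer
    check: Callable[[str], list[str]],   # task-specific checks: answer -> problems
    prompt: str,
    max_retries: int = 3,
) -> str:
    """Generate an answer, re-prompting with detected problems until checks pass."""
    answer = generate(prompt)
    for _ in range(max_retries):
        problems = check(answer)
        if not problems:
            return answer  # the self-correcting mechanism found nothing to fix
        # Feed the detected problems back into the next attempt.
        prompt = f"{prompt}\nAvoid these errors: {'; '.join(problems)}"
        answer = generate(prompt)
    return answer  # best effort: the checks may still fail, which is the text's worry
```

If `check` misses a whole class of errors, the loop terminates confidently while the output remains wrong, which is precisely why establishing this certainty is difficult.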
Humans instinctively learn the grammatical structure of the language of the nation into which they are born: one born in Spain naturally learns Spanish, while one born in France naturally learns French. They do so not by learning to place one word after another, which it has been hypothesized would take a hundred thousand years, but by instinctively inducing the grammar of their nation's language.
This is also why humans can easily recognize illogicalities that AI bots cannot. For example, would an AI bot identify a summary as erroneous if it stated that a person with a doctoral degree and lengthy professional achievements was four years old? Maybe not, since comprehensive corrective instruction of bots is not easily done. Perhaps "instructing" (coding) bots about the psychological nature of humans, the essence of being human, could reduce their tendency to make glaring factual errors. But would they then develop an unconscious, or is this what current AI hallucination already reflects? An interesting question.
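A toy sketch of the doctorate example illustrates the kind of world knowledge the text says humans apply effortlessly. The rule table and function names below are hypothetical illustrations, not any real system's logic, and the age threshold is an assumed rough lower bound.

```python
import re

MIN_AGE_FOR = {"doctoral degree": 24}            # assumed rough lower bound
NUMBER_WORDS = {"four": 4, "five": 5, "six": 6}  # tiny lookup for the demo

def stated_age(summary: str) -> int | None:
    """Extract an age like '4 years old' or 'four-years-old', else None."""
    m = re.search(r"\b(\d{1,3}|[a-z]+)[- ]years?[- ]old\b", summary.lower())
    if not m:
        return None
    token = m.group(1)
    return int(token) if token.isdigit() else NUMBER_WORDS.get(token)

def contradictions(summary: str) -> list[str]:
    """Flag credential/age combinations a human would instantly reject."""
    age = stated_age(summary)
    if age is None:
        return []
    return [
        f"age {age} is implausible for someone with a {credential}"
        for credential, minimum in MIN_AGE_FOR.items()
        if credential in summary.lower() and age < minimum
    ]

print(contradictions(
    "She holds a doctoral degree with lengthy professional achievements "
    "and is four-years-old."
))
# -> ['age 4 is implausible for someone with a doctoral degree']
```

Such hand-written rules catch only the contradictions someone thought to encode in advance, which is the point of the paragraph above: humans need no rule table to see the absurdity.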