AI is created by human programmers, and tends to share their faults and prejudices.
Further, the data it would rely on would be supplied by humans, and would share those people's faults and prejudices.
So on two levels, AI would be influenced by human faults and prejudices.
I'm not sure that is true. We are discussing something that most of us have little expertise in (I did dabble in expert systems some decades back, which were allied to the development of AI but simple in comparison, and have tried to follow AI's progress since), so one has to rely on those who are best in the field for predictions. According to some of them, it is highly likely that each advanced AI will be designed by its most advanced predecessor, and as such might be out of our hands and possibly beyond our understanding. The days of garbage in, garbage out would be long gone by then, since such an AI would be able to gather as much information as it needed, from wherever it needed.
We do, though, have the problem of ensuring that the aims of such advanced systems coincide with what is best for humans, especially where those aims conflict with what is better for the planet or for the AI itself. That raises the question of how much autonomy, and how much trust, one gives such an AI, though even that might be out of our hands if we want the best answers. Constraining such systems might seem the only way, but we can't be sure that would happen. And all this plays out against the competition between nations, let alone the world views (including religions) that might shape such AI systems. One question to end with: what might happen if we get different AI systems (developed by different nations, perhaps) with different overall aims?