ecco
Veteran Member
...
This is all very impressive, but it did not address what I said. They said DeepMind played millions of games to improve, and compared its play-style to intuition. I'm not entirely sure how far any so-called AI can get with this. Intuition may not work this way; it may draw on an accumulation of emotions, experience, and taking chances (i.e., gut feelings). No doubt they are getting the experience part, but they're missing vital ingredients for actual AI.
...
In this video, at 58:02, DeepMind had no idea what to do. It was akin to a robot not even comprehending what was going on. I won't even say animal, because even animals know when to give up. Even after millions upon millions of games, DeepMind did not stop attempting to reach the air units. As the game went on, it did not understand or consider taking a chance on a base race. It did not give up, and when it had only one unit left, its APM was still 300. These are all indicative of a robot that can learn through repetition, not AI. It was unable to adapt intuitively. This reminds me of movies/Star Trek, where a robot faces a logical problem and explodes, or a robot with some error keeps hitting itself against the wall.
Your video notwithstanding...
DeepMind’s AlphaStar AI sweeps StarCraft II pros in head-to-head match
Computers are getting more sophisticated than ever at understanding and playing complicated games. DeepMind, one of the leaders in artificial intelligence, proved that once again today with its latest A.I. agent called AlphaStar. During a livestream, this program took on two StarCraft II pros in a series of five matches for each, and AlphaStar swept all 10 matches.
I've played Civilization, on and off, since its inception in 1991. I have never seen an AI leader act in the way described in your video. In any case, since AlphaStar is ten for ten in its latest contests, I guess it learned. If that meant a human stepped in and tweaked the code a little, that's no different from having a chess master correct some faults seen in a student.
As to the game Go, it learned the way humans learn: observation and practice. And then did even better.
Excerpts with my emphases...
How Google's AI Viewed the Move No Human Could Understand
SEOUL, SOUTH KOREA — The move didn't make sense to the humans packed into the sixth floor of Seoul's Four Seasons hotel. But the Google machine saw it quite differently. The machine knew the move wouldn't make sense to all those humans. Yes, it knew. And yet it played the move anyway, because this machine has seen so many moves that no human ever has.
In the second game of this week's historic Go match between Lee Sedol...this surprisingly skillful machine made a move that flummoxed everyone from the throngs of reporters and photographers to the match commentators to, yes, Lee Sedol himself. "That's a very strange move," said one commentator, an enormously talented Go player in his own right. "I thought it was a mistake," said the other. ....
Fan Hui, the three-time European Go champion who lost five straight games to AlphaGo this past October, was also completely gobsmacked. "It's not a human move. I've never seen a human play this move," he said. But he also called the move "So beautiful. So beautiful." Indeed, it changed the path of play, and AlphaGo went on to win the second game....
It was a move that demonstrated the mysterious power of modern artificial intelligence... In the wake of Game Two, Fan Hui so eloquently described the importance and the beauty of this move..
...
"You're listening to the commentators on the one hand. And you're looking at AlphaGo's evaluation on the other hand. And all the commentators are disagreeing."
...
Using a second technology called reinforcement learning, they set up matches in which slightly different versions of AlphaGo played each other. As they played, the system would track which moves brought the most reward—the most territory on the board. "AlphaGo learned to discover new strategies for itself, by playing millions of games between its neural networks, against themselves, and gradually improving," Silver said when DeepMind first revealed the approach earlier this year.
...
So, in the end, the system learned not just from human moves but from moves generated by multiple versions of itself. The result is that the machine is capable of something like Move 37.
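The self-play loop the excerpt describes, where slightly different copies of the agent play each other and moves that bring reward get reinforced, can be sketched in a few lines. This is a toy stand-in, not DeepMind's actual training code; the game, the weights, and the update rule here are all invented for illustration:

```python
import random

random.seed(0)  # reproducible toy run

# Toy "game": each player picks a move 0-2; the higher move wins the round.
MOVES = [0, 1, 2]

def play(policy_a, policy_b):
    """Return reward for player A (+1 win, -1 loss, 0 draw) and A's move."""
    a = random.choices(MOVES, weights=policy_a)[0]
    b = random.choices(MOVES, weights=policy_b)[0]
    return (a > b) - (a < b), a

policy = [1.0, 1.0, 1.0]  # unnormalized move preferences, start uniform

for _ in range(5000):
    # A slightly perturbed copy of the current policy plays the original,
    # mirroring the "slightly different versions" in the excerpt.
    rival = [w * random.uniform(0.9, 1.1) for w in policy]
    reward, move_a = play(policy, rival)
    # Reinforce moves that brought reward; discourage moves that lost.
    policy[move_a] = max(0.01, policy[move_a] + 0.1 * reward)

best = max(MOVES, key=lambda m: policy[m])
print(best)  # expect 2: the highest move wins or ties every round
```

The point of the sketch is only the feedback loop: no human games are needed once the agents can generate their own experience and score it.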
"That's how it guides the moves it considers," Silver says. For Move 37, the probability was one in ten thousand. In other words, AlphaGo knew this was not a move that a professional Go player would make.
But, drawing on all its other training with millions of moves generated by games with itself, it came to view Move 37 in a different way. It came to realize that, although no professional would play it, the move would likely prove quite successful. "It discovered this for itself," Silver says, "through its own process of introspection and analysis."
Is introspection the right word? You can be the judge. But Fan Hui was right. The move was inhuman. But it was also beautiful.
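The tension in the excerpt between the human prior and the self-play value estimate can be illustrated with made-up numbers. Only the 1-in-10,000 prior for Move 37 comes from the article; the other move names, priors, and values below are hypothetical:

```python
moves = {
    # move: (human_prior, self_play_value)  -- all but Move 37's prior invented
    "common_reply":  (0.35,   0.52),
    "solid_defense": (0.20,   0.50),
    "move_37":       (0.0001, 0.61),  # 1-in-10,000 prior, but high value
}

# Ranked purely by what a professional would play, Move 37 is nearly invisible.
by_prior = max(moves, key=lambda m: moves[m][0])

# Ranked by the value learned from millions of self-play games, it comes out on top.
by_value = max(moves, key=lambda m: moves[m][1])

print(by_prior, by_value)
```

That gap between the two rankings is exactly why the move "flummoxed everyone": the prior says no human plays it, while the self-play value says it wins.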
Another thing to keep in mind is the advances in computer hardware. In the '60s, an IBM 7090 computer was the size of a car and capable of 200,000 FLOPS. Today's PS4 is the size of a small cake and can crank out 1,800,000,000,000 FLOPS.
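For scale, the jump between those two figures works out like this (a quick back-of-the-envelope calculation using the numbers above):

```python
ibm_7090_flops = 200_000            # 1960s mainframe, per the figure above
ps4_flops = 1_800_000_000_000       # modern console, per the figure above

speedup = ps4_flops / ibm_7090_flops
print(f"{speedup:,.0f}x")  # a nine-million-fold increase
```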
Today, an IBM quantum computer is the size of a car with processors in the "transistor" stage. What do you think will happen in the next 50 years?