That would certainly work. But it would still be limited to video.
It would make no difference to bots posting text.
Right now they use troll farms for that, which are already very efficient but still require organization, a budget, and quite a lot of manpower to operate.
All of it could be replaced by a single "troll farm" AI running on one reasonably powerful computer. That AI could also very easily spoof or proxy an IP address for every "bot", making it look as if they post from different locations inside whatever country it wants.
I'm not one of those guys who's worried that AI is going to become some self-aware, Skynet-like entity hunting humans.
But I am VERY worried about the havoc it can cause online by blurring the lines between what is real and what is not.
Fake news was already a problem even without AI. So were psychological profiling and the targeted spread of fake news.
Imagine the damage a powerful AI could do by exploiting those same human vulnerabilities.
You just KNOW that that is exactly what countries like China and Russia are going to use it for.
And it won't even be hard to do. In fact, it might be easier than operating massive troll farms.
It would do everything on its own, automatically, and it would even generate its own fake news to push people's buttons. Pre-AI, someone still had to create and write that fake news, and then dozens, hundreds or thousands of trolls had to spread it.
A properly coded AI engine would do all that by itself after you click the proverbial "start" button.