Every stat has a two percent margin of error, and many of the states were very narrow 51%-to-49% victories. You'd be better off flipping a coin. Florida is always a toss-up, along with the other swing states. I can predict with high accuracy that the 2020 presidential election will come down to a two percent difference in who wins.
The confidence interval varies from poll to poll, depending mainly on the sample size. For instance, this random poll from Michigan just before the election gave a 95% confidence interval of +/- 3%:
Michigan poll (November 5, 2016)
And within that confidence interval, they were bang on.
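If you're wondering where that +/- 3% comes from: here's a rough sketch of the textbook margin-of-error formula for a simple random sample. The ~1,000-person samples typical of state polls land right around 3%. The function name and sample sizes are just for illustration, and real polls apply weighting that widens the interval somewhat.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample of size n,
    using the normal approximation around a proportion p."""
    return z * math.sqrt(p * (1 - p) / n)

# A sample of about 1,000 respondents gives roughly the +/- 3% quoted above.
for n in (400, 1000, 2500):
    print(f"n={n}: +/- {margin_of_error(n) * 100:.1f}%")
```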
Now... something I see in Canadian election coverage but - at least from what I've seen - not so much in American coverage: the term "statistical tie." If two parties/leaders/whatever come out so close in the poll results that the two results fall within each other's confidence intervals, the news will just describe it as a tie for all practical purposes.
... but I guess when every poll in almost every state results in a statistical tie, reporting it that way doesn't create much fodder for panel discussions with analysts. A +/- 3% margin of error is massive when the winning margin in the state is 0.3%.
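For what it's worth, the "statistical tie" rule as I described it boils down to something this simple (the numbers below are made up for illustration):

```python
def statistically_tied(share_a, share_b, moe):
    """The rule described above: call it a tie when each result lands
    inside the other's confidence interval, i.e. the gap between them
    is no bigger than the margin of error."""
    return abs(share_a - share_b) <= moe

# A 0.3-point lead against a +/- 3-point margin of error: a statistical tie.
print(statistically_tied(47.3, 47.0, 3.0))  # True
```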
Oh - the other thing to keep in mind with confidence intervals: they're almost always given at the 95% confidence level. In other words, 1 out of 20 times, on average, the real result will be outside the confidence interval. So if you, say, slice the electorate into 50 state-sized pieces, odds are that in most elections, 2 or 3 states will be way off from what the polls projected, even after taking into account all the limitations of polling data.
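Quick back-of-the-envelope check on that "2 or 3 states" arithmetic, assuming each state poll misses its interval independently (an idealization):

```python
from math import comb

n_states, miss_rate = 50, 0.05  # 50 state polls, each misses 1 time in 20

# Expected number of states where the final result falls outside the poll's
# 95% confidence interval, assuming misses are independent.
print(n_states * miss_rate)  # 2.5

# Probability that at least two states miss in a given election (binomial tail).
p_at_least_two = 1 - sum(
    comb(n_states, k) * miss_rate**k * (1 - miss_rate)**(n_states - k)
    for k in range(2)
)
print(round(p_at_least_two, 2))  # ~0.72, i.e. most elections
```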
Edit: I'm being serious about the difference in how polls are reported. Here, it's very common for a reporter to say something like "the polls show the Liberals 0.5% ahead of the Conservatives, but the poll margin of error was plus or minus 2.5%, so that lead isn't meaningful. The two parties are in a statistical tie."