Both sides have an irritating tendency to argue that the more extreme opinions associated with their own side are rare (and overemphasized by their opponents), while the extreme opinions associated with the other side are common (and downplayed). In theory, one of the ways we're supposed to get a more precise idea of the level of support for certain ideas (and people) is by asking about those ideas in a systematic fashion: polling. In practice, the polls themselves become a point of contention, with both sides arguing about how accurate or relevant they are, so having polls to talk about tends to change the shape of the argument rather than allowing one side or the other to win.
The data we get from polls is still a very useful thing to have, though. Since one of my favorite news organizations put out an article a week or so ago about proper use of polls, I figure now is as good a time as any to talk about it.
That article has a lot of good advice about what conclusions we can and can't draw from polling data, but it focuses heavily on presidential primary polls, and there's one mistake with polling in general that I'd like to discuss in more detail: jumping to conclusions based on data that is, in reality, more nuanced than it looks. This is another (rare) case where I can legitimately argue that both sides tend to fall short.
A lot of the left-leaning people I know don't look past the top-line numbers to ask how the questions were worded or what other factors might have affected the survey. There's plenty of polling data out there legitimately indicating that some liberal policies are popular, but if those policies are widely misunderstood, or voters don't rank those issues among their top priorities, that popularity doesn't necessarily transfer to liberal politicians themselves.
The right-leaning people I know, on the other hand, tend to discard polls entirely because they can't see how the data could possibly be accurate given what their other sources of information tell them. They give too much weight to the various sources of error and fail to realize that it's possible to compensate for many of them, or to check a pollster against its past track record. Alternatively, they don't apply the same suspicion to those other sources of information, many of which are less reliable than a properly conducted poll.
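To make the "sources of error" point concrete: the most familiar one, sampling error, is actually quantifiable from the sample size alone. Here's a minimal sketch using the standard formula for a poll proportion's margin of error (the 95% confidence level and z ≈ 1.96 are conventional choices, not anything from a specific poll):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Margin of sampling error for a poll proportion.

    p: reported proportion (e.g. 0.52 for 52% support)
    n: number of respondents
    z: z-score for the confidence level (1.96 gives ~95%)
    """
    return z * math.sqrt(p * (1 - p) / n)

# A 1,000-person poll reporting 50% support carries roughly a
# +/- 3.1 percentage point sampling error at 95% confidence.
print(round(margin_of_error(0.5, 1000) * 100, 1))  # -> 3.1
```

Of course, sampling error is only the easiest source to put a number on; question wording, nonresponse, and weighting choices are harder to quantify, which is exactly why comparing a pollster against its past performance matters.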
In both cases, there's a legitimate argument being made, focusing either on what makes the results reliable or on the sources of potential error. But there's also a failure to consider counterarguments or alternate explanations, and accurately figuring out what's going on in the world requires avoiding that failure as much as possible.