Super PAC App: Who Lies Most?

I’ve previously written a bit about the Super PAC App, an iPhone app that lets users rate election-cycle ads based on whether the claims they make appear to be accurate. It was co-founded by my friend Dan Siegel, and over this past election cycle the app appears to have been a success.

According to a recent email from Dan and the other co-founder, the app resulted in “119,815 user sessions. 50,014 claims explored. 38,351 ad ratings. 122 countries represented.” Pretty impressive.

They decided to post all of their code and data online so that researchers and others can dig in and see if there’s anything interesting going on. You can download the code and data here.

So I decided I’d play around with the data and see if anything popped out at me. I chose not to focus exclusively on liberal vs. conservative trends, since the app appeared to have a slightly liberal bias in its user base. Instead, I looked at the types of organizations that sponsored ads, to see whether certain types of sponsors were more likely than others to be perceived as dishonest.

To do this, I focused on ads that were rated as ‘Fail’ by users. The other rating categories — ‘Love’, ‘Fair’, and ‘Fishy’ — might be interesting for other types of analyses, but they seem more likely to carry emotional biases (e.g., someone might love candidate ‘x’, or think a particular ad was funny and rate it ‘Love’). While the ‘Fail’ category probably has some bias of its own — most likely from users rating any ad that isn’t consistent with their own views as ‘Fail’ — it is still the strongest available indication that users found a given ad dishonest.

I charted the percentage of ads rated ‘Fail’ by the type of organization sponsoring the ad. I did this overall, and also as a 10-day moving average over the period the Super PAC App was active (the moving average smooths out day-to-day noise that makes the trend harder to interpret). Here’s what I found:

[Charts: percentage of ads rated ‘Fail’ by sponsoring organization type, overall and as a 10-day moving average]
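If you want to reproduce these charts from the released data, the calculation itself is straightforward. Here’s a minimal sketch in Python with pandas; the file name and the column names (`rated_at`, `sponsor_type`, `rating`) are my own assumptions for illustration, and the actual data set may be structured differently.

```python
import matplotlib.pyplot as plt
import pandas as pd

# Rough sketch only: the file name and column names below ('rated_at',
# 'sponsor_type', 'rating') are assumptions; the released data set may
# be laid out differently.
ratings = pd.read_csv("superpacapp_ratings.csv", parse_dates=["rated_at"])
ratings["is_fail"] = ratings["rating"].eq("Fail")

# Overall: share of ratings that were 'Fail', by sponsoring organization type.
overall_fail_pct = (
    ratings.groupby("sponsor_type")["is_fail"]
    .mean()
    .mul(100)
    .sort_values(ascending=False)
)
print(overall_fail_pct)

# Over time: daily 'Fail' share per organization type, then a 10-day moving
# average to smooth out day-to-day noise (note this weights each day equally,
# regardless of how many ratings it received).
daily = (
    ratings.groupby([pd.Grouper(key="rated_at", freq="D"), "sponsor_type"])["is_fail"]
    .mean()
    .mul(100)
    .unstack("sponsor_type")
)
daily.rolling(window=10, min_periods=1).mean().plot(
    title="Percent of ads rated 'Fail' (10-day moving average)"
)
plt.show()
```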

A few things immediately jump out, and after giving it some thought, the trends make quite a bit of sense. Here are a few initial observations:

  • Overall, ads sponsored by the official campaigns were the least likely to be rated as ‘Fail’. I would assume this is because any claim officially sponsored by a campaign could be tied directly to a candidate. For that reason, it’s not surprising that the campaigns themselves were either (1) more conservative in the claims they made, (2) more vague in how the claims were worded, so that the candidates could not be pinned down to a specific factual statement, or (3) simply effective at making their points persuasively. In my opinion, it was probably a combination of all three on both sides.
  • Ads sponsored by PACs, Super PACs, and the national parties were rated as ‘Fail’ 8-10% more often than those coming directly from the campaigns. This also makes sense. These organizations, while often closely associated with the candidates, were an arm’s length away, making them a convenient resource for the campaigns’ ‘dirty work’. When the ads went well, the candidates could stand behind them. When there was backlash, unless they had explicitly endorsed the ad, the candidates could easily distance themselves.
  • It’s clear that Super PAC App users overwhelmingly found ads sponsored by non-profits to ‘Fail’ the honesty/accuracy test. I initially found this surprising. After thinking about it, one plausible hypothesis is that these organizations were simply cruder in their execution of the ads they created or sponsored. While many Super PACs were well-oiled machines with plenty of funding, resources, brain power, and experience, it wouldn’t surprise me to learn that many of the non-profits sponsoring ads were not on the same playing field. That could easily result in arguments that were not as well fact-checked, or not as thoughtfully constructed, making them an easy target for criticism. Another obvious hypothesis is that these organizations simply had less to lose and were more willing to say anything — true or not — that they thought would support their objectives.

Looking at the overall trends over time in the 10-day moving average chart, there are a few notable points. First, for the Super PACs/PACs, campaigns, and national parties, there appears to be an oscillation. It’s difficult to isolate time-trend effects, since many of the ads rated on a given day may have been created and aired earlier, but the pattern is interesting nonetheless.

For these three organization types, the proportion of ads rated ‘Fail’ appears to have hit a low right before the first debates in early October, climbed after the debates, dipped again in mid-to-late October around the time of the final debate, and then climbed one last time right before the election.

Putting the above-mentioned potential bias aside, one reason for this could be that it didn’t make sense for these organizations to take big (expensive) risks by airing ads with questionable arguments right before an uncertain event such as a debate. Since the debates could (and did) change the tone of the campaigns and the relevant discussion topics, it likely made more sense to wait and see what happened, and then air the most relevant, and potentially provocative, ads. Another reason could be shifts in who was using the Super PAC App at different points in the campaign.

One last thought for now — I can’t think of a good explanation for the spike in non-profit ‘Fail’ ratings beginning in mid-September. It might be driven by one or two particularly dishonest ads that were aired only in that time period, but it would be interesting to find out.

This post has gotten a bit longer than I initially intended. I was hoping to dig in further to look at which organizations were represented, which parties they supported, and whether there are other variables (potentially not included in the Super PAC App data set) that help explain whether an ad was likely to be viewed as dishonest. I’d also like to get a better sense of what drove some of the time trends. I’ll shoot to get to this sometime this week. In the meantime, I’d love to hear any thoughts readers have. The data is there for the taking.

Thanks Dan.