DATA DRIVEN RESEARCH
Measuring Our Impact
We’re part of a proud tradition of progressives using data and behavioral science to hone campaign strategy. We synthesize existing research across a variety of fields and then run our own experiments to measure the impact of our tactics.
Whether we’re asking someone on the phone to rank how likely they are to vote or talking to someone at their door about what motivated them to vote most recently, we assess the effect of each program using a randomized controlled trial (RCT).
To run an RCT, we randomly assign people in our universe to one of several treatments. One group receives no communication at all – this is the “control group.” By comparing outcomes between the people we contacted and the people in our control group, we can estimate the causal impact of our programs. In most cases, the outcome we measure is whether or not a person voted.
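As a rough illustration of the logic above, the sketch below randomly splits a universe into treatment and control groups and estimates the turnout lift as the difference in voting rates, with a standard error for a two-proportion comparison. All names and numbers here are hypothetical, not drawn from our actual programs.

```python
import math
import random

def run_rct(universe, treat_fraction=0.5, seed=42):
    """Randomly split a universe of voter IDs into treatment and control."""
    rng = random.Random(seed)
    shuffled = universe[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * treat_fraction)
    return shuffled[:cut], shuffled[cut:]  # (treatment group, control group)

def turnout_lift(treat_votes, control_votes, n_treat, n_control):
    """Estimate the treatment effect as the difference in turnout rates,
    plus a standard error for the two-proportion comparison."""
    p_t = treat_votes / n_treat
    p_c = control_votes / n_control
    lift = p_t - p_c
    se = math.sqrt(p_t * (1 - p_t) / n_treat + p_c * (1 - p_c) / n_control)
    return lift, se

# Hypothetical result: 10,000 contacted voters, 10,000 in the control group.
lift, se = turnout_lift(treat_votes=5600, control_votes=5400,
                        n_treat=10000, n_control=10000)
print(f"estimated lift: {lift:.1%} (SE {se:.1%})")
```

Because assignment is random, any difference in turnout beyond sampling noise can be attributed to the outreach itself rather than to pre-existing differences between the groups.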
In our experimental work, we’re trying to answer three main questions:
- How much can we boost turnout with all available tools?
- How effective are different methods of contacting voters?
- Do different tools work better on certain people than others?
2017 Focus
While field programs remain our flagship, we have also seen results from various forms of media, and we will continue deploying and analyzing these additional methods to determine the most effective and cost-efficient means of contacting voters.
Research Partners
To further our program, we have partnered with researchers at the University of Chicago and Northwestern University to learn how we can make our programs more effective.
With the University of Chicago’s Becker Friedman Institute, we are studying how to formulate effective nudges – indirect suggestions that encourage people to vote – for use across all aspects of our programs.
With Northwestern University’s Kellogg School of Management, we are reviewing our 2016 data to identify subgroups that responded differently to our treatments, so we can better target our efforts in future elections. We’re also identifying the qualities of our most effective field staff in an attempt to replicate as many of those qualities as possible when designing and staffing future programs.
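The subgroup analysis described above amounts to computing the turnout lift separately within each subgroup. Here is a minimal sketch of that idea, assuming the data has been flattened into (subgroup, treated, voted) records; the record layout is an assumption for illustration, not our actual data format.

```python
from collections import defaultdict

def lift_by_subgroup(records):
    """records: iterable of (subgroup, treated: bool, voted: bool) tuples.
    Returns {subgroup: turnout lift}, where lift is the treatment-group
    turnout rate minus the control-group turnout rate."""
    # Per subgroup: [treated voters who voted, treated total,
    #                control voters who voted, control total]
    counts = defaultdict(lambda: [0, 0, 0, 0])
    for subgroup, treated, voted in records:
        c = counts[subgroup]
        if treated:
            c[0] += voted
            c[1] += 1
        else:
            c[2] += voted
            c[3] += 1
    # Only report subgroups with members in both arms of the experiment.
    return {g: (tv / tn) - (cv / cn)
            for g, (tv, tn, cv, cn) in counts.items()
            if tn and cn}
```

A subgroup whose lift is well above the overall average is a candidate for more intensive outreach in future cycles, though small subgroups need a formal test before the difference can be trusted.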