On Monday, MyDemocracy.ca—the website where the government is seeking input from the public on electoral reform—went live. Virtually none of the questions touches specifically on voting systems; instead, they focus on democratic values and citizens’ broad, theoretical preferences for elections and their government.
The site was built by Vox Pop Labs, a Toronto-based company “specializing in data science and civic technology,” at a cost of $326,500. Maclean’s spoke with Clifton van der Linden, founder and CEO of the company, about how the survey was crafted and what insights it aims to draw from Canadian voters.
Q: How did Vox Pop Labs become involved in MyDemocracy.ca?
A: We were commissioned by the government of Canada to develop an engagement platform that was focused on democratic values specifically, and to try and reach a broader audience than conventional modes of engagement may be able to reach. My company has done many presentations about our work in digital engagement, and it was on the basis, I think, of our reputation in that space that the government of Canada sought us out to see if this kind of initiative might be possible.
Q: What exactly did your company do in shaping this survey? How do you go about creating something like this?
A: Once we had the parameters of the survey, we worked with an advisory panel of prominent scholars in areas such as research design, survey methodology and electoral politics. We developed a survey that drew from the existing literature on electoral reform in Canada and tried to identify various values that structure that discussion. We then built a very large survey with a number of questions—many more questions than you see included in MyDemocracy.ca—and field-tested those questions with random panels of Canadians to ensure that we were controlling for biased survey design or any other deficiencies.
Once we did all that field testing, we then went to quite a large panel of Canadians—more than 3,000—and asked them the remaining questions and used their responses to generate a cluster analysis. A cluster analysis finds correlations in responses to a survey and identifies how different groups of respondents cluster in a specific space.
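Vox Pop Labs has not published the specifics of its clustering method, but the idea described above—finding groups of respondents whose answers correlate—can be sketched with a minimal k-means example. The synthetic Likert-scale data, the number of clusters, and k-means itself are illustrative assumptions, not details from the interview.

```python
import random

random.seed(0)

# Hypothetical 5-point Likert responses (1 = strongly disagree .. 5 = strongly agree)
# to three value questions, with two loose "archetypes" baked into the fake data.
respondents = (
    [[random.choice([4, 5]), random.choice([1, 2]), random.choice([4, 5])] for _ in range(20)]
    + [[random.choice([1, 2]), random.choice([4, 5]), random.choice([1, 2])] for _ in range(20)]
)

def kmeans(points, k, iters=50):
    """Minimal k-means: assign each point to its nearest centroid, then
    recompute each centroid as the mean of its assigned points."""
    centroids = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])),
            )
            clusters[nearest].append(p)
        centroids = [
            [sum(dim) / len(c) for dim in zip(*c)] if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

centroids, clusters = kmeans(respondents, k=2)
for i, c in enumerate(clusters):
    print(f"cluster {i}: {len(c)} respondents, centre {[round(x, 1) for x in centroids[i]]}")
```

Each resulting centroid plays the role of an "archetype": a typical position on the value dimensions around which a narrative and label can be written.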
If you go to MyDemocracy.ca and get your initial result, you can see the themes that emerged and how you are positioned on those themes, and also the distribution of Canadian public opinion. Once we had those clusters, which we represent as archetypes in the tool, we were able to look at their positions on issues and develop a narrative around them and also provide a label for them that spoke to some of their positions.
Q: Why did you design the survey the way you did? Why was it designed to turn on broader, almost fuzzier, ideas like democratic values? How was that an effective way to get at attitudes toward electoral reform?
A: That was what we were commissioned to do by the government. That was the mandate that we were given: to focus on democratic values and not on electoral systems. I can’t speak on behalf of the government, but I think the minister [Maryam Monsef] has spoken to the view she takes on this, that democratic values help inform ideas around electoral systems.
Q: So the tool would not, for instance, tell a given voter that proportional representation best fits what they say are their priorities. It doesn’t match up directly with voting systems, is that right?
A: That’s right. A lot of these themes are themes that you’ll also find in the literature on electoral systems. But our explicit mandate here was to focus on democratic values and not electoral systems.
Q: How are those archetypes assigned to people? What’s the cluster of indicators that tells you someone is more likely to fit into one category or another?
A: These archetypes are probabilistic. So they take the aggregation of your responses and make a probabilistic determination as to which archetype you’re most closely aligned with. We try to use that aspect to promote broader engagement, especially engagement online. That initial archetype label and narrative are shareable items that might give people an interesting point of entry into the discussion. Our hope is that users will read that and then continue on through the application to look at some of the more substantive analysis about where they’re situated within these various themes in the discussion on democratic values.
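One simple way to make such an assignment probabilistic—purely an illustrative assumption, since the interview does not describe the actual model—is to measure a user's distance to each archetype's centroid and convert those distances into probability-like scores with a softmax. The archetype names, centroids, and temperature parameter below are all hypothetical.

```python
import math

# Hypothetical archetype centroids on three value dimensions (1-5 scale).
ARCHETYPES = {
    "Guardian": [4.5, 1.5, 4.4],
    "Innovator": [1.6, 4.5, 1.5],
}

def archetype_probabilities(responses, temperature=1.0):
    """Turn Euclidean distances to each archetype centroid into scores that
    sum to 1, via a softmax over negative distance (an illustrative choice)."""
    dists = {name: math.dist(responses, centre) for name, centre in ARCHETYPES.items()}
    weights = {name: math.exp(-d / temperature) for name, d in dists.items()}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

probs = archetype_probabilities([4, 2, 5])
best = max(probs, key=probs.get)
print(best, round(probs[best], 2))  # the most closely aligned archetype
```

The user is then shown the archetype with the highest score, while the full set of per-theme positions remains available for the deeper analysis the tool offers.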
Q: What’s your response to criticism that some of the questions seem designed to elicit a particular answer? There are a lot of questions that use phrasing like “even if” that sound like they’re designed to lean people one way or the other. What’s your response to that kind of criticism?
A: First of all, I would say that a lot of these questions reflect questions in the academic literature. The questions were developed by a team of academics who are specifically trained to develop questions to get at real opinion, and in an objective, dispassionate way. It’s also important to realize that no single one of these questions determines one’s position on a particular theme. The whole point is to take several different items, frame them in different ways and see what kind of variance you get in the response. And once you do that you get a better sense of what a person’s true position is on that issue. We include the “even if” questions specifically to understand the extent of one’s agreement with a particular item.
Q: It sounds like triangulating toward an answer by coming at it from different directions and saying, “Does this still hold for you? How about now?” Is that what you’re saying?
Q: Is it true that people can vote more than once, either from different devices or the same one?
A: It’s not really voting.
Q: Responding, then?
A: Yes. People can respond or they can engage with the initiative from more than one device.
Q: How do you prevent stuffed ballot boxes or weigh the results, given that?
A: We have a series of techniques that identify possible entries that are the same user multiple times, or people who are actually trying to skew the results. We have safeguards that are already in place to identify and control for these kinds of multiple entries. In addition to that, once we have the dataset we have a series of screening measures that we apply to ensure that the observations in the dataset are consistent with unique respondents.
Q: Is it possible for you to explain in a way that laypeople will understand how you do that? How you would control for, say, proponents of proportional representation who are highly motivated and vote over and over, or other factors that might skew it?
A: Well, there are the common instruments that you would use—things like tracking IP addresses and using cookies and devices that will allow you to detect if a user has used the application before.
Q: If that were the case, would you take the first entry from someone and discount the later ones?
A: Typically, when we’ve identified observations that are assumed to be from the same user, we take the first set of responses. We make a theoretical assumption that the first time they use the application they are really giving an earnest and honest go, and then after that they are likely testing it or revisiting it to see how small adjustments may change their outcome.
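The keep-first rule described here is straightforward to illustrate. In this sketch, `user_key` stands in for whatever combination of IP address, cookie and device signals links entries to one respondent; the record format and field names are assumptions for the example.

```python
from datetime import datetime

# Hypothetical submission records: (user_key, timestamp, responses).
submissions = [
    ("user-a", datetime(2016, 12, 5, 9, 0), {"q1": 4}),
    ("user-b", datetime(2016, 12, 5, 9, 5), {"q1": 2}),
    ("user-a", datetime(2016, 12, 5, 9, 30), {"q1": 1}),  # repeat entry: discarded
]

def keep_first(entries):
    """Keep only the earliest submission per user_key, mirroring the
    'first attempt is the earnest one' assumption described above."""
    first = {}
    for user, ts, responses in sorted(entries, key=lambda e: e[1]):
        first.setdefault(user, responses)  # later entries are ignored
    return first

print(keep_first(submissions))  # user-a keeps the 9:00 responses
```

Later entries from the same key are treated as testing or revisiting, and only the first set of responses enters the analysis.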
Q: Is there anything else you’d like to add?
A: I think we’re very focused on the elements of the initiative, which is great. And we welcome scrutiny of our techniques and our methods. But I think we’re missing out a bit on the bigger picture if we don’t go into the fact that this is in many ways a really innovative way for the government to engage with and consult the public on an important issue. I think that we will reach and engage with Canadians who would never have been engaged through the conventional outreach processes the government usually relies on. Their voices will be reflected in the work that we do in a way that they otherwise wouldn’t be.