
In Conversation With: Mario Canseco


Each month, PlaceSpeak presents a Q&A with experts in public engagement and civic technology.

This month, we spoke with Mario Canseco, the President of Research Co. Mario has analyzed and conducted public opinion research for more than 15 years, designing and managing research projects for clients across the private and public sectors, as well as non-profit organizations and associations. From 2013 to 2017, Mario established Insights West’s electoral forecasting program, issuing 23 correct predictions of democratic processes in Canada and the United States, including the 2015 Metro Vancouver Transportation and Transit Plebiscite, the 2015 Alberta provincial election, the 2016 United States presidential election, and the 2017 British Columbia provincial election.

We recently heard that the Marketing Research and Intelligence Association (MRIA) was shutting down. Given the string of failures in recent years, are pollsters learning from their mistakes? What must they start to do differently?

I think there have been problems in the past, particularly BC 2013 and Alberta 2012. Calgary was a bit of a strange one, as there were differences in the way the polls covered that election: some of them had Nenshi winning, some of them had Nenshi losing. Municipal races are particularly difficult because of low voter turnout and the difficulty of reaching the people who are actually going to cast a ballot.

But I don’t think the industry has necessarily been wrong on a lot of things. There are all the elections that nobody talks about, and I do think it’s similar to flying an airplane: no one reports when the airplane lands safely; you only hear about it when the airplane crashes.

I think the media has also been particularly lazy in looking into it, and a lot of people have framed the issue as “they got Brexit wrong”, when half the pollsters in the UK got it right and half of them didn’t. Nobody predicted that Trump would win the popular vote, and he didn’t; yet we spent months talking about the difference between the Electoral College and the national vote. People tend to look at that information and make an assertion about the entire industry, but you need to look into specific pollsters, how they’ve done, and what their surveys were meant to do. You’re not calling an election six months before, three months before, or even two weeks before. You need to look as close to election day as you can; that’s the better way to have the right outcome. Ultimately, part of the issue stems from a misunderstanding of what polling is actually doing when it comes to elections, and from poor analysis after the elections of “who got it right and who got it wrong”.

Do you think that there is anything that pollsters should start to do differently at this point?

I think there are several things that we can do. The key here is to write a good questionnaire, and to figure out who’s actually going to be casting a ballot. One of the problems with polling, whether you’re doing it over the phone or through an online panel, is that people may not actually want to vote, but they still want to tell you how they feel.

To me, the challenge is figuring out who is actually going to be casting a ballot, and I think it worked very well in the last elections I did, particularly BC 2017 and the 2016 US election. A lot of people underestimated the level of support for Donald Trump because it seemed like so many people were against him. The poll I conducted had him at a fairly high level compared to the rest of the surveys, and my numbers for him at the national level ended up closer than anybody else’s.

So to me, it’s not only about figuring out who’s really active, who’s saying specific things about the candidates, and who’s well-informed and following the news, because the vote of somebody who watches the news 18 hours a day is worth the same as the vote of someone who doesn’t watch the news and just votes out of spite. That’s the challenge: figuring out who a voter is, and figuring out whether someone who takes a survey is actually going to cast a ballot.

You recently started your own polling firm, Research Co. What makes your approach different?

I wanted to do something where I could look into specific issues which nobody else was looking into. I felt it was time to become more independent. I like the fact that I now have more control over the things that I want to do for the media and for clients, and it’s been a lot of fun to look into different data sources, to explore different ways of analyzing the numbers you’re getting, and to talk to several providers who do that kind of analysis.

So I think this independence has made me a better researcher, and one of the reasons is that there are so many exciting things happening this year. We’ve got the municipal elections in BC, we’ve got a Toronto election, we just did the Ontario election, which went very well, so I’m very happy with that analysis, and Quebec is coming up as well. There’s lots of movement, and it’s an important time to have data that makes sense. I think there’s a visceral reaction towards pollsters because of some of the mistakes of the past, but I don’t think you can deal with that situation by just walking away.

The line between citizen engagement and market research has become increasingly blurred. In your experience, what is the main difference?

I think there are a couple of issues at play. One of them is guaranteeing that the questionnaires are written in the best way possible. A city handling its own affairs, with its own online panel or any other tool where it can ask questions without the support of professional questionnaire writers, can end up with bad data and bad decisions. Sometimes you need that level of detachment: someone who is coming from the outside and asking the right sets of questions. I think there’s a real risk that when a city controls the polls, the questionnaire development, and the data, you wind up in a situation where people “agree” with what you’re doing 90% of the time, and we know that’s simply not true.

The other one would be sample selection. I think there’s a very complex scenario now, particularly with telephone polls having very low response rates and online panels replacing the phones, but also on the citizen engagement front. You might be talking to a group of people who are very highly engaged in their communities, who are really informed, who are more willing to participate, and they may not be representative of the entire population, so we are all facing challenges when it comes to data collection. I don’t think citizen engagement will necessarily replace polling, or polling will replace citizen engagement. I think we are facing different challenges in the way we collect our data and report it, and the main one for me continues to be questionnaire development. You can ask a group of people one question and they may be 50-50; you can ask the same sample a different question and get 100% of them to say no to something. So to me, there has to be some nuance as to how questionnaires are developed to allow people to say how they feel.

Decision-makers sometimes seek to obtain “representative” samples of the population during citizen engagement processes. However, any survey is by default self-selecting (people choose to participate; they are not forced to). What are your thoughts?

It’s always been like that. This is one of the problems the MRIA had early on: the notion that you could only get a representative sample on the phone. From 2008 to 2011, the MRIA consistently said that you couldn’t quote a margin of error on online polls because they were self-selecting.

But I always argued that answering the phone, or not answering it, was a form of self-selection as well. Basing your margin of error on a perfect world, where everyone answers the phone and they’re all in the demographic you’re looking for, is not possible. Ultimately, you’re not in a situation where you can force people to answer something; I don’t think there’s a perfect way of doing it. The main issue we kept having with the phone people was that they kept arguing their system was better because it gave everyone an equal chance. Well, how can everyone get an equal chance when 90% of people hang up on you? That is as self-selecting as it gets.
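For readers who want the arithmetic behind this dispute: the margin of error pollsters quote is the textbook figure for a simple random sample, which assumes every member of the population has an equal, independent chance of responding. A minimal sketch of that calculation, using the usual 95% confidence convention (the function name and the n = 1,000 example are purely illustrative):

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Textbook margin of error for a simple random sample of size n.

    p = 0.5 is the worst case, and is what published polls usually assume;
    z = 1.96 corresponds to the 95% confidence level.
    The formula is only valid if every respondent was an equal-probability
    random draw from the population. When 90% of people hang up, that
    assumption fails, which is the self-selection point made above.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A typical n = 1,000 poll: roughly +/- 3.1 percentage points
print(round(margin_of_error(1000) * 100, 1))  # 3.1
```

Once most contacts refuse to participate, respondents are no longer equal-probability draws, so the quoted figure no longer describes the total survey error, whatever the mode of data collection.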

The problem with the analysis of the self-selection of online panels, and I’ve faced this question many times, is that people assume everyone gets to answer everything, and that’s just not the case. Sometimes you might be asking about politics, or about policy, or about sports, and not everybody is going to get the same survey. You draw a representative sample from within the online panel to do the survey, or you combine one or two different online panels to get a better view, or you try your best to have that balance. Ultimately, we’re not in a situation where every single person you ask is going to reply to your emails or answer the phone, because the system isn’t perfect.
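As an illustration of what drawing a representative sample “from within the online panel” can look like, here is a minimal quota-sampling sketch; the age groups and shares are hypothetical, and real panel providers balance on many more dimensions (region, gender, education) and typically apply weighting afterwards:

```python
import random
from collections import defaultdict

# Hypothetical population targets (e.g. census age shares).
POPULATION_SHARES = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

def quota_sample(panel, target_n, shares, group_of):
    """Draw target_n panelists so each group matches its population share."""
    by_group = defaultdict(list)
    for person in panel:
        by_group[group_of(person)].append(person)
    sample = []
    for group, share in shares.items():
        quota = round(target_n * share)
        members = by_group.get(group, [])
        # If the panel is short on a group, take everyone available;
        # a real provider would top up from another panel or reweight.
        sample.extend(random.sample(members, min(quota, len(members))))
    return sample

# Usage: a toy panel of (id, age_group) tuples
panel = [(i, random.choice(list(POPULATION_SHARES))) for i in range(5000)]
respondents = quota_sample(panel, 1000, POPULATION_SHARES, group_of=lambda p: p[1])
print(len(respondents))  # ~1,000, balanced by age group
```

Combining panels, as described above, amounts to pooling the panel list from more than one source before drawing the sample.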

What are some main considerations that decision-makers need to take into account when engaging online with the community?

With the advent of social media, we live in a world where negativity can be quite destructive. It might make you reconsider your career choices, or even abandon social media altogether! I think that way of looking at things has crept into some of the research, and into some of the ways that governments are interested in having those conversations.

I think the main message would be to ask a question and find a sample that is going to give you the most relevant information for you to make a good decision. Most of my time is spent designing questionnaires, and there’s always the moment of, “Well, if we ask it this way, they’re going to hate it. If we ask it this way, they’re going to love it.” I’m not here to provide reassurance about how you feel about things; I’m here to give you an actual representation of how people feel about things. So if you have a questionnaire that is honest, that is fair, that allows people to be undecided when they have to be, that allows them to say, “I love all of those things about my city or my province or my country, but I also hate all these other things,” then that is the kind of information that allows you to make decisions. If you’re doing a survey that is designed to make everyone happy, then you’re not going to get the data you need, and that’s the moment when problems happen.

There have been cases in which questionnaires were written to lead governments to think that everybody is happy with the situation. They end up making decisions that are quite unpopular, and then they go back to the pollsters or researchers and say, “You told us that this was going to be popular!” Well, when you ask the question that way, it is popular. But when you actually implement it, it’s not going to be as popular.

Is there anything else you’d like to add?

Well, we are always working with the same goal in mind. There’s a difference in the way samples are drawn: citizen engagement definitely requires a very different group of people, who are more likely to participate and discuss things, and it’s a good tool for cities to become more mindful of the situation.

I think the citizen engagement industry and my industry, that is, research and polling, are facing the same dilemma. Samples are very cheap, and everyone thinks they can write questionnaires and build their own online tools. I don’t think it’s necessarily a matter of establishing guidelines and standards; I think that’s one of the reasons why the MRIA is no longer around, because there’s no way to actually police them. Ultimately, it’s just like any other competition. If you do things properly, if your clients care about the work you’re doing, if you’re honest and fair, then you’re going to go a long way. Those who are using these tools to their own advantage and not being honest are not going to be around anymore.
