In Conversation With Ian McKinnon
Each month, PlaceSpeak presents a Q&A with experts in urbanism, public engagement, and civic technology.
This month, we spoke with Ian McKinnon, Chair of the National Statistics Council of Canada and former President of Decima Research. He has worked extensively with private and public sector clients to design and analyse survey research for both market research and policy issues, and has been a leading voice on the importance of reinstating Canada’s long-form census. Ian’s decades of experience in public opinion research and statistical analysis bring new perspectives to citizen engagement in the digital age.
1. The line between public consultation and market research is becoming increasingly blurred. In your experience, what is the main difference between citizen engagement and market research?
Market research assumes that the only interaction between the people you’re surveying and the client, vendor, or manufacturer is a simple purchase decision.
Citizen engagement, particularly at the community level, assumes that the relationship is much more complex: there is feedback, and people will do more than make a simple buy-or-not-buy decision. Rather, they will choose to engage in varying degrees with either the government or the community, whatever that non-commercial entity is. In fact, there’s an interesting foreshadowing of this in Albert O. Hirschman’s seminal work Exit, Voice, and Loyalty, which discussed the engagement of people both as citizens and as consumers. Economic theory had always taken a very straightforward view that the relationship was commercial, but Hirschman looked in great detail at voice and loyalty as attributes of the relationship between the individual and an organization.
2. What do you think some of the implications are for how people engage with their local governments?
I think that one of the big opportunities for organizations is that people increasingly feel that they should be engaged, that their voices should be heard. Rather than a series of one-off purchase decisions, I think there is an increased desire to be able to feed back their concerns and priorities through an iterative and ongoing process.
3. How should decision-makers adapt to new challenges in gathering statistically relevant data, such as a decline in landline use and increased opt-out methods?
You’re really talking about fundamental changes to the underlying premises that classical survey research depended upon. You defined the universe, drew a sample from it, and if you worked very hard you could get a significant proportion of those people to respond. There was always a question about bias – in other words, the difference between those who responded and those who didn’t. Private survey research firms used to be able to get 50-60% response rates with effort. When they’re getting 3 or 4% response rates, it’s dramatically different. You get data, but it is very hard to ensure that the people who choose to respond are representative of the population. These changes in attitudes make it harder to get a scientific, projectable sample that is robust.
One of the real differences is that people are starting to use panels for surveys. If you look at the coverage of the US election, both academics and people like Nate Silver will talk about panels. The LA Times is running one that is getting a lot of attention, and it’s getting results that are slightly different from those of most other survey groups. The really interesting thing about going back over and over again to the same group of people is that you can have a great deal of confidence in the trends. For example, if Clinton’s doing better in that panel from one week to another, she really is, and it’s not a sampling artifact. You can be very confident about the trends over time.
Take this back to a public sector organization or government. In panels, you’re talking to the same people, or to an expanding pool of people with a core group that is constant over time. If organizations have reference panels whom they talk to on a continuous basis, they can have confidence that the changes they see in those consultations are real.
4. What are some main considerations that public sector organizations need to take into account when engaging online with residents?
Because people are now finding it easy to express opinions online, they increasingly value organizations that they see as listening and responding. It used to be very indirect – if there was a write-in campaign about something, and it managed to become high-profile enough to hit the news, then you would be aware of it. Now, there are very low-cost and much more time-sensitive ways of getting back to people. You can both gather views and respond to them, developing a series of iterative solutions.
Often in public policy, you put options in front of people, you consult, and then make a decision and implement. That’s a one-off process. People increasingly expect that there will be ongoing adjustments and responses to changed circumstances and priorities. Well-functioning organizations are now assessed by how they will make incremental, iterative changes responding to changed demand or circumstances, instead of large one-off changes.
5. What is the best possible outcome of using digital technologies to gather public opinion?
Firstly, I don’t think you should view it as simply gathering public opinion. What is revolutionary about our current communications technology is both its timeliness and the potential that it opens up. We use the term “conversation”: it’s opening up dialogue that runs both ways. An institution can say, “We’re thinking about something.” People can respond to that, and the institution can then respond in turn. Or the institution can take incremental suggestions, responses, and ideas from people and engage in a manner that is real-time or very rapid, and above all responsive rather than spasmodic. That creates a situation where people are used to, and expect to see, the results of their engagement with the service provider in an ongoing, iterative manner. That’s really what’s quite different.
6. Is there anything else that you’d like to add?
There’s an analogy that I like to use about how institutions (public or private) ought to use research and engage with their clientele, the people they serve. That is, you shouldn’t look at feedback data as a set of instructions. Instead, what you really want is to create a map of preferences, external realities, and people’s current behaviours. You can then go back and say to people, “Here is what we heard. Here are our suggestions based on that – how do you respond?” It’s not about receiving instructions, but rather about better understanding the terrain in which you’re operating.