Improving private sector privacy for Ontarians: Schwartz Reisman recommendations for data governance in the digital era

Published on Nov 17, 2020

By Jovana Jankovic

Our rights to privacy and to the protection of our personal data are fundamental.

But in a digital and increasingly data-rich world, legislation is struggling to keep up with the ways data is generated, circulated, and used today.

Recently, the Ontario government released a discussion paper on improving private sector privacy for Ontarians. The document sought to “address the gaps in Ontario’s legislative privacy framework, and to establish comprehensive, up-to-date rules that will protect privacy rights and increase confidence in digital services.”

As part of the ongoing discussion, the provincial government sought formal responses from organizations housing legal or technical experts on the issues at hand—like the Schwartz Reisman Institute.

SRI Research Lead Lisa Austin and Director Gillian K. Hadfield submitted a response advising, among other things, a rethinking of existing legal models in light of the complexity and opacity of current data flows and data ecosystems—areas of research for both Austin, a law professor specializing in law, technology, and privacy legislation, and Hadfield, a professor of law and strategic management whose work focuses on new legal and regulatory systems for AI and other complex technologies.

Read Austin and Hadfield’s response (PDF).

Lisa Austin

Austin and Hadfield note that today’s legal models conceive of data as flowing between an individual and an organization, but it’s no longer that simple. There are often third parties involved in data collection and distribution, data is no longer a simple quantifiable piece of information collected through formal processes, and methods of data processing are growing increasingly complex.

“Consumers do not understand how their data is being collected, used, or disclosed,” write Austin and Hadfield—a circumstance that is “not protective of consumer interests.” However, they caution against the discussion paper’s proposal to simply not require consent in some cases. Instead, they propose that when consent is not relied upon, “the collection, use, and disclosure be reasonable.” This model would also require transparency.

“The idea of ‘personal information’ no longer maps onto our data processing practices,” write Austin and Hadfield. We have to fundamentally rethink what “data” is and how it behaves. Some recently developed data-protection mechanisms, like “deidentifying” data, still rely too heavily on the idea of data as a discrete, categorizable, singular thing. This mischaracterization results in a focus “on features of the data alone rather than the analysis that is done on the data or the computing environment within which this is done.”

Gillian K. Hadfield

New research in computer science is looking at making the analysis of data itself private. A popular framework for doing so is called “differential privacy”: essentially, a statistical guarantee that the results of an analysis over a large data set reveal almost nothing about any single individual’s contribution to that set. Each individual contribution simply reinforces patterns already present in the data, so analysts can draw accurate findings without exposing the specific sources of those findings.
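To make the idea concrete, here is a minimal sketch of the most common differential privacy technique, the Laplace mechanism, applied to a counting query. All names and parameters here are illustrative assumptions, not part of Austin and Hadfield’s submission: the analyst publishes the true count plus calibrated random noise, so the published number is useful in aggregate while revealing almost nothing about any one person.

```python
import math
import random

def laplace_noise(scale):
    # Sample from a Laplace(0, scale) distribution via the inverse CDF.
    u = random.random() - 0.5
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def dp_count(values, predicate, epsilon):
    """Differentially private count: true count plus Laplace noise.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so noise drawn with scale
    1/epsilon yields epsilon-differential privacy for this query.
    Smaller epsilon means stronger privacy but noisier answers.
    """
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)

# Illustrative use: count survey respondents aged 40 or over.
ages = [23, 35, 41, 19, 52, 60]
noisy = dp_count(ages, lambda a: a >= 40, epsilon=0.5)
```

The privacy guarantee attaches to the mechanism (the noisy query), not to the data itself—which is exactly the shift Austin and Hadfield describe: protection comes from how the analysis is done, not from scrubbing features of the stored records.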

Overall, Austin and Hadfield’s recommendations to the provincial government assert that we are “trying to shoehorn new approaches into old categories. The categories simply do not fit the techniques.”

Their recommendations?

“There is a great opportunity to develop new data governance models that can simultaneously improve the democratic quality of data governance and unlock the power of big data and new technologies such as machine learning for public benefit.”

