
Redefining harm in the information economy
By Julia Dyck
“In a world where data is the lifeblood of the economy, figuring out how to protect people from data harms is paramount,” says Ignacio Cofone, associate professor and Canada Research Chair in AI Law & Data Governance at the Faculty of Law. “Current laws and regulations fail to protect us because they are built on outdated ideas that trap lawmakers, regulators, and courts into wrong assumptions about privacy.”
Cofone’s research centres on online data governance, with particular emphasis on law reform in response to technological and economic advancements. He is a 2023 laureate of the McGill Principal’s Prize for Outstanding Emerging Researchers and an affiliated fellow at the Yale Law School Information Society Project.
In 2020, he authored the Office of the Privacy Commissioner of Canada’s report setting out the direction of the most significant reform to federal privacy law of the last two decades. This year, the Parliament of Canada is discussing a reform proposal that incorporates most of the report’s principal recommendations.
In his new book, titled The Privacy Fallacy: Harm and Power in the Information Economy (Cambridge University Press, December 2023), Cofone proposes a novel legal framework for properly recognizing the value of privacy in the age of AI.
Cofone argues that consent provisions, long considered a cornerstone of privacy protection, are insufficient to prevent AI and privacy harms. Much of this harm arises from inferences made about individuals and groups, where individual agreements become irrelevant. For instance, one of Cofone’s projects explores how the US correctional system uses AI to predict whether inmates are likely to re-offend when released. Because the algorithms are trained on arrest data in which racialized people are disproportionately represented, the AI mistakenly flags these people as “risky” more frequently, magnifying existing systemic inequalities.
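To make that mechanism concrete, here is a minimal, hypothetical simulation, written in Python and not drawn from Cofone’s project or any real dataset: all group names, rates, and thresholds are illustrative assumptions. It shows how a risk score built on biased arrest records can wrongly flag one group as “risky” far more often, even when the two groups’ underlying behaviour is identical.

```python
# Illustrative sketch only: a toy simulation of how a risk score trained on
# biased arrest data produces more false positives for an over-policed group,
# even when the true reoffence rate is the same for both groups.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical assumption: two groups with an identical true reoffence rate.
group = rng.integers(0, 2, n)            # 0 = group A, 1 = group B (over-policed)
reoffends = rng.random(n) < 0.30         # ground truth, same distribution for both

# Biased proxy: recorded arrests depend on policing intensity, not just behaviour.
policing_rate = np.where(group == 1, 0.9, 0.5)   # group B is policed more heavily
prior_arrests = rng.binomial(3, policing_rate * (0.2 + 0.6 * reoffends))

# A naive "risk score" flags anyone with 2+ recorded arrests as risky.
flagged = prior_arrests >= 2

for g, name in [(0, "group A"), (1, "group B")]:
    mask = (group == g) & ~reoffends     # people who would NOT reoffend
    fpr = flagged[mask].mean()           # share of them wrongly flagged as risky
    print(f"{name}: false-positive rate = {fpr:.1%}")
```

Under these made-up numbers, non-reoffenders in the over-policed group are flagged roughly three times as often as those in the other group, illustrating how the bias in the input data, not the individuals’ behaviour, drives the disparity.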
Risks extend beyond the criminal justice system to the entire economy. “Our privacy is increasingly besieged by tech companies,” he highlights. “The number of data breaches and harms to individuals, to groups, and even to democracy that we have witnessed in the last decade shows us that we simply cannot live with that system anymore.”
Under current regulations, organizations have little incentive to prevent harm. Many privacy scandals involve companies that obtained users’ required consent and fulfilled every procedural requirement. When a data breach occurs, people can typically pursue legal action only if it leads to outright identity theft or financial or reputational harm. Cofone advocates a more robust system that holds companies accountable for the consequences of their data practices, suggesting that they should be obligated to provide reparations when a harmful privacy breach occurs.
“Once we figure out how to conceptualize the idea of privacy harm, which is what this book attempts, we can build a more powerful liability system that can create accountability for data practices in ways that the current system cannot,” he explains. Cofone’s proposed reforms aim to shift the focus from collecting superficial consent to actively protecting people from online harms.
While the rapid pace of AI innovation might seem like a major challenge, Cofone argues that policymakers should focus on the economic and social relationships that underlie the technology, not the technology itself, to design laws that are future-proof. “Striking the right balance between specificity and durability is crucial: regulations should not be overly ambiguous to the point of lacking practicality, nor should they be excessively specific and limited in utility as technology advances.”
Learn more about The Privacy Fallacy: Harm and Power in the Information Economy: mcgill.ca/x/U7m