
Facial Recognition Technology Used by Private Firms in Public Spaces: Privacy, Surveillance and Reform
By Jeannie Marie Paterson
Introduction
Facial Recognition Technology (‘FRT’) has moved from science fiction to reality. FRT is being used at ePassport gates, to track missing children, and by police to identify suspects in a crowd. In 2022, Choice revealed that retailers were using FRT at the store front, and some clubs, pubs and casinos have indicated they will be using FRT to identify problem gamblers. Supermarkets are using some version of computer vision, the technology underlying FRT, to monitor customers using self-checkouts. The rising use of FRT has been controversial because, while the technology appears to offer opportunities for more effective policing and security, it also raises profound concerns about human rights and civil liberties. These concerns have led to investigations by the Office of the Australian Information Commissioner (OAIC), and proposals for urgent law reform.
Professor Jeannie Marie Paterson is Professor of Law (Consumer Protection and Emerging Technologies), Co-Director of the Centre for AI and Digital Ethics (CAIDE), and Co-Leader of the Digital Equity and Access Research Program, Melbourne Social Equity Institute (MSEI), The University of Melbourne.
Understanding FRT
The term ‘FRT’ may be used to cover different processes:1
• One-to-one facial matching (verification) involves matching an image of a face against a verified image of the identity being claimed, as in ePassport gates, phones, and some workplace entrance security systems.2
• Object recognition involves using computer vision to identify a given object rather than to verify identity, for example monitoring poachers,3 feral animals,4 and unlawful mobile phone use while driving.5
• Emotion detection represents an attempt to identify emotion, character, or illness from biometric data such as faceprints, gait or iris scans.6 It is controversial and inaccurate. Former Human Rights Commissioner, Ed Santow, describes this use of FRT as ‘junk science’.7
• One-to-many facial matching (identification) involves identifying a person from a crowd. It is the form of FRT used for crime prevention and security purposes.
The remainder of this article discusses one-to-many FRT.
One-to-many FRT relies on neural networks trained on large data sets to distinguish different individuals’ faces. It identifies the unique geometry of an individual’s face and compares that ‘faceprint’ against a dataset of faceprints to make a match.8 In theory, this allows a user of the FRT to identify a particular person in a crowd. For example, FRT might be used in ‘real time’ to scan a crowd at a sporting match for people identified in a police database as having been convicted or suspected of crimes.
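To make the matching step concrete, the following is a minimal, purely illustrative sketch in Python. It assumes the faceprints are already available as numeric vectors (in a real system they are produced by a trained neural network); the watchlist, identities and threshold are invented for illustration and do not reflect any particular vendor’s system.

```python
# Minimal sketch of one-to-many matching over face "embeddings" (faceprints).
# In a real system a neural network converts each face image into a numeric
# vector; here the vectors are simulated with random numbers purely to show
# the comparison step. Names and the threshold are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A "watchlist" of known faceprints, keyed by identity.
watchlist = {f"person_{i}": rng.normal(size=128) for i in range(5)}

# A faceprint extracted from a probe image (e.g. a frame of CCTV footage),
# simulated here as a noisy copy of one watchlist entry.
probe = watchlist["person_3"] + rng.normal(scale=0.05, size=128)

THRESHOLD = 0.9  # similarity above which the system declares a match

scores = {name: cosine_similarity(probe, faceprint) for name, faceprint in watchlist.items()}
best_name, best_score = max(scores.items(), key=lambda kv: kv[1])

if best_score >= THRESHOLD:
    print(f"Match: {best_name} (similarity {best_score:.3f})")
else:
    print("No match above threshold")
```

The threshold illustrates a basic design trade-off: set it lower and the system flags more true matches but also more false ones, which feeds directly into the accuracy concerns discussed below.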
FRT can also be used ‘after the fact’ to identify a person from an otherwise anonymous image, for example to identify a shoplifter caught on CCTV cameras. This use of FRT has brought the company Clearview AI to prominence.9 Clearview AI’s app allows law enforcement and security companies to upload an image of a ‘person of interest’ and have it compared against the Clearview database for the purpose of identification. The images used in the app come from photos scraped from social media and websites. The OAIC states that Clearview AI has provided ‘trials of the facial recognition tool to some Australian police forces’.10
Concerns about FRT
The technology behind FRT will no doubt continue to improve, but currently there are concerns about its accuracy, fairness and effect on privacy.11
FRT is less effective in ‘noisy’ conditions, namely when it is used to identify an individual in a crowd of faces, where people are moving so the image is unclear, and in low light.12 Inaccurate, incomplete or unrepresentative data used to train the FRT will taint the outputs of the system, potentially leading to unlawful discrimination.13 If the training data sets are demographically unbalanced, for example featuring predominantly white male faces, then the model trained on this data will be less accurate for women and people of colour.14 Less accuracy means more misidentifications and potentially more people being wrongly treated as suspects in a crime or excluded from public venues. Of course, it may be said in response that people are also biased and poor at remembering faces. However, embedding this bias in automated systems such as FRT carries the risk of magnifying already existing discrimination on a colossal scale.
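One practical implication of this concern is that the accuracy of an FRT system should be evaluated per demographic group rather than only in aggregate. The short sketch below uses invented data and hypothetical group names simply to show how an aggregate figure can conceal uneven performance.

```python
# Illustrative sketch: auditing a matcher's error rate across demographic groups.
# The data here is invented; the point is only that accuracy should be measured
# per group, because a system trained on unrepresentative data can look
# acceptable overall while failing particular groups.
from collections import defaultdict

# (demographic_group, was_match_correct) pairs from a hypothetical evaluation set
results = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, correct = defaultdict(int), defaultdict(int)
for group, ok in results:
    totals[group] += 1
    correct[group] += ok

for group in totals:
    print(f"{group}: accuracy {correct[group] / totals[group]:.0%}")
# group_a: accuracy 75%
# group_b: accuracy 25%
```

In this invented example the overall accuracy is 50 per cent, but that single figure masks a system that works reasonably well for one group and poorly for the other.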
Even if the system is accurate, there are still concerns about privacy and surveillance raised by FRT.15 Sometimes people say they are not worried about facial recognition and other surveillance technologies because they have ‘nothing to hide’. But privacy is essential to human flourishing, allowing us to take risks, be creative, and experiment with our identity. If we perceive ourselves as being watched, that may have a chilling effect. Widespread surveillance gives significant power to the state and may act to curtail freedoms of speech, assembly, and expression.
The Law of FRT
Human Rights
Concerns have repeatedly been raised that the use of FRT risks breaching human rights law.16 In R (Bridges) v Chief Constable of South Wales Police,17 the UK Court of Appeal unanimously held that a facial recognition system used by the South Wales Police force was unlawful. The court identified problems arising from:
• a lack of guidance as to when the force could use the FRT, which meant the discretion was too broad, having regard to Article 8 of the European Convention on Human Rights;18
• a failure to take reasonable steps to enquire whether the software was biased on racial or sex grounds, contrary to the Public Sector Equality Duty in s 149 of the Equality Act 2010 (UK).19
We do not have the same comprehensive human rights regime in Australia, but we do have protections against discrimination.20 The use of FRT by private firms is additionally subject to the requirements of privacy legislation applying throughout Australia.
Privacy law
Privacy laws in Australia apply to limit the collection and use of personal data. The Privacy Act 1988 (Cth) applies to ‘Australian government agencies; private entities … with an annual turnover greater than $3 million; and some other entities in limited circumstances’.21
The Privacy Act applies to protect an individual’s ‘personal information’.22 The OAIC considers that ‘personal information’ includes the data used in FRT and other biometric identification systems.23 The OAIC has stated that ‘voice print and facial recognition biometrics are personal information because they collect characteristics that make an individual’s voice or face unique’.24 FRTs collect unique biometric information about an individual for the purpose of using the faceprints collected to identify a specific individual.25
The Privacy Act rests on thirteen Australian Privacy Principles (APPs).26 Under the APPs a regulated entity:
• may only collect personal information that is reasonably necessary for one or more of its functions or activities (APP 3.1 and 3.2);
• may only collect sensitive information if the individual consents to the sensitive information being collected, unless an exception applies (APP 3.3);
• must take reasonable steps to notify an individual when it collects their personal information (APP 5).
‘Sensitive information’ under the Privacy Act covers categories of highly personal information, including biometric information and templates.27 The Information Commissioner has held that an image of a face, and the faceprints used for FRT, will be sensitive information.28 This means that, in most instances, consent will be required to use a person’s face for FRT, a level of protection that is itself controversial.
Consent under the Privacy Act may be ‘express or implied’.29 The Act does not set out requirements for how consent is obtained.30 It is often unclear whether notice or consent processes used to justify data collection give rise to a contract.31 Notably, in contract law, signature is enough to signal consent to the terms of a signed contract, whereas notice is needed to show consent to incorporated terms in unsigned agreements.32 These are formal rather than substantive requirements, ie it is not necessary to show a party read or understood the terms to be bound, although where reliance is placed on notice, proportionately more must be done to bring attention to unusual terms. By contrast, the Consumer Data Right imposes a robust standard for consent, requiring consent to be ‘voluntary, express, informed, specific as to purpose, time limited and easily withdrawn.’33 A similar standard of consent is required by the EU’s GDPR.34 The Australian Information Commissioner has indicated this robust approach to consent represents best practice under the Privacy Act.35
Findings by the OAIC
The Information Commissioner has considered the use of FRT by firms on several occasions. In applying the requirements of the Privacy Act, key questions concern (1) the justifications that will qualify as reasonably necessary for the firms’ activities; and (2) the steps that will satisfy the requirements of consent.
Clearview AI
Clearview AI provides its FRT service by using images scraped from the internet and social media. In a joint investigation with the UK Information Commissioner’s Office,36 the Australian Information Commissioner held that Clearview AI had contravened the Privacy Act.37 Among other failures, Clearview AI collected and used images without consent (APP 3.3)38 or notice (APP 5).39 Consent could not be inferred from individuals making the images available on the web, the publication of a privacy policy, or the presence of an option to opt out in circumstances where the onus was on the individual to find out about the practices of the entity collecting the information.40
7-Eleven
7-Eleven used FRT to collect demographic information about consumers who entered an in-store competition using a tablet and also to determine if a consumer was entering multiple times.41 The possibility of biometric and photographic information being collected was referred to in the 7-Eleven Privacy Policy.42 The stores also displayed a notice at entry:
Site is under constant video surveillance. By entering the store you consent to facial recognition cameras capturing and storing your image.43
The Information Commissioner held that 7-Eleven’s conduct contravened the Privacy Act 1988. The use of FRT was not necessary for 7-Eleven’s stated business purpose of understanding and improving customers’ experience, as feedback could be obtained in other ways (APP 3.2).44 7-Eleven also collected individuals’ sensitive information without consent (APP 3.3).45 Consent could not be inferred merely from the notice at the entry to the store46 or the use of a privacy policy.47
These sources of information did not provide an adequate explanation of the use of biometric information.48 The information was not displayed in the vicinity of the tablets.49 Nor did they address the use of FRT in connection with the tablets.50
Kmart, Bunnings and the Good Guys
The Information Commissioner is currently investigating the use of FRT by Kmart, Bunnings and the Good Guys. The firms have said they are using FRT to protect their staff by identifying previously abusive customers, who are banned from entering the store, and that they will not retain any images collected of consumers.51
Whether the conduct is reasonably necessary for the conduct of their business (APP 3.2) is likely to depend on factors such as the severity of the problem being addressed, the availability of less privacy-intrusive methods of responding to the problem and, as seen in the South Wales Police case discussed above, the protections against bias and governance mechanisms placed around the usage of the FRT.
The case also puts to the test the question of whether consent (APP 3.3) can be inferred from consumers entering the store in circumstances where there is a notice about the use of FRT at the entrance. In contract law, more prominent notice (e.g. ‘… printed in red ink with a red hand pointing to [the unusual term] …’) must be provided where terms are unusual.52 A small, discreetly placed notice may not satisfy this requirement.53 The 7-Eleven decision suggests a close nexus will be required between the notice and the conduct from which consent is inferred, as well as clear, relevant information about the information being collected and the uses made of it.
Reform
Uncertainties in the law applying to FRT and concern about its scope have led to suggestions for reform.
The Commonwealth Attorney-General’s Department recently released the Privacy Act Review Report. The Report recommends that the ‘quality of privacy collection notices and consents obtained from individuals should be improved’.54 It also recommends that the definition of consent be amended to specify that consent must be ‘voluntary, informed, current, specific, and unambiguous’.55
Some argue that even robust consent standards are insufficient to deal with the risks of technologies such as FRT. People simply don’t have the time or expertise to read privacy notices, and even when they do, they may still not understand the risks of consenting to biometric surveillance.56 The Review also proposes a new ‘fair and reasonable’ test for entities handling personal information.57

Is this enough? Some argue that there should be specifically directed rules, or even a ban, on uses of data that are highly invasive of personal privacy.58 The draft EU AI Act contains a ban on real-time biometric surveillance in public places.59 The Human Technology Institute at the University of Technology Sydney has proposed a model law on FRT, which uses a risk-based human rights approach.60 High-risk uses of FRT would be banned except for law enforcement or national security purposes, academic research, or when the regulator provides authorisation.61
Conclusion
We don’t know how technology will develop in the future. The one certainty is that data-driven ‘AI’ technologies are only going to become more prevalent. A good understanding of the human rights and ethical concerns raised by any proposed use of these technologies will therefore be necessary, both to give sound advice to clients or governments about their use and to ensure their responsible and effective regulation.
Jeannie Marie Paterson specialises in the areas of contracts and consumer protection, along with the law and regulation of emerging technologies.
Jeannie is the co-director of the Centre for AI and Digital Ethics, a collaborative, interdisciplinary research, teaching and policy centre at the University of Melbourne involving the faculties of Computing and Information Systems, Law, Arts and Science, and Co-leader of the Digital Ethics research stream at the Melbourne Social Equity Institute, an interdisciplinary research institute focused on social equity and community-led research.
Jeannie holds a current legal practising certificate and regularly consults to government and not-for-profit organisations, as well as speaking regularly on issues around AI, ethics and consumer protection.
Endnotes
1 ‘Delhi Police Tells High Court It Requires More Information from Centre on Missing Children’, Firstpost (23 April 2018).
2 ‘Facial Recognition’, NSW Police Force (Web Page).
3 Jarni Blakkarly, ‘Kmart, Bunnings and The Good Guys using facial recognition technology in stores’, Choice