Facial Recognition Technology Use in Stadiums: Key Takeaways


5 minute read | November.03.2025

The use of facial recognition technology in stadiums is becoming increasingly popular and has the potential to improve the visitor experience. Venues can use the technology to shorten queues at entrances and concession stands, facilitate the pick-up of mobile orders, and tighten security. Large-scale entertainment events have faced heightened security concerns in recent years, including the Manchester Arena bombing in 2017 and the cancellation of three Taylor Swift concerts in Vienna in 2024 due to a terror plot. Facial recognition technology can help stadiums strengthen their safety and security practices by enabling the rapid identification and tracking of individuals suspected of committing offenses.

The use of facial recognition technology also triggers privacy and cybersecurity concerns. Threat actors are more likely to target companies they believe to have large data sets of sensitive personal information, such as biometrics. Unauthorized access to this data puts consumers at risk of identity theft. In addition, use of the technology has previously resulted in individuals being subjected to unlawful discrimination and harassment, and there are growing concerns about companies conducting undisclosed surveillance and using the technology unfairly.

Here are some key takeaways venues should consider when deploying the technology:

There are overlapping privacy regimes

  • U.S. state laws have rules that address biometric use and facial recognition technology in particular. Texas, Washington and Illinois have specific biometric privacy laws. The Illinois Biometric Information Privacy Act is the most stringent, requiring informed written consent for collection of biometrics and providing a private right of action. The Colorado Privacy Act also provides controls on the collection and processing of biometrics of both consumers and employees, including requirements for retention and deletion policies, notice to consumers and informed affirmative consent prior to collection. California’s CCPA covers biometric information (including facial images) and treats it as sensitive personal information when it is processed to uniquely identify a consumer, making it subject to the right to limit use and disclosure.
  • Some cities impose particular requirements: Portland, Oregon, prohibits the use of facial recognition technologies by private entities in places of public accommodation, and New York City requires commercial establishments using the technology to provide disclosures to consumers. The New York City law also makes it unlawful to sell, lease, trade or share the information in exchange for anything of value or to otherwise profit from transacting with the data, and it provides a private right of action if the entity fails to cure a breach.
  • Venues should be aware of European frameworks that could apply, including the GDPR and UK GDPR. Biometric data is special category data under the GDPR, which means it can only be processed in certain limited circumstances. For private businesses, in most cases, explicit consent from data subjects and a data protection impact assessment will be required. In addition, the EU AI Act introduces a new layer of compliance complexity, designating as “high-risk AI” systems intended to be used for remote biometric identification of natural persons and prohibiting AI systems that create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
  • The FTC has expressed concerns about the risks posed to consumers by the compilation of large data sets of biometrics, which can lead to identity theft, and about flaws in facial recognition technology that can trigger unlawful discrimination and false positives.

Regulators are active in this space

  • The FTC brought an enforcement action against a retailer that used facial recognition technology to identify potential shoplifters. The agency emphasized the importance of issuing appropriate disclosures to consumers about use of the technology, continuously monitoring its use to mitigate discrimination risks, training employees on use of the technology, and maintaining appropriate security and deletion protocols.
  • The FTC also took action against a facial recognition software company for making false claims in its marketing related to its model efficacy and lack of racial bias.
  • The New York Attorney General wrote to a venue owner in relation to the use of facial recognition technology to deny entry to lawyers whose law firms represented parties engaged in litigation against the owner’s venues. The Attorney General expressed concern that these activities violated the New York Civil Rights Law.
  • The Texas Attorney General has also brought actions over the use of facial recognition technology, securing a $1.375 billion settlement over the alleged collection of biometrics without consent.

Consumers are taking action to enforce their privacy rights

  • In a 2024 class action lawsuit filed against a sports venue, the plaintiffs alleged that the stadium had collected facial scans of over 100,000 visitors since 2021 and then illegally shared and profited from that biometric data. The complaint alleges that a facial recognition system scanned fans upon entry, checked them against a database and shared the data with a third-party software provider in violation of New York City law.
  • Consumers have also sued over businesses’ use of technology provided by a facial recognition technology company that allegedly created a database of over three billion images by scraping photos from publicly available websites. Customers could upload the image of an individual (such as a suspected shoplifter) to match it against the database and identify that person. In 2020, a class action lawsuit was filed alleging violations of the Illinois Biometric Information Privacy Act. The case settled with plaintiffs receiving a 23% stake in the technology company rather than a cash payment.

Steps to consider before implementing the technology

  • Ensure there is a comprehensive cybersecurity program in place commensurate with the sensitivity of the data being collected.
  • Create and implement a data deletion policy to ensure that biometrics are deleted when they are no longer required.
  • Provide clear and conspicuous notice to consumers attending the venues that the technology is being deployed and provide information about their rights (such as the ability to opt out).
  • Ensure all necessary consents are obtained, if required under local law.
  • Implement training programs for staff so they understand the potential limitations of the technology, particularly in relation to its accuracy, and are able to answer questions from visitors about the use of the technology if asked.
  • Implement audit procedures to closely monitor the efficacy of the technology and make adjustments where needed.
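
The data deletion step above can be automated. Below is a minimal, hypothetical sketch in Python of a retention-based purge job; the one-year window, record schema and function name are illustrative assumptions, not legal guidance, and actual retention periods must follow applicable law (Illinois BIPA, for example, requires deletion once the initial purpose for collection is satisfied or within three years of the individual’s last interaction, whichever comes first).

```python
from datetime import datetime, timedelta, timezone

# Illustrative retention window only; set this per applicable law and
# your published deletion policy.
RETENTION = timedelta(days=365)

def purge_expired(records, now=None):
    """Split biometric records into (kept, deleted_ids) by retention age.

    `records` is a hypothetical schema: a list of dicts with an 'id'
    and a timezone-aware 'captured_at' datetime.
    """
    now = now or datetime.now(timezone.utc)
    kept, deleted_ids = [], []
    for rec in records:
        if now - rec["captured_at"] > RETENTION:
            # In practice: securely erase the template from primary
            # storage and backups, and log the deletion for audit.
            deleted_ids.append(rec["id"])
        else:
            kept.append(rec)
    return kept, deleted_ids
```

A scheduled job of this kind also produces the audit trail regulators have emphasized: each run can evidence that the written deletion policy is actually being enforced.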