In her first major remarks as Acting Chairwoman of the Federal Trade Commission (FTC), Rebecca Kelly Slaughter outlined her enforcement priorities under the new administration in a conversation with the Future of Privacy Forum, including oversight and enforcement strategies directly impacting big data and artificial intelligence.
Slaughter underscored last month’s digital photo album settlement—which required the company to destroy all facial recognition models and algorithms built upon improperly collected data—as an “innovative” model for future enforcement actions: “[W]here companies collect and use consumers’ data in unlawful ways, we should require violators to disgorge not only the ill-gotten data, but also the benefits—here, the algorithms—generated from that data.” Slaughter’s comments suggest algorithmic disgorgement is here to stay in the FTC’s arsenal of enforcement mechanisms.
Companies facing future FTC orders can expect Slaughter to push FTC staff to include notice provisions as a matter of course. Slaughter explained in her remarks that effective notice to consumers of a company’s wrongdoing is likely to have greater impact than FTC enforcement orders consumers never learn of: “Notice lets consumers ‘vote with their feet’ and helps them better decide whether to recommend the services to others,” but it also “accords consumers the dignity of knowing what happened.”
In recent years, major players in big data digital markets have found themselves in the crosshairs of European competition authorities. Slaughter’s remarks suggest the FTC may take a more aggressive stance on antitrust analysis of big data players by “think[ing] carefully about the overlap between [the FTC’s] work in data privacy and in competition.” She notes that “[m]any of the largest players in digital markets are as powerful as they are because of the breadth of their access to and control over consumer data.”
Emphasizing the importance of racial equity, Slaughter also outlined biased and discriminatory algorithms and facial recognition technologies as two key priority areas for future privacy enforcement at the FTC. Moving forward, Slaughter noted that she has “asked staff to actively investigate biased and discriminatory algorithms,” that she is “interested in further exploring the best ways to address AI-generated consumer harms,” and that the agency will “redouble [its] efforts to identify law violations” in the area of facial recognition technologies.
Slaughter highlighted the need to think creatively about how to make the FTC’s current enforcement efforts even more effective. She pointed to her previous dissents in FTC cases as examples of where she felt the FTC should have obtained stronger relief for consumers, including by pursuing litigation if the FTC is unable to negotiate sufficient relief in settlement. She further committed to ensuring the FTC identifies and pleads all applicable violations. This means companies may find the FTC less willing to provide flexibility in settlement negotiations under the new administration. With enforcement mechanisms like algorithmic disgorgement and direct consumer notice on the table, this is not a welcome development for companies facing FTC scrutiny.
In light of these priorities, it is critical that companies managing big data or developing and deploying artificial intelligence (AI) and machine learning (ML) consider implementing policies, procedures and training designed to ensure: