White House Cautions Schools Against ‘Continuous Surveillance’ of Students
Biden administration takes aim at algorithmic discrimination with non-binding ‘Blueprint for an AI Bill of Rights’
Updated, Oct. 5
The Biden administration on Tuesday urged school districts nationwide to refrain from subjecting students to “continuous surveillance” if the use of digital monitoring tools — already accused of targeting at-risk youth — is likely to trample students’ rights.
The White House recommendation was included in an in-depth but non-binding white paper, dubbed the “Blueprint for an AI Bill of Rights,” that seeks to rein in the potential harms of rapidly advancing artificial intelligence technologies, from smart speakers featuring voice assistants to campus surveillance cameras with facial recognition capabilities.
The blueprint, which was released by the White House Office of Science and Technology Policy and extends far beyond the education sector, lays out five principles: Tools that rely on artificial intelligence should be safe and effective, avoid discrimination, ensure reasonable privacy protections, be transparent about their practices and offer the ability to opt out “in favor of a human alternative.”
Though the blueprint lacks enforcement, schools and education technology companies should expect greater federal scrutiny soon. In a fact sheet, the White House announced that the Education Department would release by early 2023 recommendations on schools’ use of artificial intelligence that “define specifications for the safety, fairness and efficacy of AI models used within education” and introduce “guardrails that build on existing education data privacy regulations.”
During a White House event Tuesday, Education Secretary Miguel Cardona said officials at the department “embrace utilizing Ed Tech to enhance learning” but recognize “the need for us to change how we do business.” The future guidance, he said, will focus on student data protections, ensuring that digital tools are free of biases and incorporate transparency so parents know how their children’s information is being used.
“This has to be baked into how we do business in education, starting with the systems that we have in our districts but also teacher preparation and teacher training as well,” he said.
Amelia Vance, president and founder of Public Interest Privacy Consulting, said the document amounts to a “massive step forward for the advocacy community, the scholars who have been working on AI and have been pressuring the government and companies to do better.”
The blueprint, which offers a harsh critique of online proctoring tools and systems that predict student success based on factors like poverty, follows in-depth reporting by The 74 on schools’ growing use of digital surveillance and the tech’s impact on student privacy and civil rights.
But local school leaders should ultimately decide whether to use digital student monitoring tools, said Noelle Ellerson Ng, associate executive director of advocacy and governance at AASA, The School Superintendents Association. Ellerson Ng opposes “unilateral federal action to prohibit” the software.
“That’s not the appropriate role of the federal government to come and say this cannot happen,” she said. “But smart guardrails that allow for good practices, that protect students’ safety and privacy, that’s a more appropriate role.”
The nonprofit Center for Democracy and Technology praised the report. The group recently released a survey highlighting the potential harms of student activity monitoring on at-risk youth, who are already disproportionately disciplined and referred to the police as a result. In a statement Tuesday, it said the blueprint makes clear “the ways in which algorithmic systems can deepen inequality.”
“We commend the White House for considering the diverse ways in which discrimination can occur, for challenging inappropriate and irrelevant data uses and for lifting up examples of practical steps that companies and agencies can take to reduce harm,” CEO Alexandra Reeve Givens said in a media release.
The document also highlights several areas where artificial intelligence has been beneficial, including improved agricultural efficiency and algorithms that have been used to identify diseases. But the technologies, which have grown rapidly with few regulations, have introduced significant harm, it notes, including discrimination in tools that screen job applicants and facial recognition technology that can contribute to wrongful arrests.
After the pandemic shuttered schools nationwide in early 2020 and pushed students into makeshift remote learning, companies that sell digital activity monitoring software to schools saw an increase in business. But the tools have faced significant backlash for subjecting students to relentless digital surveillance.
In April, Massachusetts Sens. Elizabeth Warren and Ed Markey warned in a report that the technology could carry significant risks — particularly for students of color and LGBTQ youth — and promoted a “need for federal action to protect students’ civil rights, safety and privacy.” Such concerns have become particularly acute as states implement new anti-LGBTQ laws and abortion bans and advocates warn that digital surveillance tools could expose youth to legal peril.
Vance said that she and others focused on education and privacy “had no idea this was coming,” nor that it would focus so heavily on schools. Over the last year, the department sought input from civil rights groups and technology companies, but Vance said that education groups had lacked a meaningful seat at the table.
The lack of engagement was evident, she said, in the document’s failure to highlight areas where artificial intelligence has benefited students and schools. For example, the document discusses a tool used by universities to predict which students were likely to drop out. It considered students’ race as a predictive factor, raising discrimination fears. But she noted that if implemented equitably, such tools can be used to improve student outcomes.
“Of course there are a lot of privacy and equity and ethical landmines in this area,” Vance said. “But we also have schools who have done this right, who have done a great job in using some of these systems to assist humans in counseling students and helping more students graduate.”
Ellerson Ng, of the superintendents association, said her group is still analyzing the blueprint’s on-the-ground implications, but that student data privacy efforts present schools with “a balancing act.”
“You want to absolutely secure the privacy rights of the child while understanding that the data that can be generated, or is generated, has a role to play, too, in helping us understand where kids are, what kids are doing, how a program is or isn’t working,” she said. “Sometimes that’s broader than just a pure academic indicator.”
Others have deemed the blueprint toothless and just another policy position in a crowded field of recommendations from civil rights groups and tech companies. Some of the most outspoken privacy proponents and digital surveillance critics, such as Albert Fox Cahn, founder and executive director of the Surveillance Technology Oversight Project, argued it falls short of a critical policy move: outright bans.
As Cahn and other activists mount campaigns against student surveillance tools, they’ve highlighted how student data can wind up in the hands of the police.
“When police and companies are rolling out new and destructive forms of AI every day, we need to push pause across the board on the most invasive technologies,” he said in a media release. “While the White House does take aim at some of the worst offenders, they do far too little to address the everyday threats of AI, particularly in police hands.”