A request for information is polling government agencies and the private sector on the full breadth of uses, whether that's identifying faces or predicting intent.
The White House office in charge of advising broadly on federal technology policy is trying to get a better sense of how the government and private sector use biometric technologies like facial recognition.
The Office of Science and Technology Policy released a request for information seeking specifics about how federal agencies and the private sector are employing biometrics, whether for security and law enforcement, by far the most visible application, or for other purposes such as making hiring decisions or predicting intent.
Federal agencies are using biometric technologies for a host of applications.
For instance, a recent Government Accountability Office report found that 18 agencies deployed some form of facial recognition in 2020, with 16 reporting cybersecurity uses, six reporting criminal investigation uses and five reporting physical building security uses. That report did not cover other forms of biometric identification, such as fingerprints or iris scanning, or biometric analyses of behavior patterns such as how a person walks, speaks or types.
While some lawmakers are calling on Congress and the administration to pause the deployment of facial recognition and other biometrics or ban them outright, the Biden administration wants a better idea of how the technologies are currently being used and what agencies have planned for the near future.
The RFI requests feedback on traditional uses of facial recognition—verifying a person is who they claim to be or identifying an unknown individual—as well as “inference of attributes including mental and emotional states.”
The request also looks outside facial recognition to other forms of biometrics, such as analyzing patterns in voice, gait and keystrokes.
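For readers less familiar with the distinction the RFI draws between verification (confirming a claimed identity) and identification (searching for an unknown person in a gallery), the toy sketch below illustrates it with made-up face embeddings, a hypothetical gallery and an arbitrary similarity threshold. It is a simplified illustration only, not a description of any system covered by the RFI or the GAO report.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (higher means more alike)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, claimed: np.ndarray, threshold: float = 0.8) -> bool:
    """1:1 verification: does the probe match the identity the person claims?"""
    return cosine_similarity(probe, claimed) >= threshold

def identify(probe: np.ndarray, gallery: dict, threshold: float = 0.8):
    """1:N identification: search a gallery for the best match, if any."""
    best_name, best_score = None, threshold
    for name, template in gallery.items():
        score = cosine_similarity(probe, template)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name  # None means no match above the threshold

if __name__ == "__main__":
    # Hypothetical 128-dimensional embeddings standing in for enrolled faces.
    rng = np.random.default_rng(0)
    alice = rng.normal(size=128)
    probe = alice + rng.normal(scale=0.05, size=128)  # a new capture of the same face
    gallery = {"alice": alice, "bob": rng.normal(size=128)}
    print(verify(probe, alice))      # True: probe matches the claimed identity
    print(identify(probe, gallery))  # "alice": best gallery match above threshold
```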
The document notes these technologies are already being used by federal, state and local governments in a number of ways, citing:
- The use of facial recognition to control initial and continuing access to resources such as housing, medical records, schools, workplaces and public benefits.
- Facial or voice analysis in employment (e.g., to screen potential hires for trustworthiness and competence), education (e.g., to detect risks to safety, determine student focus and attention in the classroom and monitor online exams) and advertising (e.g., to determine responses to advertising displays or track behavior in physical shopping contexts).
- Keystroke analysis for detection of medical conditions and cognition or mood.
- The use of gait recognition, voice recognition and heart rate analysis for inference of level of cognitive ability and performance in healthcare (e.g., for stroke recovery and aids for autistic individuals).
- Inferring intent in public settings.
Specifically, the RFI is seeking feedback on “the extent and variety of biometric technologies in past, current or planned use; the domains in which these technologies are being used; the entities making use of them; current principles, practices or policies governing their use; and the stakeholders that are, or may be, impacted by their use or regulation,” the document states.
Officials note the potential harms these technologies can cause or exacerbate and ask commenters to discuss those issues as well, with an eye toward mitigating harm rather than banning the technology.
“OSTP welcomes any responses to help inform policies, especially those with a view toward equitably harnessing the benefits of scientifically valid technologies approved for appropriate contexts with iterative safeguards against anticipated and unanticipated misuse or harms,” the RFI states.
The comment period closes at 5 p.m. on Jan. 15. OSTP is looking for feedback from the public as well as from government agencies.