New Paper Addresses Bias in Insider Threat Programs
ARLINGTON, VA (Jan. 13, 2022)—The Intelligence and National Security Alliance (INSA) today released a new white paper, Strategies for Addressing Bias in Insider Threat Programs, that can help insider threat and security managers identify and mitigate biases that undermine the effectiveness of insider threat (InT) programs in both government and industry.
Developed by INSA’s Insider Threat Subcommittee, this paper identifies sources of potential bias that can be attributed to both people and technology. InT program staff can compromise the objectivity of InT analyses through personal cognitive biases, whether implicit and unrecognized or overt. Organizations can introduce bias at a systemic level through enterprise-wide hiring practices and personnel decisions. Technology can create bias through the selection, weighting, and categorization of data and the design of risk analysis models.
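To illustrate how technology can embed bias, the brief Python sketch below shows how hand-chosen feature weights in a simple risk-scoring model can elevate scores for routine behaviors that correlate with particular groups rather than with actual risk. The feature names, weights, and scoring function are illustrative assumptions, not drawn from the paper.

```python
# Illustrative sketch only: hypothetical feature weights in a simple
# insider-risk score. None of these names or values come from the INSA paper.
RISK_WEIGHTS = {
    "after_hours_logins": 0.5,    # may penalize night-shift or overseas staff
    "large_file_transfers": 0.3,
    "foreign_travel": 0.4,        # heavy weighting here can encode nationality bias
    "policy_violations": 0.6,
}

def risk_score(events: dict) -> float:
    """Weighted sum of event counts; the chosen weights, not the data alone,
    determine which behaviors dominate the score."""
    return sum(RISK_WEIGHTS.get(name, 0.0) * count for name, count in events.items())

# Two employees with identical policy violations can receive very different
# scores purely because of how "routine" behaviors were weighted.
print(risk_score({"after_hours_logins": 20, "policy_violations": 1}))  # 10.6
print(risk_score({"after_hours_logins": 0,  "policy_violations": 1}))  # 0.6
```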
The consequences of both human and technological bias in InT programs are significant. Bias undermines the effectiveness of insider threat programs by diverting attention to low risks and allowing higher risks to go unexamined. In addition, bias wastes resources, creates potential legal liability, and impairs an organization’s ability to hire and retain staff.
“We all know by now that algorithms don’t yield perfectly objective analyses – they reflect the assumptions and the data that humans provide them,” said Larry Hanauer, INSA’s Vice President for Policy. “If government agencies and corporations are to manage risk effectively, they must take proactive steps to identify and mitigate both human and technological biases in their approach to insider threats.”
The paper offers several recommendations for how organizations can identify and mitigate bias in InT programs:
- Raise awareness about types of bias, susceptibility to bias, and actions that can be taken to mitigate negative impacts of bias;
- Provide tools and techniques to make decisions in complex environments involving uncertain data;
- Remove associations between data and the individuals generating the data to help decrease the chances that demographic information will influence decision-making (see the sketch after this list);
- Reduce the subjective nature of data labeling through diversity in threat analysis teams and standardized methods for labeling and adjudicating incident data.
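As a hedged illustration of the de-identification recommendation above, the following Python sketch replaces a direct identifier with a stable pseudonym and drops demographic fields before incident data reaches analysts. The field names, salt handling, and hashing approach are assumptions for illustration, not prescriptions from the paper.

```python
import hashlib

# Hypothetical example: pseudonymize incident records before analysis so that
# names and demographic fields cannot influence an analyst's judgment.
DEMOGRAPHIC_FIELDS = {"name", "gender", "age", "national_origin"}

def pseudonymize(record: dict, salt: str) -> dict:
    """Replace the employee identifier with a salted hash and drop demographic fields."""
    cleaned = {k: v for k, v in record.items() if k not in DEMOGRAPHIC_FIELDS}
    cleaned["subject_id"] = hashlib.sha256(
        (salt + record["employee_id"]).encode()
    ).hexdigest()[:12]
    del cleaned["employee_id"]
    return cleaned

incident = {
    "employee_id": "E1042",
    "name": "Jane Doe",
    "national_origin": "redacted-in-example",
    "event": "bulk download of project files",
}
print(pseudonymize(incident, salt="rotate-me-per-deployment"))
```

A stable pseudonym (rather than full deletion of the identifier) lets analysts still correlate multiple incidents involving the same subject without seeing who that subject is.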