Glossary

178 terms from AI Ethics: Bias, Fairness, Accountability, Transparency, Societal Impact, Governance, Privacy, and Security

# A B C D E F G H I J K L M N O P Q R S T U V W

#

1. Validity Assessment
What does the tool claim to measure? - What independent validation evidence exists? - Rate the validity evidence strength: None / Insufficient / Adequate / Strong → Exercises — Chapter 10: Bias in Hiring and HR Systems
15-Week MBA Course (AI Ethics for Managers):
Weeks 1–2: Part 1 (Foundations) — Chapters 1–3 - Weeks 3–4: Part 1 continued — Chapters 4–6 - Weeks 5–6: Part 2 (Bias) — Chapters 7–9 - Weeks 7–8: Part 2 continued — Chapters 10–12 - Week 9: Part 3 (Transparency) — Chapters 13–14 - Week 10: Part 3 continued — Chapters 15–17 - Week 11: Part 4 (Accoun → How to Use This Book
2. Adverse Impact Profile
What protected groups are at risk of adverse impact? - What adverse impact data has the vendor published? - Apply the four-fifths rule to any available demographic data → Exercises — Chapter 10: Bias in Hiring and HR Systems
3. Legal Compliance
Does the tool appear to comply with Title VII, ADA, and ADEA requirements? - Does it meet NYC Local Law 144 requirements (if applicable)? - Does it meet EU AI Act requirements (if applicable for your scenario)? → Exercises — Chapter 10: Bias in Hiring and HR Systems
4. Accommodation Adequacy
What accommodation pathways are documented? - Are they proactively communicated to candidates? - Are they operationally feasible? → Exercises — Chapter 10: Bias in Hiring and HR Systems
5. Recommendation
Deploy / Deploy with conditions / Do not deploy - If deploying with conditions: what conditions? - If not deploying: what alternative do you recommend? → Exercises — Chapter 10: Bias in Hiring and HR Systems

A

Accommodation:
What alternative assessment pathways are available for candidates who cannot use the primary tool due to disability? - Has your accommodation process been reviewed by legal counsel with ADA expertise? - How do candidates request accommodation, and what is the typical response time? → Chapter 10: Bias in Hiring and HR Systems
Accountability mechanism:
City council review of the system at defined intervals, with community input processes preceding each review, and clear authority to discontinue the contract if review findings warranted it. → Case Study 4.1: Mapping Stakeholders in a Predictive Policing Deployment
Actions Required:
[ ] Action item 1 - [ ] Action item 2 → Appendix F: Templates and Worksheets
Adverse action notice
Under ECOA and Regulation B, a written statement required when a creditor denies credit or offers less favorable terms than requested, specifying the principal reasons for the decision. Required to be accurate, specific, and actionable. → Chapter 11: Key Takeaways
Adverse impact:
Can you provide adverse impact data — disaggregated by race, gender, age group, and disability status — from the tool's deployment across your client base? - What is the adverse impact ratio at standard score thresholds for each protected group covered by Title VII and the ADEA? - Have any of your c → Chapter 10: Bias in Hiring and HR Systems
affected individual
the loan applicant, the job candidate, the benefits recipient — wants to understand why the system treated them the way it did, and what they could do differently. The appropriate explanation is local (specific to their case), actionable (pointing to things they can change), and accessible (not requ → Chapter 14: Explainable AI (XAI) Techniques
AI ethics boards with genuine authority
boards that include external members, have access to product development processes, and have authority to delay or require modification of AI deployments — are more effective than boards that are purely advisory. The distinction matters: a board that can say no creates accountability; a board that c → Chapter 4: Stakeholders in the AI Ecosystem
Algorithmic redlining
The practice, by an algorithmic model, of producing differential outcomes by geography that correlate with the racial composition of neighborhoods, typically through facially neutral geographic or neighborhood-characteristic variables that encode the effects of historical redlining. → Chapter 11: Key Takeaways
Annotated bibliography — 19 sources
## Foundational Empirical Work → Chapter 9: Further Reading
Answer: a
**10.** In the Robert Williams wrongful arrest case, what was the significance of the photo array conducted after the facial recognition match? → Chapter 26: Quiz — Biometrics and Facial Recognition Ethics
Answer: b
**2.** In NIST's Face Recognition Vendor Testing (FRVT) 2019 report, what was the general finding about false positive rates for African American faces compared to white faces across most tested algorithms? → Chapter 26: Quiz — Biometrics and Facial Recognition Ethics
Answer: c
**3.** What is the critical distinction between 1:1 verification and 1:many identification in facial recognition? → Chapter 26: Quiz — Biometrics and Facial Recognition Ethics
Answer: d
**5.** Under the EU AI Act, what is the default regulatory treatment of real-time remote biometric identification in public spaces for law enforcement purposes? → Chapter 26: Quiz — Biometrics and Facial Recognition Ethics
Answer: False
While new job categories have historically emerged, they have done so over generational timescales and in different geographic locations than the displaced work, resulting in prolonged transition periods with significant human cost. → Chapter 28: Quiz — AI and Employment
Answer: True
Because gig workers are classified as independent contractors, they do not receive the notice, severance, and unemployment insurance protections that apply when employees are terminated. → Chapter 28: Quiz — AI and Employment
Application and Assessment (45 minutes)
Case study analysis: small group exercise applying framework to a new scenario (25 min) - Q&A and debrief (10 min) - Assessment: 15-question quiz covering key concepts (10 min) → Appendix F: Templates and Worksheets
Applies to:
Chatbots and conversational AI (must disclose AI identity) - Deepfakes (must label as AI-generated) - Emotion recognition systems (must notify users) - AI that generates or manipulates images, audio, video (must label) → Appendix G: Quick Reference Cards
Approach A: Process-based regulation
requiring specific governance processes (bias testing, documentation, oversight) before algorithmic credit models are deployed - **Approach B: Outcome-based regulation** — requiring lenders to meet specific demographic outcome standards (e.g., approval rates within a specified range) → Chapter 11: Exercises
Assessment Recommendation:
[ ] Proceed with deployment as described - [ ] Proceed with deployment subject to conditions (list below) - [ ] Do not proceed; require redesign (explain below) - [ ] Do not proceed; reject this use case → Appendix F: Templates and Worksheets
Assessment suggestions:
Weekly reading response (500 words connecting reading to student's professional context) - Case study analysis (choose one per part) - Midterm: Ethical analysis of a real AI system - Final: Capstone Project (choose one of the three) → How to Use This Book
Audit and transparency:
Do you conduct annual bias audits? By what methodology? Are results published or available to clients? - Are you compliant with NYC Local Law 144 requirements? - What data about candidates is retained, for how long, and under what security conditions? → Chapter 10: Bias in Hiring and HR Systems
Automation bias
The tendency to over-weight automated recommendations, particularly under time pressure or cognitive load, leading to insufficient independent professional judgment. → Chapter 15: Key Takeaways
Automation complacency
The gradual degradation of professional skill and vigilance that results from consistent reliance on automated systems for tasks within the professional's domain. → Chapter 15: Key Takeaways

B

Before deployment:
Independent methodological review of PredPol's algorithm by criminologists, statisticians, and legal scholars without financial relationships with the vendor, examining whether the "demographic neutrality" claim was methodologically valid given the nature of the training data. - Structured community → Case Study 4.1: Mapping Stakeholders in a Predictive Policing Deployment
Behavioral targeting
Using data about individuals' online behavior — browsing, searching, purchasing, location — to predict preferences and deliver advertising believed to be more relevant to current interests and intentions. → Chapter 16: Key Takeaways
Business decision-makers
C-suite executives and board members overseeing AI systems — need high-level summaries of model behavior, confidence intervals, known failure modes, and escalation protocols for cases where the model's reliability is low. For this audience, the appropriate explanation is an accessible model card or → Chapter 14: Explainable AI (XAI) Techniques
But-for causation
the standard form — asks whether the harm would have occurred absent the defendant's breach. In AI cases, establishing but-for causation typically requires showing that if the AI system had not been defective (biased, inaccurate, or poorly designed), the plaintiff would not have suffered the specifi → Chapter 20: Liability Frameworks for AI

C

California's CPRA
amendments to the California Consumer Privacy Act — includes rights to know about automated decision-making and to opt out of it in certain contexts, though implementing regulations are still being developed. → Chapter 17: The Right to Explanation
Categories of High-Risk AI:
Biometric identification and categorization - Critical infrastructure (energy, water, transport) - Education and vocational training - Employment and worker management (including hiring tools) - Essential private and public services (credit scoring, benefits eligibility) - Law enforcement (risk asse → Appendix G: Quick Reference Cards
Community Reinvestment Act (CRA)
1977 federal law requiring depository institutions to affirmatively meet the credit needs of all communities in their service areas, including low- and moderate-income neighborhoods. → Chapter 11: Key Takeaways
Comprehensive AI law
most importantly, the EU AI Act — takes a different approach. Rather than focusing on the data inputs to AI systems, comprehensive AI law focuses on the systems themselves: their design, their risk profiles, their documentation, and their governance. The EU AI Act is currently the only comprehensive → Chapter 33: Regulation and Compliance — GDPR, EU AI Act, and Beyond
Conference Proceedings:
*ACM FAccT* (Fairness, Accountability, and Transparency) — the premier venue for AI fairness research; proceedings freely available - *NeurIPS* and *ICML* — top ML conferences with growing ethics tracks - *AIES* (AAAI/ACM Conference on AI, Ethics, and Society) → Appendix A: Research Methods Primer
Contrastive explanation
An explanation that answers "why this outcome and not an alternative?" rather than attempting to describe the model's complete internal reasoning. Particularly useful for affected individuals because it is actionable and comprehensible. → Chapter 17: Key Takeaways
Counterfactual explanation
An explanation of an AI decision that specifies what would have been different about the situation to change the outcome. Example: "If your debt-to-income ratio had been 5 points lower, the decision would have been different." → Chapter 15: Key Takeaways
Courage
the willingness to accept costs for the sake of what is right — is revealed in Google's eventual decision to not renew the Maven contract. This decision came with real costs: the Pentagon contract, potential future DoD relationships, and the competitive pressure from companies less constrained by em → Case Study 3.2: Virtue Ethics and Corporate AI Culture — Google's Project Maven
Cynthia Rudin's argument
that inherently interpretable models should be used in high-stakes domains rather than complex models with post-hoc explanations — is technically and ethically compelling and has significant policy implications. The accuracy-interpretability tradeoff is often overstated. → Chapter 17: Key Takeaways

D

Dark patterns
User interface designs that deliberately trick or mislead users into making choices that serve the company's interests at the user's expense. → Chapter 16: Key Takeaways
Data Subjects:
People whose prior arrests formed the training data — had no voice in whether their data was used, no ability to correct erroneous records, and no awareness of their role in the system → Case Study 4.1: Mapping Stakeholders in a Predictive Policing Deployment
Deepfake
AI-generated synthetic media that represents a real person saying or doing something they never said or did, created using deep learning techniques. → Chapter 16: Key Takeaways
Deepfakes
synthetic media that place real people in fabricated situations — had existed before the generative AI era, but the dramatic reduction in the technical skill and resources required to create them brought the technology within reach of far more actors. Non-consensual synthetic intimate images became → Chapter 2: A Brief History of AI and Its Ethical Concerns
Difficulty legend:
⭐ Foundational — recall and basic application - ⭐⭐ Intermediate — analysis and comparison - ⭐⭐⭐ Advanced — synthesis and evaluation - ⭐⭐⭐⭐ Capstone — original argument and extended analysis → Chapter 3: Exercises
Difficulty ratings:
⭐ Foundational — recall and comprehension - ⭐⭐ Developing — application and analysis - ⭐⭐⭐ Advanced — synthesis and evaluation - ⭐⭐⭐⭐ Expert — original argument and professional deliverable → Chapter 5: Exercises
Difficulty Scale:
⭐ Recall and comprehension - ⭐⭐ Application and analysis - ⭐⭐⭐ Synthesis and evaluation - ⭐⭐⭐⭐ Original design and creative problem-solving → Chapter 4: Exercises
Disparate impact
sometimes called adverse impact — occurs when a facially neutral practice has a disproportionately negative effect on a protected group, without business necessity justifying it. This form is far more relevant to AI hiring. An algorithm that ranks résumés using criteria that happen to correlate with → Chapter 10: Bias in Hiring and HR Systems
Disparate impact ratio (DIR)
The approval rate for a protected group divided by the approval rate for the highest-approved group. A DIR below 0.80 indicates prima facie disparate impact under the four-fifths rule. → Chapter 11: Key Takeaways
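A minimal sketch of the calculation in Python, using hypothetical approval counts (the groups and numbers below are illustrative, not drawn from the book's examples):

```python
# Minimal sketch with hypothetical approval counts: DIR = each group's approval
# rate divided by the highest group's approval rate.
approved = {"group_a": 180, "group_b": 120}
applicants = {"group_a": 300, "group_b": 300}

rates = {g: approved[g] / applicants[g] for g in approved}
highest = max(rates.values())

for group, rate in rates.items():
    dir_ = rate / highest
    flag = "prima facie disparate impact" if dir_ < 0.80 else "above the four-fifths threshold"
    print(f"{group}: approval rate {rate:.2f}, DIR {dir_:.2f} -> {flag}")
```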
Disparate treatment
Discrimination in which a creditor treats an applicant differently because of their race, sex, or other protected characteristic. → Chapter 11: Key Takeaways
During deployment:
Ongoing community liaison function with genuine power to surface community concerns and require departmental response. - Public reporting of prediction-box locations and police activity within them, enabling academic and civil society scrutiny. - Regular, independent audit of the system's accuracy c → Case Study 4.1: Mapping Stakeholders in a Predictive Policing Deployment
Dynamic pricing
Algorithmic adjustment of prices in real time based on demand, supply, competition, and individual user characteristics. → Chapter 16: Key Takeaways

E

ECOA (Equal Credit Opportunity Act)
1974 federal law prohibiting discrimination in any aspect of a credit transaction on the basis of race, color, religion, national origin, sex, marital status, age, or receipt of public assistance income. → Chapter 11: Key Takeaways
Epistemic justice
The fair distribution of epistemic goods — knowledge, testimony, rational agency — across society. Hermeneutical injustice (Fricker) is the harm of lacking conceptual resources to understand one's own experiences; relevant when AI opacity prevents people from understanding what is happening to them. → Chapter 17: Key Takeaways
Ethics infrastructure without authority
ethics teams, principles documents, review boards — provides the appearance of governance without the substance. Ethics infrastructure must have authority to delay or stop deployment of harmful systems; otherwise it functions as legitimacy washing rather than genuine accountability. → Chapter 2: A Brief History of AI and Its Ethical Concerns
ethics washing
the performance of ethical commitment without the substance. When an organization publishes AI principles, joins a multi-stakeholder initiative, and adopts a voluntary framework, but none of these activities constrain any specific decision or create any accountability for harmful outcomes, the ethic → Chapter 6: Introduction to AI Governance
Ethnic Affinity
A Facebook advertising targeting category that inferred users' affinity with particular racial or ethnic communities from their behavioral data and offered this as an audience targeting criterion, enabling exclusion of users by inferred race. → Chapter 16: Key Takeaways
Evidence to gather:
Developer-published accuracy and performance statistics broken down by demographic group - Third-party audits or academic studies of the system's performance - Regulatory findings or enforcement actions - User or affected-party reports of disparate treatment - The system's training data documentatio → Capstone Project 1: Ethical AI Audit
Explicit non-retaliation policies
written policies that clearly prohibit adverse action against employees who raise ethics concerns in good faith — provide a formal commitment against which the organization can be held accountable. These policies are only as effective as the organizational culture that surrounds them; a non-retaliat → Chapter 22: Whistleblowing and Ethical Dissent in AI Organizations

F

Fair Housing Act (FHA)
1968 federal law prohibiting discrimination in residential real estate transactions, including mortgage lending, on the basis of race, color, national origin, religion, sex, familial status, and disability. → Chapter 11: Key Takeaways
Faithfulness problem
The problem that post-hoc explanation methods (SHAP, LIME, etc.) produce approximations of model reasoning that may not accurately represent the model's actual decision logic, particularly for complex models. → Chapter 17: Key Takeaways
False
The chapter explicitly argues that moral intuitions "are not nothing" and encode genuine moral wisdom. Frameworks discipline intuitions, not replace them. (Section 3.1) → Chapter 3: Assessment Quiz
FICO score
The dominant U.S. credit scoring model, produced by Fair Isaac Corporation, using five categories of credit bureau data: payment history (35%), amounts owed (30%), length of credit history (15%), new credit (10%), and credit mix (10%). → Chapter 11: Key Takeaways
Follow-Up Resources
AI ethics policy document - Quick reference cards (see Appendix G) - Contact information for AI ethics questions and reporting - Links to continuing education resources → Appendix F: Templates and Worksheets
For AI developers and data scientists:
Ethics by design: building fairness considerations into the development process - Documentation obligations: model cards, datasheets - When to escalate and how → Appendix F: Templates and Worksheets
For all employees:
Appropriate reliance on AI outputs: when to defer, when to question - Recognizing potential AI errors in daily work - The organization's AI incident reporting process → Appendix F: Templates and Worksheets
For each stakeholder group, specify:
**Engagement method:** What process will be used? Options range across a spectrum from information provision (least participatory) through consultation, collaboration, and co-design (most participatory). The level of participation should correspond to the stakeholder's level of impact and vulnerabil → Capstone Project 3: Stakeholder Impact Assessment
For managers and business leaders:
Questions to ask before deploying AI - Your accountability for AI outcomes in your area - The procurement process and vendor oversight → Appendix F: Templates and Worksheets
For white defendants:
Among those who did not reoffend: approximately 23% were scored high risk (false positives) - Among those who did reoffend: approximately 48% were scored high risk (true positives) → Case Study 9.1: COMPAS and the Impossibility of Algorithmic Fairness
Four-fifths rule
A regulatory heuristic under which a selection (approval) rate for a protected group that is less than 80% of the rate for the highest-selected group is considered prima facie evidence of adverse impact. → Chapter 11: Key Takeaways

G

Gaming problem
The risk that known explanation requirements enable organizations to design AI systems that produce explanation-compliant outputs that do not accurately represent the model's actual reasoning. → Chapter 17: Key Takeaways
Goldberg v. Kelly (1970)
US Supreme Court precedent establishing due process requirements for government benefit terminations; foundational for applying due process to algorithmic benefit decisions. - **State v. Loomis (2016)** — Wisconsin Supreme Court upheld COMPAS-influenced sentence; widely criticized for inadequate due → Chapter 17: Key Takeaways
Google and YouTube
**Facebook and Instagram** - **Twitter** - **LinkedIn** - **Venmo** → Case Study 5.2: The Cost of Getting It Wrong — Clearview AI's Legal, Reputational, and Operational Consequences
Govern, Map, Measure, Manage
provide a coherent structure for organizational AI risk management. → Chapter 6: Introduction to AI Governance
governance gap
the widening distance between the speed and scale of AI deployment and the capacity of governance institutions to keep pace. → Chapter 6: Introduction to AI Governance
Grading Summary:
Part I (Multiple Choice): 16 points - Part II (True/False): 10 points - Part III (Short Answer): 40 points - Part IV (Applied Scenarios): 45 points - **Total: 111 points** → Chapter 7: Quiz

H

Hallucination
the tendency of large language models to generate confident-sounding but factually incorrect statements — became a significant documented problem. In 2023, attorney Steven Schwartz filed a legal brief in federal court that cited several cases as legal precedents; the cases had been generated by Chat → Chapter 2: A Brief History of AI and Its Ethical Concerns
High Interest, Low Power:
Residents of Black and Latino neighborhoods targeted by prediction boxes — highest stake, minimal formal voice - Community organizations representing those residents — high interest, limited resources and formal authority - Individual police officers — significant interest in how the system affected → Case Study 4.1: Mapping Stakeholders in a Predictive Policing Deployment
High Power, High Interest:
LAPD leadership — had authority to deploy and continue the system; directly measured by crime statistics shaped by deployment - PredPol/Geolitica — controlled the technology; had revenue stake in continued deployment - City of Los Angeles administration — political accountability for public safety o → Case Study 4.1: Mapping Stakeholders in a Predictive Policing Deployment
High Power, Lower Interest (initially):
City Council — had budget authority but limited engagement with technical details - State-level elected officials — could create regulatory requirements but had other priorities - Federal courts — would have authority if constitutional challenges were brought and won → Case Study 4.1: Mapping Stakeholders in a Predictive Policing Deployment
High risk
systems used in contexts where errors could have serious consequences for health, safety, or fundamental rights. This includes AI in critical infrastructure, education, employment (especially CV screening and candidate ranking), essential services (credit scoring, insurance), law enforcement, migrat → Chapter 6: Introduction to AI Governance
HMDA (Home Mortgage Disclosure Act)
1975 federal law requiring covered financial institutions to report data on mortgage applications and originations by race, ethnicity, sex, income, and other characteristics. The primary public data source for documenting racial disparities in mortgage lending. → Chapter 11: Key Takeaways
Honesty
including transparency with stakeholders about decisions that affect them — requires that an organization not obscure the nature of its work from the people whose labor makes it possible. The initial blog post that revealed Project Maven to employees was promotional, not informative. The contract wa → Case Study 3.2: Virtue Ethics and Corporate AI Culture — Google's Project Maven

I

Identification reveals a long stakeholder list:
Patients who receive scans — their diagnoses, treatment, and survival outcomes depend on the tool's accuracy - Radiologists who use the tool — their clinical judgment, professional liability, and workload are affected - Hospital administrators — cost savings, liability exposure, accreditation - The → Chapter 4: Stakeholders in the AI Ecosystem
IEEE Standards Association
IEEE's Ethically Aligned Design framework (now in its second edition) provides a comprehensive normative vision for human-centered AI. More operationally, the IEEE P7000-series standards address specific topics including algorithm bias (P7003), data privacy (P7002), and transparency (P7001). IEEE st → Chapter 6: Introduction to AI Governance
Integrity
consistency between stated values and actual behavior — requires that an organization's public commitments reflect its genuine operating priorities. A virtuous organization does not deploy the language of ethics to attract talent and build public trust while making decisions in conflict with that la → Case Study 3.2: Virtue Ethics and Corporate AI Culture — Google's Project Maven
Interim Measures to Consider:
Suspend AI system pending investigation - Implement additional human review for AI-assisted decisions - Halt expansion of AI system to new contexts - Preserve all relevant data and logs for legal review → Appendix F: Templates and Worksheets
Interpretable model
A model whose decision logic is directly legible to humans without post-hoc approximation — decision trees, logistic regression, rule sets. Advocated by Rudin for high-stakes decisions. → Chapter 17: Key Takeaways
Invisible Affected Parties:
All residents of LAPD prediction-box neighborhoods who experienced intensified policing — affected by the system without any formal relationship to it → Case Study 4.1: Mapping Stakeholders in a Predictive Policing Deployment
ISO/IEC JTC 1/SC 42
The joint technical committee responsible for international AI standardization has produced standards covering AI concepts and terminology (ISO/IEC 22989), AI risk management (ISO/IEC 23894), and AI bias in datasets (ISO/IEC TR 24027), among others. ISO standards are developed through national stand → Chapter 6: Introduction to AI Governance

J

Journals:
*Big Data & Society* (SAGE) — open access, interdisciplinary - *AI & Society* (Springer) — long-running interdisciplinary journal - *Ethics and Information Technology* (Springer) - *Harvard Journal of Law & Technology* - *Journal of Artificial Intelligence Research (JAIR)* — technical but includes e → Appendix A: Research Methods Primer

K

Key Limitation:
Does not account for legitimate differences in qualifications or risk between groups - Can require approving less-qualified applicants from one group to match approval rates - Mathematically: satisfying demographic parity is incompatible with predictive parity when base rates differ → Appendix G: Quick Reference Cards
Key questions:
Does the system perform differently for different demographic groups (defined by race, gender, age, disability, national origin, or other protected characteristics)? - What fairness metric or metrics does the system's developer use, and are those metrics appropriate given the deployment context? - A → Capstone Project 1: Ethical AI Audit
Key Requirements:
Conformity assessment before deployment - Registration in EU AI Act database - Risk management system throughout lifecycle - Training data governance requirements - Technical documentation and logging - Transparency and provision of information to users - Human oversight measures - Robustness, accur → Appendix G: Quick Reference Cards
KPMG, Deloitte, PwC, and EY
the major financial audit firms — have all developed AI audit practices, recognizing that AI governance is becoming a significant component of corporate governance and regulatory compliance. → Chapter 19: Auditing AI Systems

L

Ledgerwood v. Jobe (2016)
Arkansas Medicaid case where algorithmic care hour determinations were ruled to violate due process due to inadequate notice and meaningless appeal process. - **Epic Systems Deterioration Index** — Clinical AI tool deployed in thousands of US hospitals to predict patient deterioration; rarely disclo → Chapter 15: Key Takeaways
Limited risk
systems with specific transparency obligations. AI systems that interact directly with humans (chatbots) must inform users that they are interacting with an AI. AI-generated content must be labeled as such. → Chapter 6: Introduction to AI Governance
Local partnership requirements
genuine co-development and co-governance rather than vendor relationships with local resellers — create accountability structures that serve communities better than purely extractive models. When AI systems are co-developed with local institutions that have relationships with affected communities, t → Chapter 34: AI Ethics in Emerging Markets
Lookalike audience
An advertising targeting approach in which a platform's AI identifies users who resemble a "seed" audience provided by an advertiser, based on behavioral similarity. → Chapter 16: Key Takeaways

M

Matched-pair analysis (comparative file review)
A method for detecting disparate treatment by comparing outcomes for applicants with matched financial profiles but different demographic characteristics, isolating demographic variables as the only systematic difference. → Chapter 11: Key Takeaways
Meaningful human review
Human oversight of an AI decision that is genuine: the human reviewer has sufficient information, expertise, time, and organizational authority to identify and correct errors, not merely to ratify the AI's recommendation. → Chapter 15: Key Takeaways
Meaningful information about the logic involved
The GDPR standard for what data controllers must provide when automated decision-making is permitted by exception under Article 22. The content of this requirement is legally contested. → Chapter 17: Key Takeaways
Minimal risk
AI systems that pose no significant risks and face no specific requirements beyond existing law. → Chapter 6: Introduction to AI Governance
Module 2: Understanding AI Bias (60 minutes)
Sources of bias: training data, feedback loops, proxy variables, label bias (20 min) - Fairness metrics: introduction to key concepts (20 min) - What is disparate impact? How is it measured? - Why "our algorithm doesn't use race" doesn't solve the problem - Interactive exercise: bias identification → Appendix F: Templates and Worksheets

N

NIST AI Risk Management Framework (AI RMF)
Published in January 2023, the NIST AI RMF is the most practically oriented organizational governance framework currently available. It structures AI risk management around four core functions: **Govern** (establishing the organizational culture, policies, and accountability structures for responsib → Chapter 6: Introduction to AI Governance

O

Online resources:
Stanford Encyclopedia of Philosophy (plato.stanford.edu) — Free, academically rigorous entries on all major ethical frameworks. The entries on "Consequentialism," "Deontological Ethics," "Virtue Ethics," and "The Original Position" are excellent starting points. - Moral Machine (moralmachine.net) — → Chapter 3: Further Reading and Resources
Opening hook
a real-world scenario that frames the chapter's stakes - **Learning objectives** — what readers will know and be able to do - **Main content** — 8,000–12,000 words of authoritative analysis - **Boxed features** — ethical dilemmas, stakeholder perspectives, debate scenarios, thought experiments - **T → AI Ethics: A Comprehensive Business Guide
operational policies
specific rules governing concrete situations. What data sources may not be used to train models? What categories of AI application require ethics review? What human oversight is required before deploying a high-stakes automated decision system? What documentation must accompany a model into producti → Chapter 6: Introduction to AI Governance
Overall accuracy:
Group A: (180 + 160) / 500 = 68% - Group B: (70 + 232) / 400 = 75.5% → Chapter 9: Measuring Fairness — Metrics and Trade-offs

P

Part 1: Validity Assessment
What validity evidence is required (type, source, independence)? - What is the minimum acceptable standard for criterion validity? - How do you verify that validation studies apply to your specific roles and candidate pool? → Exercises — Chapter 10: Bias in Hiring and HR Systems
Part 2: Adverse Impact Assessment
What adverse impact data must the vendor provide? - What is your organization's threshold for acceptable adverse impact before deployment? - What monitoring commitments must the vendor make post-deployment? → Exercises — Chapter 10: Bias in Hiring and HR Systems
Part 3: Accommodation and Accessibility
What accommodation alternatives must be available? - How must accommodation be communicated to candidates? - Who is responsible for accommodation logistics — vendor or employer? → Exercises — Chapter 10: Bias in Hiring and HR Systems
Part 4: Contract and Liability
What contractual representations must the vendor make? - What remedies does your contract provide if adverse impact is discovered post-deployment? - How does EEOC guidance on employer liability affect your vendor contracting approach? → Exercises — Chapter 10: Bias in Hiring and HR Systems
Participatory governance
An approach to AI deployment in which affected communities participate in decisions about whether and how AI is deployed, rather than simply receiving information about systems already deployed. → Chapter 15: Key Takeaways
Partnership on AI (PAI)
Founded in 2016 by Amazon, Facebook, Google, IBM, Microsoft, and Apple (joined shortly after), PAI has grown to include academic institutions, civil society organizations, and companies. Its eight tenets address safety, fairness, transparency, and human-AI collaboration. PAI has produced useful rese → Chapter 6: Introduction to AI Governance
Podcasts and audio resources:
"Philosophize This!" (podcast) — Episodes on Kant, Mill, Rawls, and Aristotle provide accessible introductions to the philosophical frameworks covered in this chapter - "AI Ethics Brief" (newsletter and podcast, Amsterdam University) — Current AI ethics research and policy, with regular engagement w → Chapter 3: Further Reading and Resources
Positive predictive value (PPV)
The probability that an AI system's positive prediction is correct, given the base rate of the condition in the population. Distinct from the model's sensitivity or confidence score. → Chapter 15: Key Takeaways
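A minimal sketch showing why the base rate matters for PPV; the sensitivity, specificity, and prevalence values below are hypothetical:

```python
# Minimal sketch: PPV computed from sensitivity, specificity, and prevalence (base rate).
def ppv(sensitivity, specificity, prevalence):
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# The same model (95% sensitivity, 95% specificity) at two different base rates:
print(round(ppv(0.95, 0.95, 0.50), 2))  # 0.95 when the condition is common
print(round(ppv(0.95, 0.95, 0.01), 2))  # 0.16 when the condition is rare
```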
Post-remote biometric identification
searching surveillance footage after the fact, rather than in real time — is classified as a high-risk AI system, subject to requirements including conformity assessment, transparency obligations, human oversight, and registration in the EU AI Act database. → Chapter 26: Biometrics and Facial Recognition Ethics
Power and accountability
Who holds power, who is harmed, who answers? 2. **Innovation vs. harm prevention** — The speed-vs-caution tension and who bears its costs 3. **Ethics washing vs. genuine ethics** — The gap between stated values and actual practice 4. **Diversity and inclusion** — Who is at the table when AI is desig → AI Ethics: A Comprehensive Business Guide
Practical wisdom (phronesis)
the capacity for sound judgment in complex situations — requires, at minimum, that ethical questions be raised before decisions are made, not after. A virtuous organization would have had the Project Maven conversation before signing the contract: does this work align with our values? What are the e → Case Study 3.2: Virtue Ethics and Corporate AI Culture — Google's Project Maven
Pre-mortems
structured exercises in which teams imagine that a project has failed and work backward to identify what went wrong — are a practical mechanism for surfacing ethics concerns in a context that makes raising them psychologically safer. In a pre-mortem, articulating concerns is the task, not a deviatio → Chapter 22: Whistleblowing and Ethical Dissent in AI Organizations
Programmatic advertising
Automated buying and selling of digital advertising inventory through real-time auctions, with AI systems determining bid prices and audience selection in milliseconds. → Chapter 16: Key Takeaways
Proxy targeting
Achieving demographic exclusion in advertising by targeting variables that correlate with protected class characteristics (e.g., zip code as a proxy for race) rather than the protected class characteristics directly. → Chapter 16: Key Takeaways
Proxy variable
A variable that correlates with a protected characteristic (such as race) and can therefore serve as an indirect basis for discrimination in an algorithm that does not directly use the protected characteristic. → Chapter 11: Key Takeaways
Psychographic targeting
Advertising targeting based on inferred personality characteristics (e.g., the OCEAN/Big Five model) rather than demographic characteristics. → Chapter 16: Key Takeaways

Q

Q10: FALSE
The impossibility theorem demonstrates that not all fairness criteria can be simultaneously satisfied, but this does not mean fairness measurement is pointless. It means metric selection must be deliberate and transparent, and it heightens rather than diminishes the importance of measuring fairness → Chapter 9 Quiz: Measuring Fairness — Metrics and Trade-offs
Q11: FALSE
The "fairness through unawareness" approach fails because other variables in the model may proxy for race. In the United States, variables such as zip code, educational attainment, employment history, and many others are correlated with race due to historical patterns. Excluding the explicit variabl → Chapter 9 Quiz: Measuring Fairness — Metrics and Trade-offs
Q12: FALSE
Individual fairness and group fairness are conceptually distinct and can conflict even with large, representative populations. A system can treat every individual consistently relative to a task-relevant distance metric while still producing systematically different selection rates across groups. → Chapter 9 Quiz: Measuring Fairness — Metrics and Trade-offs
Q13: TRUE
Chouldechova's proof depends on the existence of base rate differences between groups. When base rates are equal, the mathematical relationships that force metric conflicts no longer operate (assuming the classifier is not perfect, the impossibility still technically applies only because the classif → Chapter 9 Quiz: Measuring Fairness — Metrics and Trade-offs
Q1: B
ProPublica documented that among defendants who did not reoffend, Black defendants were flagged as high-risk at a higher rate than white defendants. This is a false positive rate disparity, not overall prediction rate (A), calibration failure (C), individual fairness (D), or treatment equality (E). → Chapter 9 Quiz: Measuring Fairness — Metrics and Trade-offs
Q2: B
The Chouldechova impossibility theorem addresses the incompatibility of demographic parity, equalized odds, and calibration. The other combinations are either not the subject of the theorem or are partially overlapping constructs. → Chapter 9 Quiz: Measuring Fairness — Metrics and Trade-offs
Q3: B
Disparity ratio = 44%/68% = 0.647. Since 0.647 < 0.80, this triggers scrutiny under the four-fifths rule. Option A reverses the ratio. Options C, D, and E make mathematical errors. → Chapter 9 Quiz: Measuring Fairness — Metrics and Trade-offs
Q4: B
Equal FPR (10% for both groups) means one component of equalized odds is satisfied. Unequal TPR (85% vs. 60%) means the TPR component of equalized odds is violated, and equal opportunity (which requires equal TPR) is also violated. → Chapter 9 Quiz: Measuring Fairness — Metrics and Trade-offs
Q5: B
Counterfactual fairness requires a causal model and the counterfactual analysis of what would happen if the protected attribute changed. Options A, C, D, and E describe demographic parity, equalized odds, calibration, and individual fairness respectively. → Chapter 9 Quiz: Measuring Fairness — Metrics and Trade-offs
Q6: B
Equal overall accuracy is a necessary but not sufficient condition for fairness. A model can have equal accuracy while having dramatically different error compositions by group. This is one of the most common forms of inadequate fairness reporting. → Chapter 9 Quiz: Measuring Fairness — Metrics and Trade-offs
Q7: B
Multicalibration requires calibration for all efficiently computable subgroups, not just the major protected groups. This addresses intersectional fairness by ensuring that even small intersectional subgroups receive calibrated predictions. → Chapter 9 Quiz: Measuring Fairness — Metrics and Trade-offs
Q8: B
This is the classic formal vs. substantive fairness tension: applying formally neutral criteria that produce substantively disparate outcomes because the criteria encode historical privilege. → Chapter 9 Quiz: Measuring Fairness — Metrics and Trade-offs
Q9: FALSE
Equal overall accuracy can coexist with very different FPR and FNR across groups. For example, if Group A has a higher base rate, a calibrated system may achieve equal accuracy through different combinations of TP, TN, FP, and FN that produce different FPRs. → Chapter 9 Quiz: Measuring Fairness — Metrics and Trade-offs
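A minimal numeric sketch of this point, using hypothetical confusion matrices for two groups with different base rates:

```python
# Hypothetical confusion matrices: (TP, FN, FP, TN) for each group of 200 people.
groups = {
    "group_a": (100, 20, 20, 60),   # base rate 60%
    "group_b": (40, 20, 20, 120),   # base rate 30%
}

for name, (tp, fn, fp, tn) in groups.items():
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    fpr = fp / (fp + tn)
    fnr = fn / (fn + tp)
    print(f"{name}: accuracy {accuracy:.2f}, FPR {fpr:.2f}, FNR {fnr:.2f}")
# Both groups have 0.80 accuracy, yet FPR is 0.25 vs. 0.14 and FNR is 0.17 vs. 0.33.
```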

R

Redlining
The historical practice of denying financial services (particularly mortgage lending) to residents of neighborhoods designated as high-risk, typically minority neighborhoods, based on government-produced maps with "hazardous" zones marked in red. → Chapter 11: Key Takeaways
Regulatory bodies
the FTC, which has jurisdiction over data practices of US companies; the Irish Data Protection Commissioner, which had jurisdiction over Facebook's European operations; the UK ICO — were stakeholders in the sense that the study raised questions within their regulatory mandates. Their response is dis → Case Study 4.2: The Invisible Stakeholder — Data Subjects and the Problem of Consent
Regulatory sandboxes
controlled environments where companies can deploy AI systems under enhanced regulatory supervision and reduced liability — allow regulators to learn alongside industry while limiting exposure. The EU AI Act incorporates regulatory sandboxes explicitly. **Iterative rule-making** — regulatory process → Chapter 6: Introduction to AI Governance
Remediation Plan Template:
Description of harm addressed: ___ - Steps taken to remediate the harm to affected individuals: ___ - Steps taken to prevent recurrence: ___ - Verification method that the fix works: ___ - Responsible party for implementation: ___ - Implementation timeline: ___ - Review date to confirm remediation: → Appendix F: Templates and Worksheets
responsible AI (RAI) teams
internal specialists whose job is to embed ethical consideration across the organization's AI work. These teams are distinct from governance committees in that they are operational rather than advisory: they work directly with engineering and product teams, developing tools, conducting assessments, → Chapter 6: Introduction to AI Governance
Responsible AI Institute
Offers a certification scheme for organizations, providing structured assessments of responsible AI practice against defined criteria. Unlike frameworks that merely recommend, certification schemes create external accountability — though the rigor of that accountability depends on the independence a → Chapter 6: Introduction to AI Governance
Restrictions on synthetic performers
AI-generated characters that replicate the likeness of background actors without consent. → Case Study 28-1: The Writers Guild Strike and AI in Creative Industries

S

Solely automated
The GDPR Article 22 criterion for decisions that trigger the provision's protections. A decision is solely automated when human involvement is nominal rather than genuine — when a human cannot actually influence the decision's outcome through their review. → Chapter 17: Key Takeaways
SR 11-7 / OCC 2011-12
Federal guidance on model risk management requiring banks to validate models for conceptual soundness, ongoing monitoring, and — as applied to fair lending models — demographic bias. → Chapter 11: Key Takeaways
Systemic transparency
Transparency about the aggregate patterns of AI decision-making, as distinct from individual transparency. Requires aggregate reporting, auditing, and research access rather than individual explanation. → Chapter 17: Key Takeaways

T

The four metrics computed here:
**Demographic Parity Difference**: The difference between the highest and lowest group positive prediction rates. A value of 0 means the model predicts a positive outcome at the same rate for all groups regardless of their actual outcomes. See Chapter 9 for the debate about when this is the right cr → Appendix B: Python Reference for AI Fairness Auditing
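A minimal sketch of computing the first of these metrics with Fairlearn on toy data (the labels and group assignments below are illustrative):

```python
# Toy data: demographic parity difference with Fairlearn.
from fairlearn.metrics import demographic_parity_difference

y_true = [1, 0, 1, 1, 0, 0, 1, 0]          # required by the API, but ignored by this metric
y_pred = [1, 1, 1, 0, 0, 0, 1, 0]          # the model's positive/negative predictions
sensitive = ["a", "a", "a", "a", "b", "b", "b", "b"]

# Largest gap in positive-prediction rates between any two groups; 0.0 means parity.
gap = demographic_parity_difference(y_true, y_pred, sensitive_features=sensitive)
print(f"demographic parity difference: {gap:.2f}")  # 0.50: 75% positive for "a" vs. 25% for "b"
```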
The ProPublica methodology
showing how to analyze AI bias using outcome data matched to algorithmic predictions, establishing a template for investigative AI accountability journalism 2. **The Chouldechova impossibility result** — proving mathematically that all fairness criteria cannot be simultaneously satisfied when base r → Chapter 30: Key Takeaways — AI in Criminal Justice Systems
Thin-file problem
The situation of individuals with insufficient credit history for major scoring models to generate a reliable score, or whose score is depressed by limited credit history, disproportionately affecting communities historically excluded from credit markets. → Chapter 11: Key Takeaways
Transparency Notes
Microsoft's version of model cards — provide standardized documentation for AI systems and capabilities that Microsoft makes available to customers and partners. They describe what a system does and does not do, the limitations of its performance, the contexts in which it was designed and tested, an → Case Study 21.1: Microsoft's Responsible AI Journey — Building Governance That Works
True
Deontological analysis holds that mass surveillance without consent violates dignity and autonomy rights regardless of its crime-reduction effects. (Section 3.3) → Chapter 3: Assessment Quiz
True / False
**10.** The Optum algorithm's primary error was that it predicted which patients had the highest healthcare costs rather than which patients had the highest healthcare needs, and healthcare cost is not equally correlated with health need across racial groups. → Quiz: Chapter 12 — Bias in Healthcare AI
Types of mitigation to consider:
**Design mitigations:** Changes to how the system is built — different training data, different features, different output format, different decision thresholds - **Deployment mitigations:** Changes to how the system is used — restricted scope, mandatory human review, minimum confidence thresholds f → Capstone Project 3: Stakeholder Impact Assessment

U

Unacceptable risk
systems that pose such fundamental threats to human rights and democratic values that they are prohibited outright. Prohibited applications include: AI systems that use subliminal techniques to manipulate behavior in ways that cause harm; systems that exploit vulnerabilities of specific groups; soci → Chapter 6: Introduction to AI Governance
Use Differential Privacy when:
You need to publish aggregate statistics or train models on centralized data while providing provable individual privacy protection. - Your dataset is large enough that DP noise does not overwhelm the statistical signal. - You can specify and defend a meaningful epsilon value. - The privacy threat i → Chapter 27: Privacy-Preserving AI Techniques
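A minimal sketch of the core idea for a single counting query, using the Laplace mechanism (the data and epsilon are hypothetical; a production deployment also needs privacy-budget accounting across all released queries):

```python
# Laplace mechanism: a counting query has sensitivity 1, so noise scale = 1 / epsilon.
import numpy as np

def dp_count(values, predicate, epsilon):
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

ages = [34, 29, 61, 45, 52, 38, 70, 23]
# "How many people are over 50?" released with noise calibrated to epsilon = 0.5.
print(dp_count(ages, lambda age: age > 50, epsilon=0.5))
```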
Use Federated Learning when:
Data cannot be centralized due to regulatory requirements, competitive concerns, or user trust obligations. - The data is distributed across many participants (devices, institutions) and would be prohibitively expensive to centralize even if permitted. - You are training a machine learning model and → Chapter 27: Privacy-Preserving AI Techniques
Use Homomorphic Encryption when:
Data must be processed by a third party (for cloud processing, SaaS computation) without the third party accessing unencrypted data. - The computation is well-defined and can be performed within current HE performance limits. - Encrypted inference is the use case (running a model on a user's encrypt → Chapter 27: Privacy-Preserving AI Techniques
Use Secure Multi-Party Computation when:
Multiple parties need to jointly compute a specific function over their combined data without revealing their data to each other. - The computation is specific enough and the parties are few enough that SMPC's computational overhead is manageable. - No single trusted party can receive all participan → Chapter 27: Privacy-Preserving AI Techniques
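A minimal sketch of additive secret sharing, the arithmetic building block behind many SMPC protocols (the parties, counts, and modulus below are illustrative; real protocols add secure channels and protections against dishonest parties):

```python
# Each party splits its private value into random shares that sum to the value mod PRIME;
# only the total across parties is ever reconstructed.
import random

PRIME = 2_147_483_647  # public modulus, chosen for illustration

def share(secret, n_parties):
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

counts = [120, 340, 95]                       # e.g., three hospitals' private patient counts
all_shares = [share(c, 3) for c in counts]    # each hospital distributes one share per party
partial_sums = [sum(col) % PRIME for col in zip(*all_shares)]
print(sum(partial_sums) % PRIME)              # 555, the joint total, with no individual count revealed
```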
Use Synthetic Data when:
The primary use case is development and testing rather than production training. - You need to share data broadly (with partners, vendors, researchers) without privacy risk. - You can pair GAN-based generation with differential privacy to provide formal guarantees. - Healthcare, financial services, → Chapter 27: Privacy-Preserving AI Techniques

V

Validity and scientific foundation:
What does your tool claim to measure? What is the peer-reviewed scientific basis for the claim that this measurement predicts job performance? - Can you provide independent validation studies — conducted by researchers without financial relationship to your company — demonstrating criterion validity → Chapter 10: Bias in Hiring and HR Systems
Variable Power, High Interest:
Academic researchers — limited power but high interest and some ability to shape public discourse and policy through publications - ACLU and civil liberties organizations — limited formal power but ability to litigate, advocate, and generate media coverage - Investigative journalists — significant p → Case Study 4.1: Mapping Stakeholders in a Predictive Policing Deployment
Vocabulary Builder
**Algorithmic bias:** Systematic and unfair differences in how an AI system treats individuals or groups, arising from flaws in data, design, or social context. - **Disparate impact:** The doctrine that a facially neutral practice can be unlawful if it has a disproportionate adverse effect on a prot → Chapter 7: Understanding Algorithmic Bias

W

Watermarking by AI generators
including OpenAI's watermarking of DALL-E outputs and Google DeepMind's SynthID — embeds signals in generated content that allow detection by systems trained to read the watermark. The limitation is that watermarks can be removed or degraded by simple processing steps (compression, resizing, transcr → Case Study 29-2: Deepfakes in the 2024 Election Cycle
What a monitoring plan must include:
**Metrics:** What will be measured? Metrics should be linked to specific potential harms identified in Step 3. Disaggregated data (broken down by demographic group) is essential for detecting disparate impact. - **Data sources:** Where will monitoring data come from? System performance logs, user co → Capstone Project 3: Stakeholder Impact Assessment
What a strong answer includes:
Acknowledgment of the legitimate critique: the proliferation of AI ethics frameworks, principles, and boards without binding enforcement mechanisms is well-documented; critics including Ben Green and Meredith Whittaker have documented "ethics washing" - Evidence that ethics can produce real behavior → Appendix B: Answers to Selected Exercises
What fairness metrics do not tell you:
Whether the disparity is caused by the model or reflects real-world inequality (and whether that distinction matters for your use case) - Whether the disparity is legally actionable under applicable law in your jurisdiction - Whether the disparity is practically significant enough to cause harm at y → Appendix B: Python Reference for AI Fairness Auditing
What Is AI Ethics? Framing the Challenge
**Instructions:** Exercises are organized by difficulty level. Star ratings indicate cognitive demand: → Chapter 1: Exercises
What this appendix covers:
Setting up a Python environment for fairness analysis - Loading and exploring datasets with known demographic disparities - Computing industry-standard fairness metrics using Fairlearn - Detecting proxy discrimination using SHAP (SHapley Additive exPlanations) - Explaining individual decisions using → Appendix B: Python Reference for AI Fairness Auditing
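A minimal sketch of the proxy-detection idea with SHAP on synthetic data (every feature name and the data-generating process below are invented for illustration):

```python
# Synthetic setup: race is excluded from the model, but zip_code is built to track race;
# high SHAP importance for zip_code signals that the model reconstructs race via its proxy.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 1000
race = rng.integers(0, 2, n)                          # protected attribute, held out of the model
zip_code = race * 2 + rng.integers(0, 2, n)           # facially neutral, strongly correlated with race
income = rng.normal(50 + 10 * race, 5, n)
y = (income + 5 * race + rng.normal(0, 5, n) > 55).astype(int)

X = pd.DataFrame({"zip_code": zip_code, "income": income})   # race is not a feature
model = GradientBoostingClassifier().fit(X, y)

explainer = shap.TreeExplainer(model)
values = explainer.shap_values(X)
values = values[1] if isinstance(values, list) else values   # some shap versions return a per-class list

print(pd.Series(np.abs(values).mean(axis=0), index=X.columns, name="mean |SHAP|"))
```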
When to Use:
When the goal is proportional representation in outcomes - Anti-discrimination law contexts where equal selection rates are required - When base rates of the underlying characteristic are similar across groups - When there is reason to believe historical data encodes discrimination that should not b → Appendix G: Quick Reference Cards