Arvind Narayanan, tutorial at FAT* (2018). A detailed walk-through of different mathematical definitions of fairness and the value judgments each embodies. More technical than this chapter, but presented clearly by one of the field's leading researchers. *Duration: ~60 minutes. Available on YouTube.* → Further Reading: Bias and Fairness
"Art AI Gallery"
An online exhibition space that curates AI-generated and AI-assisted artwork with critical commentary. Useful for seeing a wide range of AI creative outputs and thinking about them as art rather than as technology demonstrations. → Further Reading: AI and Creativity
"Coded Bias"
Directed by Shalini Kantayya (2020). A feature-length documentary following Joy Buolamwini and other researchers and activists fighting algorithmic bias. Available on streaming platforms. *Duration: ~85 minutes.* → Further Reading: Bias and Fairness
"Fairness in Machine Learning" interactive course
Google's Machine Learning Crash Course. A free, interactive module on fairness in ML, including hands-on exercises with fairness metrics. *Duration: ~1–2 hours.* → Further Reading: Bias and Fairness
"In Machines We Trust"
MIT Technology Review podcast series. Multiple episodes exploring AI bias, fairness, and accountability across different domains. *Various episodes, 20–40 minutes each.* → Further Reading: Bias and Fairness
"Lex Fridman Podcast"
Extended conversations with AI researchers and creators. The episodes with artists working with AI tools are particularly relevant to this chapter. → Further Reading: AI and Creativity
"The Coded Gaze: Bias in Artificial Intelligence"
Joy Buolamwini, TEDx talk (2016). A concise and compelling introduction to facial recognition bias, featuring Buolamwini's personal experience and research. Excellent for sharing with people who prefer video to reading. *Duration: ~9 minutes.* → Further Reading: Bias and Fairness
"The Gradient" podcast
Interviews with AI researchers and practitioners, including episodes on AI art, creativity, and the social implications of generative AI. Technical enough to be substantive, accessible enough for non-specialists. → Further Reading: AI and Creativity
10. Personal Reflection
How has your understanding of this system changed over the course? What surprised you most? What question remains unanswered? → Index
6:47 AM
Your phone alarm goes off. It has already recorded that you slept until 6:47, that you were in your apartment (GPS), that the phone was motionless for 7 hours and 23 minutes (accelerometer), and that your heart rate was 62 bpm (smartwatch sync). - **7:12 AM** — You check Instagram. The app records → Chapter 12: Privacy, Surveillance, and AI
9. Synthesis and Recommendations
What are the system's greatest strengths? Its most significant risks? What specific changes would you recommend, and why? Who should implement those changes? → Index
🏃 Fast Track
For readers with some background in AI or technology. Tells you which sections to skim and which exercises to complete to verify your understanding. Lets you move through the book at roughly twice the standard pace. → How to Use This Book
📖 Standard Path
The default. Read every chapter in order, complete the exercises and quizzes, and work on the progressive project. This is the path most undergraduate courses will follow. → How to Use This Book
🔬 Deep Dive
For motivated learners who want more. Points you to advanced case studies, extension exercises, and external resources. Adds roughly 50% more material to each chapter. → How to Use This Book
A
A Taxonomy of Failures:
**Wrong answers** (Type 1): Standard errors within the system's domain - **Hallucinations** (Type 2): Confident fabrications, especially from language models, produced by the structural mechanism of next-token prediction - **Distributional shift** (Type 3): Performance degradation when deployment conditions differ from training conditions → Chapter 8: When AI Gets It Wrong — Errors, Hallucinations, and Failures
a) FACTS Framework:
**F — Function:** The system analyzes user data (purchase history, browsing, social media, location) to predict and recommend products before the user searches for them. It does not "read minds" — it identifies statistical patterns in behavior data. → Quiz — Chapter 1: What Is Artificial Intelligence?
Add to your audit report:
A section on visual processing capabilities (or their absence) - At least two potential failure scenarios involving visual inputs - An assessment of whether the system's visual training data is representative of its deployment context → Chapter 6: Computer Vision — How Machines See the World
unemployment insurance, portable benefits, healthcare not tied to employment — that cushion transitions. - **Demanding transparency** in how algorithmic management systems work and how they affect workers. - **Participating in policy conversations** about AI governance, labor rights, and the future → Chapter 10: AI and Work — Automation, Augmentation, and the Future of Jobs
AI Fairness 360
IBM Research. An open-source toolkit for examining and mitigating bias in machine learning models. Includes tutorials and documentation. For readers interested in seeing how bias mitigation works in practice with code. *Accessibility: Requires some Python knowledge.* → Further Reading: Bias and Fairness
AI governance gap
the mismatch between the global reach of AI systems and the national (or at best, regional) scope of existing governance mechanisms. → Index
AI winter
a period of reduced funding, diminished expectations, and widespread skepticism. The term captures both the coldness of the funding climate and the sense of dormancy. AI hadn't died, but it had been forced to hibernate. → Index
AI-assisted cyberattacks
large language models can be used to generate phishing emails, identify software vulnerabilities, and create malicious code. Again, the system is not misaligned; it is being used for a harmful purpose. → Index
Algorithmic management
our third new concept — refers to the use of AI and automated systems to assign, monitor, evaluate, and discipline workers. If you've ever driven for a rideshare company, delivered food through an app, or worked in an Amazon warehouse, you've experienced algorithmic management firsthand. → Chapter 10: AI and Work — Automation, Augmentation, and the Future of Jobs
Answer rubric (10 points):
Risk classification that goes beyond the EU model (considers context, not just technology) — 2 points - Accountability framework that addresses the accountability gap (specifies responsible parties at each stage) — 2 points - Innovation-protection balance that avoids both "ban everything" and "regul → AI Literacy — Sample Final Exam
AI systems that can select and engage targets without human intervention — raise profound ethical questions. The alignment question here is not whether the weapon works as designed (it might) but whether designing such a weapon is aligned with human values in the first place. → Index
B
backpropagation
a method for training neural networks by adjusting their internal connections based on errors — was refined and popularized in the 1980s, even as expert systems grabbed all the headlines. → Index
Bans and restrictions:
San Francisco, Boston, Minneapolis, New Orleans, Portland (Oregon), and several other U.S. cities have banned government use of facial recognition. - The European Union's AI Act classifies real-time biometric identification in public spaces as "high risk" and imposes significant restrictions, though → Case Study 2: Facing the Camera — Surveillance, Identity, and Consent
Bias enters the AI pipeline at every stage
from problem formulation to data collection, from labeling to model design, from training to deployment. There is no single point where "the bias happens," which means there is no single point where it can be fixed. → Chapter 9: Bias and Fairness — Why AI Can Discriminate
biometric data
data derived from your body. Other forms include fingerprints, iris scans, voiceprints, gait analysis (the way you walk), and even your heartbeat pattern. → Chapter 12: Privacy, Surveillance, and AI
calibration
make sure a "70% risk" means 70% across groups — you will necessarily flag *different proportions* of each group, because the base rates differ. And you may have different false positive rates across groups. → Chapter 9: Bias and Fairness — Why AI Can Discriminate
capabilities and limitations
Prioritizing **research on societal risks** posed by AI, including bias and discrimination - Developing AI systems that help address **society's greatest challenges**, including cancer prevention and climate change → Case Study 13.2: Self-Regulation — When Big Tech Writes Its Own Rules
Capstone 1: Comprehensive AI Audit Report
Complete AI system audit with technical analysis, bias assessment, governance review, and policy recommendations (individual, 15–20 pages) 2. **Capstone 2: AI Policy Brief** — Write a policy brief for a government body on a specific AI issue, including stakeholder analysis, evidence review, and actionable recommendations → AI Literacy: Understanding Artificial Intelligence for Everyone
Center for AI Safety (CAIS)
Research and advocacy on AI existential risk - **Partnership on AI** — Multi-stakeholder organization addressing AI's societal impact - **AI Now Institute** — Research on near-term social impacts of AI, particularly equity and justice - **Center for Humane Technology** — Focuses on realigning technology with humanity's best interests → Further Reading — Chapter 20: AI Safety and Alignment
CityScope Predict
that uses an AI risk assessment tool to inform decisions about which individuals released on bail are likely to miss their court dates. The tool assigns each person a risk score. The question is: what does it mean for this tool to be *fair*? → Chapter 9: Bias and Fairness — Why AI Can Discriminate
clustering
grouping similar items together. A retailer might use clustering to discover natural segments among its customers: frequent buyers who purchase during sales, high-end buyers who purchase full-price items, seasonal shoppers who only appear during holidays. Nobody defined these groups in advance. The → Index
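The retailer example can be sketched with a tiny one-dimensional k-means (an illustrative toy: the `kmeans_1d` helper and the spending figures are invented for this sketch; real projects would typically use a library such as scikit-learn on many features at once):

```python
# Toy 1-D k-means: group customers by annual spend (illustrative only).
def kmeans_1d(values, k, iters=20):
    # Rough initialization: spread starting centers across the sorted data
    centers = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            # Assign each value to its nearest center
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            groups[nearest].append(v)
        # Move each center to the mean of its group
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

# Hypothetical annual spend per customer (dollars)
spend = [40, 55, 60, 480, 510, 520, 1900, 2100]
segments = sorted(kmeans_1d(spend, k=3))
print([round(c) for c in segments])  # three segments emerge: low, mid, high
```

Nobody told the algorithm what the segments were; it discovered the low-, mid-, and high-spend clusters from the numbers alone.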
composite example
based on real technologies and real documented issues, but assembled into a single, coherent scenario for clarity. (We label these Tier 3: illustrative composites.) Think of them as case studies you will get to know deeply over the coming chapters, examining each one from multiple angles as you buil → Index
Concept Budget:
New Concepts: narrow AI vs. general AI; AI as pattern recognition; the AI effect; AI literacy framework - New Terms: artificial intelligence, machine learning, algorithm, neural network, narrow AI, general AI (AGI), automation, training data, model, intelligent agent - New Techniques: "AI or Not?" exercise → AI Literacy: Understanding Artificial Intelligence for Everyone
Conditions for deployment:
Independent bias audit before deployment and annually thereafter - Public disclosure of the algorithm's factors, validation data, and performance metrics disaggregated by race, gender, and age - Mandatory training for judges on algorithmic limitations - A community oversight board with authority to → Quiz: AI and Justice — Criminal Justice, Civil Rights, and Accountability
ContentGuard
A social media content moderation system deciding what speech is allowed on a platform (bias, free speech, scale, automation). Introduced Ch.1, recurs Ch.4, 7, 9, 13, 15, 17, 19, 21. 2. **MedAssist AI** — A hospital deploying an AI diagnostic tool that performs differently across patient demographic groups → AI Literacy: Understanding Artificial Intelligence for Everyone
Courses for Continuing Education:
Elements of AI (elementsofai.com) — Free course from the University of Helsinki - AI for Everyone (Coursera, Andrew Ng) — Non-technical AI course - Ethics of AI (MIT OpenCourseWare) — Free course materials on AI ethics → Further Reading — Chapter 21: The Road Ahead
D
Data provenance
the documented history of where data came from, how it was processed, and who handled it — is increasingly recognized as essential for responsible AI. Just as you might want to know where your food was grown, how it was processed, and what chemicals were used, AI practitioners need to know where their data came from. → Chapter 4: Data — The Fuel That Powers AI (And Its Biggest Weakness)
datasheets for datasets
standardized documentation that accompanies training data, analogous to nutrition labels on food. This concept, proposed by Timnit Gebru and colleagues in a 2021 paper, would require dataset creators to answer a structured set of questions about their data's composition, collection process, and recommended uses → Chapter 4: Data — The Fuel That Powers AI (And Its Biggest Weakness)
debiasing training data
identifying and correcting patterns that reflect discriminatory practices rather than underlying differences in behavior. But debiasing is technically difficult, conceptually contested (what does "unbiased" crime data even look like?), and risks creating a false sense of security ("we fixed the data"). → Index
Deepfakes
AI-generated synthetic media that can convincingly mimic real people's faces, voices, and actions — have been used for fraud, non-consensual intimate imagery, and political disinformation. The AI systems that generate deepfakes are not misaligned; they are doing exactly what their users ask. The saf → Index
demographic parity
flag the same proportion of each group — you will necessarily have different error rates. You will over-flag some individuals in one group and under-flag in another. The risk scores will *not* mean the same thing across groups. → Chapter 9: Bias and Fairness — Why AI Can Discriminate
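A toy calculation (all group sizes and base rates below are hypothetical) shows why enforcing equal flag rates makes a flag mean different things when base rates differ:

```python
# Two groups, same size, different base rates of the predicted outcome.
groups = {
    "A": {"size": 1000, "base_rate": 0.30},  # 300 actual positives
    "B": {"size": 1000, "base_rate": 0.10},  # 100 actual positives
}
flag_rate = 0.20  # demographic parity: flag 20% of each group

precision = {}
for name, g in groups.items():
    flagged = g["size"] * flag_rate           # 200 flags per group
    positives = g["size"] * g["base_rate"]
    correct = min(flagged, positives)         # best case: flags hit positives first
    precision[name] = correct / flagged       # what a flag means in this group

print(precision)  # even in the best case, a flag in group B is right far less often
```

Equal flag rates, yet a flag in group B is correct only half the time even under the most charitable assumption; the scores cannot mean the same thing in both groups.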
digital exhaust
data generated as a byproduct of doing something else. You are not trying to share information; information is simply leaking from your activities like heat from an engine. → Chapter 12: Privacy, Surveillance, and AI
durable frameworks
is perhaps the most important takeaway from this entire course. By the time you read this sentence, some of the specific AI systems, policies, and technical details discussed in this book will be outdated. That is the nature of a rapidly evolving field. But the *frameworks* — the FACTS questions, th → Index
E
Early involvement
before procurement, not after deployment - **Genuine authority** — the power to reject or modify proposals, not just provide input - **Accessible information** — technical documentation translated into plain language - **Diverse representation** — including formerly incarcerated individuals, defense attorneys, and community members → Index
embodied carbon
can be substantial. For some AI workloads, embodied carbon accounts for a significant fraction of total lifecycle emissions, challenging the assumption that the "use phase" is always the dominant source. → Simple AI carbon footprint estimator
Employment and housing status
**Substance use history** - **Social environment factors** (peer criminal involvement, neighborhood characteristics) → Index
engagement
clicks, watch time, shares. But high-engagement content is often sensational, outrageous, or emotionally provocative. A system optimized for engagement will naturally amplify content that pushes emotional buttons, even if that content is misleading, polarizing, or harmful. → Chapter 7: AI Decision-Making — Recommendations, Classifications, and Predictions
Exercises
Multiple difficulty levels, from foundational to research-level - **Quiz** — Self-assessment with detailed answer explanations - **Case Studies** — Two in-depth case studies per chapter - **Key Takeaways** — One-page summary for quick reference - **Further Reading** — Annotated recommendations → How to Use This Book
Expansions:
China has deployed extensive facial recognition infrastructure, including systems integrated with social credit programs and ethnic minority surveillance systems in Xinjiang. - India has rolled out one of the world's largest facial recognition systems for law enforcement and government services. - T → Case Study 2: Facing the Camera — Surveillance, Identity, and Consent
explainability
the ability to explain *why* an AI system reached a particular conclusion. If a physician tells you that your chest X-ray shows a suspicious nodule, you can ask: "What made you think that?" The physician can point to the image, describe the features they noticed, and explain their reasoning. You may → Index
F
Face detection
Finding faces in an image. This is the technology that draws a box around faces in your camera's viewfinder. It's relatively straightforward and is embedded in nearly every smartphone and digital camera. → Chapter 6: Computer Vision — How Machines See the World
not a word-for-word script (that leads to stilted delivery), but key points, transitions, and specific examples to use. - **Activity instructions** — detailed enough that another facilitator could run the activity from your description alone. - **Anticipated questions and responses** — what will the → Capstone Project 3: AI Literacy Workshop Design
FACTS Framework
five questions to ask whenever you encounter an AI system or an AI claim. → Index
FACTS Framework (from Chapter 1):
**F — Function:** MedAssist performs image classification — it identifies potential abnormalities in medical images. It does not diagnose diseases, recommend treatments, or interact with patients. - **A — Accuracy:** High in controlled settings; lower in real-world deployment, particularly for underrepresented groups → Index
Feature extraction
Measuring the unique geometry of each face: the distance between the eyes, the shape of the jawline, the width of the nose, the depth of the eye sockets. Modern systems convert these measurements into a mathematical representation called a **face embedding** — a string of numbers that serves as a compact signature of that face → Chapter 6: Computer Vision — How Machines See the World
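Matching then reduces to comparing embeddings. A sketch with made-up four-number embeddings (real systems use hundreds of dimensions, and thresholds are tuned per system):

```python
import math

def cosine_similarity(a, b):
    # Similarity of two embeddings: near 1.0 = same direction (likely same face)
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norms

enrolled = [0.9, 0.1, 0.4, 0.2]         # embedding stored at enrollment
same_person = [0.88, 0.12, 0.41, 0.19]  # new photo, same face -> nearby vector
other_person = [0.1, 0.9, 0.2, 0.5]     # different face -> distant vector

threshold = 0.8  # trades false matches against false non-matches
print(cosine_similarity(enrolled, same_person) > threshold)   # True
print(cosine_similarity(enrolled, other_person) > threshold)  # False
```

Where the threshold is set determines the error trade-off: lower it and you get more false matches; raise it and you miss more genuine ones.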
the AI's predictions become self-fulfilling prophecies. The data bias is not a one-time problem; it is an ongoing engine that amplifies existing disparities. We will explore feedback loops in much greater depth in Chapter 7 (AI Decision-Making) and Chapter 9 (Bias and Fairness), but the important th → Chapter 4: Data — The Fuel That Powers AI (And Its Biggest Weakness)
For AI-generated code:
Run the code and test it against known inputs and expected outputs - Review the logic — does it actually do what it claims to do? - Check for common error patterns: off-by-one errors, edge cases, unhandled exceptions - Use linting tools and static analysis to catch structural issues → Chapter 8: When AI Gets It Wrong — Errors, Hallucinations, and Failures
For AI-generated images:
Look for telltale artifacts: extra fingers on hands, inconsistent text, impossible physics - Check metadata — AI-generated images may lack camera metadata that real photos have - Use reverse image search to check if the image is a manipulation of a real photo - Consider context — is this image being → Chapter 8: When AI Gets It Wrong — Errors, Hallucinations, and Failures
For different emphases:
*Policy focus*: Spend extra time on Chapters 9, 12, 13, 17, and 19. Add a policy brief assignment. - *Technical focus*: Assign all optional Python exercises. Add hands-on labs using the code examples. - *Ethics focus*: Expand discussion time for Chapters 9, 11, 12, 17, and 20. Replace one homework w → AI Literacy: Understanding Artificial Intelligence for Everyone
From Chapter 4 (Data):
How does the concept of "data is never neutral" apply to the surveillance data collected by systems like CityScope Predict? What biases might be encoded in police arrest records used as training data? → Chapter 12: Privacy, Surveillance, and AI
From Chapter 6 (Computer Vision):
We learned that computer vision systems identify patterns in images. How does understanding how these systems work technically help you evaluate claims about facial recognition accuracy? → Chapter 12: Privacy, Surveillance, and AI
From Chapter 7 (AI Decision-Making):
We learned that AI decisions are probability estimates, not truths. How does this insight change how you think about the EU AI Act's requirement for "human oversight" of high-risk AI systems? What should "human oversight" actually look like in practice? → Chapter 13: Governing AI — Policy, Regulation, and Global Approaches
From Chapter 9 (Bias and Fairness):
In Chapter 9, we discussed how different definitions of fairness can conflict. Apply this to facial recognition: a system might be "accurate on average" while being much less accurate for certain demographic groups. Which definition of fairness should apply — overall accuracy or equal accuracy across groups? → Chapter 12: Privacy, Surveillance, and AI
Full Citations:
Russakovsky, O., Deng, J., Su, H., et al. (2015). ImageNet large scale visual recognition challenge. *International Journal of Computer Vision*, 115(3), 211–252. - Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. *Advances in Neural Information Processing Systems* → Appendix B: Key Studies Summary — Landmark Research Referenced in This Book
fundamental rights
such as freedom of expression — there is an argument that it should be treated as high risk even if not explicitly listed. The Act includes provisions for the Commission to add new high-risk categories as needed. → Case Study 13.1: The EU AI Act in Practice — Classifying Risk
G
Generation
Data is created through human activity, sensors, transactions, or deliberate collection. 2. **Collection** — Someone decides what data to gather, from whom, and how. 3. **Storage** — Data is organized, formatted, and housed somewhere. 4. **Curation** — Data is cleaned, selected, and prepared for use → Chapter 4: Data — The Fuel That Powers AI (And Its Biggest Weakness)
historical bias
the training data accurately reflects a history of gender-imbalanced hiring, and the AI learns to replicate that pattern. It may also involve **selection bias** if the company's applicant pool was itself not representative of the broader population. → Chapter 4 Quiz
Holly Herndon and Mat Dryhurst
Musicians who have developed tools and frameworks for ethical AI music creation, including the Spawning.ai platform that gives creators control over how their work is used in AI training. - **Refik Anadol** — A media artist who uses AI to create large-scale installations that have been exhibited in → Further Reading: AI and Creativity
I
If your system does not use computer vision:
Could computer vision be added to your system? Would it help or create new risks? - Does your system process any sensory data (audio, text, sensor readings)? How does the pixel-to-meaning gap in computer vision parallel challenges in your system? → Chapter 6: Computer Vision — How Machines See the World
If your system processes visual information:
What types of visual tasks does it perform? (Classification, detection, segmentation, facial recognition, other?) - What training data was likely used? Consider the demographics, contexts, and conditions represented. - Can you identify potential edge cases — situations where the visual processing might fail? → Chapter 6: Computer Vision — How Machines See the World
Inference
using trained models — is cheap per query but collectively enormous, and grows with every new user and application. For widely deployed models, inference often dominates total energy consumption. - **Data centers** globally consume approximately 1–1.5% of world electricity, with AI-driven demand growing rapidly → Key Takeaways: AI and the Environment — Climate, Resources, and Sustainability
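The "cheap per query, enormous collectively" point is easy to sanity-check with back-of-envelope arithmetic (both figures below are invented assumptions, not measurements of any real system):

```python
# Hypothetical service: modest energy per query, very large query volume.
energy_per_query_wh = 0.3        # assumed watt-hours per query
queries_per_day = 100_000_000    # assumed daily query volume

daily_kwh = energy_per_query_wh * queries_per_day / 1000
yearly_mwh = daily_kwh * 365 / 1000
print(f"{daily_kwh:,.0f} kWh per day, {yearly_mwh:,.0f} MWh per year")
# A fraction of a watt-hour per query adds up to tens of thousands of kWh daily.
```

The per-query number is trivial; the volume is what turns it into a utility-scale load.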
Intervention strategy:
Establish a classroom norm early: when someone (including the instructor) uses anthropomorphic language, anyone can gently flag it by saying "pattern or understanding?" This creates a shared, low-friction correction mechanism. - When a student says "the AI understands X," ask: "Can you rephrase that → Common Student Struggles and Intervention Strategies
K
kernel
a small grid of weights (a filter) that slides across the image, checking whether a particular pattern is present at each location. One filter might look for vertical edges. Another might look for horizontal edges. Another might look for diagonal lines. These filters aren't designed by humans; the network *learns* what patterns to look for during training → Chapter 6: Computer Vision — How Machines See the World
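The sliding-filter idea fits in a few lines (pure Python for clarity; the vertical-edge kernel below is a classic hand-designed example, whereas a CNN would learn its filter values during training):

```python
def convolve(image, kernel):
    # Slide the kernel across the image; record the response at each position
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(len(image[0]) - kw + 1)]
            for i in range(len(image) - kh + 1)]

# Responds to dark-to-bright transitions from left to right
vertical_edge = [[-1, 0, 1],
                 [-1, 0, 1],
                 [-1, 0, 1]]

# Tiny grayscale image: dark left region (0), bright right region (9)
image = [[0, 0, 0, 9, 9] for _ in range(5)]
for row in convolve(image, vertical_edge):
    print(row)  # zeros in flat regions, strong responses where the edge sits
```

The output is large exactly where the dark-to-bright boundary falls and zero elsewhere — the filter has "found" the vertical edge.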
Key concepts from this chapter:
**Pixels and features:** Images are grids of numbers; vision systems learn to extract hierarchical features (edges, textures, shapes, objects) from those grids - **Convolutional neural networks (CNNs):** Architectures that learn visual patterns through layers of filters, from simple features to complex objects → Chapter 6: Computer Vision — How Machines See the World
Key Insights:
The **accuracy-interpretability trade-off** means that the most accurate systems are often the hardest to understand, creating tension in contexts where explanations are needed. - **Feedback loops** occur when AI decisions influence the data used to train or evaluate the system, potentially amplifying existing disparities → Chapter 7: AI Decision-Making — Recommendations, Classifications, and Predictions
Key Sections:
1.1 AI Is Everywhere (And You Might Not Notice) - 1.2 What Do We Mean by "Intelligence"? - 1.3 Narrow AI vs. General AI: Managing Expectations - 1.4 The AI Effect: Why We Keep Moving the Goalposts - 1.5 Introducing Our Four AI Systems (anchor examples) - 1.6 Your AI Literacy Toolkit: A Framework for → AI Literacy: Understanding Artificial Intelligence for Everyone
Knowing what AI is good at
pattern recognition in large datasets, consistency, speed, tireless attention. - **Knowing what AI is bad at** — contextual judgment, ethical reasoning, handling genuinely novel situations, common sense. - **Knowing where you add value** — the tasks where your judgment, creativity, empathy, or contextual knowledge matters most → Chapter 10: AI and Work — Automation, Augmentation, and the Future of Jobs
knowledge engineering
a painstaking process where a specialist (the "knowledge engineer") would interview domain experts, extract their rules and heuristics, and encode them into the system. This process was laborious but systematic. → Index
L
Layer 1: Technical bias
Errors in data, models, or metrics > - *Addressable by:* Data auditing, fairness constraints, performance disaggregation > > **Layer 2: Institutional bias** — Organizational practices and incentives that produce biased systems > - *Addressable by:* Diverse teams, bias audits, regulatory requirements → Chapter 9: Bias and Fairness — Why AI Can Discriminate
life cycle assessment (LCA)
a method for evaluating the environmental effects of a product or system across its entire lifespan, from raw material extraction through manufacturing, use, and disposal. → Simple AI carbon footprint estimator
metadata
data about data. The content of your text message is data. The metadata is everything else: who you texted, when, how often, from where, and for how long. → Chapter 12: Privacy, Surveillance, and AI
multimodal AI
systems that can process and generate across multiple modalities simultaneously. A multimodal system can read a document, look at an image, listen to audio, watch a video, and reason about all of them together. → Index
N
Newsletters and Digests:
*Import AI* by Jack Clark — Weekly newsletter covering AI research, policy, and industry - *The Algorithm* by MIT Technology Review — Accessible AI news from a trusted source - *AI Wonk* by the OECD AI Policy Observatory — Policy-focused coverage of global AI governance → Further Reading — Chapter 21: The Road Ahead
O
OECD AI Policy Observatory
Look up the AI policy landscape for specific countries where your system operates - **EU AI Act text** — Determine how your system would be classified under European regulation - **Access Now reports** — Research civil society perspectives on AI deployment in specific regions - **Stanford HAI AI Index** → Further Reading — Chapter 19: Global Perspectives on AI
Opening quote and overview
What you'll learn and why it matters 2. **Main content** (5–7 sections) — With retrieval practice prompts, worked examples, and content blocks 3. **Project Checkpoint** — How to apply this chapter to your AI Audit Report 4. **Chapter Summary** — Key concepts, debates, and frameworks 5. **Spaced Review** → How to Use This Book
Organizations to Follow:
Stanford HAI (Human-Centered AI) — hai.stanford.edu - AI Now Institute — ainowinstitute.org - Partnership on AI — partnershiponai.org - OECD AI Policy Observatory — oecd.ai - Electronic Frontier Foundation (EFF) — eff.org (on AI and civil liberties) - Algorithm Watch — algorithmwatch.org (European perspective) → Further Reading — Chapter 21: The Road Ahead
P
parameters
adjustable values that determine how the model transforms inputs into predictions. During training, these parameters are tuned so that the model's predictions increasingly match the actual labels. The model that emerges from this process — with all its parameters set to specific values — is what we call the trained model → Index
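That tuning can be sketched with a single parameter (an illustrative toy; real models adjust millions or billions of parameters by the same basic nudging process):

```python
# Labeled examples where the "right" relationship is y = 2x
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0              # the model's single adjustable parameter
learning_rate = 0.05
for _ in range(200):
    # Gradient of mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad   # nudge w so predictions better match labels

print(round(w, 3))  # the parameter settles near 2.0, the value that fits the labels
```

Nothing here "understands" the relationship; repeated small corrections simply push the parameter toward the value that minimizes the error on the labeled data.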
Part I: What Is AI?
Foundations, history, how machines learn - **Part II: How AI Works** — Data, LLMs, computer vision - **Part III: AI in Action** — Decision-making, failures - **Part IV: AI and Society** — Bias, work, creativity, privacy - **Part V: Living with AI** — Governance, practical skills, healthcare, education → AI Literacy: Understanding Artificial Intelligence for Everyone
Podcasts:
*Hard Fork* (The New York Times) — Technology and society, frequently covers AI - *Eye on AI* — In-depth interviews with AI researchers and policymakers - *Your Undivided Attention* (Center for Humane Technology) — Technology's impact on attention, democracy, and well-being → Further Reading — Chapter 21: The Road Ahead
Position A: A binding international AI treaty
Similar to arms control agreements, a binding treaty would establish universal rules for AI development and deployment. > - *Strength:* Provides clear, enforceable standards applicable everywhere. > - *Challenge:* History shows that binding technology treaties are extremely difficult to negotiate, a → Index
post-market surveillance
the systematic monitoring of how AI systems perform after they are deployed in real clinical settings. A system that performed well in clinical trials may perform differently when used by different clinicians, on different patient populations, with different equipment, in different workflows. → Index
power
not just their stated positions - Notes where stakeholder interests align and where they conflict - Acknowledges legitimate concerns on all sides, even those you ultimately argue against → Capstone Project 2: AI Policy Brief
power usage effectiveness (PUE)
the ratio of total facility energy to the energy used for actual computing. A PUE of 1.0 would mean all energy goes to computation; a PUE of 2.0 means half the energy is used for cooling, lighting, and other overhead. State-of-the-art data centers achieve PUE values around 1.1–1.2. Older facilities → Simple AI carbon footprint estimator
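The arithmetic is simple enough to check directly (the compute figure below is an arbitrary assumption):

```python
compute_kwh = 1_000_000  # assumed energy that reaches the computing hardware

overhead_share = {}
for pue in (1.1, 1.5, 2.0):
    total = compute_kwh * pue            # total facility energy
    overhead = total - compute_kwh       # cooling, lighting, power conversion
    overhead_share[pue] = overhead / total
    print(f"PUE {pue}: {overhead:,.0f} kWh of overhead "
          f"({overhead / total:.0%} of the total)")
```

At PUE 2.0, half of every kilowatt-hour the facility draws is overhead; at a state-of-the-art 1.1, under a tenth is.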
predictive policing
the use of algorithms to forecast where and when crimes will occur, and sometimes to predict who will commit them. The concept gained traction in the early 2010s, with companies like PredPol (now Geolitica) and Palantir offering tools to police departments across the United States and beyond. → Index
Priya
our recurring student character — inhabits. When Priya sits down to write a paper and considers using an AI writing tool, she's interacting with the product of this specific historical moment: decades of failed approaches, two AI winters, the data explosion, the deep learning revolution, and the transformer era → Index
provenance
establishing a verified chain of custody for media. Technologies like C2PA (Coalition for Content Provenance and Authenticity) aim to embed cryptographic signatures in photos and videos at the moment of capture, creating a tamper-evident record of where media came from and whether it's been modified → Chapter 6: Computer Vision — How Machines See the World
judges decide alone, bringing their expertise and their biases > 2. **Algorithm replaces judge** — the risk score determines the outcome automatically > 3. **Algorithm informs judge** — the judge sees the risk score as one input among many > > Most current deployments use Scenario 3. But even in Scenario 3 → Index
R
Receive feedback
a reward (something good happened) or a penalty (something bad happened)
4. **Update your strategy** based on this experience
5. **Repeat** — thousands, millions, or billions of times → Index
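The feedback-and-update cycle these steps describe can be sketched as a tiny bandit-style learning loop. This is an illustrative toy, not any system from the book; the exploration rate and the hidden reward probabilities are arbitrary assumptions:

```python
import random

random.seed(0)

# Two possible actions with hidden reward probabilities (arbitrary choices).
true_reward_prob = [0.3, 0.7]
values = [0.0, 0.0]   # current estimate of each action's value
counts = [0, 0]       # how often each action has been tried

for step in range(5000):                      # 5. Repeat, many times
    # Take an action: explore 10% of the time, otherwise exploit
    # the action that currently looks best.
    if random.random() < 0.1:
        action = random.randrange(2)
    else:
        action = 0 if values[0] >= values[1] else 1
    # 3. Receive feedback: reward 1 (good) or 0 (nothing good happened).
    reward = 1.0 if random.random() < true_reward_prob[action] else 0.0
    # 4. Update your strategy based on this experience (running average).
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]

# After many repetitions the estimates approach the true probabilities.
print([round(v, 2) for v in values])
```

Nothing here "understands" the actions; the strategy improves purely from repeated feedback, which is the point of the entry.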
- **Tools Built by Humans:** AI decision systems embed the assumptions, values, and biases of their designers and their training data.
- **Who Benefits, Who Is Harmed:** The same system can benefit some users while harming others — and the harms often fall on already-marginalized communities.
- **Huma → Chapter 7: AI Decision-Making — Recommendations, Classifications, and Predictions
regulatory capture
a situation where the regulated industry gains undue influence over the regulators, shaping rules in ways that serve industry interests rather than public interests. We will return to this concept in Section 13.5. → Chapter 13: Governing AI — Policy, Regulation, and Global Approaches
Respect the citation honesty system:
- Tier 1: Only for sources you can verify exist
- Tier 2: Attributed but unverified claims — use "Research suggests..." framing
- Tier 3: Clearly labeled illustrative examples
4. **Maintain voice consistency** with the existing chapters
5. **Test code examples** if modifying any Python code
6. **Submi → Contributing to AI Literacy: Understanding Artificial Intelligence for Everyone
right to explanation
the right to receive a meaningful, human-understandable account of how an algorithmic decision was reached. The European Union's General Data Protection Regulation (GDPR) includes a version of this right for automated decisions. The United States does not currently have a federal equivalent, though → Index
risk assessment instruments (RAIs)
are used at multiple points in the criminal justice process: → Index
role prompting
asking the model to adopt a specific persona or expertise. → Index
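A role prompt is simply a persona instruction placed before the actual task. A minimal sketch using the common system/user chat-message convention; the helper name and message layout here are illustrative assumptions, not any specific vendor's API:

```python
# Role prompting: prepend a persona instruction to the user's request.
def build_role_prompt(role, task):
    return [
        {"role": "system", "content": f"You are {role}. Answer in that capacity."},
        {"role": "user", "content": task},
    ]

messages = build_role_prompt(
    "an experienced high-school biology teacher",
    "Explain photosynthesis to a ninth grader.",
)
print(messages[0]["content"])
```

The persona shapes tone, vocabulary, and assumed expertise without changing the underlying task.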
runaway feedback loop
a cycle in which an AI system's outputs become its own future inputs, reinforcing and amplifying the pattern that existed in the original data. In the policing context, the loop works like this: → Index
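The loop this entry describes can be made concrete with a toy simulation: patrols go where past records point, patrols generate new records, and those records drive the next allocation. All numbers below are hypothetical assumptions, chosen only to make the dynamic visible:

```python
# Toy model: two districts with IDENTICAL true crime rates, but the
# historical records start slightly skewed.
true_rate = [10, 10]   # actual incidents per day in each district (equal)
recorded = [12, 8]     # historical records begin with a small skew

for day in range(30):
    # Greedy allocation: patrol the district with more recorded crime.
    target = 0 if recorded[0] >= recorded[1] else 1
    # Officers only record what they are present to observe.
    recorded[target] += true_rate[target]

print(recorded)  # [312, 8]: the initial skew is amplified, not corrected
```

The true rates never differ, yet the records diverge dramatically because the system's outputs (patrol locations) become its own future inputs (crime records).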
S
Scoring Guide:
- Section 1 (Multiple Choice): 2 points each (20 points total)
- Section 2 (True/False with Justification): 3 points each (15 points total)
- Section 3 (Short Answer): 5 points each (20 points total)
- Section 4 (Applied Scenario): 10 points each (10 points total)
- **Total: 65 points** → Quiz — Chapter 1: What Is Artificial Intelligence?
superintelligence
intelligence that substantially surpasses the best human minds in virtually every domain. → Index
T
Technical
Addressable through data and model fixes
2. **Institutional** — Addressable through organizational practices and requirements
3. **Structural** — Addressable through policy reform and societal change → Key Takeaways: Bias and Fairness
techno-nationalism
the idea that a nation's technological capabilities are directly tied to its economic competitiveness, military power, and geopolitical influence. Techno-nationalism is not new (think of the Space Race), but AI has intensified it because AI is a "dual-use" technology: the same techniques that power → Index
the authorship gradient
the recognition that authorship in AI-assisted creation exists on a spectrum from "AI did everything" to "the human did everything, using AI as a minor tool." Rather than drawing a sharp line between "author" and "not author," it's more useful to locate any specific case on this gradient and evaluat → Chapter 11: AI and Creativity — Art, Music, Writing, and the Question of Authorship
The hype cycle
overpromise, underdeliver, overcorrect — has repeated in every era. Recognizing this pattern is itself a form of AI literacy. → Index
The recurring themes in action:
- **Tools built by humans:** Vision systems trained on biased data reproduce and amplify those biases. MedAssist AI worked better for some patients than others because of who was in the training set.
- **Capability vs. understanding:** CNNs can classify images with superhuman accuracy on benchmarks, b → Chapter 6: Computer Vision — How Machines See the World
The Three Modes:
- **Recommendation** systems suggest items, content, or actions based on user preferences and behavior. They shape what people see, want, and consume — and they're typically optimized for engagement, not wellbeing.
- **Classification** systems sort inputs into predefined categories. They face inherent → Chapter 7: AI Decision-Making — Recommendations, Classifications, and Predictions
AI systems carry human biases, incentives, and blind spots
2. **Capability vs. Understanding** — What AI can do vs. what AI "knows"
3. **Who Benefits, Who Is Harmed** — Power and equity analysis of AI systems
4. **Human in the Loop** — AI as augmentation, not replacement
5. **AI Literacy as Civic Sk → AI Literacy: Understanding Artificial Intelligence for Everyone
transformer
a neural network architecture, introduced in 2017 and built around the attention mechanism, that would reshape the entire AI landscape within just a few years. → Index
not because it is casual, but because it gives you a systematic way to assess whether an output should be trusted. → Index
W
We identified four major types of bias:
- **Historical bias:** The training data accurately reflects an unequal world
- **Representation bias:** Some groups are underrepresented or missing from the data
- **Measurement bias:** The thing being measured is a poor proxy for the thing that matters
- **Aggregation bias:** One-size-fits-all model → Chapter 9: Bias and Fairness — Why AI Can Discriminate
Week 5: Revision
- Read the report as if you are the decision-maker receiving it. Is it clear? Is it actionable?
- Ask a peer to read the executive summary and tell you what they understood. If they cannot summarize your findings, revise. → Capstone Project 1: Comprehensive AI Audit Report
Weeks 1–2: Planning
- Gather all 21 progressive project components.
- Identify gaps — which sections need the most new research?
- Create a detailed outline mapping your existing material to the report structure. → Capstone Project 1: Comprehensive AI Audit Report
Weeks 3–4: Drafting
- Write new sections; revise existing material to fit the report's voice and structure.
- Ensure every claim is supported by evidence (research, documentation, news reports, your own analysis). → Capstone Project 1: Comprehensive AI Audit Report
What the headline obscures:
**Specificity.** The AI did not achieve superhuman performance in "medical diagnosis" broadly. It achieved it on one specific task (e.g., detecting a particular type of cancer in a particular type of image). The headline collapses a narrow achievement into a sweeping claim. → Case Study 2: AI in the Headlines — Separating Signal from Noise
What this means in practice:
- RecruitSmart's developer must implement a **risk management system** — a continuous process of identifying, evaluating, and mitigating risks throughout the system's lifecycle.
- The system must be trained on **high-quality data** that is relevant, representative, and as free from errors as possible. → Case Study 13.1: The EU AI Act in Practice — Classifying Risk
- How computers turn images into numbers they can process
- How convolutional neural networks learn to recognize visual patterns
- Where computer vision shows up in your daily life (hint: more than you think)
- Why vision systems fail in ways that surprise us
- Why facial recognition is one of the mos → Chapter 6: Computer Vision — How Machines See the World
Who benefits from RateCalc:
- The insurance company benefits from more granular risk assessment, which can improve profitability.
- Some customers benefit from lower premiums if the model identifies them as lower-risk based on non-driving factors.
- Shareholders benefit from reduced claims costs. → Case Study 1: The Recommendation Engine You Didn't Know About
Who may be harmed:
- Customers from lower-income communities, who are more likely to have lower credit scores, may pay higher premiums despite being safe drivers.
- Customers in predominantly minority zip codes may face higher rates due to geographic correlations in historical data.
- Anyone whose life circumstances — m → Case Study 1: The Recommendation Engine You Didn't Know About
Why action is needed now
what makes this urgent?
- **Your recommendation** in one to two sentences
- **Key supporting evidence** — two to three bullet points → Capstone Project 2: AI Policy Brief
Write an initial profile (200–300 words):
- What is the system called?
- What company or organization created it?
- What task does it perform?
- Who uses it?
- Who is affected by its outputs? → Index