Crate #8: AI Ethics – The Hard Questions
Power, bias, and who gets to decide
Prerequisites
Why Ethics in AI Matters NOW
AI is no longer just a research topic. It's making real decisions about real people, right now:
• Deciding who gets a loan and who doesn't
• Predicting who might commit a crime
• Screening job applications
• Recommending how long someone should stay in jail
• Choosing what news you see in your feed
• Diagnosing diseases
• Driving cars
When AI makes a mistake in a chess game, nobody gets hurt. When AI makes a mistake deciding whether someone gets parole, a person's life is affected. The stakes are completely different.
The scariest part? Most people affected by AI decisions don't know AI is involved. Insurance rates, hiring decisions, college admissions, loan approvals: AI might already be part of those decisions, and nobody told the people affected.
Bias: The Mirror Problem
AI learns from human data. Human data contains human biases. Therefore, AI inherits human biases. It's a mirror, not a crystal ball.
REAL EXAMPLES:
• Amazon's hiring AI learned to downgrade resumes with the word "women's" (like "women's chess club") because historically, Amazon hired mostly men.
• Healthcare AI allocated less care to Black patients because it used health spending (not health needs) as a proxy for health. Because Black patients historically had less access to healthcare (and thus spent less), the AI concluded they were healthier. They weren't.
• Facial recognition systems have error rates 10-100x higher for dark-skinned women compared to light-skinned men, because the training data was predominantly light-skinned faces.
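The healthcare example above is worth seeing mechanically. Here is a minimal sketch of the proxy problem in Python; the patients and numbers are invented purely for illustration, and the point is the mechanism, not the data.

```python
# Toy illustration of proxy bias (invented numbers, not real data):
# two patients have the SAME medical need, but one spent less on care
# because of reduced access. Ranking by spending alone demotes that patient.

patients = [
    {"name": "A", "true_need": 8, "past_spending": 9000},  # good access to care
    {"name": "B", "true_need": 8, "past_spending": 4000},  # limited access to care
]

# The proxy logic: "higher past spending means sicker, so prioritize for care"
by_spending = sorted(patients, key=lambda p: p["past_spending"], reverse=True)

print("Care priority under the spending proxy:",
      [p["name"] for p in by_spending])
# Patient B ends up last despite identical need: the proxy,
# not the patient's health, created the gap.
```

Nothing in this code is "biased" in an obvious way; the harm comes entirely from choosing spending as a stand-in for need.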
The fix isn't simple. You can't just "remove bias" like removing a bug from code. Bias is baked into the data, the problem definition, the evaluation metrics, and the deployment decisions. Fighting AI bias requires diverse teams, careful data curation, and constant monitoring.
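One concrete form that "constant monitoring" can take is regularly comparing outcome rates across groups. The sketch below uses the "four-fifths rule," a common first screen for disparate impact; the decision log is made-up example data, not from any real system.

```python
# Sketch of a disparate-impact check using the four-fifths rule.
# The decision log below is invented example data, not a real system's output.

decisions = [
    ("group_x", True), ("group_x", True), ("group_x", True), ("group_x", False),
    ("group_y", True), ("group_y", False), ("group_y", False), ("group_y", False),
]

def approval_rate(group):
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_x = approval_rate("group_x")  # 3 of 4 approved -> 0.75
rate_y = approval_rate("group_y")  # 1 of 4 approved -> 0.25
ratio = min(rate_x, rate_y) / max(rate_x, rate_y)

print(f"approval rates: x={rate_x:.2f}, y={rate_y:.2f}, ratio={ratio:.2f}")
if ratio < 0.8:  # below four-fifths: flag for human investigation
    print("Possible disparate impact; investigate before trusting this system")
```

A passing ratio does not prove fairness; it is one coarse signal among many, which is why monitoring has to be ongoing rather than a one-time check.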
Here's a hard question with no easy answer: Should AI treat everyone identically, even if that leads to unequal outcomes? Or should it account for historical inequalities, which means treating people differently?
Deepfakes, Privacy, and Power
DEEPFAKES: AI can now generate fake videos of real people saying things they never said. The technology is getting cheaper and easier every month. How do you trust what you see when seeing is no longer believing?
PRIVACY: AI needs data, and the most useful data is often personal. Your browsing history, messages, photos, location data: all of it is valuable for training AI. Who owns your data? Who profits from it? Did you consent?
SURVEILLANCE: Some countries use AI-powered cameras to track every citizen's movements. Others use it to analyze social media posts and flag "troublemakers." The same face recognition that unlocks your phone can be used to identify protesters in a crowd.
JOB DISPLACEMENT: AI is automating tasks that used to require humans. This creates new jobs AND destroys old ones. The question isn't whether AI will change work; it will. The question is whether the benefits are shared fairly.
POWER CONCENTRATION: Training cutting-edge AI costs hundreds of millions of dollars. Only a handful of companies (Google, Microsoft, Meta, OpenAI, Anthropic) can afford it. This concentrates enormous power in very few hands. Should AI development be more democratized?
ENVIRONMENTAL IMPACT: Training one large AI model can emit as much carbon as five cars over their entire lifetimes. AI data centers use enormous amounts of water for cooling. The environmental cost is real and often ignored.
Think About It
- If a self-driving car must choose between hitting a pedestrian or swerving into a wall (risking the passenger), who should it protect? Who gets to program that decision?
- Should AI-generated art be copyrightable? Who owns it: the person who wrote the prompt, the company that built the AI, or the artists whose work trained it?
- If an AI system makes a harmful decision (denying someone a loan unfairly), who is responsible? The programmer? The company? The algorithm itself?
Try This
- Look up 3 news stories about AI bias. For each one, identify: What data caused the bias? Who was harmed? How could it have been prevented?
- Write an 'AI Ethics Constitution' โ your 5 rules for how AI should be developed and used. Compare with a friend's rules.
- Find a deepfake detection tool online and test it with some images. How accurate is it? What could go wrong if we rely on these tools?
Go Deeper
Fun Fact
The EU's GDPR (adopted 2016, enforced 2018) effectively gives people the right to 'meaningful information about the logic involved' when AI makes decisions about them. In practice, this is incredibly hard because many AI models are 'black boxes' where even the creators can't fully explain individual decisions. Laws and technology don't always move at the same speed.
Quick Quiz
1. Why does AI inherit human biases?
2. What is a 'deepfake'?
3. When AI makes a harmful decision, who bears responsibility?
