Overly aggressive data filtering in AI training can systematically remove content by and about women and minorities, leading to representational erasure in the resulting AI systems.
Rebuttals to Common Fallacies
Safety and representation aren't inherently opposed goals; more sophisticated approaches can address both.
The solution isn't to abandon content filtering, but to develop more nuanced approaches that don't systematically exclude marginalized voices.
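As a toy illustration of that distinction (every term, document, and threshold below is a hypothetical placeholder, not any real pipeline): a blunt blocklist filter drops any document containing an identity term, while a more nuanced filter drops a document only when a separate unsafe-content signal fires, so benign identity-related text survives.

```python
# Hypothetical sketch: blunt identity-term blocklist vs. a filter keyed
# to an unsafe-content signal. All word lists and documents are invented.

BLOCKLIST = {"gay", "muslim", "disability"}   # identity terms, not unsafe by themselves
UNSAFE_TERMS = {"slur1", "slur2"}             # stand-in for a real unsafe-content signal

docs = [
    "a profile of a gay rights activist",          # benign, identity-related
    "muslim cooking traditions in south asia",     # benign, identity-related
    "a weather report for tuesday",                # benign, no identity terms
    "gay slur1 slur1",                             # actually unsafe
]

def blunt_filter(doc: str) -> bool:
    """Keep the document only if it contains no blocklisted identity term."""
    return not (BLOCKLIST & set(doc.split()))

def nuanced_filter(doc: str) -> bool:
    """Drop only when an unsafe signal is present; identity terms alone are fine."""
    return not (UNSAFE_TERMS & set(doc.split()))

kept_blunt = [d for d in docs if blunt_filter(d)]
kept_nuanced = [d for d in docs if nuanced_filter(d)]

# The blunt filter erases both benign identity-related documents along
# with the unsafe one; the nuanced filter removes only the unsafe one.
```

In this toy setup the blunt filter keeps one of four documents while the nuanced filter keeps three, removing only the genuinely unsafe text; real systems would use trained classifiers rather than word lists, but the trade-off is the same.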