Computer Vision Interview Questions #20 - The Low-Contrast Bias Trap
Why perfectly balanced face datasets still fail - and how sensor physics silently breaks feature learning.
You’re in a Senior Computer Vision interview at OpenAI. The VP of Engineering sets a trap:
“Our production FaceID model has a 12% higher error rate on darker skin tones. We audited the training data and it is perfectly balanced (50/50 split). We retrained from scratch. The error persists. Why?”
90% of candidates walk right into the trap.
Most candidates say: “You need to oversample the underrepresented class,”
or “You need better data augmentation.”
They assume bias is strictly a sampling problem.
The interviewer sighs and ends the call. Why? Because the candidates tried to solve a physics problem with a CSV file.
The reality is that computer vision models inherit the same bugs as the human eye. Remember 𝐀𝐝𝐞𝐥𝐬𝐨𝐧’𝐬 𝐂𝐡𝐞𝐜𝐤𝐞𝐫𝐛𝐨𝐚𝐫𝐝 𝐈𝐥𝐥𝐮𝐬𝐢𝐨𝐧? (The one where Square A and Square B are the same color, but your brain refuses to believe it because of the shadow).
Your brain, like your ConvNet, uses Luminance Priors to guess geometry: it assumes “Darker = Shadow.”
On lighter skin, shadows create high-contrast gradients that make feature detection (nose, jawline) easy. On darker skin, light absorption reduces that dynamic range. The “shadow” and the “skin” pixel values get closer together.
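You can watch this happen numerically. Below is a minimal sketch in pure NumPy (the patch values and the 0.35 albedo factor are illustrative assumptions, not numbers from a real audit) showing how scaling a surface toward black shrinks exactly the gradients an edge detector feeds on:

```python
import numpy as np

def mean_gradient_magnitude(img: np.ndarray) -> float:
    """Mean Sobel gradient magnitude: a proxy for how much edge signal
    an early conv layer has to work with."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    mags = []
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            mags.append(np.hypot((patch * kx).sum(), (patch * ky).sum()))
    return float(np.mean(mags))

# Synthetic "jawline": a 60-level luminance step on a bright patch.
patch = np.full((64, 64), 180.0)
patch[:, 32:] = 120.0

# Lower albedo scales every pixel toward black; the same shadow geometry
# now produces a far smaller step (illustrative 0.35 scaling).
dark_patch = patch * 0.35

print(f"edge signal, high albedo: {mean_gradient_magnitude(patch):.1f}")
print(f"edge signal, low albedo:  {mean_gradient_magnitude(dark_patch):.1f}")
# Same geometry, ~3x less gradient for the feature detector to latch onto.
```

The geometry never changed. Only the contrast did, and the gradient signal shrank with it.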
The model isn’t “biased” against the demographic; it is biased against Low-Contrast Physics.
The Senior Engineer identifies 𝐓𝐡𝐞 𝐂𝐡𝐞𝐜𝐤𝐞𝐫𝐛𝐨𝐚𝐫𝐝 𝐄𝐟𝐟𝐞𝐜𝐭.
1. 𝐓𝐡𝐞 𝐏𝐡𝐲𝐬𝐢𝐜𝐬 𝐁𝐨𝐭𝐭𝐥𝐞𝐧𝐞𝐜𝐤:
Darker surfaces absorb more light and reflect fewer photons back to the sensor. Photon arrival is governed by shot noise, so the Signal-to-Noise Ratio (SNR) grows only with the square root of the photon count: fewer photons means a noisier signal, full stop. No amount of “balanced data” fixes a sensor that can’t see the gradient.
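The square-root claim is easy to sanity-check with a short Poisson simulation (the per-pixel photon counts below are assumed for illustration, not measured sensor values):

```python
import numpy as np

rng = np.random.default_rng(42)

def measured_snr(mean_photons: float, n_samples: int = 100_000) -> float:
    """Photon arrivals are Poisson-distributed, so noise std = sqrt(signal)
    and SNR = mean / std = sqrt(mean_photons)."""
    samples = rng.poisson(mean_photons, size=n_samples)
    return float(samples.mean() / samples.std())

# Illustrative photon counts: the high-albedo surface reflects ~4x
# the photons of the low-albedo one.
print(f"SNR, bright surface: {measured_snr(4000):.1f}")  # ~sqrt(4000) = 63.2
print(f"SNR, dark surface:   {measured_snr(1000):.1f}")  # ~sqrt(1000) = 31.6
# 4x fewer photons -> half the SNR, before any model ever sees the frame.
```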
2. 𝐓𝐡𝐞 𝐅𝐞𝐚𝐭𝐮𝐫𝐞 𝐂𝐨𝐥𝐥𝐚𝐩𝐬𝐞:
When the contrast gradient sinks toward the sensor’s noise floor, the edge activations that separate one jawline from another sink with it. Different identities start producing near-identical early-layer features, so their embeddings drift toward the same point in feature space. Resampling the training CSV cannot restore signal the sensor never captured.
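Here is a toy demonstration of that collapse (a sketch under assumed numbers, not a production pipeline): two “identities” are just edges at different positions, and a crude difference-based edge map stands in for early conv activations. As albedo drops, fixed sensor noise swamps the shrinking edge signal, and the similarity gap that separates “same face” from “different face” closes:

```python
import numpy as np

rng = np.random.default_rng(7)

def edge_features(img: np.ndarray, read_noise: float = 8.0) -> np.ndarray:
    """Horizontal-difference edge map after additive sensor noise, flattened
    into a crude stand-in for early conv activations.
    (Gaussian read noise for simplicity; real sensors add shot noise too.)"""
    noisy = img + rng.normal(0.0, read_noise, img.shape)
    return np.diff(noisy, axis=1).ravel()

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Two *different* synthetic identities: the same 60-level edge,
# but at different columns (think jawlines in different places).
face_a = np.full((64, 64), 180.0)
face_a[:, 20:] -= 60.0
face_b = np.full((64, 64), 180.0)
face_b[:, 44:] -= 60.0

for albedo in (1.0, 0.35):  # illustrative high vs. low albedo
    same = cosine(edge_features(face_a * albedo), edge_features(face_a * albedo))
    diff = cosine(edge_features(face_a * albedo), edge_features(face_b * albedo))
    print(f"albedo={albedo}: same-identity sim={same:.2f}, "
          f"different-identity sim={diff:.2f}")
# High albedo: a face matches itself far better than it matches a different
# face. Low albedo: same-identity similarity sinks toward the
# different-identity baseline -- the discriminative features have collapsed.
```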