Computer Vision Interview Questions #8 – The Zero-Padding Distribution Trap
Why injecting zeros at image borders silently breaks translation equivariance and corrupts edge statistics.
You’re in a Senior Computer Vision Interview at OpenAI and the interviewer asks:
“We use Zero-Padding to maintain feature map dimensions (e.g., 32x32). But from a signal processing perspective, why is injecting zeros at the borders dangerous for your model’s statistical distribution?”
Most candidates say: “It’s just a convenience so the output size matches the input size. It helps us stack deep layers.”
That is the Wikipedia definition. It misses the mathematical reality of what zero-padding actually does to the data.
The reality is that 𝐙𝐞𝐫𝐨 𝐢𝐬 𝐧𝐨𝐭 𝐍𝐨𝐭𝐡𝐢𝐧𝐠. In statistics and signal processing, 𝘡𝘦𝘳𝘰 𝘪𝘴 𝘢 𝘷𝘦𝘳𝘺 𝘴𝘱𝘦𝘤𝘪𝘧𝘪𝘤, 𝘴𝘵𝘳𝘰𝘯𝘨 𝘷𝘢𝘭𝘶𝘦.
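A minimal sketch of why zero is a “strong” value: take a perfectly flat image with no edges anywhere, zero-pad it, and slide a simple derivative kernel along one row. The filter fires at both borders, detecting phantom edges that exist only because of the padding. (This is a toy NumPy illustration, not code from any particular framework.)

```python
import numpy as np

# A perfectly flat image: there are no real edges anywhere.
img = np.full((8, 8), 0.5)

# Zero-pad by one pixel on every side.
padded = np.pad(img, 1, mode="constant", constant_values=0.0)

# A horizontal derivative kernel [-1, 0, +1], applied along the first
# real image row (which now has a 0.0 pad value at each end).
kernel = np.array([-1.0, 0.0, 1.0])
row = padded[1]
response = np.array([row[i:i + 3] @ kernel for i in range(len(row) - 2)])

# The jump from the injected 0.0 to the real 0.5 reads as an edge:
# the response is nonzero only at the two borders.
print(response)
```

Running this prints a response of `0.5` at the left border, `-0.5` at the right border, and exactly zero everywhere in between: the filter “sees” edges the image does not contain.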
When you pad with zeros, you are fundamentally altering the data distribution at the edges of your image.
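One way to see the distribution shift concretely: compare what a 3×3 mean filter “believes” an image corner looks like under zero padding versus reflection padding. The numbers below are a hedged NumPy sketch with a synthetic image whose pixels sit around 0.5; the exact values are illustrative, not from the original post.

```python
import numpy as np

rng = np.random.default_rng(0)
# A toy "image": pixel intensities drawn around 0.5, as in [0, 1]-scaled data.
img = rng.uniform(0.3, 0.7, size=(32, 32))

# Pad by 1 pixel two ways: inject zeros vs. reflect real border pixels.
zero_pad = np.pad(img, 1, mode="constant", constant_values=0.0)
reflect_pad = np.pad(img, 1, mode="reflect")

def corner_view(padded):
    # The 3x3 receptive field a mean filter sees at the top-left output.
    return padded[0:3, 0:3].mean()

print(f"true local mean   : {img[0:2, 0:2].mean():.3f}")
print(f"zero-padded view  : {corner_view(zero_pad):.3f}")     # dragged toward 0
print(f"reflect-pad view  : {corner_view(reflect_pad):.3f}")  # stays near 0.5
```

With zeros, five of the nine values in the corner receptive field are the injected 0.0, so the local mean collapses toward zero; reflection padding keeps the corner statistics consistent with the interior. Every convolutional layer then learns on these corrupted border statistics.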
Here is the breakdown:


