Forrester Research recently published its predictions for 2022, and among them was the concept of bias bounties. They are modeled after bug bounties, which reward users who detect problems in software; bias bounties would reward users for identifying bias in AI systems. This year, Twitter launched the first major bias bounty and awarded $3,500 to a student who demonstrated that its image-cropping algorithm favors lighter, slimmer, and younger faces.
According to Forrester, other major tech companies such as Google and Microsoft will implement bias bounties, as will non-technology companies including banks and healthcare firms. They suggest that AI professionals should treat bias bounties as a canary in the coal mine for situations where incomplete data or existing inequity may lead to discriminatory outcomes from AI systems.
While this might sound like an altruistic and noble way to assuage public criticism and to provide transparency and accountability, the idea that it will eliminate or even "dent" the problem of bias is completely naïve and uninformed.
The problem stems from the very core of the mathematical methods used in AI and deep learning today. These methods have enormous numbers of free variables, and the "truth cases" used to train them are far from the truth. Explaining the details would take too much time, and most of you simply don't care about them … so why do I choose to take all this on and present it as a blog?
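The point about flawed "truth cases" can be made concrete with a toy sketch. All the data below is synthetic and purely illustrative: a model with as many free parameters as data points will reproduce its training labels exactly, so if the labels carry a systematic error, the model learns that error faithfully, bias and all.

```python
import numpy as np

rng = np.random.default_rng(0)

# Six synthetic "truth cases": the real relationship is y = x,
# but the recorded labels carry a systematic offset plus noise.
x = np.linspace(0, 1, 6)
true_y = x
labels = true_y + 0.5 + 0.1 * rng.standard_normal(6)  # biased "ground truth"

# A degree-5 polynomial has as many free parameters as data points,
# so it can match the flawed labels essentially perfectly.
coeffs = np.polyfit(x, labels, deg=5)
fitted = np.polyval(coeffs, x)

train_error = np.max(np.abs(fitted - labels))   # near zero: labels reproduced
bias_error = np.mean(fitted - true_y)           # the offset survives intact

print(f"max training error: {train_error:.2e}")
print(f"mean error vs. reality: {bias_error:.2f}")
```

The model's training error is essentially zero, yet its predictions are systematically off from reality by the same offset that was baked into the labels. No bounty hunter inspecting the model alone would see anything wrong; the flaw lives in the data.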
It is because the core issue here is not bias in the AI systems but bias in the data we collect about people in the first place. That data may contain elements of truth, such as whether people buy cigarettes and how that translates to their health statistics, but what seems like truth may be far less obvious once you go a bit deeper.
You are probably aware that the hiring algorithms used by major corporations rely on Myers-Briggs tests to predict success on the job. That may seem unbiased on the surface, but as you dig into the test itself you find that it was never designed nor intended to predict success in life … it was meant only to highlight how differently people view life situations and respond to them.
We should be alarmed at incentivizing anyone to make subjective judgments that are then validated by some statistical measure. Just because you find a variable that correlates better with an outcome does not mean you have discovered a bias in the prior one.
Remember the core idea … correlation does not imply causation. The birth rate in England correlates with the stork population. I hope I am not going too fast … am I?
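The stork example can be reproduced with invented numbers (the figures below are synthetic, not real demographic data): when two quantities both track a hidden third variable, they correlate strongly even though neither causes the other.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical "land area" of 20 regions acts as the hidden confounder.
area = rng.uniform(10, 100, size=20)

# Both stork counts and birth counts scale with area, plus independent noise;
# neither quantity causes the other.
storks = 2.0 * area + rng.normal(0, 5, size=20)
births = 30.0 * area + rng.normal(0, 80, size=20)

# Pearson correlation between the two "unrelated" quantities is very high.
r = np.corrcoef(storks, births)[0, 1]
print(f"correlation between storks and births: {r:.2f}")
```

An algorithm (or a bounty hunter) looking only at storks and births would find a near-perfect statistical relationship; only knowledge of the confounder reveals that acting on it would be nonsense.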