.@_KarenHao in @techreview: a #WhirlwindTour of #Bias in #DeepLearning apps. Bias creeps in while 1) framing the problem and 2) collecting and preparing the data. It is #HardToFix because 1) downstream impacts are not visible until much later... https://t.co/5WYqqlCKwH — Satyen Baindur (@Satyen_Baindur) February 18, 2019
...2) standard training/testing protocols draw data from the same distribution, so even a 'tested' model retains the bias of its training data; 3) the #Portability trap elides #SocialContext; 4) eliminating #StatisticalBias rarely, if ever, explicitly considers ethical issues. — Satyen Baindur (@Satyen_Baindur) February 18, 2019
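Point 2 of the thread is easy to demonstrate with a toy sketch (all names and numbers below are hypothetical, chosen only for illustration): when the test split is drawn from the same skewed distribution as the training split, a model that has simply absorbed the skew still scores well on the test set, and the bias only surfaces when the deployment distribution differs.

```python
import random

random.seed(0)

# Hypothetical biased source data: 90% of examples carry label 1.
biased = [1] * 900 + [0] * 100
random.shuffle(biased)
train, test = biased[:800], biased[800:]   # standard same-distribution split

# A degenerate "model" that has learned only the skew:
# always predict the majority label seen in training.
majority = max(set(train), key=train.count)

def predict(_features):
    return majority

# Evaluation on a test set from the SAME distribution looks fine (~0.90),
# so the bias goes undetected by the standard protocol.
test_acc = sum(predict(None) == y for y in test) / len(test)

# A balanced deployment distribution exposes the shortcut (exactly 0.50).
deploy = [1] * 500 + [0] * 500
deploy_acc = sum(predict(None) == y for y in deploy) / len(deploy)

print(f"same-distribution test accuracy: {test_acc:.2f}")
print(f"balanced deployment accuracy:    {deploy_acc:.2f}")
```

The gap between the two numbers is the point of the tweet: a passing test score certifies only that the model fits the sampled distribution, not that the distribution itself is unbiased.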