NPR just published an article that says: “Facial recognition software sold by Amazon mistakenly identified 28 members of Congress as people who had been arrested for crimes, the American Civil Liberties Union announced on Thursday. Amazon Rekognition has been marketed as a tool that provides extremely accurate facial analysis through photos and video.” Fred Simkin posted the following comment:
Decisions require judgment. Judgment is a product of “knowledge,” not just more “data.” The piece is an excellent example of the fallacy of the data-only approach implicit in DL and ML. NN-based solutions are excellent filters in hybrid applications, but decisions and actions must be the purview of a knowledge-based representation schema component.
Benedict Evans from Andreessen Horowitz provided this comment: “The ACLU did a publicity stunt in which it got Amazon’s cloud facial recognition service to falsely identify US congressmen as criminals, by matching their pictures against a mugshot database. Amazon points out that machine learning is a probabilistic technology, that the ACLU set the tool to 80% confidence (not, say, 99%), and that the sample image set was biased, all of which is true but also, of course, the point: if you are not rigorous in thinking about what parameters you use and what bias might be in the data set, then you will get lots of inaccuracies. Machine learning is not magic, the computer can be wrong, and one should not take the results of any such system on trust. Equally, of course, claiming as the ACLU does that this is ‘flawed and dangerous’ is also to miss the point: it’s a tool with probabilistic outcomes that you can use or mis-use, and more importantly understand or misunderstand.”
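Evans's point about the 80% versus 99% confidence setting can be made concrete with a small sketch. The similarity scores below are invented for illustration and are not Rekognition output; the sketch simply shows how a lenient match threshold lets many true non-matches through, while a strict one does not.

```python
# Hypothetical demonstration of how a confidence threshold changes
# false-positive counts in a face-matching system. The scores are
# made up for illustration; they are NOT real Rekognition output.

# Similarity scores for 10 comparisons that are all true non-matches
# (i.e., different people who merely look somewhat alike).
non_match_scores = [0.62, 0.71, 0.78, 0.81, 0.83, 0.85, 0.88, 0.91, 0.95, 0.97]

def false_positives(scores, threshold):
    """Count non-match scores the system would wrongly report as matches."""
    return sum(score >= threshold for score in scores)

at_80 = false_positives(non_match_scores, 0.80)  # the ACLU's lenient setting
at_99 = false_positives(non_match_scores, 0.99)  # a strict setting

print(f"False positives at 80% confidence: {at_80}")
print(f"False positives at 99% confidence: {at_99}")
```

With these made-up scores, the 80% threshold flags seven of the ten non-matches as matches, while the 99% threshold flags none, which is exactly the parameter-choice issue Evans describes.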