Carolin Schmidt, Wayne Wan and I drafted a new working paper — feedback would be very welcome.
ML-enabled classifiers are regularly criticized as ‘black boxes’: while their predictive power is undisputed, it is difficult to understand why a model arrived at a particular classification. The same can be said of humans classifying photos by their aesthetic appeal: they can quickly say whether they like a photo, but justifying that choice is often challenging. Human classifiers also exhibit inconsistencies and biases, adding to the black-box nature of their judgments.
This paper first collects binary classifications of house pictures from a large group of participants and then trains a personalized ML classifier for each participant. Predictions from these automated yet personal classification machines shed light on biases and inconsistencies in the participants’ assessments of residential real estate’s visual appeal.
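To illustrate the general idea (not the paper’s actual pipeline, which is not described here), the sketch below trains one classifier per participant on the same set of photos. Everything is hypothetical: the image features are random stand-ins for embeddings, participant labels are simulated from noisy personal “taste” vectors, and logistic regression is used only as a simple example model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical data: every participant labels the same 200 photos.
# Random 16-dim vectors stand in for image features/embeddings.
n_photos, n_features = 200, 16
photo_features = rng.normal(size=(n_photos, n_features))

# Simulate three participants, each with a noisy personal taste vector.
participants = {}
for pid in range(3):
    taste = rng.normal(size=n_features)  # personal preference direction
    scores = photo_features @ taste + rng.normal(scale=0.5, size=n_photos)
    participants[pid] = (scores > 0).astype(int)  # binary "appealing or not"

# One personalized classifier per participant.
models = {
    pid: LogisticRegression(max_iter=1000).fit(photo_features, labels)
    for pid, labels in participants.items()
}

# A simple consistency check: how often each model agrees with the
# participant's own labels on the photos they rated.
for pid, model in models.items():
    agreement = model.score(photo_features, participants[pid])
    print(f"participant {pid}: in-sample agreement {agreement:.2f}")
```

Comparing a participant’s labels against their own fitted model’s predictions is one way such a setup can surface inconsistent ratings; how the paper operationalizes this is best taken from the full text.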
Full paper: Link