In this episode we discuss "Bias in Pruned Vision Models: In-Depth Analysis and Countermeasures" by Eugenia Iofinova, Alexandra Peste, and Dan Alistarh. The paper investigates the relationship between neural network pruning and induced bias in Convolutional Neural Networks (CNNs) for computer vision. The authors show that highly-sparse models (with fewer than 10% of weights remaining) can match the accuracy of their dense counterparts without increasing bias. At even higher sparsities, however, pruned models exhibit greater uncertainty in their outputs, as well as increased output correlations, both of which are linked to increased bias. The authors propose easy-to-use criteria for establishing in advance whether pruning will increase bias, and for identifying the samples most susceptible to biased predictions.
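For readers unfamiliar with what "fewer than 10% of weights remaining" means in practice, here is a minimal sketch of global magnitude pruning, a standard technique for producing such sparse models (the sketch is illustrative only and is not taken from the paper; the function name and example dimensions are our own):

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude entries so that roughly
    `sparsity` fraction of the weights become zero."""
    k = int(sparsity * weights.size)  # number of weights to remove
    if k == 0:
        return weights.copy()
    # Threshold at the k-th smallest absolute value.
    threshold = np.sort(np.abs(weights), axis=None)[k - 1]
    mask = np.abs(weights) > threshold  # keep only the larger weights
    return weights * mask

# Example: prune a random weight matrix to 90% sparsity,
# i.e. keep roughly 10% of the weights.
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
w_pruned = magnitude_prune(w, sparsity=0.9)
remaining = np.mean(w_pruned != 0)
print(f"remaining weights: {remaining:.2%}")
```

In practice, pruning a trained CNN also involves retraining or fine-tuning the surviving weights to recover accuracy; the paper's analysis concerns the behavior of such highly-sparse models after that process.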