Explainable AI in Computer Vision

Computer Vision

[Timeline figure: from 2003 to 2012, when AlexNet [1], a deep convolutional network trained on ImageNet [2], transformed the field.]

[1] Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25.

[2] Deng, J., Dong, W., Socher, R., Li, L. J., Li, K., & Fei-Fei, L. (2009). ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition (pp. 248–255). IEEE.

ImageNet [2]

• Object-centric dataset

• Millions of images of 1,000 object categories

Places [3]

• Scene-centric dataset

• Millions of images of 365 place categories

[3] Zhou, B., Lapedriza, A., Xiao, J., Torralba, A., & Oliva, A. (2014). Learning deep features for scene recognition using Places database. Advances in Neural Information Processing Systems, 27.

Computer Vision

[Timeline figure: the performance of Computer Vision systems rises sharply after 2012, when AlexNet [1] was trained on ImageNet [2], and again with Places [3] in 2014; alongside this progress, Explainability becomes a growing concern.]


We need more than “great performing black boxes”

Deep Learning is being used in a growing number of applications, and some of them involve high-stakes decision making.

Esteva, A., Kuprel, B., Novoa, R. A., Ko, J., Swetter, S. M., Blau, H. M., & Thrun, S. (2017). Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542(7639), 115.

Benign or malignant?

• CNN trained on 129,450 clinical images

• Annotations from 21 certified dermatologists

• The CNN obtained the same accuracy as the expert dermatologists

After its publication, the authors noticed a bias in their algorithm

“Is the Media’s Reluctance to Admit AI’s Weaknesses Putting us at Risk?” (2019)

8


Explainability in Computer Vision

[Diagram: an input image is passed through the model, which outputs the label "bedroom".]

Understanding the relation between input and output (Local Explanations)

● What is the model “looking at”?

● What parts of the input (image) are contributing the most for the model to produce a specific output?

Internal Representation of the model (Global Explanations)

● How is the model encoding the input?

● What is the representation learned by the model?

10


Understanding why images are correctly classified

Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., & Torralba, A. Object detectors emerge in deep scene CNNs. ICLR, 2015.

The image is decomposed into a basis of image regions (superpixels). Regions are then removed while the classification stays correct, until only a minimal image remains: the smallest set of regions for which the model still produces the original prediction.

[Examples: a Bedroom image and its minimal image, still classified as Bedroom; a Dining Room image and its minimal image, still classified as Dining Room.]

Techniques based on analyzing how the output changes when making small perturbations in the input

Visualizing the minimal image

Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., & Torralba, A. Object detectors emerge in deep scene CNNs. ICLR, 2015.
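A minimal sketch of this idea, assuming a PyTorch classifier `model`, a `preprocess` transform that maps an RGB array to a normalized tensor, and an image `img` given as a float NumPy array in [0, 1]. The gray fill value and the greedy removal order are illustrative choices, not the exact procedure of the paper.

```python
import numpy as np
import torch
from skimage.segmentation import slic

def predict(model, img, preprocess):
    with torch.no_grad():
        return model(preprocess(img).unsqueeze(0)).argmax(dim=1).item()

def minimal_image(model, img, preprocess, n_segments=50):
    # Decompose the image into superpixels, then greedily gray out every
    # region whose removal does not change the predicted class.
    segments = slic(img, n_segments=n_segments, start_label=0)
    target = predict(model, img, preprocess)
    current = img.copy()
    kept = set(np.unique(segments))
    changed = True
    while changed:
        changed = False
        for seg_id in sorted(kept):
            candidate = current.copy()
            candidate[segments == seg_id] = 0.5   # gray out one superpixel
            if predict(model, candidate, preprocess) == target:
                current, changed = candidate, True
                kept.discard(seg_id)
    return current, kept   # minimal image and the regions the model still needs
```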

Occluding small regions of the input and observing how the output changes

When the region that is most informative for the model is occluded, the drop in the true-class score is the largest.

Zeiler, Matthew D., and Rob Fergus. "Visualizing and understanding convolutional networks." European Conference on Computer Vision. Springer, Cham, 2014.
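A minimal occlusion-sensitivity sketch in this spirit, assuming a PyTorch classifier `model` and a preprocessed input `x` of shape (1, 3, H, W); the patch size, stride, and fill value are illustrative choices.

```python
import torch

def occlusion_map(model, x, target_class, patch=32, stride=16, fill=0.0):
    model.eval()
    with torch.no_grad():
        base = torch.softmax(model(x), dim=1)[0, target_class].item()
    _, _, H, W = x.shape
    rows = (H - patch) // stride + 1
    cols = (W - patch) // stride + 1
    heat = torch.zeros(rows, cols)
    for i, top in enumerate(range(0, H - patch + 1, stride)):
        for j, left in enumerate(range(0, W - patch + 1, stride)):
            occluded = x.clone()
            occluded[:, :, top:top + patch, left:left + patch] = fill
            with torch.no_grad():
                p = torch.softmax(model(occluded), dim=1)[0, target_class].item()
            heat[i, j] = base - p   # a large drop means the region was important
    return heat
```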

Class Activation Map (CAM)

Heatmap visualizations

Highlighting the areas of the image that contributed the most to the decision of the model.

B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba. "Learning Deep Features for Discriminative Localization". Computer Vision and Pattern Recognition (CVPR), 2016.

Class Activation Map (CAM) on fully convolutional networks

[Figure: the feature maps f_1, ..., f_K of the last convolutional layer are globally average pooled and fed to the final linear layer; the class activation map for a class c (e.g., "penguin") is the weighted sum of the feature maps with that class's weights, CAM_c(x, y) = w_1 f_1(x, y) + w_2 f_2(x, y) + ... + w_K f_K(x, y).]
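A minimal CAM sketch, assuming the standard setting of a network whose last convolutional feature maps are globally average pooled and fed to a single linear classification layer; `feats` (K × h × w) and `fc_w` (num_classes × K) are assumed to have been extracted from such a model, e.g. with a forward hook.

```python
import torch
import torch.nn.functional as F

def class_activation_map(feats, fc_w, class_idx, out_size):
    # Weighted sum of the feature maps with the weights of the chosen class.
    cam = torch.einsum("k,khw->hw", fc_w[class_idx], feats)
    cam = F.relu(cam)                               # keep positive evidence only
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    # Upsample to the input resolution so it can be overlaid on the image.
    cam = F.interpolate(cam[None, None], size=out_size,
                        mode="bilinear", align_corners=False)[0, 0]
    return cam
```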

CAM results on Action Classification

29

CAM results on more abstract concepts

Mirror lake

View out of the window

30

CAM on Medical Images

Input: chest X-ray image

Rajpurkar, P., Irvin, J., Zhu, K., Yang, B., Mehta, H., Duan, T., ... & Ng, A. Y. (2017). CheXNet: Radiologist-level pneumonia detection on chest X-rays with deep learning. arXiv preprint arXiv:1711.05225.

31


Explainability for Debugging

Esteva, A., Kuprel, B., Novoa, R. A., Ko, J., Swetter, S. M., Blau, H. M., & Thrun, S. (2017). Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542(7639), 115.

33


Explainability in Computer Vision

Understanding the relation between input and output (Local Explanations)

Make simple perturbations in the image and observe how the classification score changes.

Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., & Torralba, A. “Object detectors emerge in deep scene cnns”. ICLR, 2015.

Class Activation Maps

B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba. "Learning Deep Features for Discriminative Localization". Computer Vision and Pattern Recognition (CVPR), 2016.

Grad-CAM: Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra. "Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization". ICCV, 2017.
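Grad-CAM extends the CAM idea to architectures without the global-average-pooling plus single-linear-layer structure by weighting each feature map with the average gradient of the class score with respect to it. A minimal sketch, assuming a PyTorch classifier `model`, a handle `target_layer` to its last convolutional module, and a preprocessed input `x` of shape (1, 3, H, W).

```python
import torch
import torch.nn.functional as F

def grad_cam(model, target_layer, x, class_idx):
    feats, grads = {}, {}
    h1 = target_layer.register_forward_hook(
        lambda m, i, o: feats.update(value=o))
    h2 = target_layer.register_full_backward_hook(
        lambda m, gi, go: grads.update(value=go[0]))
    try:
        model.eval()
        score = model(x)[0, class_idx]   # class score for the chosen class
        model.zero_grad()
        score.backward()                 # gradients w.r.t. the feature maps
    finally:
        h1.remove()
        h2.remove()
    f = feats["value"].detach()                                  # (1, K, h, w)
    w = grads["value"].detach().mean(dim=(2, 3), keepdim=True)   # GAP of gradients
    cam = F.relu((w * f).sum(dim=1))                             # weighted sum + ReLU
    cam = F.interpolate(cam[None], size=x.shape[2:],
                        mode="bilinear", align_corners=False)[0, 0]
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
```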

Internal Representation of the Model (Global Explanations)

35

Understanding the internal representation of the model

36
Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., & Torralba, A. Object detectors emerge in deep scene CNNs. ICLR, 2015.

Understanding what the units in the CNN are doing

[Figure: an input image (e.g., savannah) is passed through the CNN; the activations of a convolutional layer form a 3D matrix whose slices are the feature maps, one per unit (Unit 1, Unit 2, ..., Unit N).]

Visualization of image patches that strongly activate units

● Take a large dataset of images

● For each image, do a forward pass through the model and record the maximum activation of the specific unit and the corresponding location.

● Visualize the images in the dataset that produce the highest response in this unit (a sketch is given below).

[Figure: top-activating images for Unit 1, Unit 2, etc.]

41
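A minimal sketch of this procedure, assuming a PyTorch `model`, a module handle `target_layer` for the layer being inspected, an iterable `dataset` of (image, label) pairs, and a `preprocess` transform; all of these names are illustrative.

```python
import heapq
import torch

def top_activating_images(model, target_layer, dataset, preprocess, unit, k=10):
    acts = {}
    hook = target_layer.register_forward_hook(
        lambda m, i, o: acts.update(value=o))        # o: (1, K, h, w)
    model.eval()
    best = []                                        # min-heap of (score, index, location)
    with torch.no_grad():
        for idx, (img, _) in enumerate(dataset):
            model(preprocess(img).unsqueeze(0))
            fmap = acts["value"][0, unit]            # (h, w) response of this unit
            score = fmap.max().item()
            loc = divmod(fmap.argmax().item(), fmap.shape[1])   # (row, col) of the max
            heapq.heappush(best, (score, idx, loc))
            if len(best) > k:
                heapq.heappop(best)                  # keep only the k strongest responses
    hook.remove()
    return sorted(best, reverse=True)                # [(score, dataset index, location)]
```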

Visualizations for units in the last convolutional layer of an AlexNet trained for Place categorization

42

Crowdsourcing Units

● Provide a short word description of the concept (e.g., "lighthouse")

● Mark the images that do not correspond to the concept you just wrote

● Which category does your short description mostly belong to?

❏ Scene

❏ Region or Surface

❏ Object

❏ Object part

❏ Texture or Material

❏ Simple elements or colors

43

Several object detectors emerge in the last convolutional layer for Place Categorization

44

Simple shapes and colors

Textures, Regions, Object parts

Objects and Scene Parts

Internal representation learned by an AlexNet model trained for Place Categorization

48

Network Dissection

Probabilistic formulation to pair units with concepts in the Broden visual dictionary

Broden: Broadly and Densely Labeled Dataset

● 1,197 concepts; 63,305 images

Bau, D., Zhou, B., Khosla, A., Oliva, A., & Torralba, A. Network dissection: Quantifying interpretability of deep visual representations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 6541–6549), 2017.

49
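At the core of Network Dissection is a scoring step that compares a unit's thresholded activations with the segmentation masks of each concept. A minimal sketch of that step, assuming the activation maps and masks have been precomputed and spatially aligned; the 0.005 top-activation quantile follows the paper, the rest of the names are illustrative.

```python
import numpy as np

def unit_concept_iou(act_maps, concept_masks, top_quantile=0.005):
    # act_maps:      (N, h, w) activations of one unit over N dataset images
    # concept_masks: (N, h, w) binary masks of one concept in the same images
    threshold = np.quantile(act_maps, 1.0 - top_quantile)  # per-unit activation threshold
    binarized = act_maps > threshold
    intersection = np.logical_and(binarized, concept_masks).sum()
    union = np.logical_or(binarized, concept_masks).sum()
    return intersection / max(union, 1)

# A unit is reported as a detector for the concept with the highest IoU,
# provided the score clears a small cutoff (0.04 in the paper).
```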


Interpreting Face Inference Models using Hierarchical Network Dissection

Teotia, D., Lapedriza, A., & Ostadabbas, S. (2022). Interpreting Face Inference Models Using Hierarchical Network Dissection. International Journal of Computer Vision, 130(5), 1277-1292.

Divyang Teotia Sarah Ostadabbas Agata Lapedriza

Hierarchical Network Dissection: main idea

[Figure: a face image is analyzed with Hierarchical Network Dissection, relating units to global concepts such as apparent age [20–30] and to local concepts such as eyeglasses, "mouth slightly open", "narrow eyes", and facial action units.]

52

Motivation

• Significant improvements in Deep Learning based Face classification models.

• Concerning biases and poor performance in underrepresented classes.

53

Goals

• Understanding the representations learned by face-centric CNNs.

• Exploring the use of explainability to reveal biases encoded in the model.

54

Face dictionary

12 Global concepts

38 Local concepts.

We work with academic datasets with categorical labels corresponding to social constructs, apparent presence, or stereotypical representations of these concepts. However, we acknowledge that these concepts are more fluid in essence, as further discussed in Sect. 4 of our paper.

55

Face dictionary: Examples

Global concepts

Apparent Age

[CelebA, Liu 2015]

Local concepts

Facial attributes [CelebA, Liu 2015]; Action Units (AUs) [EmotioNet, Benitez-Quiroz et al. 2016]

56

Algorithm for unit-concept pairing

Analyze the activation of each unit when doing a forward pass with all the images in the dictionary.

• Stage I: Pairing units with Global Concepts (e.g., apparent age [20–30])

• Stage II: Pairing units with Facial Regions (eye, nose, cheek, mouth, and chin regions; a sketch is given after this list)

• Stage III: Pairing units with Local Concepts
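One plausible way to implement the Stage II pairing, reusing the IoU scoring idea shown earlier: threshold the unit's activation maps and pick the facial region whose masks overlap them the most. The activation maps and region masks are assumed to be spatially aligned; this is an illustration, not the paper's exact formulation.

```python
import numpy as np

def pair_unit_with_region(act_maps, region_masks, top_quantile=0.005):
    # act_maps:     (N, h, w) activations of one unit over N face images
    # region_masks: dict mapping region name (eye, nose, ...) -> (N, h, w) binary masks
    threshold = np.quantile(act_maps, 1.0 - top_quantile)
    binarized = act_maps > threshold
    scores = {}
    for region, masks in region_masks.items():
        intersection = np.logical_and(binarized, masks).sum()
        union = np.logical_or(binarized, masks).sum()
        scores[region] = intersection / max(union, 1)
    return max(scores, key=scores.get), scores   # best-matching region and all scores
```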

Unit-concept pairing qualitative visualizations

Dissecting Face-centric models

• We perform our dissection on five face-centric inference tasks: age estimation, gender classification, beauty estimation, facial recognition, and smile classification.

● For all the models we dissect the last convolutional layer.

60

Dissecting General Face Inference Models

Apparent Age Estimation

61

Dissecting General Face Inference Models

Apparent Age Estimation

62

Dissecting General Face Inference Models

Apparent Gender Estimation

63

Dissecting General Face Inference Models

Smile classification (smile vs. non smile)

64

Can Hierarchical Network Dissection reveal bias in the representation learned by the model?

65

Controlled experiments: bias introduced on the local concept "eyeglasses"

• Target task: apparent gender recognition

• Datasets with different degrees of bias: from balanced (P = 0.5) to fully biased (P = 1)

• Model: ResNet-50

66
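As an illustration of how such splits could be built (an assumed construction, not the paper's exact protocol), the sketch below takes samples with binary gender and eyeglasses labels and makes eyeglasses co-occur with one class with proportion P, so that P = 0.5 gives the balanced set and P = 1 the fully biased one.

```python
import random

def biased_split(samples, p, n_per_class):
    # samples: list of dicts with binary "gender" and "eyeglasses" labels.
    # Class 1 wears eyeglasses with proportion p; class 0 is sampled freely.
    cls0 = [s for s in samples if s["gender"] == 0]
    cls1_glasses = [s for s in samples if s["gender"] == 1 and s["eyeglasses"]]
    cls1_plain = [s for s in samples if s["gender"] == 1 and not s["eyeglasses"]]
    n_glasses = round(p * n_per_class)
    split = (random.sample(cls0, n_per_class)
             + random.sample(cls1_glasses, n_glasses)
             + random.sample(cls1_plain, n_per_class - n_glasses))
    random.shuffle(split)
    return split
```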

Controlled experiments: bias introduced on the local concept "eyeglasses"

[Figures: dissection results comparing the balanced and the fully biased models.]

Limitations

● There are many facial concepts that are not included in the dictionary and this limits the interpretability capacity of Hierarchical Network Dissection.

● For the local concepts in our face dictionary, the segmentation masks were computed automatically. This generates some degree of noise that can affect the unit-concept pairing, particularly for the local concepts with small area.

69

Take home messages

70


Take home messages

Understanding the relation between input and output

What elements in the input are responsible for the model producing a specific output? E.g., image perturbation, Class Activation Maps.

Internal Representation of the model

How is the model working internally? E.g., analyzing the units of the model.

The interest in Explainability

● Debugging

● Reusing the model for downstream tasks

● Auditing the model: Bias Discovery and Fairness in AI

72
