

· 10 min read
PuQing

Mukund Sundararajan, Ankur Taly, Qiqi Yan

We study the problem of attributing the prediction of a deep network to its input features, a problem previously studied by several other works. We identify two fundamental axioms---Sensitivity and Implementation Invariance that attribution methods ought to satisfy. We show that they are not satisfied by most known attribution methods, which we consider to be a fundamental weakness of those methods. We use the axioms to guide the design of a new attribution method called Integrated Gradients. Our method requires no modification to the original network and is extremely simple to implement; it just needs a few calls to the standard gradient operator. We apply this method to a couple of image models, a couple of text models and a chemistry model, demonstrating its ability to debug networks, to extract rules from a network, and to enable users to engage with models better.

Introduction

This post introduces Integrated Gradients, a visualization method for neural networks, from work published around 2016-2017.

By visualization we simply mean the following: given an input $x$ and a model $F(x)$, we want to identify which components of $x$ have the largest influence on the model's prediction, or equivalently, to rank the components of $x$ by importance. The technical term for this is attribution. A naive approach is to use the gradient $\nabla_{x} F(x)$ directly as the importance score for each component of $x$; Integrated Gradients is a refinement of this idea.
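
To make the idea concrete, below is a minimal PyTorch-style sketch of the Integrated Gradients computation (not code from the post or the paper; the function name `integrated_gradients`, the all-zero baseline, and the assumption that `model` returns one scalar score per input are illustrative choices). It averages gradients at evenly spaced points on the straight-line path from a baseline $x'$ to the input $x$ and scales the result by $x - x'$, which is a Riemann-sum approximation of the path integral.

```python
import torch

def integrated_gradients(model, x, baseline=None, steps=50):
    """Minimal sketch of Integrated Gradients.

    IG_i(x) = (x_i - x'_i) * integral_{alpha in [0,1]} dF(x' + alpha*(x - x')) / dx_i
    approximated by averaging gradients at `steps + 1` evenly spaced points
    on the straight-line path from the baseline x' to the input x.

    Assumes `model` maps a batch of inputs to a batch of scalar scores
    (e.g. the logit of the target class) and `x` is a single unbatched input.
    """
    if baseline is None:
        baseline = torch.zeros_like(x)  # common choice: all-zero baseline

    # Interpolation coefficients alpha = 0, 1/steps, ..., 1, broadcastable over x.
    alphas = torch.linspace(0.0, 1.0, steps + 1).view(-1, *([1] * x.dim()))

    # Points x' + alpha * (x - x') along the path, stacked into one batch.
    path = (baseline.unsqueeze(0) + alphas * (x - baseline).unsqueeze(0))
    path = path.detach().requires_grad_(True)

    # Gradient of the model output with respect to each interpolated input.
    grads = torch.autograd.grad(model(path).sum(), path)[0]

    # Average the gradients along the path and scale by (x - baseline).
    return (x - baseline) * grads.mean(dim=0)
```

Increasing `steps` trades compute for a closer approximation of the integral; in practice one would also check the completeness property, i.e. that the attributions sum approximately to $F(x) - F(x')$.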