
3 posts tagged with "instance-segmentation"


· 20 min read
Gavin Gong

Xinlong Wang, Tao Kong, Chunhua Shen, Yuning Jiang, Lei Li

We present a new, embarrassingly simple approach to instance segmentation in images. Compared to many other dense prediction tasks, e.g., semantic segmentation, it is the arbitrary number of instances that have made instance segmentation much more challenging. In order to predict a mask for each instance, mainstream approaches either follow the 'detect-then-segment' strategy as used by Mask R-CNN, or predict category masks first then use clustering techniques to group pixels into individual instances. We view the task of instance segmentation from a completely new perspective by introducing the notion of "instance categories", which assigns categories to each pixel within an instance according to the instance's location and size, thus nicely converting instance mask segmentation into a classification-solvable problem. Now instance segmentation is decomposed into two classification tasks. We demonstrate a much simpler and flexible instance segmentation framework with strong performance, achieving on par accuracy with Mask R-CNN and outperforming recent single-shot instance segmenters in accuracy. We hope that this very simple and strong framework can serve as a baseline for many instance-level recognition tasks besides instance segmentation.

Compared with semantic segmentation, instance segmentation must not only predict the semantic class of every pixel but also decide which instance each pixel belongs to. Earlier two-stage methods mainly follow one of two routes:

  1. Detect then segment: e.g., Mask R-CNN, which first finds each instance with a detector and then runs semantic segmentation inside it, assigning the segmented pixels to that instance.
  2. Segment then cluster: first predict a semantic class for every pixel of the image, then learn a per-pixel embedding vector and use clustering to pull pixels of the same instance together, so that they end up in the same group (the same instance); a minimal sketch of this step follows the list.
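To make route 2 concrete, here is a minimal, hypothetical sketch of the grouping step. Pixel embeddings of one instance are trained to lie close together, so a generic clustering algorithm can recover the instances; MeanShift is used purely for illustration (methods in this family each define their own grouping scheme), and the embeddings below are synthetic.

```python
import numpy as np
from sklearn.cluster import MeanShift

# Toy per-pixel embeddings of shape (num_pixels, D); a real network would
# predict one D-dimensional vector per pixel of the image.
embeddings = np.concatenate([
    np.random.normal(0.0, 0.05, size=(100, 2)),  # pixels of instance A
    np.random.normal(1.0, 0.05, size=(120, 2)),  # pixels of instance B
])

# Pixels whose embeddings fall into the same cluster form one instance.
labels = MeanShift(bandwidth=0.3).fit_predict(embeddings)
print(np.unique(labels))  # two clusters -> two instances
```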

Work on single-stage instance segmentation, shaped by single-stage object detection, likewise falls into two broad groups: one inspired by one-stage, anchor-based detectors such as YOLO and RetinaNet, with YOLACT and SOLO as representative works; the other inspired by anchor-free detectors such as FCOS, with PolarMask and AdaptIS as representative works. None of these instance segmentation methods is particularly direct or simple. SOLO's starting point is to do simpler, more direct instance segmentation.

Based on statistics of the MS COCO dataset, the authors point out that the validation subset contains 36,780 objects in total, and 98.3% of object pairs have center distances greater than 30 pixels. Among the remaining 1.7% of pairs, 40.5% have a size ratio greater than 1.5. Rare cases such as two X-shaped interlocking objects are not considered here. In short, in most cases two instances in an image either have different center locations or different object sizes.

The authors therefore propose to distinguish instances by their location and shape in the image: within a single image, two objects with exactly the same location and shape are the same instance. Since shape has many facets, the paper naively uses size to describe it.
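To illustrate the "instance category" idea, here is a minimal sketch (not SOLO's actual code) of how an instance's mask center can be mapped to a cell of an S×S grid; SOLO's category branch then classifies that cell, and its mask branch predicts the corresponding mask (size is handled by assigning instances of different scales to different feature levels). The function name and all values below are illustrative.

```python
import numpy as np

def assign_grid_cell(mask: np.ndarray, S: int = 5):
    """Map a binary instance mask of shape (H, W) to the (row, col)
    cell of an S x S grid that contains its center of mass."""
    H, W = mask.shape
    ys, xs = np.nonzero(mask)          # pixel coordinates of the instance
    cy, cx = ys.mean(), xs.mean()      # center of mass of the mask
    i = min(int(cy / H * S), S - 1)    # grid row in [0, S)
    j = min(int(cx / W * S), S - 1)    # grid column in [0, S)
    return i, j

# Example: a small instance near the top-left of a 100 x 100 image
mask = np.zeros((100, 100), dtype=np.uint8)
mask[10:30, 10:30] = 1
print(assign_grid_cell(mask))          # -> (0, 0) on a 5 x 5 grid
```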

The method achieves accuracy on par with Mask R-CNN and outperforms recent single-shot instance segmenters in accuracy.

· 14 min read
Gavin Gong

Daniel Bolya, Chong Zhou, Fanyi Xiao, Yong Jae Lee

We present a simple, fully-convolutional model for real-time instance segmentation that achieves 29.8 mAP on MS COCO at 33.5 fps evaluated on a single Titan Xp, which is significantly faster than any previous competitive approach. Moreover, we obtain this result after training on only one GPU. We accomplish this by breaking instance segmentation into two parallel subtasks: (1) generating a set of prototype masks and (2) predicting per-instance mask coefficients. Then we produce instance masks by linearly combining the prototypes with the mask coefficients. We find that because this process doesn't depend on repooling, this approach produces very high-quality masks and exhibits temporal stability for free. Furthermore, we analyze the emergent behavior of our prototypes and show they learn to localize instances on their own in a translation variant manner, despite being fully-convolutional. Finally, we also propose Fast NMS, a drop-in 12 ms faster replacement for standard NMS that only has a marginal performance penalty.

YOLACT is short for You Only Look At CoefficienTs, where the coefficients are one of the model's outputs; the naming style presumably pays homage to another object detection model, YOLO.

[Figure: the YOLACT architecture diagram]

Above: YOLACT's architecture. YOLACT's goal is to add a mask branch to an existing one-stage object detection model. Personally I see it as something sandwiched between one-stage and two-stage. What justifies calling it one-stage is that it adds the mask branch to an existing one-stage detector in the same way Mask R-CNN does to Faster R-CNN, yet without explicit localization steps such as feature repooling or RoI Align. In other words, the localize-classify-segment pipeline is turned into segment-then-crop.
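The mask-assembly step described in the abstract is easy to show in code. Below is a minimal sketch (not the official implementation) of the linear combination: instance masks are sigmoid(P·Cᵀ) for prototype masks P and per-instance coefficients C; in the real model the result is additionally cropped with the predicted box and thresholded. All shapes and names are illustrative.

```python
import torch

def assemble_masks(prototypes: torch.Tensor,  # (H, W, k) prototype masks
                   coeffs: torch.Tensor       # (n, k) per-instance coefficients
                   ) -> torch.Tensor:
    """Linearly combine prototypes with mask coefficients: sigmoid(P @ C^T)."""
    H, W, k = prototypes.shape
    masks = prototypes.view(-1, k) @ coeffs.t()  # (H*W, k) @ (k, n) -> (H*W, n)
    return torch.sigmoid(masks).view(H, W, -1)   # one (H, W) mask per instance

# Toy example: 4 prototypes, 2 detected instances
P = torch.randn(138, 138, 4)
C = torch.randn(2, 4)
print(assemble_masks(P, C).shape)  # torch.Size([138, 138, 2])
```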

According to the evaluation, YOLACT reaches 33 FPS on $550 \times 550$ inputs; since most videos on the internet run at about 30 FPS, that is what "real-time" means here. This is a fairly early piece of single-stage work; the speed is not spectacular, but it is decent.
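The abstract also mentions Fast NMS. The trick is to let already-suppressed boxes still suppress others, so the whole step collapses into one matrix operation: sort boxes by score, compute the pairwise IoU matrix, keep only the strict upper triangle (each box is compared against higher-scoring boxes only), and discard a box whose best such overlap exceeds the threshold. A minimal per-class sketch, assuming torchvision's `box_iou`:

```python
import torch
from torchvision.ops import box_iou

def fast_nms(boxes: torch.Tensor, scores: torch.Tensor,
             iou_thr: float = 0.5) -> torch.Tensor:
    """Fast NMS sketch: boxes (n, 4) as (x1, y1, x2, y2), scores (n,).
    Returns the indices of the kept boxes."""
    order = scores.argsort(descending=True)    # highest score first
    iou = box_iou(boxes[order], boxes[order])  # (n, n) pairwise IoU
    iou.triu_(diagonal=1)                      # compare to higher-scoring boxes only
    max_iou, _ = iou.max(dim=0)                # best overlap with any better box
    return order[max_iou <= iou_thr]           # drop boxes a better box suppresses
```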

· 14 min read
Gavin Gong

Jifeng Dai, Kaiming He, Yi Li, Shaoqing Ren, Jian Sun

Fully convolutional networks (FCNs) have been proven very successful for semantic segmentation, but the FCN outputs are unaware of object instances. In this paper, we develop FCNs that are capable of proposing instance-level segment candidates. In contrast to the previous FCN that generates one score map, our FCN is designed to compute a small set of instance-sensitive score maps, each of which is the outcome of a pixel-wise classifier of a relative position to instances. On top of these instance-sensitive score maps, a simple assembling module is able to output an instance candidate at each position. In contrast to the recent DeepMask method for segmenting instances, our method does not have any high-dimensional layer related to the mask resolution, but instead exploits image local coherence for estimating instances. We present competitive results of instance segment proposal on both PASCAL VOC and MS COCO.

This work is also known as InstanceFCN. In instance segmentation, because it is hard for one network to handle classification and segmentation at the same time, two-stage networks became popular first: find instance proposals in the input, then run dense prediction inside them (that is, box first, then segment). Going by its title, this is not a paper about complete instance segmentation; it is about how to obtain instance-level segmentation masks with an FCN.

A reminder before you read on: the results of this work are fairly weak, as is typical of early work. It is nonetheless quite inspiring and worth reading. A later work, FCIS (Fully Convolutional Instance-aware Semantic Segmentation), borrows the instance-sensitive score maps proposed here (please do not confuse this paper with FCIS). A major contribution of this paper is using instance-sensitive score maps to distinguish different instances.
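To give a feel for the instance-sensitive score maps and the assembling module described in the abstract: the network outputs k² score maps, each scoring pixels for one relative position within an instance, and a candidate is assembled for a sliding window by copying each of its k × k cells from the corresponding map. A minimal sketch; the array shapes, names, and window parameters are all illustrative.

```python
import numpy as np

def assemble_instance(score_maps: np.ndarray,  # (k*k, H, W) instance-sensitive maps
                      y: int, x: int, m: int, k: int = 3) -> np.ndarray:
    """Assemble an m x m instance candidate for the window whose top-left
    corner is (y, x): cell (i, j) of the window is copied from map i*k + j."""
    out = np.zeros((m, m), dtype=score_maps.dtype)
    cell = m // k                              # assumes m is divisible by k
    for i in range(k):
        for j in range(k):
            ys, xs = y + i * cell, x + j * cell
            out[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell] = \
                score_maps[i * k + j, ys:ys + cell, xs:xs + cell]
    return out

# Toy example: nine score maps, one 21 x 21 window at (10, 10)
maps = np.random.rand(9, 64, 64).astype(np.float32)
print(assemble_instance(maps, 10, 10, m=21).shape)  # (21, 21)
```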