Difference Convolution for Edge Detection

Reference papers:

These two papers are by the same author; the second is a continuation and extension of the first, so we go straight to the content of the second paper.

Pixel Difference Convolution

Traditional edge detection operators such as Canny, Sobel, and Local Binary Patterns (LBP) capture edge information effectively, but they are shallow, static structures that cannot express complex edge features. Moreover, they have not been combined with the strengths of modern convolutional neural networks (CNNs). Existing edge detection methods often need to stack many CNN layers to work well (CNN kernels are semantically strong but struggle to emphasize edge information), which makes the network expensive. To address these problems, the authors propose Pixel Difference Convolution (PDC). Its main features are:

  • Combines traditional edge operators: PDC inherits the strengths of traditional edge detection operators, such as computing differences between pixels to capture edge information, while retaining the learning capacity of deep models.
  • Lightweight design: PDC lets the model preserve edge-detection accuracy while reducing memory usage and computational cost.

The results of the pixel difference convolution network are shown below:

[Figure: PDC edge-detection results]

Main contributions:

The authors propose three kinds of pixel difference convolution (PDC) for edge detection. Difference convolution takes the form:

$$
y = f(\mathbf{x}, \boldsymbol{\theta}) = \sum_{i=1}^{k \times k} w_i \cdot x_i \quad \text{(vanilla convolution)}
$$

$$
y = f(\nabla \mathbf{x}, \boldsymbol{\theta}) = \sum_{(x_i, x_i^\prime) \in P} w_i \cdot (x_i - x_i^\prime) \quad \text{(PDC)}
$$

where $x_i$ and $x_i^\prime$ are input pixels, $w_i$ is a weight of the $k \times k$ convolution kernel, and $P = \{(x_1, x_1^\prime), (x_2, x_2^\prime), \dots, (x_m, x_m^\prime)\}$ is the set of pixel pairs selected for the difference operation, with $m \leq k \times k$.
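To make the definition concrete, here is a minimal sketch (an illustration only, not the paper's implementation) of the central variant, where every pair is $(x_i, x_{\text{center}})$:

```python
import torch

def pdc_central_single(patch, w):
    """Central PDC on a single k x k patch: y = sum_i w_i * (x_i - x_center)."""
    k = patch.shape[0]
    center = patch[k // 2, k // 2]
    return (w * (patch - center)).sum()

patch = torch.tensor([[1., 2., 3.],
                      [4., 5., 6.],
                      [7., 8., 9.]])          # patch with an intensity gradient
flat = torch.full((3, 3), 5.0)               # constant patch: no edges
w = torch.arange(9.0).view(3, 3)             # arbitrary kernel weights

print(pdc_central_single(flat, w).item())    # 0.0: a flat region gives zero response
print(pdc_central_single(patch, w).item())   # non-zero on the gradient patch
```

A flat region produces exactly zero output regardless of the weights, which is precisely the edge-selective behavior the difference form is meant to provide.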

Difference convolution captures edges because the difference operation effectively extracts gradients (between neighboring pixels). Different ways of pairing pixels yield different difference operators, and the paper proposes three:

In the authors' words: "we derive three types of PDC instances as shown in the figure, in which we name them as central PDC (CPDC), angular PDC (APDC) and radial PDC (RPDC)":

[Figure: the three PDC instances (CPDC, APDC, RPDC)]

The schematic works like vector subtraction: the tail of each arrow marks the pixel being subtracted (the subtrahend), and the arrowhead marks the pixel it is subtracted from. Because local gradient information easily picks up noise, the authors combine the strengths of vanilla convolution and PDC, using a hyperparameter $\theta$ to control the relative importance of the plain convolution output versus the difference output: with $\theta = 0$ the model uses only intensity information, and with $\theta = 1$ only gradient information:

$$
y(p_0) = \theta \cdot \sum_{p_n \in P} w(p_n) \cdot \left( x(p_0 + p_n) - x(p_0) \right) + (1 - \theta) \cdot \sum_{p_n \in P} w(p_n) \cdot x(p_0 + p_n)
$$

Finally, the authors found in their experiments that repeatedly stacking the layers in the order CPDC, APDC, RPDC, vanilla convolution gives the best results.
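That stacking order can be written as a tiny schedule helper (a hypothetical sketch, not the authors' code; `cv` denotes vanilla convolution):

```python
# Hypothetical helper: the repeating layer-type pattern the authors found best.
def pdc_schedule(num_layers):
    cycle = ['cd', 'ad', 'rd', 'cv']  # CPDC, APDC, RPDC, vanilla
    return [cycle[i % len(cycle)] for i in range(num_layers)]

print(pdc_schedule(8))  # ['cd', 'ad', 'rd', 'cv', 'cd', 'ad', 'rd', 'cv']
```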

Reparameterization: equivalence with ordinary convolution:

A PDC layer can be converted into an ordinary convolution layer by storing the differences of the kernel weights, arranged according to the positions of the selected pixel pairs. This keeps inference efficient:

$$
\begin{aligned}
y &= w_1 \cdot (x_1 - x_2) + w_2 \cdot (x_2 - x_3) + w_3 \cdot (x_3 - x_6) + \ldots \\
&= (w_1 - w_4) \cdot x_1 + (w_2 - w_1) \cdot x_2 + (w_3 - w_2) \cdot x_3 + \ldots \\
&= \widehat{w}_1 \cdot x_1 + \widehat{w}_2 \cdot x_2 + \widehat{w}_3 \cdot x_3 + \ldots = \sum_i \widehat{w}_i \cdot x_i
\end{aligned}
$$
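This folding can be checked numerically. Below is a minimal sketch for the central variant (CPDC), where the folded kernel simply has the kernel sum subtracted from its center tap (`w_hat` is my name for the folded kernel, not the paper's):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(1, 1, 8, 8)
w = torch.randn(1, 1, 3, 3)

# Explicit CPDC: sum_i w_i * (x_i - x_center)
#              = conv(x, w) - center_pixel * sum(w)
w_sum = w.sum(dim=[2, 3], keepdim=True)        # 1x1 kernel holding sum(w)
y_pdc = F.conv2d(x, w, padding=1) - F.conv2d(x, w_sum)

# Folded kernel for inference: subtract sum(w) from the center tap, once, offline
w_hat = w.clone()
w_hat[:, :, 1, 1] -= w.sum()
y_plain = F.conv2d(x, w_hat, padding=1)

print(torch.allclose(y_pdc, y_plain, atol=1e-5))  # True
```

After training, only `w_hat` needs to be stored, so inference costs exactly one ordinary convolution.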

Since difference convolution is equivalent to ordinary convolution, why not use the equivalent plain form during training as well?

Because a neural network does not simply behave; it needs guidance! If we want a network to approximate 10001, the best approach is not to have it learn 10001 directly but to have it learn $10000 + x$. This is also why a Diffusion Model predicts the noise rather than the next image directly: predicting the noise is much easier for the network.

Code implementation:

CPDC:

```python
import math
import torch.nn as nn
import torch.nn.functional as F

class Conv2d_cd(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=3, stride=1,
                 padding=1, dilation=1, groups=1, bias=False, theta=0.7):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=kernel_size,
                              stride=stride, padding=padding, dilation=dilation,
                              groups=groups, bias=bias)
        self.theta = theta

    def forward(self, x):
        out_normal = self.conv(x)

        # theta ~ 0: pure vanilla convolution, skip the difference term
        if math.fabs(self.theta - 0.0) < 1e-8:
            return out_normal

        # Central-difference term as a 1x1 conv: the weight is the per-channel
        # sum of the 3x3 kernel, so out_diff = center_pixel * sum(w)
        kernel_diff = self.conv.weight.sum(dim=[2, 3], keepdim=True)
        out_diff = F.conv2d(x, kernel_diff, bias=self.conv.bias,
                            stride=self.conv.stride, padding=0,
                            groups=self.conv.groups)

        return out_normal - self.theta * out_diff
```
  • theta: parameter controlling the balance between intensity-level and gradient-level information
    • If theta is close to 0, the layer returns out_normal directly (plain convolution)
    • Otherwise the pixel-difference term is computed and subtracted
  • Instead of subtracting the center pixel value from each neighboring pixel individually, the authors use a cheaper equivalent form: the difference terms sum to the plain convolution output minus the center pixel times the kernel sum, so out_diff reduces to a 1x1 convolution with the summed kernel
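That shortcut can be verified numerically. The following standalone sketch (independent of the class above) checks that blending a vanilla convolution with an explicit central-difference convolution via theta is identical to `out_normal - theta * out_diff`:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(1, 2, 6, 6)
w = torch.randn(4, 2, 3, 3)
theta = 0.7

# Shortcut used in Conv2d_cd: out_normal - theta * (center_pixel * sum(w))
kernel_diff = w.sum(dim=[2, 3], keepdim=True)
out = F.conv2d(x, w, padding=1) - theta * F.conv2d(x, kernel_diff)

# Reference: explicit blend of vanilla conv and central-difference conv,
# where the difference kernel has sum(w) subtracted from its center tap
w_cd = w.clone()
w_cd[:, :, 1, 1] -= w.sum(dim=[2, 3])
out_ref = (1 - theta) * F.conv2d(x, w, padding=1) + theta * F.conv2d(x, w_cd, padding=1)

print(torch.allclose(out, out_ref, atol=1e-5))  # True
```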

Unified form:

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class Conv2d(nn.Module):
    def __init__(self, pdc, in_channels, out_channels, kernel_size, stride=1,
                 padding=0, dilation=1, groups=1, bias=False):
        super(Conv2d, self).__init__()
        if in_channels % groups != 0:
            raise ValueError('in_channels must be divisible by groups')
        if out_channels % groups != 0:
            raise ValueError('out_channels must be divisible by groups')
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.kernel_size = kernel_size
        self.stride = stride
        self.padding = padding
        self.dilation = dilation
        self.groups = groups
        self.weight = nn.Parameter(torch.Tensor(out_channels, in_channels // groups, kernel_size, kernel_size))
        if bias:
            self.bias = nn.Parameter(torch.Tensor(out_channels))
        else:
            self.register_parameter('bias', None)
        self.reset_parameters()
        self.pdc = pdc

    def reset_parameters(self):
        nn.init.kaiming_uniform_(self.weight, a=math.sqrt(5))
        if self.bias is not None:
            fan_in, _ = nn.init._calculate_fan_in_and_fan_out(self.weight)
            bound = 1 / math.sqrt(fan_in)
            nn.init.uniform_(self.bias, -bound, bound)

    def forward(self, input):
        return self.pdc(input, self.weight, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)


## cd, ad, rd convolutions
def createConvFunc(op_type):
    # vanilla convolution, CPDC, APDC, RPDC
    assert op_type in ['cv', 'cd', 'ad', 'rd'], 'unknown op type: %s' % str(op_type)
    if op_type == 'cv':
        return F.conv2d

    if op_type == 'cd':
        def func(x, weights, bias=None, stride=1, padding=0, dilation=1, groups=1):
            assert dilation in [1, 2], 'dilation for cd_conv should be in 1 or 2'
            assert weights.size(2) == 3 and weights.size(3) == 3, 'kernel size for cd_conv should be 3x3'
            assert padding == dilation, 'padding for cd_conv set wrong'

            # Central differences: conv(x, w) minus center_pixel * sum(w)
            weights_c = weights.sum(dim=[2, 3], keepdim=True)
            yc = F.conv2d(x, weights_c, stride=stride, padding=0, groups=groups)
            y = F.conv2d(x, weights, bias, stride=stride, padding=padding, dilation=dilation, groups=groups)
            return y - yc
        return func
    elif op_type == 'ad':
        def func(x, weights, bias=None, stride=1, padding=0, dilation=1, groups=1):
            assert dilation in [1, 2], 'dilation for ad_conv should be in 1 or 2'
            assert weights.size(2) == 3 and weights.size(3) == 3, 'kernel size for ad_conv should be 3x3'
            assert padding == dilation, 'padding for ad_conv set wrong'

            # Angular differences: subtract the clockwise-shifted kernel
            shape = weights.shape
            weights = weights.view(shape[0], shape[1], -1)
            weights_conv = (weights - weights[:, :, [3, 0, 1, 6, 4, 2, 7, 8, 5]]).view(shape)  # clock-wise
            y = F.conv2d(x, weights_conv, bias, stride=stride, padding=padding, dilation=dilation, groups=groups)
            return y
        return func
    elif op_type == 'rd':
        def func(x, weights, bias=None, stride=1, padding=0, dilation=1, groups=1):
            assert dilation in [1, 2], 'dilation for rd_conv should be in 1 or 2'
            assert weights.size(2) == 3 and weights.size(3) == 3, 'kernel size for rd_conv should be 3x3'
            padding = 2 * dilation

            # Radial differences: spread the 3x3 weights onto a 5x5 kernel,
            # +w on the outer 5x5 ring and -w on the inner 3x3 ring
            shape = weights.shape
            if weights.is_cuda:
                buffer = torch.cuda.FloatTensor(shape[0], shape[1], 5 * 5).fill_(0)
            else:
                buffer = torch.zeros(shape[0], shape[1], 5 * 5)
            weights = weights.view(shape[0], shape[1], -1)
            buffer[:, :, [0, 2, 4, 10, 14, 20, 22, 24]] = weights[:, :, 1:]
            buffer[:, :, [6, 7, 8, 11, 13, 16, 17, 18]] = -weights[:, :, 1:]
            buffer[:, :, 12] = 0
            buffer = buffer.view(shape[0], shape[1], 5, 5)
            y = F.conv2d(x, buffer, bias, stride=stride, padding=padding, dilation=dilation, groups=groups)
            return y
        return func
    else:
        print('impossible to be here unless you force that')
        return None
```
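One standalone property check for the `rd` branch (the weight expansion is reproduced here so the snippet runs on its own): each 3x3 weight enters the 5x5 kernel once with a plus sign and once with a minus sign, so the expanded kernel sums to zero and a constant (edge-free) image produces a numerically zero response:

```python
import torch
import torch.nn.functional as F

w = torch.randn(8, 4, 3, 3)
shape = w.shape

# Same expansion as in the 'rd' branch: +w on the outer 5x5 ring,
# -w on the inner 3x3 ring, center left at zero
buffer = torch.zeros(shape[0], shape[1], 5 * 5)
wf = w.view(shape[0], shape[1], -1)
buffer[:, :, [0, 2, 4, 10, 14, 20, 22, 24]] = wf[:, :, 1:]
buffer[:, :, [6, 7, 8, 11, 13, 16, 17, 18]] = -wf[:, :, 1:]
buffer = buffer.view(shape[0], shape[1], 5, 5)

# Zero-sum kernel: no response on a constant input (valid conv, no padding)
x_const = torch.ones(1, 4, 9, 9)
y = F.conv2d(x_const, buffer)
print(y.abs().max().item() < 1e-4)  # True
```

This is the same edge-selectivity property the plain difference form has: only intensity changes between the two rings can produce a non-zero output.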