I have been trying to blend two images. My current approach is: I get the coordinates of the overlapping region of the two images, and only within that overlap region do I apply a hard-coded alpha of 0.5 before adding. So essentially I just take half of each pixel value from the overlap region of both images and add them together. This does not give me a perfect blend, because the alpha value is hard-coded to 0.5. Here is the result of blending 3 images:
As you can see, the transition from one image to the other is still visible. How can I obtain the perfect alpha values that would eliminate this visible transition? Or is there no such thing, and am I taking the wrong approach?
Here is how I am currently doing the blending:
for i in range(3):
    # halve each channel of both images inside the overlap region,
    # then add them (hard-coded alpha = 0.5)
    base_img_warp[overlap_coords[0], overlap_coords[1], i] = base_img_warp[overlap_coords[0], overlap_coords[1], i] * 0.5
    next_img_warp[overlap_coords[0], overlap_coords[1], i] = next_img_warp[overlap_coords[0], overlap_coords[1], i] * 0.5
final_img = cv2.add(base_img_warp, next_img_warp)
In case anyone would like to give it a shot, here are the two warped images and the mask of their overlap region: http://imgur.com/a/9pOsQ
Solution
Here is the way I would generally do it:
#include <opencv2/opencv.hpp>

int main(int argc, char* argv[])
{
    cv::Mat input1 = cv::imread("C:/StackOverflow/Input/pano1.jpg");
    cv::Mat input2 = cv::imread("C:/StackOverflow/Input/pano2.jpg");

    // compute the vignetting masks. This is much easier before warping, but I will try...
    // it can be precomputed if the size and position of your ROI in the image doesn't change,
    // and can be precomputed and aligned if you can determine the ROI for every image.
    // The compression artifacts make it a little bit worse here; I try to extract all the
    // non-black regions in the images.
    cv::Mat mask1;
    cv::inRange(input1, cv::Vec3b(10, 10, 10), cv::Vec3b(255, 255, 255), mask1);
    cv::Mat mask2;
    cv::inRange(input2, cv::Vec3b(10, 10, 10), cv::Vec3b(255, 255, 255), mask2);

    // now compute the distance from the ROI border:
    cv::Mat dt1;
    cv::distanceTransform(mask1, dt1, cv::DIST_L1, 3);
    cv::Mat dt2;
    cv::distanceTransform(mask2, dt2, cv::DIST_L1, 3);

    // now you can use the distance values for blending directly. If the distance value is
    // smaller, this means that the value is worse (your vignetting becomes worse at the
    // image border).
    cv::Mat mosaic = cv::Mat(input1.size(), input1.type(), cv::Scalar(0, 0, 0));
    for (int j = 0; j < mosaic.rows; ++j)
        for (int i = 0; i < mosaic.cols; ++i)
        {
            float a = dt1.at<float>(j, i);
            float b = dt2.at<float>(j, i);
            if (a + b == 0.0f) continue; // outside both ROIs: avoid division by zero

            // distances are not between 0 and 1, but this value is.
            // The "better" a is compared to b, the higher is alpha.
            float alpha = a / (a + b);

            // actual blending: alpha*A + (1-alpha)*B
            mosaic.at<cv::Vec3b>(j, i) = alpha * input1.at<cv::Vec3b>(j, i)
                                       + (1 - alpha) * input2.at<cv::Vec3b>(j, i);
        }

    cv::imshow("mosaic", mosaic);
    cv::waitKey(0);
    return 0;
}
Basically, you compute the distance from the ROI border toward the object center and compute the alpha from both blending-mask values. So if one pixel lies at a high distance from one image's border but a low distance from the other's, you prefer the image whose center the pixel is closer to. For cases where the warped images are not of similar size, it would be better to normalize these distance values.
However, it is better and more efficient to precompute the blending masks and warp them. Best of all would be to know the vignetting of your optical system and choose an identical blending mask accordingly (typically with lower values toward the borders).
From the previous code you will get these results:
ROI masks:
Blending masks (just as an impression; these must actually be float matrices):
Image mosaic: