This is what I need to achieve:
>Take an image from the camera or the gallery
>Remove the background from the image & save it
>The background should be either black or white
>Shadows also need to be removed along with the background
Example of the result:
Original image
Result image
This is what I have tried:
CGFloat colorMasking[6] = {222, 255, 222, 255, 222, 255};
CGImageRef imageRef = CGImageCreateWithMaskingColors([IMG CGImage], colorMasking);
UIImage *resultThumbImage = [UIImage imageWithCGImage:imageRef scale:ThumbImage.scale orientation:IMG.imageOrientation];
It only works on white backgrounds, and it is not very effective. I need to achieve exactly the result shown in the images above.
I have also gone through some references:
iOS how to mask the image background color
How to remove the background of image in iphone app?
Changing the background color of a captured image from camera to white
Can someone help me achieve this?
Any reference or help would be highly appreciated.
Thanks in advance.
Solution
Generally, as a rule of thumb, the more your background color differs from all other colors, the easier it is to split the image into fore- and background. In this case, as @Chris already
suggested, a simple chroma keying can be used. Below is my quick
implementation of the keying described on Wikipedia (it is written in C++, but translating it to
Objective-C should be straightforward):
/**
 * @brief Separate foreground from background using simple chroma keying.
 *
 * @param imageBGR   Image with a monochrome background
 * @param chromaBGR  Color of the background (using channel order BGR and range [0, 255])
 * @param tInner     Inner threshold, color distances below this value will be counted as background
 * @param tOuter     Outer threshold, color distances above this value will be counted as foreground
 *
 * @return Mask (0 - background, 255 - foreground, values in between - partially fore- and background)
 *
 * Details can be found on [Wikipedia][1].
 *
 * [1]: https://en.wikipedia.org/wiki/Chroma_key#Programming
 */
cv::Mat1b chromaKey( const cv::Mat3b & imageBGR, cv::Scalar chromaBGR, double tInner, double tOuter )
{
    // Basic outline:
    //
    // 1. Convert the image to YCrCb.
    // 2. Measure the Euclidean distance of each color in YCrCb to the chroma value.
    // 3. Categorize pixels:
    //   * color distances below the inner threshold count as background; mask value = 0
    //   * color distances above the outer threshold count as foreground; mask value = 255
    //   * color distances between the inner and outer threshold are linearly interpolated; mask value in (0, 255)

    assert( tInner <= tOuter );

    // Convert to YCrCb.
    assert( ! imageBGR.empty() );
    cv::Size imageSize = imageBGR.size();
    cv::Mat3b imageYCrCb;
    cv::cvtColor( imageBGR, imageYCrCb, cv::COLOR_BGR2YCrCb );
    cv::Scalar chromaYCrCb = bgr2ycrcb( chromaBGR ); // Convert a single BGR value to YCrCb.

    // Build the mask.
    cv::Mat1b mask = cv::Mat1b::zeros( imageSize );
    const cv::Vec3d key( chromaYCrCb[ 0 ], chromaYCrCb[ 1 ], chromaYCrCb[ 2 ] );

    for ( int y = 0; y < imageSize.height; ++y )
    {
        for ( int x = 0; x < imageSize.width; ++x )
        {
            const cv::Vec3d color( imageYCrCb( y, x )[ 0 ], imageYCrCb( y, x )[ 1 ], imageYCrCb( y, x )[ 2 ] );
            double distance = cv::norm( key - color );

            if ( distance < tInner )
            {
                // Current pixel is fully part of the background.
                mask( y, x ) = 0;
            }
            else if ( distance > tOuter )
            {
                // Current pixel is fully part of the foreground.
                mask( y, x ) = 255;
            }
            else
            {
                // Current pixel is partially part of both fore- and background; interpolate linearly.
                // Within this branch the interpolation factor already lies in the range [0, 255].
                double d1 = distance - tInner;
                double d2 = tOuter - tInner;
                uint8_t alpha = static_cast< uint8_t >( 255. * ( d1 / d2 ) );

                mask( y, x ) = alpha;
            }
        }
    }

    return mask;
}
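The function relies on a helper bgr2ycrcb that converts a single BGR color to YCrCb; the Gist linked below contains the actual version. A minimal sketch of such a helper, assuming OpenCV and a temporary 1x1 image, could look like this:

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Convert a single BGR color (channel range [0, 255]) to YCrCb by pushing a
// temporary 1x1 image through cv::cvtColor. Only a sketch; see the Gist for
// the original helper.
cv::Scalar bgr2ycrcb( cv::Scalar bgr )
{
    cv::Mat3b bgrPixel( 1, 1, cv::Vec3b( static_cast< uint8_t >( bgr[ 0 ] ),
                                         static_cast< uint8_t >( bgr[ 1 ] ),
                                         static_cast< uint8_t >( bgr[ 2 ] ) ) );
    cv::Mat3b ycrcbPixel;
    cv::cvtColor( bgrPixel, ycrcbPixel, cv::COLOR_BGR2YCrCb );

    cv::Vec3b ycrcb = ycrcbPixel( 0, 0 );
    return cv::Scalar( ycrcb[ 0 ], ycrcb[ 1 ], ycrcb[ 2 ] );
}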
A complete code example can be found in this GitHub Gist.
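To get from the mask to the black or white background asked for in the question, the mask can be used as a per-pixel blending weight when compositing the foreground over a plain background. Here is a rough usage sketch; the file names, the background chroma color, and the thresholds are placeholders, and chromaKey / bgr2ycrcb are the functions above:

#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>

// chromaKey() and bgr2ycrcb() as defined above.

int main()
{
    // Load the input photo; "input.jpg" is a placeholder path.
    cv::Mat3b image = cv::imread( "input.jpg", cv::IMREAD_COLOR );

    // Approximate background color of the shot (BGR order) and example thresholds.
    cv::Scalar chromaBGR( 200.0, 200.0, 200.0 );
    cv::Mat1b mask = chromaKey( image, chromaBGR, 100.0, 170.0 );

    // Composite the foreground over a plain white background
    // (use cv::Vec3b( 0, 0, 0 ) for black), treating the mask as a blending weight.
    cv::Mat3b result( image.size(), cv::Vec3b( 255, 255, 255 ) );
    for ( int y = 0; y < image.rows; ++y )
    {
        for ( int x = 0; x < image.cols; ++x )
        {
            double alpha = mask( y, x ) / 255.0;
            for ( int c = 0; c < 3; ++c )
            {
                result( y, x )[ c ] = cv::saturate_cast< uint8_t >(
                    alpha * image( y, x )[ c ] + ( 1.0 - alpha ) * result( y, x )[ c ] );
            }
        }
    }

    cv::imwrite( "result.png", result );
    return 0;
}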
Unfortunately, your example does not follow this rule of thumb. Since the foreground and
background vary only in intensity, it is difficult (or even impossible) to find a single global set
of parameters that separates them well:
>A black line around the object, but no holes inside the object (tInner = 50, tOuter = 90)
>No black line around the object, but holes inside the object (tInner = 100, tOuter = 170)
Therefore, if you cannot change the background of your images, a more sophisticated approach is
required. A quick and simple example implementation of that is a bit out of scope here, but you may want to take a look at the related areas of image segmentation and
alpha matting.
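As one possible starting point in that direction, the sketch below tries OpenCV's GrabCut, a common segmentation technique. This is only an illustration, not part of the chroma-keying approach above; the rectangle roughly enclosing the object and the file names are placeholders you would have to supply (for example from a user selection):

#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>

int main()
{
    // Placeholder input path.
    cv::Mat image = cv::imread( "input.jpg", cv::IMREAD_COLOR );

    // Rectangle roughly enclosing the foreground object (placeholder coordinates).
    cv::Rect objectRect( 50, 50, image.cols - 100, image.rows - 100 );

    // Run GrabCut, initialized from the rectangle.
    cv::Mat mask, bgdModel, fgdModel;
    cv::grabCut( image, mask, objectRect, bgdModel, fgdModel, 5, cv::GC_INIT_WITH_RECT );

    // Pixels marked as (probable) foreground become 255, everything else 0.
    cv::Mat1b foreground = ( mask == cv::GC_FGD ) | ( mask == cv::GC_PR_FGD );

    // Copy the foreground onto a plain white canvas.
    cv::Mat result( image.size(), image.type(), cv::Scalar( 255, 255, 255 ) );
    image.copyTo( result, foreground );

    cv::imwrite( "result_grabcut.png", result );
    return 0;
}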