Linking the OpenCV 2.4.2 library in Xcode 4.5.1

I have installed OpenCV with MacPorts, following the guide here: Compile OpenCV (2.3.1+) for OS X Lion / Mountain Lion with Xcode.

I have also searched for and tried other variants suggested on Stack Exchange and Google, but this one seems to get me the closest.

It seems to work for some things, but not for the sample code that ships with 2.4.2. Note that I have added all of the OpenCV 2.4.2 dylibs under Link Binary With Libraries.

For example, the following compiles and runs:

#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>

int main( int argc, char **argv )
{
    // legacy C API: create a window and a blank 8-bit, single-channel 640x480 image
    cvNamedWindow( "My Window", 1 );
    IplImage *img = cvCreateImage( cvSize( 640, 480 ), IPL_DEPTH_8U, 1 );

    // draw "Hello World!" onto the image and show it until a key is pressed
    CvFont font;
    double hScale = 1.0;
    double vScale = 1.0;
    int lineWidth = 1;
    cvInitFont( &font, CV_FONT_HERSHEY_SIMPLEX | CV_FONT_ITALIC, hScale, vScale, lineWidth );
    cvPutText( img, "Hello World!", cvPoint( 200, 400 ), &font, cvScalar( 255, 255, 0 ) );
    cvShowImage( "My Window", img );
    cvWaitKey();
    return 0;
}
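For reference, the same file also builds from the Terminal. This is only a sketch and assumes OpenCV 2.4.2 was installed by MacPorts under /opt/local and that the source file is named hello.cpp (adjust paths and names to your setup):

# assumes MacPorts placed opencv.pc under /opt/local/lib/pkgconfig
export PKG_CONFIG_PATH=/opt/local/lib/pkgconfig:$PKG_CONFIG_PATH
# pull in the include and linker flags for the installed OpenCV 2.x modules
clang++ hello.cpp -o hello $(pkg-config --cflags --libs opencv)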

However, when I try to build any of the samples, such as display_image.cpp shown below, I get linker errors.

-- Not working --

#include <stdio.h>
#include <iostream>
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/flann/miniflann.hpp"

using namespace cv; // all the new API is put into "cv" namespace. Export its content
using namespace std;
using namespace cv::flann;

static void help()
{
    cout <<
    "\nThis program shows how to use cv::Mat and IplImages converting back and forth.\n"
    "It shows reading of images, converting to planes and merging back, color conversion\n"
    "and also iterating through pixels.\n"
    "Call:\n"
    "./image [image-name Default: lena.jpg]\n" << endl;
}

int main(int argc, char *argv[])
{
    help();
    const char* imagename = argc > 1 ? argv[1] : "lena.jpg";
    Mat img = imread(imagename); // the newer cvLoadImage alternative, MATLAB-style function
    if(img.empty())
    {
        fprintf(stderr, "Can not load image %s\n", imagename);
        return -1;
    }
    if( !img.data ) // check if the image has been loaded properly
        return -1;

    Mat img_yuv;
    cvtColor(img, img_yuv, CV_BGR2YCrCb); // convert image to YUV color space. The output image will be created automatically

    vector<Mat> planes; // Vector is template vector class, similar to STL's vector. It can store matrices too.
    split(img_yuv, planes); // split the image into separate color planes

    imshow("image with grain", img);

    waitKey();

    return 0;
}

I get the following errors:

Undefined symbols for architecture x86_64:
 "cv::split(cv::Mat const&, std::__1::vector<cv::Mat, std::__1::allocator<cv::Mat> >&)", referenced from:
  _main in main1.o
 "cv::imread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int)", referenced from:
  _main in main1.o
 "cv::imshow(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, cv::_InputArray const&)", referenced from:
  _main in main1.o
 ld: symbol(s) not found for architecture x86_64
 clang: error: linker command failed with exit code 1 (use -v to see invocation)

Any ideas how to fix this?

Solution

I had the same problem. The Build Settings defaults appear to be different in Xcode 4.5.
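The std::__1:: prefix in the undefined symbols is libc++'s inline namespace, while a MacPorts build of OpenCV 2.4.2 is typically linked against libstdc++, so the project and the library end up on two incompatible C++ runtimes. As a rough check (assuming the default MacPorts library path /opt/local/lib), you can ask otool which runtime the OpenCV dylib depends on:

# lists the dynamic libraries libopencv_core depends on; look for
# libstdc++.6.dylib (GNU runtime) versus libc++.1.dylib (LLVM runtime)
otool -L /opt/local/lib/libopencv_core.dylib | grep -E 'libstdc\+\+|libc\+\+'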

Under Build Settings -> Apple LLVM compiler 4.1 - Language:

C++ Standard Library: change it from libc++ (LLVM …) to libstdc++ (GNU C++ …).
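If you prefer to keep this in a configuration file instead of clicking through the UI, the same setting can go in an .xcconfig; this is a sketch using CLANG_CXX_LIBRARY, the build-setting identifier behind the "C++ Standard Library" entry (the matching compiler/linker flag is -stdlib=libstdc++):

// same fix expressed as an .xcconfig entry
CLANG_CXX_LIBRARY = libstdc++

Every C++ library the target links against has to use the same standard library, which is why mixing a libstdc++ build of OpenCV with Xcode's libc++ default produces exactly the missing cv::imread/cv::imshow/cv::split symbols shown above.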

