Swift 4: How to create a face map from facial landmark points using the iOS 11 Vision framework

I am using the iOS 11 Vision framework to generate facial landmark points in real time. I can get the landmark points and overlay the camera layer with a UIBezierPath of those points. However, I want to end up with something like the picture on the bottom right. Right now I have something that looks like the image on the left; I tried looping over the points and adding midpoints, but I don't know how to generate all of those triangles from the points. How can I generate the map on the right from the points on the left?

I'm not sure I can get all of the points, not that it would help much, but I also get points from the bounding box of the entire face. Finally, is there any framework that would let me detect all the points I need, such as OpenCV or something else? If so, please let me know. Thanks!

Here is the code I started with, from https://github.com/DroidsOnRoids/VisionFaceDetection:

func detectLandmarks(on image: CIImage) {
    try? faceLandmarksDetectionRequest.perform([faceLandmarks], on: image)
    if let landmarksResults = faceLandmarks.results as? [VNFaceObservation] {

        for observation in landmarksResults {

            DispatchQueue.main.async {
                if let boundingBox = self.faceLandmarks.inputFaceObservations?.first?.boundingBox {
                    let faceBoundingBox = boundingBox.scaled(to: self.view.bounds.size)

                    // different types of landmarks
                    let faceContour = observation.landmarks?.faceContour
                    self.convertPointsForFace(faceContour, faceBoundingBox)

                    let leftEye = observation.landmarks?.leftEye
                    self.convertPointsForFace(leftEye, faceBoundingBox)

                    let rightEye = observation.landmarks?.rightEye
                    self.convertPointsForFace(rightEye, faceBoundingBox)

                    let leftPupil = observation.landmarks?.leftPupil
                    self.convertPointsForFace(leftPupil, faceBoundingBox)

                    let rightPupil = observation.landmarks?.rightPupil
                    self.convertPointsForFace(rightPupil, faceBoundingBox)

                    let nose = observation.landmarks?.nose
                    self.convertPointsForFace(nose, faceBoundingBox)

                    let lips = observation.landmarks?.innerLips
                    self.convertPointsForFace(lips, faceBoundingBox)

                    let leftEyebrow = observation.landmarks?.leftEyebrow
                    self.convertPointsForFace(leftEyebrow, faceBoundingBox)

                    let rightEyebrow = observation.landmarks?.rightEyebrow
                    self.convertPointsForFace(rightEyebrow, faceBoundingBox)

                    let noseCrest = observation.landmarks?.noseCrest
                    self.convertPointsForFace(noseCrest, faceBoundingBox)

                    let outerLips = observation.landmarks?.outerLips
                    self.convertPointsForFace(outerLips, faceBoundingBox)
                }
            }
        }
    }
}

func convertPointsForFace(_ landmark: VNFaceLandmarkRegion2D?, _ boundingBox: CGRect) {
    if let points = landmark?.points, let count = landmark?.pointCount {
        let convertedPoints = convert(points, with: count)

        // Landmark points are normalized to the face bounding box, so scale
        // them into view coordinates before drawing.
        let faceLandmarkPoints = convertedPoints.map { (point: (x: CGFloat, y: CGFloat)) -> (x: CGFloat, y: CGFloat) in
            let pointX = point.x * boundingBox.width + boundingBox.origin.x
            let pointY = point.y * boundingBox.height + boundingBox.origin.y

            return (x: pointX, y: pointY)
        }

        DispatchQueue.main.async {
            self.draw(points: faceLandmarkPoints)
        }
    }
}


func draw(points: [(x: CGFloat, y: CGFloat)]) {
    guard let firstPoint = points.first else { return }

    let newLayer = CAShapeLayer()
    newLayer.strokeColor = UIColor.blue.cgColor
    newLayer.fillColor = nil          // stroke the outline only
    newLayer.lineWidth = 4.0

    // Connect the points of one landmark region into a closed outline.
    let path = UIBezierPath()
    path.move(to: CGPoint(x: firstPoint.x, y: firstPoint.y))
    for point in points.dropFirst() {
        path.addLine(to: CGPoint(x: point.x, y: point.y))
    }
    path.close()
    newLayer.path = path.cgPath

    shapeLayer.addSublayer(newLayer)
}
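
The convert(_:with:) helper isn't shown above. In the DroidsOnRoids sample this code is based on, it simply copies the landmark points into an array of CGFloat tuples. A minimal sketch, assuming the iOS 11 beta API that sample was written against (where points was an UnsafeMutablePointer<vector_float2> and pointCount gave the number of valid entries; the released SDK exposes normalizedPoints as [CGPoint] instead), might look like this:

import UIKit
import simd

// Minimal sketch of the missing helper, assuming the beta-era pointer API.
// It copies `count` normalized landmark points into an array of CGFloat tuples.
func convert(_ points: UnsafeMutablePointer<vector_float2>, with count: Int) -> [(x: CGFloat, y: CGFloat)] {
    var convertedPoints = [(x: CGFloat, y: CGFloat)]()
    for i in 0..<count {
        convertedPoints.append((x: CGFloat(points[i].x), y: CGFloat(points[i].y)))
    }
    return convertedPoints
}
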
I eventually found a solution that works. I used Delaunay triangulation via https://github.com/AlexLittlejohn/DelaunaySwift and modified it to use the points generated by the Vision framework's face landmarks detection request. This isn't easy to explain with code snippets, so I've linked my GitHub repo with the solution below. Note that it doesn't get any points from the forehead, since the Vision framework only provides points up to the eyebrows.

https://github.com/ahashim1/Face
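
For reference, the approach in that repo is roughly: collect the converted points from all of the landmark regions into a single array, run them through the Delaunay triangulation, and stroke each resulting triangle instead of each region outline. The rough sketch below shows only the drawing step; triangulate is a placeholder closure standing in for the actual triangulation call (in the repo it comes from the modified DelaunaySwift code), assumed to take the scaled landmark points and return triangles as triples of CGPoint.

import UIKit

// Sketch only: `triangulate` is a placeholder for the real triangulation call.
// It takes the scaled landmark points and returns each triangle as three CGPoints.
func drawMesh(points: [CGPoint],
              using triangulate: ([CGPoint]) -> [(CGPoint, CGPoint, CGPoint)],
              on parentLayer: CALayer) {
    let path = UIBezierPath()
    for (a, b, c) in triangulate(points) {
        path.move(to: a)
        path.addLine(to: b)
        path.addLine(to: c)
        path.close()
    }

    let meshLayer = CAShapeLayer()
    meshLayer.strokeColor = UIColor.white.cgColor
    meshLayer.fillColor = nil        // wireframe only, like the mesh in the right-hand image
    meshLayer.lineWidth = 1.0
    meshLayer.path = path.cgPath
    parentLayer.addSublayer(meshLayer)
}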
