hybridgroup/cylon-opencv

examples/face_detection/face_detection.markdown

# Face Detection

First, let's import Cylon:

    var Cylon = require('cylon');
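If the npm modules aren't installed yet, they can be fetched first (package names assumed from this repo, `cylon` and `cylon-opencv`; OpenCV itself must also be installed on the system):

```shell
npm install cylon cylon-opencv
```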

Now that we have Cylon imported, we can start defining our robot:

    Cylon.robot({

Let's define the connections and devices:

      connections: {
        opencv: { adaptor: 'opencv' }
      },

      devices: {
        window: { driver: 'window' },
        camera: {
          driver: 'camera',
          camera: 0,
          haarcascade: __dirname + "/haarcascade_frontalface_alt.xml"
        }
      },

Now that Cylon knows about the necessary hardware we're going to be using, we'll
tell it what work we want to do:

      work: function(my) {
        // We set up our face detection once the camera is ready to
        // display images. We use `once` instead of `on` to make sure
        // our event listeners are only registered a single time.
        my.camera.once('cameraReady', function() {
          console.log('The camera is ready!');

          // We add a listener for the facesDetected event here.
          // The listener function we pass receives (err, im, faces)
          // params, where faces is an array containing every face
          // detected in the frame (im).
          my.camera.on('facesDetected', function(err, im, faces) {
            // We loop through the faces and manipulate the image
            // to display a square in the coordinates for the detected
            // faces.
            for (var i = 0; i < faces.length; i++) {
              var face = faces[i];
              im.rectangle(
                [face.x, face.y],
                [face.x + face.width, face.y + face.height],
                [0, 255, 0],
                2
              );
            }

            // The second-to-last param to im.rectangle is the color
            // of the rectangle as an RGB array, e.g. [r, g, b].
            // Once the image has been updated with rectangles around
            // the detected faces, we display it in our window.
            my.window.show(im, 40);

            // After displaying the updated image we trigger another
            // frame read, to process frames as fast as possible.
            // We could also use an interval to aim for a set number
            // of processed frames per second.
            my.camera.readFrame();
          });

          // We listen for frameReady event, when triggered
          // we start the face detection passing the frame
          // that we just got from the camera feed.
          my.camera.on('frameReady', function(err, im) {
            my.camera.detectFaces(im);
          });

          my.camera.readFrame();
        });
      }
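As an aside, the corner arithmetic in the `facesDetected` handler can be checked in plain Node without a camera attached. `faceCorners` below is a hypothetical helper, not part of the Cylon or OpenCV API:

```javascript
// Hypothetical helper mirroring the two corner arguments passed to
// im.rectangle: top-left and bottom-right of the face's bounding box.
function faceCorners(face) {
  return {
    topLeft: [face.x, face.y],
    bottomRight: [face.x + face.width, face.y + face.height]
  };
}

var corners = faceCorners({ x: 10, y: 20, width: 30, height: 40 });
console.log(corners.topLeft, corners.bottomRight);  // → [ 10, 20 ] [ 40, 60 ]
```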

Now that our robot knows what work to do, and what hardware it will be doing
that work with, we can start it:

    }).start();