Face detection and face recognition are technologically far apart, yet the two terms cause a lot of confusion. Face detection only goes as far as capturing the faces of people walking through a well-positioned camera's field of view and storing them in a searchable database.
Face recognition works by taking those stored images and comparing them against known faces in a database. As you can see, this is a two-step process, and a standalone system does not have the processing power required to handle both face detection and recognition at the same time.
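As a rough sketch of that two-step split, the snippet below uses OpenCV's bundled Haar-cascade detector to find faces in a frame and save the crops into a local folder. The folder and file names are just placeholders, and a real DVR or NVR would handle storage and indexing in its own firmware rather than in Python.

```python
import os
import cv2  # pip install opencv-python

# Step 1: face *detection* -- find faces in a frame and store the crops.
# The Haar-cascade file ships with OpenCV; "captured_faces" is a placeholder folder.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_and_store(frame, out_dir="captured_faces"):
    os.makedirs(out_dir, exist_ok=True)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for i, (x, y, w, h) in enumerate(faces):
        cv2.imwrite(os.path.join(out_dir, f"face_{i}.jpg"), frame[y:y+h, x:x+w])
    return len(faces)

# Step 2, recognition, would happen later: the stored crops get compared
# against a database of known faces, typically on separate, more powerful
# hardware rather than on the recorder itself.
```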
Nowadays face detection is becoming common on cameras paired with some DVR or NVR systems. As camera resolution and pixel density improve, many devices will come with face detection and many other IVS features.
Standalone DVRs and NVRs will not have face recognition built into the core system, as that technology is still many generations away.
However, face detection can be a useful feature in certain situations. A well-positioned, well-angled camera facing an entrance can capture people's faces and store them locally in a searchable database.
This feature can be very handy if your security system is set to send alerts to your phone: when someone comes in, the recorder will send a snapshot of their face to your phone.
Some face recognition algorithms identify facial features by extracting landmarks, or features, from an image of the subject’s face. For example, an algorithm may analyze the relative position, size, and/or shape of the eyes, nose, cheekbones, and jaw.
These features are then used to search for other images with matching features. Other algorithms normalize a gallery of face images and then compress the face data, saving only the data in each image that is useful for face recognition. A probe image is then compared with the stored face data. Recognition algorithms can be divided into two main approaches: geometric, which looks at distinguishing features, and photometric, a statistical approach that distills an image into values and compares those values with templates to eliminate variances.
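To make the geometric approach concrete, here is a minimal sketch assuming the landmark coordinates (eyes, nose tip, mouth corners, and so on) have already been extracted by some detector. Each face is reduced to a vector of pairwise distances between landmarks, normalized by the inter-eye distance so scale does not matter, and a probe is matched to whichever stored face has the closest vector. This only illustrates the idea, not any particular product's algorithm.

```python
import numpy as np

def landmark_signature(landmarks):
    """Turn an (N, 2) array of (x, y) landmarks into a scale-invariant
    vector of pairwise distances. Indices 0 and 1 are assumed to be the eyes."""
    pts = np.asarray(landmarks, dtype=float)
    dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    upper = np.triu_indices(len(pts), k=1)          # each landmark pair counted once
    return dists[upper] / np.linalg.norm(pts[0] - pts[1])

def best_match(probe_landmarks, gallery):
    """gallery maps a name to that person's landmark array; return the closest name."""
    probe_sig = landmark_signature(probe_landmarks)
    scores = {name: np.linalg.norm(probe_sig - landmark_signature(lm))
              for name, lm in gallery.items()}
    return min(scores, key=scores.get)
```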
Popular recognition algorithms include principal component analysis using eigenfaces, linear discriminant analysis, elastic bunch graph matching using the Fisherface algorithm, the hidden Markov model, the multilinear subspace learning using tensor representation, and the neuronal motivated dynamic link matching.
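As an example from the photometric side, below is a bare-bones eigenfaces sketch: the gallery images are flattened into vectors, the mean face is subtracted, a handful of principal components are kept, and a probe is recognized by nearest neighbour in that compressed space. The number of components and the image layout are arbitrary choices for illustration.

```python
import numpy as np

def train_eigenfaces(gallery, n_components=20):
    """gallery: (n_images, h*w) array of flattened grayscale face images."""
    mean_face = gallery.mean(axis=0)
    centered = gallery - mean_face
    # SVD of the centered data yields the principal components ("eigenfaces").
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = vt[:n_components]       # (n_components, h*w)
    weights = centered @ eigenfaces.T    # each face compressed to n_components numbers
    return mean_face, eigenfaces, weights

def recognize(probe, mean_face, eigenfaces, weights, labels):
    """Project a flattened probe image and return the label of the nearest gallery face."""
    probe_weights = (probe - mean_face) @ eigenfaces.T
    distances = np.linalg.norm(weights - probe_weights, axis=1)
    return labels[int(np.argmin(distances))]
```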
A newly emerging trend, claimed to achieve improved accuracy, is three-dimensional face recognition. This technique uses 3D sensors to capture information about the shape of a face.
This information is then used to identify distinctive features on the surface of a face, such as the contour of the eye sockets, nose, and chin.
One advantage of 3D face recognition is that, unlike the other techniques, it is not affected by changes in lighting. It can also identify a face from a range of viewing angles, including a profile view.
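A tiny numeric sketch of why the 3D approach holds up: if a depth sensor reports the positions of surface landmarks in millimetres (the coordinates below are made-up values), the descriptors are plain geometric distances, which do not change with lighting and stay the same when the head is rotated to a different viewing angle.

```python
import numpy as np

# Hypothetical 3D landmark positions from a depth sensor, in millimetres.
face_3d = {
    "nose_tip":  np.array([  0.0,   0.0, 35.0]),
    "chin":      np.array([  0.0, -70.0, 10.0]),
    "left_eye":  np.array([-32.0,  38.0, 12.0]),
    "right_eye": np.array([ 32.0,  38.0, 12.0]),
}

# Simple shape descriptors: distances between surface landmarks.
nose_to_chin = np.linalg.norm(face_3d["nose_tip"] - face_3d["chin"])
eye_spacing  = np.linalg.norm(face_3d["left_eye"] - face_3d["right_eye"])

# Rotating every point with the same rotation matrix (a new viewing angle)
# leaves these distances unchanged, and no lighting term appears anywhere.
print(round(nose_to_chin, 1), round(eye_spacing, 1))
```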