Face tracking

The FaceTracker class tracks faces across consecutive frames. It combines face detection with motion prediction models, which makes it robust to occlusions, and it automatically creates and consolidates face templates over time.

Important

The face tracker requires the relevant AI models to be loaded before it can run. See Face detection models for details.

The face tracker outputs a list of TrackedFace objects, each containing the same information as a DetectedFace:

  • An identifier

  • A face detection score

  • The face bounds

  • The portrait bounds

  • The interocular distance (IOD)

  • The 5 landmark features (eyes, nose and mouth)

Plus the following:

Predicted bounds

Predicted bounds are computed with a Kalman filter, which makes them smooth and robust to occasional missed detections.
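
To illustrate the idea (not the SDK's actual implementation), here is a minimal constant-velocity Kalman filter applied to the x-coordinate of a face-box centre; note how it keeps producing smooth predictions while detections are missing. All names in this sketch are hypothetical.

# Minimal 1D constant-velocity Kalman filter, illustrating how predicted
# bounds can stay smooth when a detection is missed. Illustration only,
# not the SDK's implementation.
import numpy as np

class Kalman1D:
    def __init__(self, q=1.0, r=25.0):
        self.x = np.zeros(2)              # state: [position, velocity]
        self.P = np.eye(2) * 1e3          # state covariance
        self.F = np.array([[1.0, 1.0],    # constant-velocity transition
                           [0.0, 1.0]])
        self.H = np.array([[1.0, 0.0]])   # we only measure the position
        self.Q = np.eye(2) * q            # process noise
        self.R = np.array([[r]])          # measurement noise

    def step(self, z):
        # Predict.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update only when a detection is available; on a missed
        # detection the filter simply coasts on its prediction.
        if z is not None:
            y = z - self.H @ self.x
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + K @ y
            self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x[0]

kf = Kalman1D()
# Noisy x-centres of a face moving right; None marks a missed detection.
for z in [100, 112, 118, None, None, 141, 152]:
    print(round(kf.step(np.array([z]) if z is not None else None), 1))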

Parameters

The face tracker defines the following parameters:

  • Detection model: Face detection model used to detect faces.

  • Detection threshold: Confidence threshold of the face detector. Range is 0 to 100. Default value is 50.

  • Encoding model: Model used to create features and assess consistency among views of a given face. Default value is FaceEncoder9B. Other models may offer a better accuracy/speed trade-off.

  • Matching threshold: Minimum match score required to preserve the ID of a tracked face between frame t-1 and frame t. Default value is 3000, which corresponds to a False Match Rate of 1/1000.

  • Maximum tracked face age: Maximum number of consecutive non-detections before a tracked face is deleted. Default value is 30, which corresponds to 2 s at a frame rate of 15 FPS. Adapt this value to your needs in terms of tracker identity memory (in seconds) and the frame rate measured on the target platform; see the sketch after this list.

  • Minimum tracked face age: Minimum number of consecutive detections before a tracked face is created. Default value is 1 for FaceDetector3B, whose false detection rate is low enough. With a less accurate detector (such as FaceDetector3C), consider increasing this value slightly to avoid false tracks.

  • NMS IOU threshold: Non-maximum suppression (NMS) intersection-over-union (IOU) threshold. A high threshold allows more overlapping faces to be detected, which is useful in multi-face scenarios; in a portrait scenario, prefer a low threshold. The sketch after this list shows how the IOU of two boxes is computed.

  • Thread count: Number of threads to use for face detection. Default value is 1.
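
To make the age and NMS parameters concrete, here is a small, self-contained sketch: a helper that converts a desired identity memory into the maximum tracked face age, and the IOU computation that the NMS threshold is compared against. The function names are ours for illustration, not part of the SDK.

# Illustrative helpers for the parameters above; names are ours, not SDK API.
def max_tracked_face_age(identity_memory_s, measured_fps):
    """Desired identity memory (seconds) -> parameter value in frames."""
    return round(identity_memory_s * measured_fps)

assert max_tracked_face_age(2.0, 15.0) == 30   # the documented default
print(max_tracked_face_age(2.0, 8.0))          # 16 frames at 8 FPS

def iou(a, b):
    """Intersection-over-union of two (left, top, right, bottom) boxes;
    NMS discards the lower-scored box when this exceeds the threshold."""
    w = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    h = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = w * h
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union

# Two heavily overlapping boxes: suppressed at threshold 40, kept at 80.
print(round(iou((0, 0, 100, 100), (20, 0, 120, 100)), 2))   # 0.67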

Example

Python

# Load the face detection AI model (snake_case names below are assumed to
# mirror the other language bindings).
FaceLibrary.load_model(model_path, FaceModel.FACE_DETECTOR_4B, ProcessingUnit.CPU)

# Create a new instance of the FaceTracker class.
face_tracker = FaceTracker(
    confidence_threshold=50,
    model=FaceModel.FACE_DETECTOR_4B,
    nms_iou_threshold=40,
    thread_count=4
)

# Warm up the face tracker.
face_tracker.warm_up(512, 512)

# Load an image from a file.
image = Image.from_file("image1.jpg", PixelFormat.BGR_24_BITS)

# Downscale the image to speed up detection.
downscaled_image = image.clone()
scale = downscaled_image.downscale(256)

# Track faces on the downscaled image.
tracked_face_list = face_tracker.track_faces(downscaled_image)

# Enumerate tracked faces.
for face in tracked_face_list:
    bounds = face.bounds

# Get the largest tracked face.
face = tracked_face_list.get_largest_face()

# Rescale its coordinates back to the original image.
face.rescale(1 / scale)

# Dispose of all resources allocated to the FaceTracker.
del face_tracker
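
The snippet above processes a single image. In a live scenario the tracker is fed consecutive frames and the returned IDs persist across them; here is a minimal sketch of such a loop, assuming an OpenCV capture and a hypothetical Image.from_numpy conversion helper.

# Hypothetical per-frame loop: feed consecutive frames to the tracker and
# rely on the persistent IDs it returns. The cv2 capture and Image.from_numpy
# helper are assumptions made for the sake of the sketch.
import cv2

capture = cv2.VideoCapture("video1.mp4")
face_tracker = FaceTracker(confidence_threshold=50, thread_count=4)

while True:
    ok, frame = capture.read()          # frame is a BGR numpy array
    if not ok:
        break
    image = Image.from_numpy(frame)     # hypothetical conversion helper
    for face in face_tracker.track_faces(image):
        # The ID is stable for as long as the face stays tracked.
        print(face.id, face.predicted_bounds)

capture.release()
del face_tracker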
Dart

// Load the face detection AI model.
FaceLibrary.loadModel(modelPath, FaceModel.faceDetector4B, ProcessingUnit.cpu);

// Create a new instance of the FaceTracker class.
var faceTracker = FaceTracker()
  ..confidenceThreshold = 50
  ..model = FaceModel.faceDetector4B
  ..nmsIouThreshold = 40
  ..threadCount = 4;

// Load an image from a file.
Image image = Image.fromFile("image1.jpg", PixelFormat.bgr24Bits);

// Track faces on the image.
TrackedFaceList trackedFaceList = faceTracker.trackFaces(image);

// Enumerate tracked faces.
for (TrackedFace face in trackedFaceList) {
  // ...
}

// Dispose of all resources allocated to the FaceTracker.
faceTracker.dispose();
C#

// Load the face detection AI model.
FaceLibrary.LoadModel(modelPath, FaceModel.FaceDetector4B, ProcessingUnit.Cpu);

// Create a new instance of the FaceTracker class.
var faceTracker = new FaceTracker()
{
    ConfidenceThreshold = 50,
    Model = FaceModel.FaceDetector4B,
    NmsIouThreshold = 40,
    ThreadCount = 4
};

// Load an image from a file.
Image image = Image.FromFile("image1.jpg", PixelFormat.Bgr24Bits);

// Detect faces on the image.
TrackedFaceList trackedFaceList = faceTracker.TrackFaces(image);

// Enumerate tracked faces.
foreach (TrackedFace face in trackedFaceList)
{
    // ...                
}

// Dispose of all resources allocated to the FaceTracker.
faceTracker.Dispose();
Java

// Load the face detection AI model.
FaceLibrary.loadModel(modelPath, FaceModel.FACE_DETECTOR_4B, ProcessingUnit.CPU);

// Create a new instance of the FaceTracker class.
FaceTracker faceTracker = new FaceTracker();
faceTracker.setConfidenceThreshold(50);
faceTracker.setModel(FaceModel.FACE_DETECTOR_4B);
faceTracker.setNmsIouThreshold(40);
faceTracker.setThreadCount(4);

// Load an image from a file.
Image image = Image.fromFile("image1.jpg", PixelFormat.BGR_24_BITS);

// Track faces on the image.
TrackedFaceList trackedFaceList = faceTracker.trackFaces(image);

// Enumerate tracked faces.
for (TrackedFace face : trackedFaceList) {
    // ...
}

// Dispose of all resources allocated to the FaceTracker.
faceTracker.dispose();
Kotlin

// Load the face detection AI model.
FaceLibrary.loadModel(modelPath, FaceModel.FACE_DETECTOR_4B, ProcessingUnit.CPU)

// Create a new instance of the FaceTracker class.
val faceTracker = FaceTracker().apply {
    confidenceThreshold = 50
    model = FaceModel.FACE_DETECTOR_4B
    nmsIouThreshold = 40
    threadCount = 4
}

// Load an image from a file.
val image = Image.fromFile("image1.jpg", PixelFormat.BGR_24_BITS)

// Track faces on the image.
val trackedFaceList = faceTracker.trackFaces(image)

// Enumerate tracked faces.
for (face in trackedFaceList) {
    // ...
}

// Dispose of all resources allocated to the FaceTracker.
faceTracker.dispose()
Swift

// Load the face detection AI model.
FaceLibrary.loadModel(modelPath, model: .faceDetector4B, processingUnit: .cpu)

// Create a new instance of the FaceTracker class.
var faceTracker = FaceTracker()
faceTracker.confidenceThreshold = 50
faceTracker.model = .faceDetector4B
faceTracker.nmsIouThreshold = 40
faceTracker.threadCount = 4

// Load an image from a file.
let image = Image(fromFile: "image1.jpg", pixelFormat: .bgr24Bits)

// Track faces on the image.
let trackedFaceList = faceTracker.trackFaces(image: image)

// Enumerate tracked faces.
for face in trackedFaceList {
    // ...
}

// Dispose of all resources allocated to the FaceTracker.
faceTracker.dispose()
C

ID3_FACE_IMAGE image = NULL;
ID3_FACE_TRACKER tracker = NULL;
ID3_TRACKED_FACE_LIST tracked_face_list = NULL;
ID3_TRACKED_FACE tracked_face = NULL;
int face_count = 0;
int err;

// Load the face detection AI model.
err = id3FaceLibrary_LoadModel(models_dir.c_str(), id3FaceModel_FaceDetector3B, id3FaceProcessingUnit_Cpu);

// Create a new instance of the FaceTracker class.
if (err == id3FaceError_Success)
{
    err = id3FaceTracker_Initialize(&tracker);

    if (err == id3FaceError_Success)
        err = id3FaceTracker_SetConfidenceThreshold(tracker, 50);

    if (err == id3FaceError_Success)
        err = id3FaceTracker_SetModel(tracker, id3FaceModel_FaceDetector3B);

    if (err == id3FaceError_Success)
        err = id3FaceTracker_SetNmsIouThreshold(tracker, 40);

    if (err == id3FaceError_Success)
        err = id3FaceTracker_SetThreadCount(tracker, 4);
}

// Load an image from a file.
if (err == id3FaceError_Success)
{
    err = id3FaceImage_Initialize(&image);

    if (err == id3FaceError_Success)
        err = id3FaceImage_FromFile(image, "image1.jpg", id3FacePixelFormat_Bgr24Bits);
}

// Track faces on the image.
if (err == id3FaceError_Success)
{
    err = id3TrackedFaceList_Initialize(&tracked_face_list);

    if (err == id3FaceError_Success)
        err = id3TrackedFace_Initialize(&tracked_face);

    if (err == id3FaceError_Success)
        err = id3FaceTracker_TrackFaces(tracker, image, tracked_face_list);
}

// Enumerate tracked faces.
if (err == id3FaceError_Success)
{
    err = id3TrackedFaceList_GetCount(tracked_face_list, &face_count);

    if (err == id3FaceError_Success)
    {
        for (int i = 0; i < face_count; i++)
        {
            err = id3TrackedFaceList_Get(tracked_face_list, i, tracked_face);

            if (err == id3FaceError_Success)
            {
                id3FaceRectangle bounds;
                err = id3TrackedFace_GetBounds(tracked_face, &bounds);
            }
        }
    }
}

// Dispose of all resources.
if (tracked_face != NULL)
    id3TrackedFace_Dispose(&tracked_face);

if (tracked_face_list != NULL)
    id3TrackedFaceList_Dispose(&tracked_face_list);

if (tracker != NULL)
    id3FaceTracker_Dispose(&tracker);

See also