Portrait processing

The PortraitProcessor module facilitates the capture and analysis of portraits in various scenarios such as enrolment or face verification.

The module takes images as input and can perform the following operations:

  • Face detection and tracking with prediction models

  • Landmark detection (68 points)

  • Head pose estimation

  • Subject’s position verification

  • Template encoding and updating

  • Age estimation

  • Background uniformity estimation

  • Expression estimation

  • Occlusion detection

  • Presentation attack detection

  • Background removal

  • Face attribute estimation (eye opening, eye gaze, face mask, gender, glasses, head covering, make-up, mouth opening, smile)

  • Photographic attributes estimation

  • Verification of compliance with a number of quality check points

  • Unified quality score computation

Important

The following AI models are required by the PortraitProcessor module:

  • FaceDetector4B, or another preferred face detector (see Face detection for details).

  • FaceEncoder9B, or another preferred face encoder (see Face encoding for details).

  • FaceLandmarksEstimator2A

  • FacePoseEstimator1A

  • FaceAttributesClassifier2A

  • FaceOcclusionDetector2A

  • FaceColorBasedPad3A
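
The snippet below is a minimal sketch of how the required model files might be supplied when constructing the processor. The models_path argument and the directory layout are assumptions for illustration only; refer to your SDK distribution for the actual loading mechanism.

# hypothetical model setup: the models_path argument and the directory layout
# are assumptions, not the documented API of PortraitProcessor
portrait_processor = PortraitProcessor(
    models_path="/opt/sdk/models",  # assumed folder containing FaceDetector4B,
                                    # FaceEncoder9B, FaceLandmarksEstimator2A, etc.
    thread_count=4,
)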

Creating and updating a portrait

  • Creating a Portrait (Single Image):

    For analysis based on a single image, use the PortraitProcessor.createPortrait method. This method generates a Portrait object from a static image, enabling subsequent analysis (e.g., liveness detection or biometric processing).

  • Updating a Portrait (Live Capture):

    For real-time or live video capture, the PortraitProcessor.updatePortrait method is preferred. This method continuously updates the Portrait object with each new frame, ensuring a more dynamic and accurate analysis during live sessions.

  • Checking Portrait Status:

    The current state of the portrait (e.g., whether it’s initialized, being processed, or completed) can be monitored through the Portrait.status property. This property provides key status updates throughout the portrait’s lifecycle, which can be used to control workflow logic. A minimal sketch combining these calls follows this list.
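
The sketch below combines the two entry points with a status check. The create_portrait name matches the example later in this section; update_portrait (as the assumed Python counterpart of PortraitProcessor.updatePortrait), its signature, and the frame source are assumptions used for illustration.

# minimal sketch: portrait_processor is an initialized PortraitProcessor instance,
# and update_portrait (assumed Python name of updatePortrait) takes the portrait
# and the next frame; adapt both to your SDK bindings

# single image: create the portrait once from a still image
portrait = portrait_processor.create_portrait(still_image)

# live capture: keep refining the same portrait with each new frame
for frame in camera_frames:
    portrait_processor.update_portrait(portrait, frame)
    if portrait.status == PortraitStatus.CREATED:
        # the portrait is ready for further analysis
        break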

Quality check points

The PortraitProcessor module verifies compliance with the quality check points defined below.

Important

The quality check points are verified only if the associated processing option is enabled.

Photographic quality

  • The brightness is well balanced.

  • The image is coloured, not grayscale.

  • The skin looks natural.

  • The image resolution is correct.

  • The dynamic range of the image is correct.

  • The image is sharp.

  • No flash reflection is visible.

  • No noise is present in the image.

  • The image is not pixelated.

  • No red-eye is present.

  • The background is uniform.

Facial attributes

  • The subject’s face is frontal.

  • The face expression is neutral.

  • The eyes are visible.

  • The eyes are open.

  • The subject does not wear glasses.

  • The subject looks straight towards the camera.

  • The mouth is visible.

  • The mouth is closed.

  • The subject is not smiling.

  • The subject does not wear a hat.

  • The nose is visible.

Geometry

  • The height of the head in the image is correct.

  • The width of the head in the image is correct.

  • The horizontal position of the head in the image is correct.

  • The vertical position of the head in the image is correct.

Note

The vertical position is the distance from the bottom edge of the image to the imaginary line passing through the centers of the eyes, expressed as a percentage of the total image height.
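
As an illustration of this definition, the sketch below computes the vertical position from the eye centers; the function name and the pixel coordinate convention (y growing downwards from the top edge) are assumptions, not part of the module.

def eye_line_vertical_position(left_eye_y, right_eye_y, image_height):
    # average the two eye centers to get the imaginary eye line
    eye_line_y = (left_eye_y + right_eye_y) / 2.0
    # distance from the bottom edge to the eye line, as a percentage of image height
    return 100.0 * (image_height - eye_line_y) / image_height

# e.g. eyes at y = 300 px in a 1000 px high image give a vertical position of 70 %
print(eye_line_vertical_position(300, 300, 1000))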

Presentation attack detection

Presentation Attack Detection (PAD) is enabled through the PortraitProcessor.detectPresentationAttack method. This method implements a passive liveness detection algorithm that analyzes multiple frames to differentiate between bona-fide users and potential spoofing attempts (such as photos or masks). For optimal performance, the user must position their face centrally within the frame.

The result of the detection is reflected in the Portrait.padStatus property, which will indicate whether the portrait corresponds to a bona-fide subject or a presentation attack.

Note

Liveness detection typically requires the analysis of several frames to ensure robustness. To enhance user experience and guide proper interactions, the Portrait.instruction property provides real-time feedback, containing specific instructions that help users adjust their position or actions.

Important

In certain conditions, such as low image quality or challenging environmental factors, accurate liveness detection may be difficult. It is crucial to assess the photographic quality of input images (e.g., lighting, focus) before invoking the PAD method to minimize the risk of false detections and improve the feedback provided to users.
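
The sketch below outlines a live PAD session. Only detect_presentation_attack and the instruction property are taken from this section; the update_portrait call, the pad_status Python name, and the PadStatus values are assumptions used for illustration.

# live PAD sketch: portrait_processor is an initialized PortraitProcessor instance;
# update_portrait, pad_status, and the PadStatus values are assumptions to adapt
# to the actual Python bindings of your SDK
portrait = portrait_processor.create_portrait(first_frame)

for frame in camera_frames:
    portrait_processor.update_portrait(portrait, frame)

    # real-time guidance for the user (e.g., move closer, center the face)
    print(portrait.instruction)

    portrait_processor.detect_presentation_attack(portrait)
    if portrait.pad_status != PadStatus.UNKNOWN:
        break

if portrait.pad_status == PadStatus.BONA_FIDE:
    print("bona-fide subject")
else:
    print("presentation attack suspected")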

Example

The example below demonstrates how to analyze a portrait from a single image:

# PortraitProcessor and PortraitStatus are provided by the SDK’s Python package;
# import them according to your distribution before running this example.

# initialize the portrait processor
portrait_processor = PortraitProcessor(
    thread_count=4,
)

# process the portrait on the image (assumed to have been loaded or captured beforehand)
portrait = portrait_processor.create_portrait(image)

# check status
if portrait.status == PortraitStatus.CREATED:

    portrait_processor.detect_occlusions(portrait)
    portrait_processor.estimate_face_attributes(portrait)
    portrait_processor.estimate_photographic_quality(portrait)
    portrait_processor.estimate_background_uniformity(portrait)
    portrait_processor.detect_presentation_attack(portrait)
    
    # check presentation attack
    print(f"PAD score = {portrait.pad_score}")

    # show the quality check points (a sample helper is sketched after this example)
    display_quality_checkpoints(portrait.quality_checkpoints)

    # crop portrait to ICAO format
    crop_image = portrait_processor.crop_icao_portrait(portrait)
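
The display_quality_checkpoints helper used above is not part of the module. A possible implementation is sketched below; the name and compliant attributes of a check point are assumptions and may differ in your SDK version.

def display_quality_checkpoints(quality_checkpoints):
    # print one line per quality check point; 'name' and 'compliant' are assumed attributes
    for checkpoint in quality_checkpoints:
        state = "OK" if checkpoint.compliant else "NOT COMPLIANT"
        print(f"{checkpoint.name}: {state}")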

See also