AI Models

This SDK makes extensive use of AI models, particularly for minutiae detection and encoding tasks.

The following models are available:

Model                    Size      Description
FingerAligner1A          2,009 kB  Aligns a fingerprint before encoding.
FingerDetector2A         2,855 kB  Detects one or multiple fingerprints in images.
FingerEncoder1A          4,683 kB  Provides proprietary information at the fingerprint level to increase match accuracy.
FingerMinutiaDetector3B  3,522 kB  Detects minutiae. Corresponds to the id3_13B1 submission to NIST MINEX III.
FingerMinutiaEncoder1A   1,697 kB  Provides proprietary information at the minutiae level to increase match accuracy.

Important

Models can be downloaded at the following URL:

AI model files

AI model files have the .id3nn extension. You must copy the required model files into your application package. To reduce the size of your application, we recommend copying only the files needed by the biometric algorithms you use.

In your application's source code, you must specify the location of these files.
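For example, assuming the model files are bundled in a models directory next to the application (the path below is purely illustrative):

import os

# Illustrative only: point this at the directory that contains the
# .id3nn files shipped with your application package.
ai_models_path = os.path.join(os.path.dirname(__file__), "models")
if not os.path.isdir(ai_models_path):
    raise FileNotFoundError(f"AI model directory not found: {ai_models_path}")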

Warning

AI model files MUST NOT be renamed.

Loading AI models

It is recommended to load the AI models at application startup. The FingerLibrary class provides methods to load and unload AI model files in memory.

An example is given below:

# Load the models required for fingerprint detection, minutia
# detection, alignment, and encoding, running inference on the CPU.
FingerLibrary.load_model(ai_models_path, FingerModel.FINGER_DETECTOR_2A, ProcessingUnit.CPU)
FingerLibrary.load_model(ai_models_path, FingerModel.FINGER_MINUTIA_DETECTOR_3B, ProcessingUnit.CPU)
FingerLibrary.load_model(ai_models_path, FingerModel.FINGER_ALIGNER_1A, ProcessingUnit.CPU)
FingerLibrary.load_model(ai_models_path, FingerModel.FINGER_ENCODER_1A, ProcessingUnit.CPU)
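When the models are no longer needed, for example at application shutdown, they can be released from memory. The sketch below assumes an unload_model counterpart with a signature symmetric to load_model; check the FingerLibrary class reference for the exact method name and parameters.

# Assumed counterpart to load_model (verify the exact signature in the
# FingerLibrary class reference): releases a model from memory.
FingerLibrary.unload_model(FingerModel.FINGER_DETECTOR_2A, ProcessingUnit.CPU)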

Processing units

Inference of the AI models can be executed on either the CPU or, if available, the GPU, by specifying a ProcessingUnit enumeration value. The GPU option selects a default backend depending on your platform. Detailed backend options are available if you need to target a specific backend.

Warning

Inference on GPU is an experimental feature. Some models might be unstable on some backends or produce nonsensical results. We strongly encourage you to verify the results on those backends, and to contact our support in case of unexpected behaviour.
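For example, to run the minutia detector on the GPU (assuming a GPU value in the ProcessingUnit enumeration, as described above):

# Run minutia detection inference on the GPU instead of the CPU.
# As noted in the warning above, verify that the results match those
# of the CPU backend before relying on GPU inference in production.
FingerLibrary.load_model(ai_models_path, FingerModel.FINGER_MINUTIA_DETECTOR_3B, ProcessingUnit.GPU)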
