USIT -- University of Salzburg Iris-Toolkit v3.0.0

This page hosts USIT version 2 and later; version 1 is available on a separate page.

USIT, the University of Salzburg Iris Toolkit, is a Windows/Linux software package for iris recognition, made publicly available together with the book chapter:

C. Rathgeb, A. Uhl, P. Wild, and H. Hofbauer. “Design Decisions for an Iris Recognition SDK,” in K. Bowyer and M. J. Burge, editors, Handbook of Iris Recognition, Second Edition, Advances in Computer Vision and Pattern Recognition, Springer, 2016.

The software package includes algorithms for:

  • Iris Preprocessing
  • Feature Extraction
  • Feature Comparison
USIT is built around easy-to-use command-line tools (input and output rely on files). To download USIT, see the download section at the bottom of this page.
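
Because every tool reads and writes plain files, a complete experiment can be scripted by simply chaining the executables. The following Python sketch only illustrates that idea; the actual command-line options of caht, lg and hd are not documented in this section, so the argument lists below are placeholders that must be replaced with the options from each tool's usage output.

# Illustration of the file-based processing chain (segmentation ->
# feature extraction -> comparison). The argument lists are placeholders,
# NOT the real USIT options; consult each tool's usage output.
import subprocess

def run(cmd):
    print("running:", " ".join(cmd))
    subprocess.run(cmd, check=True)

image   = "u038-R_004.tiff"         # input eye image
texture = "u038-R_004.texture.png"  # normalized iris texture (hypothetical name)
code    = "u038-R_004.code.png"     # binary iris code (hypothetical name)

run(["caht", image, texture])             # segmentation + normalization
run(["lg", texture, code])                # 1D Log-Gabor feature extraction
run(["hd", code, "u038-R_005.code.png"])  # Hamming-distance comparison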

Chapter Abstract:
Open source software development kits are vital to (iris) biometric research in order to achieve comparability and reproducibility of research results. In addition, to further advances in the field of iris biometrics the community needs to be provided with state-of-the-art reference systems which serve as adequate starting point for new research. This chapter provides a summary of relevant design decisions for software modules constituting an iris recognition system. The proposal of general criteria and adequate concepts is complemented by a detailed description of how according design decisions are implemented in the University of Salzburg Iris Toolkit, an open source iris recognition software which contains diverse algorithms for iris segmentation, feature extraction, and comparison. Building upon a file-based processing chain the provided open source software is designed to support rapid prototyping as well as integration in existing frameworks achieving enhanced usability and extensibility. In order to underline the competitiveness of the presented iris recognition software, experimental evaluations of segmentation and feature extraction algorithms are carried out on a publicly available iris database and compared to a commercial product.

Readme

License

H. Hofbauer, C. Rathgeb, A. Uhl, and P. Wild,
University of Salzburg, AUSTRIA,
2020

Copyright (c) 2020, University of Salzburg All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

If this software is used in the preparation of an article, please include the following reference:

Text

C. Rathgeb, A. Uhl, P. Wild, and H. Hofbauer. “Design Decisions for an Iris Recognition SDK,” in K. Bowyer and M. J. Burge, editors, Handbook of Iris Recognition, Second Edition, Advances in Computer Vision and Pattern Recognition, Springer, 2016.

Bibtex

@incollection{USIT3,
    author     = {Christian Rathgeb and Andreas Uhl and Peter Wild and Heinz Hofbauer},
    title      = {Design Decisions for an Iris Recognition SDK},
    booktitle  = {Handbook of Iris Recognition},
    editor     = {Kevin Bowyer and Mark J. Burge},
    publisher  = {Springer},
    year       = {2016},
    series     = {Advances in Computer Vision and Pattern Recognition},
    edition    = {second edition},
}

Requirements

These programs require the following libraries:

Algorithm description

Segmentation

  • caht … Contrast-adjusted Hough Transform, now also includes the tunable version
  • wahet … Weighted Adaptive Hough and Ellipsopolar Transform, now also includes the tunable version
  • cahtvis … Same as caht but for visible light (searches for iris first then pupil)
  • ifpp … Iterative Fourier-series Push Pull
  • manuseg … Uses points from a manual segmentation to extract the iris texture

Tools

  • cahtlog2manuseg … Generates input for manuseg from caht segmentation logs. Can be used to segment masks for drop-in mask replacement (see the sketch after this list).
    How to:
    1. Segment with caht and log (-l) the segmentation.
    2. Convert the segmentation log to manuseg input files with cahtlog2manuseg.
    3. Use manuseg and the generated files to normalize drop-in masks.
  • wahetlog2manuseg … Basically the same as cahtlog2manuseg, but using elliptical parameters as generated by wahet instead of circular caht parameters.
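
The conversion these tools perform essentially turns logged boundary parameters into point lists that manuseg can read. The Python sketch below only illustrates that idea for the circular caht case; the exact log and manuseg file formats are not documented here, so the plain "x y per line" output used below is an assumption.

# Sketch of the idea behind cahtlog2manuseg: sample boundary points from
# logged circle parameters and write them as manuseg input files.
# The "x y" per-line format is an assumption, not the documented layout.
import math

def circle_to_points(cx, cy, r, n=32):
    """Sample n points on a circle with center (cx, cy) and radius r."""
    return [(cx + r * math.cos(2 * math.pi * k / n),
             cy + r * math.sin(2 * math.pi * k / n)) for k in range(n)]

def write_points(path, points):
    with open(path, "w") as f:
        for x, y in points:
            f.write(f"{x:.2f} {y:.2f}\n")

# Hypothetical circle parameters taken from a caht segmentation log:
write_points("img.inner.txt", circle_to_points(160, 140, 40))   # pupillary boundary
write_points("img.outer.txt", circle_to_points(160, 140, 100))  # limbic boundary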

Iris Mask Comparison

  • maskcmp … Comparison of iris masks
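
For orientation, comparing two binary iris masks typically boils down to an overlap measure. The sketch below computes intersection over union and pixel-wise agreement purely as an illustration; these are not necessarily the exact metrics maskcmp reports.

# Illustrative comparison of two binary iris masks (not necessarily the
# metrics implemented by maskcmp).
import numpy as np

def compare_masks(a, b):
    """a, b: boolean arrays of equal shape (True = usable iris pixel)."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    iou = inter / union if union else 1.0  # intersection over union
    agreement = (a == b).mean()            # fraction of identical pixels
    return iou, agreement

a = np.zeros((64, 512), dtype=bool)
b = np.zeros((64, 512), dtype=bool)
a[:, 100:400] = True
b[:, 120:420] = True
print(compare_masks(a, b))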

Iris Feature Extraction

  • lg … 1D Log-Gabor Feature Extraction (=> hd for comparison; see the sketch after this list)
  • cg … Complex Gabor filterbanks as used by Daugman (=> hd for comparison)
  • qsw … Extraction with the algorithm of Ma et al. (=> hd for comparison)
  • ko … Algorithm of Ko et al. (=> koc for comparison)
  • cr … Algorithm of Rathgeb and Uhl (=> hd for comparison)
  • cb … Context-based Iris Recognition (=> cbc for comparison)
  • dct … Algorithm of Monro et al. (=> dctc for comparison)
  • sift … SIFT points as iris code (=> siftc for comparison)
  • surf … SURF points as iris code (=> surfc for comparison)
  • lbp … Local Binary Pattern-based features (=> lbpc for comparison)
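
For orientation, lg follows the usual 1D Log-Gabor scheme: each row of the normalized iris texture is filtered with a log-Gabor kernel in the frequency domain and the phase of the complex response is quantized to two bits per sample. The numpy sketch below shows that generic scheme with made-up filter parameters; it is not the USIT lg implementation.

# Generic 1D Log-Gabor iris-code sketch (filter parameters are illustrative,
# not those of the USIT lg tool).
import numpy as np

def log_gabor_1d(n, f0=1.0 / 18.0, sigma_ratio=0.5):
    """One-sided frequency response of a 1D log-Gabor filter of length n."""
    f = np.fft.fftfreq(n)
    g = np.zeros(n)
    pos = f > 0
    g[pos] = np.exp(-(np.log(f[pos] / f0) ** 2) / (2 * np.log(sigma_ratio) ** 2))
    return g

def iris_code(texture):
    """texture: 2D array (rows x columns), e.g. a 64x512 normalized iris."""
    g = log_gabor_1d(texture.shape[1])
    resp = np.fft.ifft(np.fft.fft(texture, axis=1) * g, axis=1)
    # Two bits per sample: signs of the real and imaginary parts (phase quadrant).
    return np.concatenate([resp.real > 0, resp.imag > 0], axis=0)

code = iris_code(np.random.rand(64, 512))
print(code.shape)  # (128, 512) boolean iris code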

Comparators

  • koc … Algorithm of Ko et al.
  • cbc … Context-based algorithm
  • dctc … Algorithm of Monro et al.
  • siftc … Comparator for SIFT iris codes
  • surfc … Comparator for SURF iris codes
  • lbpc … Comparator for LBP-based iris codes
  • hd … Hamming Distance-based Comparator
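
The hd comparator is based on the fractional Hamming distance between iris codes, restricted to jointly unmasked bits and minimized over circular bit shifts to compensate eye rotation (see the ±16 bit default discussed in the changelog). Below is a minimal numpy sketch of that general idea, not the USIT implementation itself.

# Masked, shift-compensating fractional Hamming distance (general idea only).
import numpy as np

def masked_hd(code_a, mask_a, code_b, mask_b):
    """Fractional Hamming distance over jointly unmasked bits."""
    valid = np.logical_and(mask_a, mask_b)
    n = valid.sum()
    if n == 0:
        return None  # nothing to compare (cf. the -sf option)
    return np.logical_and(code_a ^ code_b, valid).sum() / n

def min_hd_over_shifts(code_a, mask_a, code_b, mask_b, shifts=range(-16, 17)):
    """Minimum HD over circular column shifts, compensating eye rotation."""
    best = None
    for s in shifts:
        d = masked_hd(np.roll(code_a, s, axis=1), np.roll(mask_a, s, axis=1),
                      code_b, mask_b)
        if d is not None and (best is None or d < best):
            best = d
    return best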

Verification

  • hdverify … Performance of Hamming Distance-based verification of iris codes

Evaluation

  • gen_stats_np.py … Generate statistics from score file

gen_stats_np

It can handle different input formats by way of regular expressions, but the main format is of the following form:

FILE1 FILE2 SCORE

where the user ID should be determinable (by regular expression) from the filename.

An example of a score file (first 10 lines only):

lg_caht_c1_bD_s1_u038-R_004.tiff lg_caht_c1_bD_s1_u038-R_004.tiff 0
lg_caht_c1_bD_s1_u038-R_004.tiff lg_caht_c1_bD_s1_u038-L_000.tiff 0.482091
lg_caht_c1_bD_s1_u038-R_004.tiff lg_caht_c1_bD_s1_u009-L_002.tiff 0.461379
lg_caht_c1_bD_s1_u038-R_004.tiff lg_caht_c1_bD_s1_u009-R_002.tiff 0.48866
lg_caht_c1_bD_s1_u038-R_004.tiff lg_caht_c1_bD_s1_u030-L_003.tiff 0.459765
lg_caht_c1_bD_s1_u038-R_004.tiff lg_caht_c1_bD_s1_u001-R_004.tiff 0.458024
lg_caht_c1_bD_s1_u038-R_004.tiff lg_caht_c1_bD_s1_u035-R_003.tiff 0.451192
lg_caht_c1_bD_s1_u038-R_004.tiff lg_caht_c1_bD_s1_u013-R_003.tiff 0.460288
lg_caht_c1_bD_s1_u038-R_004.tiff lg_caht_c1_bD_s1_u004-R_000.tiff 0.488936
lg_caht_c1_bD_s1_u038-R_004.tiff lg_caht_c1_bD_s1_u018-R_001.tiff 0.454028
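
A quick Python illustration of how the user ID is derived from such filenames via regular-expression groups (the idgen mechanism described below); this is not the gen_stats_np.py code itself.

# All capture groups of the ID expression are concatenated to form the user ID.
import re

def extract_id(filename, pattern=r"u(\d\d\d-[LR])_"):
    m = re.search(pattern, filename)
    if m is None:
        raise ValueError(f"no user id found in {filename}")
    return "".join(m.groups())  # multiple groups are simply concatenated

print(extract_id("lg_caht_c1_bD_s1_u038-R_004.tiff"))  # -> 038-R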

The regular expression to get the user ID here would be u(\d\d\d-[LR])_; groups are used to select the ID, multiple groups are allowed, and they are simply concatenated. An example output for the above input is:

CONFIG   linemodel:=space
CONFIG   idgen:=(\d\d\d-[LR])
CONFIG   ignorecase:=False
CONFIG   outfile:=None
CONFIG   distfile:=None
CONFIG   comparefile:=None
CONFIG   statistics:=True
CONFIG   reverse:=False
CONFIG   filename:=hd_lg_caht_mask_s16.txt
CONFIG   range:=[0, 1]
CONFIG   gencount:=672
CONFIG   fnamesquash:=(.*)
CONFIG   bins:=1000
CONFIG   impcount:=61104
672 genuines, 61104 imposters
EER = 19.504090% at threshold t = 0.445 
OVL_b = 32.641539%
AUC_b = 87.897819%
FNMR = 41.964286% at FMR = 0.100000% 
FNMR = 45.072857% at FMR = 0.010000% 
FMR = 99.996164% at FNMR = 0.100000% 
FMR = 99.998143% at FNMR = 0.010000% 

The output shows the equal error rate, the overlap coefficient and area under curve (both based on the binning), as well as different ROC operating points (FNMR at a given FMR and FMR at a given FNMR).
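
For reference, the equal error rate is the operating point where FMR and FNMR coincide, and the FNMR@FMR values are read off the same two error curves. The numpy sketch below shows how such numbers can be computed from lists of genuine and impostor dissimilarity scores; gen_stats_np.py itself works with a binned distribution and handles more cases, so this is an illustration only.

# Illustrative EER / FNMR@FMR computation from dissimilarity scores
# (lower score = better match), not the gen_stats_np.py implementation.
import numpy as np

def rates(genuine, impostor, threshold):
    """FMR and FNMR at a given decision threshold (accept if score <= threshold)."""
    fmr = np.mean(np.asarray(impostor) <= threshold)
    fnmr = np.mean(np.asarray(genuine) > threshold)
    return fmr, fnmr

def eer(genuine, impostor, thresholds=np.linspace(0.0, 1.0, 1001)):
    """Threshold and rate where FMR and FNMR are (approximately) equal."""
    diffs = [abs(np.subtract(*rates(genuine, impostor, t))) for t in thresholds]
    t = thresholds[int(np.argmin(diffs))]
    fmr, fnmr = rates(genuine, impostor, t)
    return t, (fmr + fnmr) / 2.0

genuine = np.random.normal(0.30, 0.05, 672)     # toy genuine scores
impostor = np.random.normal(0.47, 0.02, 61104)  # toy impostor scores
print(eer(genuine, impostor))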

Face/Face-part detection

  • gfcf … Gaussian Face and Face-part Classification Fusion

USIT packages

The packages contain software which is not part of the core USIT package. It was taken from different publications and is bundled with USIT for convenience.

Each subdirectory should be self-contained and contain the respective software as well as a readme.md that outlines license and usage information (in case they differ from the core USIT).

Also note that Windows binaries and makefiles may or may not be supplied in the packages. However, unless noted otherwise in the individual packages, the requirements are the same as for the base USIT package.

Binarized Statistical Image Features

Requirements

As base USIT and

License

The USIT License applies.

If this software is used in the preparation of an article, please include the following reference:

Text

Christian Rathgeb, Florian Struck, Christoph Busch, “Efficient BSIF-based Near-Infrared Iris Recognition”, in Proceedings of International Conference on Image Processing Theory, Tools and Applications (IPTA’16), 2016.

Bibtex

@INPROCEEDINGS{Rathgeb16c,
  AUTHOR     = {Christian Rathgeb and Florian Struck and Christoph Busch},
  TITLE      = {Efficient BSIF-based Near-Infrared Iris Recognition},
  BOOKTITLE  = {Proceedings of International Conference on Image Processing Theory, Tools and Applications (IPTA'16)},
  YEAR       = {2016},
}

CNN Masks to Manuseg Segmentation

Usage

cnnmasktomanuseg.py input.ext output_directory

The input file is read and processed with the circular boundary finding algorithm described in the paper below. The input filename is stripped of its extension, and parameter files for the inner (pupillary) boundary, output_directory/input.inner.txt, and the outer (sclera) boundary, output_directory/input.outer.txt, of the iris are written into the output directory in a format compatible with manuseg.
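
As a rough illustration of what circular boundary finding on a CNN mask can look like (this is not the algorithm described in the referenced paper), one can fit an enclosing circle to the iris region and another one to the pupil hole. The sketch assumes a binary mask in which the iris is marked as a ring with the pupil left as a hole, OpenCV being available, and a plain "x y per line" output format; all of these are assumptions for illustration only.

# Rough sketch: derive circular inner/outer iris boundaries from a binary
# CNN mask. NOT the referenced paper's algorithm; mask convention and output
# format are assumptions.
import math
import cv2
import numpy as np

mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)
mask = (mask > 127).astype(np.uint8)

# RETR_CCOMP yields outer contours (parent == -1) and holes (parent != -1).
contours, hierarchy = cv2.findContours(mask, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE)
outer_cnts = [c for c, h in zip(contours, hierarchy[0]) if h[3] == -1]
hole_cnts = [c for c, h in zip(contours, hierarchy[0]) if h[3] != -1]
if not hole_cnts:
    raise SystemExit("mask has no pupil hole; cannot estimate the inner boundary")

(ox, oy), orad = cv2.minEnclosingCircle(max(outer_cnts, key=cv2.contourArea))
(ix, iy), irad = cv2.minEnclosingCircle(max(hole_cnts, key=cv2.contourArea))

def write_circle(path, cx, cy, r, n=32):
    """Write n sampled boundary points as 'x y' lines (assumed format)."""
    with open(path, "w") as f:
        for k in range(n):
            a = 2 * math.pi * k / n
            f.write(f"{cx + r * math.cos(a):.2f} {cy + r * math.sin(a):.2f}\n")

write_circle("output_directory/mask.inner.txt", ix, iy, irad)
write_circle("output_directory/mask.outer.txt", ox, oy, orad)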

License

The USIT License applies.

If this software is used in the preparation of an article, please include the following reference:

Text

Heinz Hofbauer, Ehsaneddin Jalilian, and Andreas Uhl. “Exploiting superior CNN-based iris segmentation for better recognition accuracy”, Pattern Recognition Letters, vol. 120, pp. 17-23, 2019. DOI: 10.1016/j.patrec.2018.12.021.

Bibtex

@article{ Hofbauer19a,
    doi = {10.1016/j.patrec.2018.12.021},
    author = {Heinz Hofbauer and Ehsaneddin Jalilian and Andreas Uhl},
    title = {Exploiting superior CNN-based iris segmentation for better recognition accuracy},
    journal = {Pattern Recognition Letters},
    issn = {0167-8655},
    volume = {120},
    year = {2019},
    pages = {17-23},
}

Triple A

License

The USIT License applies.

If this software is used in the preparation of an article, please include the following reference:

Text

Christian Rathgeb, Heinz Hofbauer, Andreas Uhl, and Christoph Busch. “TripleA: Accelerated Accuracy-preserving Alignment for Iris-Codes”, Proceedings of the 9th IAPR/IEEE International Conference on Biometrics (ICB’16), 2016.

Bibtex

@INPROCEEDINGS{Rathgeb16b,
  AUTHOR     = {Christian Rathgeb and Heinz Hofbauer and Andreas Uhl and Christoph Busch},
  TITLE      = {{TripleA}: Accelerated Accuracy-preserving Alignment for Iris-Codes},
  BOOKTITLE  = {Proceedings of the 9th IAPR/IEEE International Conference on Biometrics (ICB'16)},
  YEAR       = {2016},
  PAGES      = {8}
}

Changelog

  • [v3.0.0] 2020.04.22

    IMPORTANT:
    The bump to version 3.0.0 is due to changes in behaviour that make this version incompatible with scripts written for version 2. Specifically, caht, wahet and hd changed their default behavior:

    caht and wahet
    These downscale the input image to a size of 320x280 if it is larger than 480x420 (more than 50% larger than the target size). This can be reverted with the -noscale command-line option.
    The reason for this change is that both segmentors contain assumptions about the image size and the image-to-iris ratio. For larger images, for example, the eyelash detection with a structuring element of fixed (pixel-based) size, as employed by wahet, would not find anything because the eyelashes are too large to match.
    hd
    The eye can rotate in the eye socket, so even if the head is kept in the same position there is rotation that has to be compensated for in iris matching. Various papers have looked into this effect and found ample rotation in the databases used for research; a minimum of about ±10° of rotation should be compensated. Given that the output of caht and wahet is 512x64, this translates to -s -16 16, which is now the default.
    • A shift of ±16 bits is enabled by default (-s -16 16).
    • Hard disk space is cheap, so -b is ENABLED by default (turn off with -boff).
    • Memory is cheap, so -# is ENABLED by default (turn off with -#off).

    Note: The -b and -# flags have been kept for compatibility's sake; they do nothing. If both -b and -boff (or -# and -#off) are present, the enabling flag overrides.

    In theory these changes create incompatibilities; in practice the difference should be minimal.

    Minor changes:

    • gen_stats_np now has the option --odauto, which enables writing of output and statistics and sets the filenames based on the input, with the extensions .eer and .stats. The direct options (-o and -d) can override the automatic filenames if required.
    • Added rank-1 accuracy (as well as rank 5 and 10) for identification evaluation to gen_stats_np. This is off by default as the computation is expensive; switch it on with --R1.
    • gen_stats_np should now be able to work better with inputs not generated by hd.
    • gen_stats_np now allows for floats in the range expression.
    • gen_stats_np now prints each warning only once and gives a summary and count at the end instead of spamming.
    • Fixed an error in gen_stats_np which calculated a wrong FNMR@FMR value if the FMR had multiple positions where it was equal to the threshold.
  • [v2.4.2] 2019.03.15

    • Added zero FMR and zero FNMR to the gen_stats_np.py output.

    • TripleA now has a -M option to extend the maximum shift.

    • Fixed the gen_stats_np.py McNemar and statistics part (the ID was based on the filename instead of the ID, so on rare occasions no match was found; imports now reflect the new scipy package structure).

    • Changed lg behaviour: it now skips inputs with empty data or mask instead of breaking, which should make batch processing more stable.

    • Changed qsw behaviour: it now skips inputs with empty data or mask instead of breaking, which should make batch processing more stable.

    • Changed hd to give some more informative errors when it fails.

    • Included a package, providing cnnmasktomanuseg.py, for the paper:

      Heinz Hofbauer, Ehsaneddin Jalilian, Andreas Uhl. “Exploiting superior CNN-based iris segmentation for better recognition accuracy”, in Pattern Recognition Letters 120, 2019. DOI: 10.1016/j.patrec.2018.12.021.

  • [v2.4.1] 2018.06.15

    • Fixed an error in gen_stats_np.py where output from hd, or more specifically a sorted list of filenames, was assumed. This should be more generic now.
    • Fixed an error in gen_stats_np.py where the area under curve was slightly overestimated (depending on the slope).
    • Fixed an error in wahetlog2manuseg where an ellipse with a spanning rectangle of width or height < 1 only produced two points.
  • [v2.4.0] 2017.11.23

    • Tunable versions of caht and wahet as well as the visible-light version cahtvis are in. The visible-light version of wahet is wahet with a negative border weight. These are the tools used in the chapter:

      Peter Wild, Heinz Hofbauer, James Ferryman, Andreas Uhl. “Robust Iris Image Segmentation”, In Iris and Periocular Biometrics, IET, 2016.

    • Fixed a division-by-zero bug in the mask generation of wahet.

    • Fixed a lot of errors when wahet failed to find a sane segmentation.

    • Fixed a bug in caht where the eyelid search on the image boundary led to out-of-bounds writes.

    • New tools:

      • cahtvis
      • gen_stats_np.py takes a score file, as generated by hd, and outputs statistics (EER, FNMR) and distribution files for plotting.
  • [v2.3.0] 2017.10.25

    • Fixed a crash when faulty parameters to hough_circle would generate a negative size (caht).
    • Fixed various memory problems with lookup_table in caht.
    • Fixed a division-by-zero bug in the mask generation of caht.
    • Hamming distance and masks: new options to skip failures (-sf) and log them (-sfl <file>); failures happen when there are no unmasked iris bits, i.e. nothing to compare.
      The default remains as it was up until now.
      Discussion: Currently the HD is calculated over zero unmasked bits, i.e. zero errors (which is technically correct). However, this bunches mask failures at HD 0, leading to grossly inflated FNMR at FMR=X% values. This is not useful, but it was the default up to now and remains so.
      The new options allow the more sensible approach of removing the affected comparisons from the results in an easy way (and logging them so they can be reported).
    • New tools:
      • cahtlog2manuseg
      • wahetlog2manuseg
  • [v2.2.0] 2016.01.12

    • Fixed the bug where an empty point list for ellipse fitting would cause manuseg to break. Now the one failing input is skipped and the rest runs through.

    • hd can now report the bit shift at which the optimal HD was found.

    • Fixed a bug where the lg, cg, cr and qsw features wrote the bit sequence for iris codes out of order within each byte, i.e. the byte order was correct but the bit order per byte was wrong. This led to alignment errors with hd rotation correction via the -s option.

    • Included a package for Binarized Statistical Image Features (bsif and bsifc) from the paper:

      Christian Rathgeb, Florian Struck, Christoph Busch. “Efficient BSIF-based Near-Infrared Iris Recognition”, in Proceedings of International Conference on Image Processing Theory, Tools and Applications (IPTA’16), 2016.

  • [v2.1.0] 2016.03.22

    • Included package for TripleA from the paper:

      C. Rathgeb, H. Hofbauer, A. Uhl, and C. Busch. “TripleA: Accelerated Accuracy-preserving Alignment for Iris-Codes”, Proceedings of the 9th IAPR/IEEE International Conference on Biometrics (ICB’16), 2016.

  • [v2.0.0] 2016.02.04

    • Scaling options added as used in the paper:

      Heinz Hofbauer, Fernando Alonso-Fernandez, Josef Bigun, and Andreas Uhl. “Experimental Analysis Regarding the Influence of Iris Segmentation on the Recognition Rate,” in IET Biometrics, 2016.

    • Variable iris texture height support added as used in the paper:

      Rudolf Schraml, Heinz Hofbauer, Alexander Petutschnigg, and Andreas Uhl. “Tree Log Identification Based on Digital Cross-Section Images of Log Ends Using Fingerprint and Iris Recognition Methods,” In Proceedings of the 16th International Conference on Computer Analysis of Images and Patterns (CAIP’15), pp. 752-765, LNCS, Springer Verlag, 2015

    • New tools:

      • cg
      • lbp and lbpc
      • surf and surfc
      • sift and siftc
      • manuseg
    • Renamed iffp to ifpp (for iterative fourier push pull).

Sources and Executables

The code is available upon request.

Please fill out the request form on this page to obtain a download link for the software.
