Advanced Techniques for Securing Digital Images

Purpose of the STSM:

The purpose of this short term scientific mission (STSM) was to establish a collaboration with the hosting group in the field of Multimedia Security at the Otto-von-Guericke University of Magdeburg (Germany). I visited Magdeburg from 8 to 21 February 2016, and my academic host was Prof. Dr. Jana Dittmann.

 

Work carried out during the STSM:

During the two weeks of the STSM I was able to discuss advanced techniques and algorithms for securing digital fingerprint images. The members of the hosting group showed me their activities concerning modern scanning techniques for fingerprint traces. I was shown the chromatic white light sensor and the laser scanning microscope, which are typically used to measure surface properties for quality assurance purposes. Invertible fragile image watermarking is applied to the resulting high-resolution biometric data to assure the authenticity and integrity of the digital fingerprint images in a forensic investigation. Privacy protection is added by encrypting and substituting the fingerprint area. This scheme can be extended to a digital signature scenario with hierarchical access for further forensic investigation. The capacity of the watermark embedded in the digital images is evaluated both in theory and in practical tests using image samples. Preliminary results obtained with this technique at the host institution are promising, so I hope we will continue to collaborate in this field in the next period.
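As a rough illustration of how a fragile watermark can bind integrity information to an image (a minimal sketch only, not the host group's invertible scheme, whose details are not reproduced here), one can embed a hash of the image content into the least-significant bit plane so that any later modification invalidates the mark:

    # Toy fragile watermark (illustrative only): embed a SHA-256 digest of the
    # seven most-significant bit planes into the LSBs of the first 256 pixels.
    # Any change to the upper bit planes breaks verification; changes to LSBs
    # outside the embedded region are not detected by this toy scheme.
    import hashlib
    import numpy as np

    def embed_fragile_mark(img: np.ndarray) -> np.ndarray:
        """img: 2-D uint8 grayscale image with at least 256 pixels."""
        marked = img.copy()
        content = (marked >> 1).tobytes()                    # content without LSB plane
        digest_bits = np.unpackbits(
            np.frombuffer(hashlib.sha256(content).digest(), dtype=np.uint8))
        flat = marked.reshape(-1)                            # view into `marked`
        flat[:digest_bits.size] = (flat[:digest_bits.size] & 0xFE) | digest_bits
        return marked

    def verify_fragile_mark(img: np.ndarray) -> bool:
        content = (img >> 1).tobytes()
        digest_bits = np.unpackbits(
            np.frombuffer(hashlib.sha256(content).digest(), dtype=np.uint8))
        return bool(np.array_equal(img.reshape(-1)[:digest_bits.size] & 1, digest_bits))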

 

The University's electronic database supplied me with considerable material – articles and reviews – which promises to be very helpful in my future research work.

 

My hosts also made me familiar with their experience regarding the education profile ForensikDesign@Informatik, taught at the University of Magdeburg.

 

Outcomes of the STSM:

The outcomes of this STSM can be summarized as follows:

  • Exchange of current knowledge, methods and research experience regarding the securing of image data and its implications for modern forensics;
  • Becoming familiar with the methodology developed by the hosting group and its application in modern society;
  • Further collaboration on joint master courses in the field of Multimedia Security is being considered.

Towards improved biometrics at a distance for forensics

Visitor:
    Dr. Julian Fierrez, Universidad Autonoma de Madrid, SPAIN
Host:
    Prof. Stan Z. Li, Institute of Automation, Chinese Academy of Sciences, CHINA
Period:
    13-24 February, 2016
Title:
    Towards improved biometrics at a distance for forensics
Goals (all achieved):
    1.- Presentation of research activities at UAM related to biometrics at a distance and soft biometrics by Dr. Fierrez.
    2.- Interaction with the group members at CBSR-CASIA related to biometrics at a distance and soft biometrics with application to big volumes of video data.
    3.- Experimental work using the algorithms available at UAM for biometrics at a distance on the research databases available at CASIA.

Towards a General Framework for Privacy-Preserving Unimodal and Multimodal Biometric Verification


Beneficiary: Marta Gomez-Barrero, PhD Student at ATVS, Universidad Autonoma de Madrid, Spain

Host: Prof. Dr. Christoph Busch, NISlab, Norwegian University of Science and Technology (NTNU), Gjovik, Norway

Period: 04/01/2016 – 15/02/2016

 

Purpose of the STSM

 

Even though biometric recognition systems are nowadays widely deployed, many concerns arise from the privacy issues posed by biometric data storage. Among the various schemes proposed in the literature, Bloom filter based template protection has been successfully applied to different biometric traits (iris, face, fingerprint, and fingervein in a past STSM), granting irreversibility while preserving verification accuracy.

However, the application of this protection scheme to a given biometric characteristic is not straightforward: appropriate values for its parameters have to be computed ad hoc. Moreover, so far a multi-biometric approach has only been proposed for protected templates of the same size.

In the present Short Term Scientific Mission (STSM), we have continued our shared research line on template protection and devised a new general framework for estimating the Bloom filter parameters from statistical measures of the unprotected templates. Furthermore, we have proposed a new multi-biometric approach for templates of different sizes based on a weighted feature-level fusion. The soundness of both methods has been analysed for several characteristics, namely face, iris, fingerprint and fingervein.
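For context, the following is a minimal sketch of Bloom filter based template protection as described in the literature; the parameter names, block layout and dissimilarity measure are illustrative assumptions, not the framework developed during this STSM:

    # Illustrative Bloom-filter template protection: a binary template with
    # n_bits rows is split into blocks of n_words columns; each column is read
    # as an integer that sets one bit of a 2**n_bits Bloom filter.
    import numpy as np

    def protect_template(binary_template: np.ndarray, n_bits: int, n_words: int):
        """binary_template: 2-D 0/1 array with n_bits rows. Returns a list of filters."""
        rows, cols = binary_template.shape
        assert rows == n_bits, "template height must equal n_bits"
        filters = []
        for start in range(0, cols - n_words + 1, n_words):
            block = binary_template[:, start:start + n_words]
            bf = np.zeros(2 ** n_bits, dtype=np.uint8)
            for col in block.T:                      # each column indexes one bit
                idx = int("".join(str(int(b)) for b in col), 2)
                bf[idx] = 1
            filters.append(bf)
        return filters

    def dissimilarity(filters_a, filters_b):
        """Normalised Hamming-style dissimilarity between two protected templates."""
        scores = []
        for a, b in zip(filters_a, filters_b):
            support = np.count_nonzero(a) + np.count_nonzero(b)
            if support:
                scores.append(np.count_nonzero(a ^ b) / support)
        return float(np.mean(scores))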

 

Work carried out during the STSM

 

  1. Development of a methodology to reliably estimate the main parameters of Bloom filter based template protection schemes.
  2. Development of a general multi-biometric template protection system based on Bloom filters and a weighted feature level fusion.
  3. Evaluation of the proposed methodologies on several biometric characteristics – face, iris, fingerprint and fingervein – and feature level fusion – face and iris.

 

Results of the STSM

 

A new framework has been proposed for estimating the two main parameters of Bloom filter based schemes from a statistical analysis of the unprotected binary templates. A balance is reached between the irreversibility provided and the degradation in accuracy.

For the validation of the proposed framework, we use two independent sets of databases to avoid biased results. First, appropriate values for the system parameters are estimated on the development databases. Then, the performance of the protected systems is compared to that of the unprotected systems, evaluated on the test database. Comparing the evaluation results in terms of the Detection Error Trade-Off (DET) curves, we observe that accuracy is preserved for all characteristics considered.

Finally, the performance of the multi-biometric approach is compared to an unprotected score-level fusion, achieving higher accuracy. As a consequence, we may conclude that we achieve: i) higher verification accuracy and ii) more secure and privacy-protected templates, thanks to the feature-level fusion carried out in the protected domain.

 

Future Collaboration

 

As a shared future research line, we highlight the combination of the Bloom filter based template protection scheme with another protection approach based on honey templates, developed at NISlab, which further protects the privacy of the subject. An article with the first analysis and results has been submitted.

Additionally, we are currently preparing a manuscript to report the results achieved during the STSM.

5th Int. Conference on Pattern Recognition Applications and Methods (ICPRAM) 2016

The International Conference on Pattern Recognition Applications and Methods aims to become a major point of contact between researchers, engineers and practitioners in the area of Pattern Recognition, from both theoretical and application perspectives.

The goal is to disseminate the results of the Action and to promote awareness of the available forensic biometrics technologies in the scientific community by organising an Action presentation within the ICPRAM annual conference on Pattern Recognition.
This event will allow the Action’s members to present their latest scientific results as well as to meet and propose a closer scientific exchange and open discussion between the Pattern Recognition and Forensic scientific communities. It will constitute a step forward in promoting awareness of forensic biometrics technologies.

Forensic Source Camera Classification

Beneficiary:      Luca Debiasi, University of Salzburg, Salzburg (AT)

Host:                 Kristin Norell, Swedish National Forensic Centre, Linköping (SE)

Period:              14/10/2015 to 13/11/2015

 

1.    Purpose of the STSM

The main purpose of this STSM was to investigate a large data set containing images from an unknown number of different cameras and to cluster it according to the source camera that acquired the images by exploiting the photo-response non-uniformity (PRNU). The data set under investigation contains images from unknown source cameras found on a personal computer as part of an actual criminal case. If the method proves suitable, it could be used in the near future at the Swedish National Forensic Centre (NFC), for example to cluster images from the same camera in child abuse cases and in that way identify more victims. Connecting images to each other through the camera used could also help to identify the perpetrator.
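As background, PRNU-based source attribution rests on estimating a per-camera noise fingerprint and correlating it with the noise residual of a questioned image. The sketch below illustrates the principle only; operational pipelines, such as the one applied at NFC, use wavelet-based denoising and more elaborate estimators than the stand-in Gaussian filter used here:

    # Simplified PRNU sketch: residual = image - denoised image; a camera
    # fingerprint is the average residual over several images from that camera.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def noise_residual(img) -> np.ndarray:
        """Greyscale image -> noise residual W = I - denoise(I)."""
        img = np.asarray(img, dtype=np.float64)
        return img - gaussian_filter(img, sigma=1.0)      # stand-in denoiser

    def estimate_fingerprint(images) -> np.ndarray:
        """Average the residuals of several images from the same camera."""
        return np.mean([noise_residual(i) for i in images], axis=0)

    def ncc(a: np.ndarray, b: np.ndarray) -> float:
        """Normalised cross-correlation between a residual and a fingerprint."""
        a = a - a.mean()
        b = b - b.mean()
        return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))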

 

2.    Work carried out during the STSM

  • Implementation of different PRNU enhancement techniques.
  • Analysis of EXIF data for images in case data set.
  • Implementation of various clustering techniques from the literature and development of a novel one (a generic clustering sketch is given after this list).
  • Evaluation of clustering performance on the “Dresden Image Database”.
  • Extraction of the PRNU of case data images using various sizes and positions.
  • Clustering of case data using the different clustering techniques.
  • Evaluation of the clustering results for the case data experiments and discussion with NFC experts on future application.
  • Presentation of “A Framework for Decision Fusion in Image Forensics Based on Dempster–Shafer Theory of Evidence” paper for decision fusion and discussion.
  • PRNU related discussions with experts from NFC: camera identification and existing protocol applied at NFC, PRNU quality and enhancements, inter-camera similarities and resulting false positives, decision fusion application for forensic investigations.
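As a generic illustration of the clustering step (the novel technique developed during the STSM is not reproduced here; the distance threshold and linkage are assumed values), images can be grouped by building a pairwise correlation matrix of their noise residuals and applying hierarchical clustering:

    # Illustrative clustering of images by source camera from pairwise
    # residual correlations.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    def _ncc(a, b):
        a = a - a.mean()
        b = b - b.mean()
        return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def cluster_by_fingerprint(residuals, dist_threshold=0.98):
        """residuals: list of 2-D noise-residual arrays of equal shape."""
        n = len(residuals)
        dist = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                dist[i, j] = dist[j, i] = 1.0 - _ncc(residuals[i], residuals[j])
        links = linkage(squareform(dist, checks=False), method="average")
        return fcluster(links, t=dist_threshold, criterion="distance")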

 

3.    Outcome and future collaboration

The analysis of the EXIF data revealed 60 different camera models from 14 different makes for the images where EXIF data was available. The implemented PRNU enhancements were tested on a subset of the “Dresden Image Database” by performing camera identification experiments, where it was possible to lower the error rates and improve the differentiability of the cameras. The results of applying the clustering techniques to the case data were discussed, and several random samples were verified manually for plausibility by inspecting the EXIF data of the images and identifying misclassifications. Some of the techniques were able to generate a plausible and realistic number of clusters, while other methods proved unsuitable for this kind of data. Since most methods yield a higher number of clusters than estimated from the EXIF data, a subsequent merging of clusters is also considered for future work. Because not all of the planned work could be completed during the STSM, the implementation of decision fusion strategies and camera identification for HDR images are being carried out after the STSM. The collaboration is therefore still ongoing.

Extraction of soft-biometry traits from profile pictures in social networks

• Applicant: Jordi Gonzàlez, Computer Vision Center-Universitat Autònoma de Barcelona, WP3 member
• Host: Thomas Moeslund, VAP – University of Aalborg, MC member
• Period: 01/08/2015 to 31/08/2015

Purpose. This STSM aimed to establish a new collaboration between two IC1106 institution members. The visit established a new collaboration line between VAP and CVC‐UAB, which has allowed the applicant to learn new techniques related to soft-biometry extraction from static pictures, and to optimise and apply convolutional neural networks (CNNs) to the problem as defined by VAP.

Work carried out. The main aim of the research carried out during the STSM was to apply the latest state-of-the-art soft-biometry recognition algorithms developed by VAP and combine them with the auto key-wording algorithms and software tools developed by CVC‐UAB (based on deep learning and convolutional neural networks) for facial pain recognition, in order to create a tailored solution that automatically classifies the level of pain in a given image.

Outcomes and future collaboration. The outcomes of the work carried out include an automatic classification procedure for extracting a pain description from the facial appearance of a user (VAP) using deep learning algorithms (CVC‐UAB). We plan to submit a collaborative journal paper (with impact factor) on this topic before 2016. We have also collected social media data in Aalborg for further CNN fine-tuning and to enrich the description of a person with clothing and other soft-biometric characteristics of the human face. Towards this end, we have established a schedule of Skype meetings to finish the work initiated during the STSM visit in the form of a joint ECCV2016 submission.

Additional activities. During the visit, I participated regularly in the research meetings of the VAP group, and I gave the following talk:

• Towards Visual Hermeneutics: From pixels to semantics, Aalborg University, August 11, 2015.

Examining the Effects of Supervised Descent Method as 3D Facial Landmark Localization Method on 3D Face Recognition, and Introduction of the Bosphorus 3D Face Dataset

Beneficiary: Necati Cihan Camgöz, Master’s Student at Boğaziçi University
Host:  Assist. Prof. Dr. Vitomir Struc, University of Ljubljana, Faculty of Electrical Engineering
Period: 27/04/2015 to 01/05/2015

Purpose of the STSM

The first and foremost goal of this short term scientific mission to the University of Ljubljana was to establish a fruitful collaboration between the two institutes while gaining further knowledge in my research field from my esteemed host. Our goals further included sharing knowledge on facial landmark localization methods, analyzing experimental results obtained on the Bosphorus 3D face dataset, discussing possible improvements over the previously developed Supervised Descent Method (SDM) based approaches, and examining the effects of better facial landmark localization on 3D face recognition.

Description of the work carried out during the STSM

By working with Assist. Prof. Dr. Vitomir Struc and his post-doctoral colleague Dr. Janez Krizaj, who are experts in 3D facial landmark localization and have their own SDM-based approach, we were able to assess the performance of our methods and discuss possible ways to improve 3D facial landmark localization. Since the host was already working on the Bosphorus 3D face dataset, no introduction to it was needed. The work carried out during the STSM can be categorized into the following topics:

Creating the Experimental Setup:

Taking state-of-the-art methods for 3D facial landmark localization into consideration, we created our experimental setup on the Bosphorus 3D face dataset so that we would be able to compare our results with them. In our setup we divided the dataset into two approximately equal, subject-exclusive parts and performed two-fold cross-validation in our experiments.
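A subject-exclusive split simply means that no person appears in both folds. A minimal sketch of such a split, assuming each sample is tagged with a subject identifier, could look as follows (the helper name is purely illustrative):

    # Split subjects, not samples, so identities never overlap between folds.
    import random

    def subject_exclusive_folds(subject_ids, seed=0):
        """Return two disjoint sets of subject ids of roughly equal size."""
        subjects = sorted(set(subject_ids))
        random.Random(seed).shuffle(subjects)
        half = len(subjects) // 2
        return set(subjects[:half]), set(subjects[half:])

    # Usage: train on samples whose subject id is in fold_a, test on fold_b, then swap.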

Comparing 3D Facial Landmark Localization Methods and Analyzing the Experimental Results:

Since both parties had their own SDM-based 3D facial landmark localization approach, we experimented with both approaches. Our experiments showed that the approach proposed by the host achieved higher 3D facial landmark localization accuracy. Furthermore, by comparing with the state-of-the-art methods we realized that this approach delivers state-of-the-art performance when working with the 10 and 22 most commonly used landmarks on the Bosphorus 3D face dataset.

To examine the effect of the features used to describe 3D facial landmarks, we experimented with various sizes of Histogram of Oriented Gradients (HOG) and Scale Invariant Feature Transform (SIFT) descriptors. These experiments showed that using HOG features generally resulted in higher 3D facial landmark localization accuracy. Finding the optimal feature size also affected the performance drastically. We further experimented with changing the feature size in each step of the SDM, which gave the approach a coarse-to-fine alignment effect and increased performance.
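For context, at test time the SDM iteratively updates the landmark estimate with a learned linear regression on features extracted around the current landmarks. The sketch below shows one such refinement loop under the assumption that the feature extractor and the learned regressors (R_k, b_k) are given; it is not the exact implementation used by either party:

    # One SDM refinement pass: x_{k+1} = x_k + R_k * phi(I, x_k) + b_k.
    # `extract_features` is assumed to return a 1-D feature vector (e.g. stacked
    # HOG around each landmark); R_k has shape (2L, d) and b_k shape (2L,).
    import numpy as np

    def sdm_refine(landmarks, image, regressors, extract_features):
        """landmarks: (L, 2) array; regressors: list of (R_k, b_k) pairs."""
        x = np.asarray(landmarks, dtype=np.float64).reshape(-1)
        for R_k, b_k in regressors:
            phi = extract_features(image, x.reshape(-1, 2))   # features at current estimate
            x = x + R_k @ phi + b_k                           # learned descent step
        return x.reshape(-1, 2)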

Discussing ways to improve the current 3D Facial Landmark Localization Method:

By examining the experimental results we came to the conclusion that there are two ways to improve the current approach. The first is to devise a better structure for the SDM, i.e. a cascaded approach, which would allow the landmark localization to become more precise as it iterates through its stages. The second is to use or create better features for describing 3D facial landmarks, features that exploit the 3D information in the data.

Examining the Effects of Better Facial Landmark Localization Method on 3D Face Recognition:

A naïve 3D face recognition method was applied to faces that were aligned using the facial landmarks found by both parties’ approaches. Although the face recognition performance was lower than the state of the art, the experiments showed that the approach with better facial landmark localization resulted in more successful 3D face recognition.

Description of the main results obtained

Our experiments showed that the approach proposed by the host delivers state-of-the-art 3D facial landmark localization performance when working with the 10 and 22 most commonly used facial landmarks on the Bosphorus 3D face dataset.

We also realized that working with different features can affect the 3D landmark localization performance drastically (HOG vs. SIFT). To this end we have started experimenting with different features that may be more suitable for 3D data.

To examine the effect of better facial landmark localization on 3D face recognition, we experimented with a naïve 3D face recognition approach that used the facial landmark locations from both parties’ approaches to align faces. We observed that better facial landmark localization results in higher 3D face recognition accuracy, showing that 3D facial landmark localization is crucial for biometric applications that work on 3D data.

Future collaboration with host institution

The experiments during the STSM showed that the host’s method achieves state-of-the-art 3D facial landmark localization performance. We are currently experimenting with ways to improve this method by trying out cascaded structures and new features that exploit the 3D information. Our goal is to turn this work into a publication as our collaboration continues.

Building a Robust and Informative System for SPN-based Image Forgery Detection

Beneficiary:  Xufeng Lin, University of Warwick, xufeng.lin@warwick.ac.uk

Host:  Prof.  Andreas Uhl, University of Salzburg, uhl@cosy.sbg.ac.at

Period: 01/05/2015 to 31/05/2015

Place: Salzburg (Austria)

1.      Purpose of the STSM

 

The purpose of this STSM is to build a robust and informative system for sensor pattern noise (SPN) based image forgery detection and to strengthen the collaboration between the digital forensics and biometrics communities.

 

Although many approaches have been proposed to improve the performance of SPN-based image forgery detection, each of them focuses on only one or a few components of the detection framework. An integrated approach that assembles the existing methods to provide superior performance is still lacking. Therefore, we try to build a robust and informative system for SPN-based image forgery detection by combining the existing state-of-the-art methods. It would also be very interesting and reasonable to apply our experience in camera sensor identification to the analysis of the biometric datasets at WaveLab.

2.      Description of the work carried out during the STSM

 

During the STSM, the following work has been carried out:

 

      • I first implemented several state-of-the-art SPN extraction, preprocessing and enhancement algorithms. Different combinations of these algorithms were tried and evaluated with the aim of achieving the best possible quality of the camera fingerprint, which lays the foundation for SPN-based image forgery detection. The combination of algorithms makes the SPN estimation more robust against the interference introduced by the different procedures in the image acquisition pipeline.

 

      • After obtaining the camera fingerprint, image forgery is exposed by comparing the normalized cross-correlation (NCC) between two patches taken from the same location of the camera fingerprint and of the noise residual of the suspicious image against a pre-defined threshold. Because of the computational cost of the NCC, straightforwardly localizing forgeries at the pixel level becomes computationally prohibitive. Therefore, a fast pixel-level NCC calculation method was proposed that pre-calculates the local mean and variance of the camera fingerprint and of the noise residual of the suspicious image (sketched below).
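The speed-up can be sketched as follows: with box filters, the local means, variances and cross-moments required by the NCC are precomputed once for the whole image. The window size is an assumed parameter, and the code illustrates the principle rather than the exact method developed during the STSM:

    # Sliding-window NCC between camera fingerprint K and noise residual W,
    # using box filters (uniform_filter) to precompute local statistics.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def sliding_ncc(K, W, win: int = 64, eps: float = 1e-12):
        K = np.asarray(K, dtype=np.float64)
        W = np.asarray(W, dtype=np.float64)
        mean_K = uniform_filter(K, win)
        mean_W = uniform_filter(W, win)
        cov   = uniform_filter(K * W, win) - mean_K * mean_W
        var_K = uniform_filter(K * K, win) - mean_K ** 2
        var_W = uniform_filter(W * W, win) - mean_W ** 2
        return cov / np.sqrt(np.clip(var_K, 0.0, None) * np.clip(var_W, 0.0, None) + eps)

    # Pixels whose local correlation falls below a pre-defined threshold are
    # candidate forged regions.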

 

      • One problem of existing SPN-based image forgery detection algorithms is that the output is a binary map indicating which pixels have been forged, which easily leads to omissions and false positives. By incorporating prior knowledge about the quality of the SPN as well as a theoretical analysis of the distribution of correlation values, I converted the detection results into a probability map, which provides forensic investigators with richer and more instructive information for decision-making (sketched below).
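One way to obtain such a probability map (a hedged illustration only) is to model the correlation under the “forged” and “pristine” hypotheses with fitted distributions and convert each pixel’s correlation into a posterior probability; the Gaussian parameters below are placeholders, not values derived during the STSM:

    # Posterior probability of forgery per pixel given its local correlation,
    # assuming Gaussian correlation models for the two hypotheses.
    import numpy as np
    from scipy.stats import norm

    def probability_map(ncc_map, mu0=0.0, sigma0=0.01, mu1=0.05, sigma1=0.02,
                        prior_forged=0.5):
        like_forged   = norm.pdf(ncc_map, mu0, sigma0)   # H0: fingerprint absent
        like_pristine = norm.pdf(ncc_map, mu1, sigma1)   # H1: fingerprint present
        num = prior_forged * like_forged
        return num / (num + (1.0 - prior_forged) * like_pristine + 1e-300)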

 

      • In collaboration with the WaveLab group, we frequently discussed and shared experience in the area of image clustering based on SPN. In the later phase of this STSM, we jointly conducted image clustering experiments on the CASIA iris dataset and achieved very promising results.

 

      • Also in collaboration with the WaveLab group, we devised a fast correlation-based image alignment algorithm using the noise residuals extracted from iris images. This alignment algorithm serves as a preprocessing step for sensor aging rate estimation.

3.      Future collaboration with the host institution (if applicable)

 

During the STSM, I worked with the members of the WaveLab group and applied sensor pattern noise technology to the analysis of biometric datasets. The outcome of the collaborative work shows the effectiveness of sensor pattern noise in analyzing iris databases, so future collaboration will focus on extending the applications of sensor pattern noise in the area of digital biometrics.

The design of experimental platform and algorithmic basis for multi-modal face recognition

Participant: Olegs Nikisins (Institute of Electronics and Computer Science, Riga, Latvia)

Scientific supervisors in the host institution: Assistant Professor Kamal Nasrollahi and Professor Thomas B. Moeslund (Aalborg University, Laboratory of Visual Analysis of People, Aalborg, Denmark)

Period:  01/11/2013 to 31/12/2013

  1.        Purpose of the STSM

The purpose of this Short Term Scientific Mission (STSM) is to establish an international collaboration between two institutions within the COST Action IC1106 (Integrating Biometrics and Forensics for the Digital Age): the Institute of Electronics and Computer Science (Riga, Latvia) and the Laboratory of Visual Analysis of People (VAP) at Aalborg University (Aalborg, Denmark), which is the host institution. The general scientific idea behind the collaboration is to develop a multi-modal facial recognition system based on the combination of depth, thermal and RGB images of the face. The ultimate goal is to study the robustness of such a multi-modal face recognition system in different capturing scenarios. The research during the STSM covered two major fields: 1) development, testing and tuning of the algorithmic basis of the multi-modal face recognition system, and 2) acquisition of a face database consisting of depth, thermal and RGB images. The development of the algorithms for such a system is a challenging task that will benefit both biometrics and forensics and requires the complementary competences of both institutions to be joined. Achieving the second goal requires specific instruments, which are available at the host institution.

  2.       Description of the work carried out during the STSM

The work carried during the STSM can be briefly divided into the following stages:

1)      Development of a multi-modal face database incorporating RGB, depth and thermal (RGB-D-T) images of the face. The database covers three capturing scenarios: rotations of the head, variable facial expressions and variable illumination conditions. It has also been supplemented with ground-truth data (coordinates of the face bounding boxes for all modalities) and a standard evaluation protocol. Activity duration: approx. 2 weeks.

2)      Evaluation of feature parameters. The selected features are Local Binary Patterns (LBP), Histograms of Oriented Gradients (HOG) and HAAR-like features. The parameters are selected to provide the best recognition performance for each modality, under the constraint of equal dimensionality of the feature spaces in all modalities (needed for feature-level fusion). Activity duration: approx. 1 week.

3)      Testing of the multi-modal face recognition algorithms on the acquired face database. Both individual evaluation of each module and analysis of the complete multi-modal system have been performed in order to quantify the performance gain of the multi-modal approach. The fusion of the modalities has been performed at the feature level. Three classifiers have been considered in the recognition module: the nearest neighbour classifier (NNC), weighted NNC and linear SVM. Activity duration: approx. 2 weeks.
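As an illustration of the feature-level fusion concept only (feature extraction is assumed to happen elsewhere and the weighted NNC variant is omitted), per-modality feature vectors of equal dimensionality can be concatenated and passed to a single classifier, mirroring the classifiers listed above:

    # Feature-level fusion: concatenate per-modality descriptors (e.g. LBP
    # histograms from RGB, depth and thermal crops) and train one classifier.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import LinearSVC

    def fuse_features(rgb_feat, depth_feat, thermal_feat):
        """Each argument: (n_samples, d) array with equal d. Returns fused features."""
        return np.hstack([rgb_feat, depth_feat, thermal_feat])

    def train_classifiers(X_train, y_train):
        nnc = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)  # NNC
        svm = LinearSVC().fit(X_train, y_train)                          # linear SVM
        return nnc, svm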

4)      Preparation of the publication based on the obtained results. The paper entitled “RGB-D-T based Face Recognition” is submitted to ICPR 2014 conference. Deadline for submission: 20.12.2013.

5)      Research in the field of sparse coding with application to RGB-D-T based face recognition was initiated next. Activity duration at the host institution: until the end of the STSM; this is, however, an ongoing joint research effort.

 3.       Description of the main results obtained

1)      RGB-D-T facial database

The first significant result of the STSM is the RGB-D-T facial database. The images of each person in the database are organized in three sets corresponding to rotation, expression and illumination scenarios. The total number of persons in the database is 51. Each capturing sequence (rotation, expression, illumination) has 300 images per person: 100 RGB, 100 Depth and 100 Thermal synchronized images. The total number of images per person is 900 resulting in 45900 images in the database.

For development purposes the database is supplemented with a Matlab indexing function, which must be used to split the data into Training, Validation and Testing sets. This function is introduced in order to unify both the development and testing of the face recognition algorithms among the researchers who use the database.

Each facial image in the database is supplemented with ground-truth data, which includes bounding box parameters of the face. The ground-truth data is generated automatically.

The database will be publicly available shortly on the webpage of the Laboratory of Visual Analysis of People (Aalborg University).

2)      RGB-D-T based face recognition

The next result is the developed multi-modal face recognition algorithm. The algorithm is based on the feature-level fusion concept. The obtained experimental results cover various combinations of classifiers (NNC, weighted NNC and linear SVM) and feature spaces (LBP, HOG, HAAR-like). It is worth mentioning that preprocessing of the input face images was deliberately excluded from the algorithmic pipeline in order to get a clear insight into the “as is” capabilities of each particular modality.

From the experimental results, which are described in the scientific publication, a few important conclusions can be drawn. First, in terms of recognition difficulty the capturing scenarios can be ranked as follows: rotations (difficult), illumination (less difficult), expressions (the simplest). Second, the importance of each modality in the recognition process depends on the capturing scenario; however, thermal data consistently has a strong impact on recognition regardless of the scenario. Among the features examined, LBP provides the best recognition results in most cases.

Details are covered in the publication, which will be attached for consideration.

4.       Future collaboration with the host institution

The collaboration with the host institution continues in the form of joint research in the field of Sparse coding with application to RGB-D-T based face recognition.

Participation in joint research projects (Horizon 2020) is also currently under discussion.

5.       Foreseen publications/articles resulting from the STSM

1)      Submitted publications:

O. Nikisins, K. Nasrollahi, M. Greitans and T.B. Moeslund. RGB-D-T based Face Recognition. International Conference on Pattern Recognition (ICPR 2014), under review, 2014

2) Foreseen publications:

Journal paper in the field of Sparse coding with application to RGB-D-T based face recognition.

Other comments

Many thanks to all for a great opportunity to participate in the STSM within the COST project and for comprehensive support during the STSM!

Training School for Biometrics in Forensic, Security and Mobile Applications

Organisation of the fourth Training School: the 12th Int. Summer School for Advanced Studies on Biometrics for Secure Authentication. This year’s school is entitled “Biometrics in Forensics, Security and Mobile Applications”. The school has a long and successful history and over the years has gained a remarkable reputation in the scientific community. Capitalising on this success and continuing the joint organisation of the training school will also have a high impact on the success of this Action.

The training school will include three practical hands-on sessions, with a set of MatLab tools, to experiment with the development and application of biometric algorithms to real, multi-biometric data. Multidisciplinarity is a key element of the training school. The lecturers will provide an in-depth view from several disciplines, including: the state of the art in fundamental biometric technologies and their application to forensic cases; machine learning techniques; forensic examination methodologies and their links with biometrics; neuroscience and human perception; ethical and societal issues related to biometrics and forensics; statistical analysis of biometric data in forensics; and the industrial deployment of biometric technologies in law enforcement.