This paper presents a novel approach to image acquisition that improves the robustness of the data captured for 3D range measurements. By applying pseudo-random code modulation to the sequential acquisition of projected patterns, the impact of environmental factors such as ambient light and mutual interference is significantly reduced. The concept has been proven with an experimental range sensor based on the laser triangulation principle. The proposed design can potentially extend the use of this principle to a variety of outdoor applications, such as autonomous vehicles, pedestrian safety, and collision avoidance, and to many other tasks where robust real-time distance detection in real-world environments is crucial.
COBISS.SI-ID: 26952231
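The pseudo-random modulation idea can be illustrated with a minimal sketch (the code length, intensities, and noise level below are illustrative assumptions, not values from the paper): the projected pattern is gated by a balanced pseudo-random binary code, and correlating the captured frame sequence with that code cancels out ambient light that is uncorrelated with it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Balanced pseudo-random binary code (+1 = pattern on, -1 = pattern off).
code = rng.permutation(np.repeat([-1.0, 1.0], 32))

signal_level = 5.0                       # reflected pattern intensity at one pixel
ambient = 50.0                           # ambient light, much stronger than the pattern
noise = rng.normal(0.0, 1.0, size=code.size)

# Captured frame sequence at one pixel: the pattern is present only in "on" frames.
frames = ambient + signal_level * (code > 0) + noise

# Correlating with the zero-mean code cancels the (uncorrelated) ambient term,
# leaving an estimate of the pattern intensity.
estimate = 2.0 * np.dot(frames, code) / code.size
```

Because the code sums to zero, the constant ambient term drops out of the correlation, so `estimate` stays close to `signal_level` even though the ambient light is ten times stronger than the pattern.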
The paper presents a newly established database of deep convective storms that exhibit a cold ring at their cloud top, as observed in enhanced infrared (IR) window satellite imagery, together with an algorithm that detects cold-ring patterns in IR satellite images. This enabled us to study cold-ring patterns in Meteosat data over the summers of 2006 to 2010 for Slovenia and parts of Italy, Austria, Hungary, and Croatia. The algorithm was able to find cold rings at different stages of development, which typically coincided with reports of large hail on the ground, and as such it serves as an important source of information for any quantitative analysis of cold rings in this region. The detection algorithm builds on image-processing techniques adapted from biometric face-recognition systems.
COBISS.SI-ID: 1024473684
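As a toy illustration of the kind of pattern the algorithm searches for (the temperature field, thresholds, and the `has_cold_ring` helper below are hypothetical assumptions for this sketch, not the paper's actual detection scheme), a cold ring can be characterised as a band of very cold brightness temperatures surrounding a clearly warmer central spot:

```python
import numpy as np

# Synthetic IR brightness-temperature field (K): a cold anvil with a warmer
# central spot surrounded by a cold ring -- the signature described in the paper.
size = 101
y, x = np.mgrid[:size, :size]
r = np.hypot(x - size // 2, y - size // 2)

bt = np.full((size, size), 230.0)          # cold anvil cloud top
bt[(r > 10) & (r < 20)] = 205.0            # cold ring
bt[r <= 10] = 225.0                        # warmer central spot
bt[r >= 40] = 280.0                        # warm surroundings

def has_cold_ring(bt, ring_thresh=210.0, center_margin=10.0):
    """Rough cold-ring check: the coldest pixels surround an interior region
    (located via their centroid) that is clearly warmer than the ring."""
    cold = bt <= ring_thresh
    if not cold.any():
        return False
    ys, xs = np.nonzero(cold)
    cy, cx = ys.mean(), xs.mean()
    center_bt = bt[int(round(cy)), int(round(cx))]
    ring_bt = bt[cold].mean()
    # A warm centre enclosed by cold pixels indicates a ring-like pattern.
    return center_bt - ring_bt >= center_margin

print(has_cold_ring(bt))  # → True for this synthetic case
```

A real detector would additionally have to verify that the cold pixels form a closed ring and track the pattern across successive images; this sketch only captures the warm-centre/cold-ring temperature contrast.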
In the book section "Acoustic Modality in Virtual Reality" we present acoustic fundamentals, the principles of human hearing, the anatomy and physiology of the human ear, the principles of acoustic measurement, the spatial characteristics of human hearing, spatial sound reproduction techniques, and the process of creating virtual acoustic environments.
COBISS.SI-ID: 10171220
The paper presents a multi-modal emotion recognition system exploiting audio and video (i.e., facial expression) information. The system first processes both sources of information individually to produce corresponding matching scores, and then combines the computed matching scores to obtain a classification decision. For the video part of the system, a novel approach to emotion recognition, relying on image-set matching, is developed. The proposed approach avoids the need for detecting and tracking specific facial landmarks throughout the given video sequence, which is a common source of error in video-based emotion recognition systems, and therefore adds robustness to the video processing chain. The audio part of the system, on the other hand, relies on utterance-specific Gaussian Mixture Models (GMMs) adapted from a Universal Background Model (UBM) via Maximum A Posteriori (MAP) estimation. It improves upon the standard UBM-MAP procedure by exploiting gender information when building the utterance-specific GMMs, thus improving emotion recognition performance. Both the uni-modal components and the combined system are assessed on the challenging multi-modal eNTERFACE'05 corpus with highly encouraging results. The developed system represents a feasible solution to emotion recognition that can easily be integrated into various applications, such as humanoid robots, smart surveillance systems, etc.
COBISS.SI-ID: 9608276
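The score-level combination step can be sketched as follows (the scores, the weight, and the min-max normalisation are illustrative assumptions; the abstract does not specify the exact fusion rule):

```python
import numpy as np

# The six emotion classes of the eNTERFACE'05 corpus.
EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

# Hypothetical matching scores produced by each uni-modal subsystem.
video_scores = np.array([0.10, 0.05, 0.15, 0.45, 0.10, 0.15])
audio_scores = np.array([0.20, 0.05, 0.10, 0.35, 0.20, 0.10])

def fuse(video, audio, w_video=0.5):
    """Score-level fusion: min-max normalise each modality's scores,
    then combine them with a weighted sum and pick the top class."""
    def minmax(s):
        return (s - s.min()) / (s.max() - s.min())
    combined = w_video * minmax(video) + (1.0 - w_video) * minmax(audio)
    return EMOTIONS[int(np.argmax(combined))], combined

label, _ = fuse(video_scores, audio_scores)
print(label)  # → happiness
```

Normalising each modality before summing keeps one subsystem's score range from dominating the decision; the weight `w_video` would in practice be tuned on held-out data.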