When studying the development of geomorphic processes, floods, glaciers, or even cultural heritage through time, one cannot rely only on standard photogrammetric procedures and metric images. In most cases the only available images are archive images with unknown interior orientation parameters that show the object of interest in an oblique view. With the help of modern high-resolution digital elevation models, derived from aerial or terrestrial laser scanning (lidar) or from photogrammetric stereo images by automatic image matching, even a single non-metric high- or low-oblique image from the past can be used in the monoplotting procedure to extract 3D data on changes through time. The first step of the monoplotting procedure is the orientation of the image in space with the help of a digital elevation model (DEM). With oblique images, tie points between the image and the DEM are usually too sparse for automatic exterior orientation, but manual interactive orientation using common features can overcome this shortage. Manual interactive orientation can, however, be very time-consuming, so before starting it one should be confident that the chosen image will yield useful results. But how can one decide which image has the highest mapping potential before introducing a given oblique image into the orientation procedure? The test examples presented in this paper provide guidance for the use of the monoplotting method in different geoscience applications. The most important factors are the resolution of the digital elevation model (lidar-derived models perform best), the presence of suitable common features, and the incidence angle of the oblique images (low-oblique or near-vertical aerial images are preferable). First, a very oblique example of riverbank erosion on the Dragonja River, Slovenia, is presented. Then the test example of the September 2010 floods on the Ljubljana Moor is discussed. Finally, a case study from the November 2012 floods is presented: during these floods an initiative was launched to gather as many non-metric images of the floods as possible from casual observers (volunteered image gathering). The guidelines presented above helped to pick out the 21 % of the gathered images that were used for monoplotting.
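The core of the monoplotting step is simple to state: once the exterior orientation of the single image is known, each pixel defines a ray that can be intersected with the DEM to recover a 3D ground point. The following is a minimal sketch of that idea, not the implementation used in the paper; the pinhole camera convention, parameter names, and the ray-marching intersection are illustrative assumptions.

```python
# Hypothetical monoplotting sketch: pixel -> ray -> first intersection
# with the DEM surface. Not the authors' code; conventions are assumed.
import numpy as np

def pixel_ray(col, row, cx, cy, f, R):
    """World-frame direction of the ray through pixel (col, row).

    cx, cy ... principal point, f ... focal length (pixels),
    R ........ 3x3 rotation from camera frame to world frame.
    """
    d_cam = np.array([col - cx, row - cy, -f], dtype=float)
    d = R @ d_cam
    return d / np.linalg.norm(d)

def dem_height(dem, x0, y0, cell, x, y):
    """Bilinear interpolation of a regular DEM grid at ground (x, y).
    Assumes row index grows with y and column index with x."""
    i, j = (y - y0) / cell, (x - x0) / cell
    i0, j0 = int(i), int(j)
    di, dj = i - i0, j - j0
    return (dem[i0, j0] * (1 - di) * (1 - dj)
            + dem[i0, j0 + 1] * (1 - di) * dj
            + dem[i0 + 1, j0] * di * (1 - dj)
            + dem[i0 + 1, j0 + 1] * di * dj)

def monoplot(camera_xyz, direction, dem, x0, y0, cell,
             step=0.5, max_dist=5000.0):
    """March along the ray until it first drops below the terrain."""
    p_prev, above_prev = camera_xyz, True
    for t in np.arange(step, max_dist, step):
        p = camera_xyz + t * direction
        above = p[2] > dem_height(dem, x0, y0, cell, p[0], p[1])
        if above_prev and not above:   # ray pierced the surface
            return 0.5 * (p_prev + p)  # midpoint as a simple estimate
        p_prev, above_prev = p, above
    return None                        # no intersection within range
```

The sketch also makes the paper's guidance tangible: for very oblique rays a small orientation error slides the intersection point far along the terrain, and a coarse DEM blurs exactly where the ray pierces the surface, which is why high-resolution lidar DEMs and low-oblique or near-vertical images give the best mapping potential.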
B.03 Paper at an international scientific conference
COBISS.SI-ID: 37331245
One of the major challenges in topographic mapping and the fast generation of digital terrain or surface models from stereo optical aerial or satellite imagery is the stereo rectification preprocessing step. In the general case, the extrinsic and intrinsic parameters of each calibrated camera are used to establish epipolar geometry. Stereo rectification consists of a geometric transformation and sub-pixel image resampling, a task that is computationally demanding for high-resolution optical imagery. This problem becomes increasingly evident as remote sensing technologies grow more accurate and the computational demands rise accordingly. This paper proposes a novel method for the fast rectification of stereo image pairs to epipolar geometry using General Purpose computing on Graphics Processing Units (GPGPU). The method can resample large high-resolution imagery on the fly thanks to efficient out-of-core processing. In the experiments, a runtime comparison was made between the proposed GPU-based method and multi-core CPU methods over a dataset of 420 stereo aerial images, where the proposed method achieved a significant speedup.
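To make the rectification step concrete, here is a plain CPU reference sketch using OpenCV: given the intrinsic and extrinsic parameters of a calibrated stereo pair, it computes the rectifying rotations and remaps both images so that epipolar lines become horizontal. This is a conventional baseline for illustration, not the proposed GPGPU out-of-core implementation; all variable names are illustrative.

```python
# Baseline stereo rectification with OpenCV (CPU), for illustration only.
import cv2

def rectify_pair(img_l, img_r, K_l, dist_l, K_r, dist_r, R, T):
    """Resample a calibrated stereo pair to epipolar geometry."""
    h, w = img_l.shape[:2]
    # Rectifying rotations R1/R2 and new projection matrices P1/P2.
    R1, R2, P1, P2, Q, roi_l, roi_r = cv2.stereoRectify(
        K_l, dist_l, K_r, dist_r, (w, h), R, T)
    # Per-pixel lookup maps realising the geometric transformation.
    map_lx, map_ly = cv2.initUndistortRectifyMap(
        K_l, dist_l, R1, P1, (w, h), cv2.CV_32FC1)
    map_rx, map_ry = cv2.initUndistortRectifyMap(
        K_r, dist_r, R2, P2, (w, h), cv2.CV_32FC1)
    # Sub-pixel resampling: the computationally demanding step that the
    # paper offloads to the GPU for large high-resolution imagery.
    rect_l = cv2.remap(img_l, map_lx, map_ly, cv2.INTER_LINEAR)
    rect_r = cv2.remap(img_r, map_rx, map_ry, cv2.INTER_LINEAR)
    return rect_l, rect_r, Q
```

The per-pixel remapping is embarrassingly parallel, which is why moving it to GPU kernels, combined with tiled out-of-core processing for images larger than device memory, yields the speedup reported in the paper.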
B.03 Paper at an international scientific conference
COBISS.SI-ID: 18186518
The scope of this doctoral dissertation is an algorithm for the compression of domain-bound image sequences. The notion of a domain-bound image sequence describes ordered image sequences with linked content that represent either the temporal or the spatial course of change of an arbitrary domain. Within the scope of our algorithm, this theoretical description translates in practice into two similar tasks: the compression of temporal image sequences, i.e. videos, and the compression of spatial image sequences, for example medical image sets acquired by CT or MRI. The dissertation describes the structure and operation of an algorithm that tackles this problem by projection to principal component space. We first introduce the mathematical background underlying principal component analysis, a method frequently used in statistics; from this method the projection spaces are computed that can represent images from a given domain, regardless of the type of the image sequence. To extend the domain independence provided by the underlying mathematical method to the compression algorithm as a whole, the first important step is selecting the subsequence of images from which the projection space is computed. For this task we use a double-criterion algorithm that selects these images, which we call base images, based on their mutual deviation and their distance within the input sequence. A sequence of projection spaces is then computed from the selected base images using a concept introduced in this dissertation, in which adjacent projection spaces are computed from overlapping sets of base images that share at least one element. In analogy to the sliding-window concept, we call this approach a "sliding eigenspace". In parallel, we introduce a method of computing projection spaces that allows the input data to be reconstructed later at a significantly reduced computational cost; this is achieved by including intermediate computational results in the compressed data representation, with an insignificant effect on the compression ratio. In the experimental analysis we compare the developed algorithm with the previously used method based on projection to principal component space and with the H.264 standard. We show that in terms of visual quality the algorithm not only outperforms the previously used method, but can also compete with H.264 in both visual quality and compression ratio. The experimental results are further supported by a theoretical analysis in which we formally prove the advantages of the developed algorithm and evaluate the impact of the method's control parameters on compression efficiency.
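The underlying compression principle can be sketched in a few lines. The following is a strongly simplified illustration, not the dissertation's algorithm: base images are picked at a fixed stride rather than by the double-criterion deviation/distance rule, and a single eigenspace stands in for the sliding eigenspace computed over overlapping base-image sets.

```python
# Simplified sketch of compression by projection to principal component
# space; stride-based base-image selection and a single eigenspace are
# assumptions, not the dissertation's method.
import numpy as np

def compress(frames, stride=8, k=16):
    """frames: (n, h, w) sequence -> (mean, basis, coeffs, shape)."""
    n, h, w = frames.shape
    X = frames.reshape(n, -1).astype(np.float64)   # one image per row
    base = X[::stride]                             # crude base-image pick
    mean = base.mean(axis=0)
    # Principal components of the base images via thin SVD.
    _, _, Vt = np.linalg.svd(base - mean, full_matrices=False)
    basis = Vt[:k]                                 # (k, h*w) eigenimages
    coeffs = (X - mean) @ basis.T                  # (n, k) per frame
    return mean, basis, coeffs, (h, w)

def decompress(mean, basis, coeffs, shape):
    """Reconstruct every frame from its k projection coefficients."""
    return (coeffs @ basis + mean).reshape(-1, *shape)
```

Storing the mean, k eigenimages, and n·k coefficients instead of n full frames is what yields the compression; the dissertation's sliding eigenspace refreshes the basis from overlapping base-image sets so that a small k remains sufficient even as the sequence content drifts.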
D.09 Tutoring for postgraduate students
COBISS.SI-ID: 17863190