The success of in-vitro fertilization can be predicted by a correct quantitative and qualitative assessment of ovarian follicles. Several ovarian follicle detection and recognition algorithms have been published. Their effectiveness remains inferior to human follicle annotation due to various kinds of noise, degradations, and artefacts in ultrasonic images. This paper deals with an approach to recognizing antral follicles from 2 mm in diameter in 3D ultrasound data. Its detection phase looks for candidate follicular regions, while the recognition phase assesses the likelihood that a region corresponds to a follicle. Three innovative definitions underpin the detection: a Laplacian-of-Gaussian-based directional 3D wavelet transform, an adaptive multiscale search based on Gaussian mixtures, and recursive convexity-based region splitting. A likelihood index is also introduced to support follicle recognition. The proposed approach was tested on 30 ultrasound ovarian volumes generated by different sonographic machines in stimulated and non-stimulated examination cycles. The obtained follicle recognition rates exceed those of the best known 3D approaches by about 10 percent, while qualitative assessments yield comparable values.
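The Laplacian-of-Gaussian idea at the core of the detection phase can be illustrated with a minimal sketch. This is not the paper's directional 3D wavelet transform; the synthetic volume, scales, and threshold below are hypothetical, and only the scale-normalized multiscale LoG response is shown.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

# Synthetic 3D "ultrasound" volume: bright background containing one dark,
# roughly spherical follicle-like region of radius 6 voxels.
vol = np.full((40, 40, 40), 200.0)
zz, yy, xx = np.mgrid[:40, :40, :40]
vol[(zz - 20) ** 2 + (yy - 20) ** 2 + (xx - 20) ** 2 < 36] = 40.0

def log_candidates(volume, sigmas=(2.0, 3.0, 4.0), threshold=5.0):
    """Multiscale LoG response: dark blobs on a bright background give
    strong positive responses of the scale-normalized sigma^2 * LoG."""
    best = np.zeros_like(volume)
    for s in sigmas:
        resp = s ** 2 * gaussian_laplace(volume, sigma=s)
        best = np.maximum(best, resp)
    return best > threshold  # binary mask of candidate follicle voxels

mask = log_candidates(vol)
```

A real pipeline would pass such candidate regions on to a recognition stage; here the mask simply marks the dark blob while leaving the uniform background empty.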
COBISS.SI-ID: 21073174
Contemporary challenges are making vegetation monitoring increasingly important. This paper proposes a new method for the detection of vegetation in LiDAR data. As vegetation points are characterised by non-linear distributions, they are efficiently recognised based on the large errors obtained when locally fitting planar surfaces. However, many diverse objects, particularly in urban environments, affect the correctness of the classification. We have introduced three contextual filters to handle such exceptions. They are designed for detecting overgrowing vegetation, small objects attached to planar surfaces (such as balconies, chimneys, and noise within buildings), and small objects that do not belong to vegetation (vehicles, statues, fences). During validation, the proposed method achieved over 97% correctness as well as completeness of vegetation recognition in rural areas, while its average accuracy in urban settings was 90.7% in terms of F1-scores. The method is thus comparable with the current state-of-the-art, although the related methods often rely on radiometric or other auxiliary data. Furthermore, our method uses only three input parameters, and a sensitivity analysis confirmed its robustness against a sub-optimal choice of these parameters. The method is suitable for processing large amounts of points (128,000 points per second) and was used for the classification of the LiDAR point cloud of the whole Republic of Slovenia.
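The core classification rule (large local plane-fitting error implies vegetation) can be sketched as follows. The neighbourhood size, residual threshold, and synthetic point cloud are assumptions for illustration; the published method additionally applies the three contextual filters described above.

```python
import numpy as np

def plane_residual(pts):
    """RMS distance of points to their best-fit plane (via SVD of the
    centred coordinates; the smallest singular direction is the normal)."""
    c = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(c, full_matrices=False)
    d = c @ vt[-1]
    return float(np.sqrt(np.mean(d ** 2)))

def classify_vegetation(points, k=10, threshold=0.1):
    """Label points whose local neighbourhood deviates strongly from a
    plane as vegetation (brute-force k-NN; fine for a small sketch)."""
    labels = np.zeros(len(points), dtype=bool)
    for i, p in enumerate(points):
        d2 = np.sum((points - p) ** 2, axis=1)
        nb = points[np.argsort(d2)[:k]]
        labels[i] = plane_residual(nb) > threshold
    return labels

rng = np.random.default_rng(0)
roof = np.column_stack([rng.uniform(0, 5, 200), rng.uniform(0, 5, 200),
                        np.full(200, 3.0)])              # flat planar roof
tree = rng.uniform([10, 10, 0], [12, 12, 4], (200, 3))   # volumetric crown
pts = np.vstack([roof, tree])
veg = classify_vegetation(pts)
```

The planar roof points yield near-zero residuals and stay unlabelled, while the volumetrically scattered crown points exceed the threshold.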
COBISS.SI-ID: 19409174
Roof surfaces within urban areas constantly attract interest regarding the installation of photovoltaic systems. These systems can improve the self-sufficiency of the electricity supply and help decrease greenhouse gas emissions throughout urban areas. Unfortunately, some roof surfaces are unsuitable for installing photovoltaic systems. The presented work deals with rating roof surfaces within urban areas regarding their solar potential and suitability for the installation of photovoltaic systems. The proposed method represents one of the first examples of environmental simulations for solar potential estimation over large-scale geographic areas. The solar potential of a roof surface is determined by a new method that combines the urban topography extracted from LiDAR data with pyranometer measurements of global and diffuse solar irradiance. Heuristic annual vegetation shadowing and a multi-resolution shadowing model complete the proposed method. The significance of different influential factors (e.g. shadowing) was analysed extensively. A comparison between the results obtained by the proposed method and measurements performed on an actual PV power plant showed a correlation agreement of 97.4%.
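The interplay of the irradiance components and shadowing can be sketched with a toy accumulation: direct irradiance is suppressed in shadowed hours while diffuse irradiance still arrives. The hourly series, shadow mask, and tilt factor are hypothetical; the actual method derives shadowing from LiDAR topography and vegetation models.

```python
def roof_solar_potential(direct, diffuse, shadowed, tilt_factor=1.0):
    """Accumulated irradiation (Wh/m^2) on a roof patch: the direct
    component is zeroed in shadowed hours, the diffuse one is kept."""
    total = 0.0
    for b, d, sh in zip(direct, diffuse, shadowed):
        total += (0.0 if sh else b * tilt_factor) + d
    return total

# Hypothetical hourly values (W/m^2) for two days; on the second day the
# roof patch is fully shadowed, e.g. by annual vegetation growth.
direct = [0, 100, 400, 600, 400, 100] * 2
diffuse = [20, 50, 80, 100, 80, 50] * 2
shadow = [False] * 6 + [True] * 6
annual = roof_solar_potential(direct, diffuse, shadow)
print(annual)
```

Comparing the shadowed and unshadowed days shows why shadowing dominates the suitability rating: the shadowed day retains only its diffuse share.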
COBISS.SI-ID: 16262934
Many complex systems in different areas, such as sociology, biology, medicine, the web, and computer science, can be represented as networks. For example, social networks consist of nodes representing people and edges representing their relationships. In most of these networks the nodes are arranged in dense groups called communities. Nodes within a community are more densely connected to each other than to nodes in other communities, and generally share common attributes or properties. Identification of the community structure helps when analysing the functionalities and organization of networks. However, the communities must first be detected. Currently used algorithms for community detection in large-scale real-world networks are computationally expensive, require a priori information such as the number or sizes of communities, or are unable to produce the same partition in multiple runs. In this paper, we investigate a simple and fast algorithm that uses the network structure alone and requires neither the optimization of a pre-defined objective function nor information about the number of communities. We propose a bottom-up community detection algorithm that finds communities by starting from seed communities consisting of adjacent pairs of nodes and their maximally similar neighbours. We show that the overall advantages of the proposed algorithm over other community detection algorithms are its simplicity, low computational cost, and very high accuracy in detecting communities of different sizes in networks with a blurred modularity structure consisting of poorly separated communities. All communities identified by the proposed method for a Facebook network and the E. coli transcriptional regulatory network have strong structural and functional coherence.
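The bottom-up idea (seed communities from similar adjacent node pairs, grow them with similar neighbours, then merge overlapping seeds) can be sketched in a toy form. The Jaccard similarity, the threshold `tau`, and the greedy merge are assumptions for illustration, not the published algorithm.

```python
def jaccard(adj, u, v):
    """Similarity of two nodes: Jaccard index of their closed neighbourhoods."""
    a, b = adj[u] | {u}, adj[v] | {v}
    return len(a & b) / len(a | b)

def seed_communities(adj, tau=0.5):
    """Seed a community from every sufficiently similar adjacent pair,
    add neighbours similar to an endpoint, then merge overlapping seeds."""
    seeds = []
    for u in adj:
        for v in adj[u]:
            if u < v and jaccard(adj, u, v) >= tau:
                s = {u, v}
                for w in adj[u] | adj[v]:
                    if max(jaccard(adj, u, w), jaccard(adj, v, w)) >= tau:
                        s.add(w)
                seeds.append(s)
    comms = []  # greedy merge of overlapping seeds into communities
    for s in seeds:
        for c in comms:
            if c & s:
                c |= s
                break
        else:
            comms.append(set(s))
    return comms

# Two 4-cliques joined by a single bridge edge (3, 4).
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3),
         (4, 5), (4, 6), (4, 7), (5, 6), (5, 7), (6, 7), (3, 4)]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)
comms = seed_communities(adj)
```

The bridge endpoints are dissimilar, so no seed spans the bridge and the two cliques are recovered as separate communities.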
COBISS.SI-ID: 21987592
Predicting a protein's 3D structure from its amino acid sequence remains an unsolved problem. If structural homologues cannot be identified, the 3D structure has to be constructed from scratch using ab initio methods. These methods can also help to understand how proteins fold in nature. The paper introduces a novel differential evolution algorithm for protein folding optimization that is applied to a three-dimensional AB off-lattice model. The proposed algorithm includes two new mechanisms. A local search is used to improve convergence speed and to reduce the runtime complexity of the energy calculation; for this purpose, a local movement is introduced within the local search. The designed evolutionary algorithm has a fast convergence speed; therefore, when it is trapped in a local optimum or has located a relatively good solution, it is hard for it to locate a better similar solution, one that differs from the good solution in only a few components. A component reinitialization method is designed to mitigate this problem. Both new mechanisms and the proposed algorithm were analysed on well-known amino acid sequences used frequently in the literature. Experimental results show that the new mechanisms improve the efficiency of our algorithm and that the proposed algorithm is superior to other state-of-the-art algorithms. It obtained a hit ratio of 100% for sequences of up to 18 monomers within a budget of 10^11 solution evaluations. New best-known solutions were obtained for most of the sequences. The existence of symmetric best-known solutions is also demonstrated in the paper.
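The differential evolution machinery that the paper extends (with a local search and component reinitialization) is the classic DE/rand/1/bin scheme. The sketch below applies it to a simple quadratic stand-in rather than the AB off-lattice energy; population size, F, CR, and the generation budget are illustrative.

```python
import random

def differential_evolution(f, dim, bounds, np_=20, F=0.5, CR=0.9,
                           gens=200, seed=1):
    """Classic DE/rand/1/bin: mutate with a scaled difference of two
    random vectors, binomial crossover, greedy one-to-one selection."""
    rnd = random.Random(seed)
    lo, hi = bounds
    pop = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(np_)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            a, b, c = rnd.sample([j for j in range(np_) if j != i], 3)
            jr = rnd.randrange(dim)  # guaranteed crossover position
            trial = [pop[a][j] + F * (pop[b][j] - pop[c][j])
                     if (rnd.random() < CR or j == jr) else pop[i][j]
                     for j in range(dim)]
            ft = f(trial)
            if ft <= fit[i]:
                pop[i], fit[i] = trial, ft
    k = min(range(np_), key=fit.__getitem__)
    return pop[k], fit[k]

# Toy energy surrogate in place of the AB off-lattice energy function.
best, e = differential_evolution(lambda x: sum(v * v for v in x),
                                 dim=5, bounds=(-5, 5))
```

The paper's component reinitialization would additionally reset a few coordinates of stagnating individuals, which this plain scheme lacks.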
COBISS.SI-ID: 21401878
This paper proposes a new method for the 3D delineation of single tree crowns in LiDAR data by exploiting the complementarity of treetop and tree trunk detections. The method significantly improves vegetation monitoring aimed at supporting forest inventories, environmental protection, conservation of biological diversity, urban planning, etc. The method described as the second achievement in this list extracts the vegetation points from the LiDAR point cloud; its contextual filters distinguish such points from those belonging to objects such as vehicles, balconies, fences, and statues. Here, we assign the extracted points to individual trees. Treetops are defined by detecting concave neighbourhoods within the canopy height model using locally fitted surfaces. These serve as markers for watershed segmentation of the canopy layer, where possible oversegmentation is reduced by merging regions based on their heights, areas, and shapes. Additional tree crowns are delineated from the mid- and under-storey layers based on tree trunk detection. A new approach for estimating the verticality of the points' distributions is proposed for this purpose. The watershed segmentation is then applied to a density function within the voxel space, while the boundaries of trees delineated from the canopy layer are used to prevent the overspreading of regions. The experiments show an approximately 6% increase in efficiency for the proposed treetop definition based on locally fitted surfaces in comparison with the traditionally used local maxima of the smoothed canopy height model. A further 4% increase in efficiency is achieved by the proposed tree trunk detection. Although tree trunk detection alone depends on the data density, supplementing it with treetop detection makes the proposed approach efficient even when dealing with low-density point clouds.
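The baseline that the paper improves upon (treetops as local maxima of the smoothed canopy height model, used as watershed markers) can be sketched as follows. The smoothing scale, window size, minimum height, and synthetic CHM are assumptions; the paper's own detector uses concave neighbourhoods of locally fitted surfaces instead.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter, label

def treetop_markers(chm, sigma=1.0, window=5, min_height=2.0):
    """Baseline treetop detection: local maxima of the smoothed canopy
    height model above a minimum vegetation height, labelled as markers."""
    sm = gaussian_filter(chm, sigma)
    peaks = (sm == maximum_filter(sm, size=window)) & (sm > min_height)
    markers, n = label(peaks)
    return markers, n

# Synthetic CHM (metres) with two Gaussian-shaped crowns.
y, x = np.mgrid[:60, :60]
chm = (10 * np.exp(-((y - 20) ** 2 + (x - 20) ** 2) / 50.0)
       + 12 * np.exp(-((y - 40) ** 2 + (x - 42) ** 2) / 40.0))
markers, n_trees = treetop_markers(chm)
```

The labelled markers would then seed a watershed segmentation of the canopy layer, with region merging to counter oversegmentation.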
COBISS.SI-ID: 18911510
This paper presents a vectorized encoding of matrix parameters for an evolutionary computer vision approach to procedural tree modelling. A serialized, fixed-size, floating-point-encoded tree parameter set consists of a set of auxiliary local parameters and other global parameters. The main goal of the paper is to lower the problem dimensionality needed for encoding the local parameters. A differential evolution algorithm is used to simulate the evolution. The optimizer evolves a parameterized procedural model by fitting a set of its rendered images to a set of automatically preprocessed reference photographs. The reconstructed tree morphology is then used to animate the reconstructed tree and to generate geometrically similar tree models based on similar morphology. Examples of reconstructed model animation are shown, such as simulating its growth, its sway in the wind, or adding leaves. The contribution is an important achievement because we decomposed the representational vector of a tree into basic components that represent the germinal information of the procedural tree parameters more compactly. It is also important that the method uses evolutionary computer vision approaches, with emphasis on memetic initialization and auto-alignment, which, together with the dimension reduction after decomposition into basic components, lower the complexity of the algorithm and hence enable studying its applicability to photographs and morphological data of real trees from big data streams.
COBISS.SI-ID: 17793558
Complex network theory offers an efficient mathematical framework for modelling natural phenomena. However, existing studies focus mainly on the topological characteristics of networks, while the actual reasons behind a network's formation remain overlooked. This paper proposes a new approach to complex network analysis. By searching for the optimal functional definition of the network's edge set, it allows examining how the physical properties of the nodes influence the network's structure and behaviour (i.e. how the network's structure changes when the physical properties of the nodes change). A two-level evolutionary algorithm is proposed for this purpose: the search for a suitable functional form is performed at the first level, while the second level performs the optimal function fitting. In this way, not only are the features with the largest influence identified, but the intensities of their influence are also estimated. Synthetic networks are examined in order to show the superiority of the proposed approach over traditional machine learning algorithms, while its applicability is demonstrated on a real-world study of the behaviour of biological cells. The proposed method is important because it enables a simple interpretation of the obtained results, since, unlike other machine learning methods, it does not rely on a black-box approach.
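The two-level structure (outer search over functional forms, inner parameter fitting) can be illustrated with a deliberately simplified sketch: the paper evolves the forms with an evolutionary algorithm, whereas here a hypothetical fixed set of candidate forms is enumerated and each is fitted by nonlinear least squares.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical candidate forms for an edge weight w(d) as a function of a
# node property d; the published method evolves such forms instead.
FORMS = {
    "linear":      lambda d, a, b: a * d + b,
    "power":       lambda d, a, b: a * np.power(d, b),
    "exponential": lambda d, a, b: a * np.exp(b * d),
}

def best_form(d, w):
    """Second level: fit each form's parameters; first level: keep the
    form with the lowest mean squared residual."""
    best = None
    for name, f in FORMS.items():
        try:
            p, _ = curve_fit(f, d, w, p0=[1.0, 0.5], maxfev=5000)
        except RuntimeError:
            continue  # this form failed to converge; skip it
        err = float(np.mean((f(d, *p) - w) ** 2))
        if best is None or err < best[2]:
            best = (name, p, err)
    return best

d = np.linspace(0.5, 3.0, 30)
w = 2.0 * d ** 1.5  # synthetic ground-truth relationship
name, params, err = best_form(d, w)
```

Besides selecting the form, the fitted parameters directly quantify the intensity of the property's influence, which is what makes the result interpretable.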
COBISS.SI-ID: 20349206
Traditional studies on the neural mechanisms of tremor use coherence analysis to investigate the relationship between cortical and muscle activity, measured by electroencephalograms (EEG) and electromyograms (EMG). This methodology is limited by the need for relatively long signal recordings, and it is sensitive to EEG artifacts. Here, we analytically derive and experimentally validate a new method for the automatic extraction of the tremor-related EEG component in pathological tremor patients that aims to overcome these limitations. We exploit the coupling between the tremor-related cortical activity and motor unit population firings to build a linear minimum mean square error estimator of the tremor component in EEG. We estimated the motor unit population activity by decomposing surface EMG signals into their constituent motor unit spike trains, which we summed into a cumulative spike train (CST). We used this CST to initialize our tremor-related EEG component estimate, which we optimized using a novel approach proposed here. Tests on simulated signals demonstrate that our new method is robust to both noise and motor unit firing variability, and that it performs well across a wide range of spectral characteristics of the tremor. Results on nine essential tremor (ET) and nine Parkinson's disease (PD) patients show an approximately two-fold increase in the amplitude of the coherence between the estimated EEG component and the CST, compared to classical EEG-EMG coherence analysis. We have developed a novel method that allows for a more precise and robust estimation of the tremor-related EEG component. This method does not require artifact removal, provides reliable results on relatively short datasets, and tracks changes in the tremor-related cortical activity over time.
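A linear MMSE estimate of a source component from multichannel EEG can be sketched as a Wiener-style spatial filter. This is only the generic building block, not the paper's full method: the synthetic signals are assumptions, and the noiseless tremor waveform stands in for the CST reference.

```python
import numpy as np

def lmmse_component(eeg, ref):
    """Linear MMSE spatial filter: weights w = C_xx^{-1} c_xr yield the
    channel combination with minimum mean square error to the reference
    (here a stand-in for the cumulative motor-unit spike train)."""
    x = eeg - eeg.mean(axis=1, keepdims=True)
    r = ref - ref.mean()
    cxx = x @ x.T / x.shape[1]   # channel covariance
    cxr = x @ r / x.shape[1]     # channel-reference cross-covariance
    w = np.linalg.solve(cxx, cxr)
    return w @ x

rng = np.random.default_rng(2)
t = np.arange(2000) / 500.0            # 4 s at 500 Hz
tremor = np.sin(2 * np.pi * 5.0 * t)   # 5 Hz tremor-related activity
mix = rng.normal(size=(8, 1))          # unknown projection to 8 channels
eeg = mix * tremor + 0.5 * rng.normal(size=(8, t.size))
est = lmmse_component(eeg, tremor)
corr = np.corrcoef(est, tremor)[0, 1]
```

Even with substantial channel noise, the filtered component recovers the tremor rhythm far better than any single channel, which is the property the coherence analysis benefits from.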
COBISS.SI-ID: 21832982
Domain-specific languages (DSLs) allow developers to write code at a higher level of abstraction than general-purpose languages (GPLs). Developers often use DSLs to reduce the complexity of GPLs. Our previous study found that developers performed program comprehension tasks more accurately and efficiently with DSLs than with corresponding APIs in GPLs. This study replicates our previous study to validate and extend those results when developers use IDEs to perform program comprehension tasks. The results of the replication are consistent with and extend the results of the original study. Developers are significantly more effective and efficient in tool-based program comprehension when using a DSL than when using a corresponding API in a GPL. The research is important because it shows that investing in supporting tools (IDEs, debuggers) is necessary for the successful use of DSLs. We can therefore expect the expansion of tools supporting DSLs.
COBISS.SI-ID: 21123606