Background: Prediction models are used in clinical research to develop rules that accurately predict patient outcomes from patient characteristics. They are a valuable tool in the decision-making process of clinicians and health policy makers, as they enable them to estimate the probability that a patient has or will develop a disease, will respond to a treatment, or will experience disease recurrence. Interest in prediction models in the biomedical community has been growing in recent years. Often the data used to develop prediction models are class-imbalanced, as only a few patients experience the event and therefore belong to the minority class.
Results: Prediction models developed using class-imbalanced data tend to achieve sub-optimal predictive accuracy in the minority class. This problem can be diminished by using sampling techniques aimed at balancing the class distribution. These include undersampling, where only a fraction of the majority-class samples is retained in the analysis, and oversampling, where new minority-class samples are generated. Correctly assessing how the prediction model is likely to perform on independent data is of crucial importance; in the absence of an independent data set, cross-validation is normally used. While the importance of correct cross-validation is well documented in the biomedical literature, the challenges posed by the joint use of sampling techniques and cross-validation have not been addressed.
Conclusions: We show that care must be taken to ensure that cross-validation is performed correctly on sampled data, and that the risk of overestimating the predictive accuracy is greater when oversampling techniques are used. Examples based on the re-analysis of real datasets and on simulation studies are provided.
We identify results from the biomedical literature in which cross-validation was performed incorrectly, and in these cases we expect that the performance of the oversampling techniques was heavily overestimated.
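The pitfall described above can be illustrated with a minimal sketch (hypothetical data and a simple 1-nearest-neighbour classifier, not the models used in the paper): if oversampling is applied before the data are split into folds, duplicated minority samples end up in both the training and the test part of a fold, and the estimated accuracy is inflated even on pure-noise data.

```python
import random

random.seed(1)

def make_data(n_major=90, n_minor=10, n_feat=5):
    """Pure-noise data: the features carry no information about the class."""
    data = [([random.gauss(0, 1) for _ in range(n_feat)], 0) for _ in range(n_major)]
    data += [([random.gauss(0, 1) for _ in range(n_feat)], 1) for _ in range(n_minor)]
    return data

def oversample(data):
    """Randomly duplicate minority-class samples until the classes are balanced."""
    pos = [d for d in data if d[1] == 1]
    neg = [d for d in data if d[1] == 0]
    minority, majority = (pos, neg) if len(pos) <= len(neg) else (neg, pos)
    return data + [random.choice(minority) for _ in range(len(majority) - len(minority))]

def knn_predict(train, x):
    """1-nearest-neighbour prediction; an exact duplicate gives distance 0."""
    dist = lambda a: sum((u - v) ** 2 for u, v in zip(a[0], x))
    return min(train, key=dist)[1]

def cv_accuracy(data, k=5, sample_inside=True):
    folds = [data[i::k] for i in range(k)]
    correct = total = 0
    for i in range(k):
        test = folds[i]
        train = [d for j, f in enumerate(folds) if j != i for d in f]
        if sample_inside:  # correct: oversample only the training part of each fold
            train = oversample(train)
        for x, y in test:
            correct += knn_predict(train, x) == y
            total += 1
    return correct / total

data = make_data()
correct_cv = cv_accuracy(data, sample_inside=True)             # oversample inside CV
leaky_cv = cv_accuracy(oversample(data), sample_inside=False)  # oversample before CV
# On noise data the correct estimate stays near chance level, while the
# leaky estimate is inflated: duplicated minority samples in the test fold
# find their identical copies in the training fold.
```

Because the features are pure noise, any gap between `leaky_cv` and `correct_cv` is entirely an artifact of performing the oversampling outside the cross-validation loop.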
COBISS.SI-ID: 32284377
Background In clinical research, prediction models are used to accurately predict patient outcomes from patient characteristics. For high-dimensional prediction models (the number of variables greatly exceeds the number of samples) the choice of an appropriate classifier is crucial, as it has been observed that no single classification algorithm performs optimally for all types of data. Boosting was proposed as a method that combines the classification results obtained with base classifiers, where the sample weights are sequentially adjusted based on the performance in previous iterations. Boosting generally outperforms any individual classifier, but studies with high-dimensional data showed that the most standard boosting algorithm, AdaBoost.M1, cannot significantly improve the performance of its base classifier. Other boosting algorithms were proposed more recently (Gradient boosting, Stochastic Gradient boosting, LogitBoost); they were shown to perform better than AdaBoost.M1, but their performance had not been evaluated for high-dimensional data.
Results In this paper we use simulation studies and real gene-expression data sets to evaluate the performance of boosting algorithms when data are high-dimensional. Our results confirm that AdaBoost.M1 can perform poorly in this setting, often failing to improve the performance of its base classifier. We explain why, and propose a modification, AdaBoost.M1.ICV, which uses cross-validated estimates of the prediction errors and outperforms the original algorithm when data are high-dimensional. The use of AdaBoost.M1.ICV is advisable when the base classifier overfits the training data: the number of variables is large, the number of samples is small, and/or the difference between the classes is large. To a lesser extent, Gradient boosting also suffers from similar problems.
Contrary to the findings for low-dimensional data, shrinkage does not improve the performance of Gradient boosting when data are high-dimensional; it is, however, beneficial for Stochastic Gradient boosting, which outperformed the other boosting algorithms in our analyses. LogitBoost suffers from overfitting and generally performs poorly.
Conclusions The results show that boosting can substantially improve the performance of its base classifier even when data are high-dimensional. However, not all boosting algorithms perform equally well: LogitBoost, AdaBoost.M1 and Gradient boosting seem less useful for this type of data. Overall, Stochastic Gradient boosting with shrinkage and AdaBoost.M1.ICV appear to be the preferable choices for high-dimensional class prediction.
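The mechanism the abstract points at can be seen directly in the standard AdaBoost.M1 reweighting step, sketched below. The classifier weight is driven by the training-set (resubstitution) error of the base classifier; when that classifier overfits high-dimensional data, its resubstitution error collapses to zero and the update degenerates. (The proposed AdaBoost.M1.ICV, as described in the abstract, replaces this error with a cross-validated estimate; the sketch shows only the standard update.)

```python
import math

def adaboost_m1_update(weights, mistakes):
    """One AdaBoost.M1 round: reweight the samples given the base
    classifier's mistakes (list of bools, True = misclassified).
    Returns (normalized new weights, classifier weight alpha)."""
    err = sum(w for w, m in zip(weights, mistakes) if m) / sum(weights)
    if err <= 0 or err >= 0.5:
        # err == 0 occurs easily when the base classifier overfits the
        # high-dimensional training data; the weights then never change
        # and boosting stops adapting.
        return list(weights), float("inf") if err <= 0 else 0.0
    alpha = math.log((1 - err) / err)
    new = [w * math.exp(alpha) if m else w for w, m in zip(weights, mistakes)]
    s = sum(new)
    return [w / s for w in new], alpha

# One round with a weighted error of 0.25: the misclassified sample's
# weight grows, the others shrink after normalization.
w, alpha = adaboost_m1_update([0.25] * 4, [True, False, False, False])
```

With a resubstitution error of exactly zero the `if` branch fires on every round, which is why an overfitting base classifier leaves the original algorithm with nothing to boost.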
COBISS.SI-ID: 32198617
Background The proliferation of the scientific literature in the field of biomedicine makes it difficult to keep abreast of current knowledge, even for domain experts. While general Web search engines and specialized information retrieval (IR) systems have made important strides in recent decades, the problem of accurate knowledge extraction from the biomedical literature is far from solved. Classical IR systems usually return a list of documents that the user has to read to extract the relevant information. This tedious and time-consuming work can be lessened with automatic Question Answering (QA) systems, which aim to provide users with direct and precise answers to their questions. In this work we propose a novel methodology for QA based on semantic relations extracted from the biomedical literature.
Results We extracted semantic relations with the SemRep natural language processing system from 122,421,765 sentences in 21,014,382 MEDLINE citations (i.e., the complete MEDLINE distribution up to the end of 2012). A total of 58,879,300 semantic relation instances were extracted and organized in a relational database. The QA process is implemented as a search in this database, accessed through a Web-based application called SemBT (available at http://sembt.mf.uni-lj.si). We conducted an extensive evaluation of the proposed methodology to estimate the accuracy of extracting a particular semantic relation from a particular sentence. The evaluation was performed by 80 domain experts. In total, 7,510 semantic relation instances belonging to 2,675 distinct relations were evaluated 12,083 times; the instances were judged correct 8,228 times (68%).
Conclusions In this work we propose an innovative methodology for biomedical QA. The system is implemented as a Web-based application that is able to provide precise answers to a wide range of questions. A typical question is answered within a few seconds.
The tool also has extensions that make it especially useful for the interpretation of DNA microarray results.
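The core idea of implementing QA as a database search over extracted triples can be sketched as follows. The schema, table name, and example triples below are hypothetical miniatures for illustration, not the actual SemBT database: a question is translated into a subject-predicate-object pattern with one slot left open, and the answers are the stored relation instances matching the filled slots.

```python
import sqlite3

# Hypothetical miniature of a semantic-relation store:
# (subject, predicate, object) triples with occurrence counts.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE relation (subj TEXT, pred TEXT, obj TEXT, n INTEGER)")
conn.executemany("INSERT INTO relation VALUES (?, ?, ?, ?)", [
    ("aspirin",   "TREATS", "headache", 120),
    ("aspirin",   "TREATS", "fever",     45),
    ("ibuprofen", "TREATS", "headache",  80),
])

def answer(subj=None, pred=None, obj=None):
    """Answer a question by matching a triple pattern: the unspecified
    slot is the unknown the user is asking about."""
    clauses, args = [], []
    for col, val in (("subj", subj), ("pred", pred), ("obj", obj)):
        if val is not None:
            clauses.append(f"{col} = ?")
            args.append(val)
    where = " AND ".join(clauses) or "1"
    sql = f"SELECT subj, pred, obj, n FROM relation WHERE {where} ORDER BY n DESC"
    return conn.execute(sql, args).fetchall()

# "What treats headache?" -> leave the subject slot open.
rows = answer(pred="TREATS", obj="headache")
```

Answering is a single indexed lookup rather than a document scan, which is consistent with the abstract's claim that a typical question is answered within seconds.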
COBISS.SI-ID: 2048297218
Quantitative evaluation of citation data to support funding decisions has become widespread. Many measures (indices) exist for this purpose, and while their properties are well studied, there is little comprehensive experimental comparison of the ranking lists obtained with different methods. A further problem of the existing studies is that the lack of available data on net citations (all citations minus self-citations) prevents researchers from studying the effect of measuring scientific impact with net citations. In this paper we use simulated data to study the factors that could influence the degree of agreement between the rankings obtained with different indices, with emphasis on comparing the number of net citations per author to other, more established indices. We observe that researchers publishing papers with a large number of co-authors are systematically ranked higher by the h-index or total citations (TC) than by the number of citations per author (TCA); that researchers who publish a small proportion of highly cited papers while the rest of their papers receive only a few citations are systematically ranked higher by TCA or TC than by the h-index; and that authors with a lower proportion of self-citations are ranked higher by indices based on net citations than by indices based on the total citation count. The results are also verified and illustrated by analyzing a large dataset from the field of medical science in Slovenia for the period 1986-2007.
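The three indices compared above have short standard definitions, sketched below (the dictionary-based paper representation is an illustrative assumption, not the paper's data format). The example numbers show the co-authorship effect described in the abstract: dividing each paper's citations by its author count penalizes large teams, while TC and the h-index do not.

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(ranked) if c >= i + 1)

def total_citations(papers):
    """TC: plain sum of citation counts."""
    return sum(p["cit"] for p in papers)

def citations_per_author(papers):
    """TCA: each paper's citations divided by its number of co-authors."""
    return sum(p["cit"] / p["authors"] for p in papers)

# Two researchers with identical citation counts per paper:
team = [{"cit": 30, "authors": 10}] * 5   # always publishes in a 10-author team
solo = [{"cit": 30, "authors": 1}] * 5    # always publishes alone
# TC and h-index cannot tell them apart; TCA ranks the solo author higher.
```

Here both researchers have TC = 150 and h-index = 5, but TCA is 15 for the team author and 150 for the solo author, which is exactly the systematic ranking difference the simulations investigate.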
COBISS.SI-ID: 32004569
When analyzing time to disease recurrence, we sometimes need to work with data where all the recurrences are recorded, but no information is available on possible deaths. This may occur when studying diseases of a benign nature where patients are only seen at disease recurrences, or in poorly designed registries of benign diseases or medical device implantations that lack sufficient patient identifiers to obtain the patients' dead/alive status at a later date. When the average time to disease recurrence is long in comparison with the expected survival of the patients, statistical analysis of such data can be substantially biased. Under the assumption that the expected survival of an individual is not influenced by the disease itself, general population mortality tables can be used to remove this bias. We show why the intuitive solution of simply imputing each patient's expected survival time does not give unbiased estimates of the usual quantities of interest in survival analysis, and further explain that cumulative incidence function analysis does not require additional assumptions on general population mortality. We provide an alternative framework that allows unbiased estimation and introduce two new approaches: an iterative imputation method and a mortality-adjusted at-risk function. Their properties are carefully studied, with the results supported by simulations and illustrated on a real-world example.
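The quantity the "intuitive solution" would impute can be sketched as follows (the mortality table below is a made-up example, not real population data): expected survival is the area under the population survival curve built from annual death probabilities. The sketch only computes that expectation; the paper's point is that plugging this single number in for each patient is not equivalent to averaging the nonlinear survival-analysis estimators over the actual random survival times, which is why the naive imputation is biased.

```python
def survival_curve(annual_mortality):
    """Population survival probabilities S(1), S(2), ... from a list of
    annual death probabilities q_t (a hypothetical mortality table)."""
    s, curve = 1.0, []
    for q in annual_mortality:
        s *= 1.0 - q
        curve.append(s)
    return curve

def expected_survival_years(annual_mortality):
    """Expected years lived: the area under the discrete survival curve,
    i.e. sum over t of P(alive after year t)."""
    return sum(survival_curve(annual_mortality))

# With a 50% chance of death in each of two years:
# S(1) = 0.5, S(2) = 0.25, expected survival = 0.75 years.
e = expected_survival_years([0.5, 0.5])
```

Any estimator that is nonlinear in the survival time (as the usual Kaplan-Meier-type quantities are) evaluated at this expectation will generally differ from its expectation over the survival-time distribution, which motivates the iterative imputation and mortality-adjusted at-risk approaches instead.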
COBISS.SI-ID: 32255193