Objective. In clinical rehabilitation practice, assistive technology (AT) is chosen for each individual efficiently by a skilled multidisciplinary team. Due to demographic and societal changes in the developed world, these professionals are overloaded, so the selection increasingly has to be performed by an unskilled team. The selection process can be time- and resource-consuming because, among other issues, the learning period required for the use of a given AT is unclear. Hence, this article presents a study of a new approach, based on the learning process, to testing and selecting appropriate computer-access AT (CAT) for people with motor impairments. Methods. Six user interfaces (keyboard, small and large joystick, small and large trackball, and a head-operated mouse and keyboard) were tested on 92 users, of whom 63 were patients with muscular or neuromuscular diseases. We developed purpose-built software for testing sentence typing. We tested different criteria for selecting the optimal CAT and compared the results with the skilled clinician's choice. Results. The learning curves of the people with motor impairments closely resembled those of the healthy controls, although at a lower performance level. Daily computer use was not associated with CAT selection, but it corresponded nearly perfectly to the level of upper-limb functional ability. Agreement between the clinician's choice and the learning-based CAT selection was noteworthy but far from perfect. When partial agreement was taken into account (i.e., CAT was treated as an ordinal variable based on the corresponding functional-ability level, and the second-best learning-based choice counted if it agreed with the clinician's choice), agreement was high with the highest median speed as the CAT selection criterion. We also analyzed the mistakes made during the typing task. We found that the mean number of mistakes per character was lower in the control group than in the patient group, and that it differed statistically significantly between devices within both groups, with more mistakes made with the head-operated mouse and keyboard than with the other devices. Conclusion. Our learning-based approach appears to be an efficient guide for an unskilled clinician choosing an optimal CAT.
COBISS.SI-ID: 1223785
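The selection criterion named in the abstract above (rank devices by median typing speed over the learning period, with the second-best choice also available for partial-agreement comparison) can be sketched as follows. This is a minimal illustration, not the study's implementation: the device names match the abstract, but the speed values, data layout, and function name `select_cat` are assumptions made for the example.

```python
from statistics import median

# Hypothetical typing speeds (characters per minute) recorded over
# repeated learning sessions for three of the tested devices;
# the values are illustrative only, not data from the study.
sessions = {
    "keyboard": [18, 22, 25, 27, 28],
    "large trackball": [12, 15, 19, 21, 23],
    "head-operated mouse": [6, 8, 9, 11, 12],
}

def select_cat(speeds_by_device):
    """Rank devices best-first by median speed over the learning period;
    the top entry is the learning-based choice."""
    return sorted(speeds_by_device,
                  key=lambda device: median(speeds_by_device[device]),
                  reverse=True)

ranking = select_cat(sessions)
best, second_best = ranking[0], ranking[1]
```

The median is taken over all learning sessions, so a device that starts slowly but is learned well can still outrank one with a flat, mediocre learning curve.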
Healthcare quality monitoring by the Ministry of Health in Slovenia includes over 100 business indicators of economy, efficiency and funding allocation, which are analysed annually for over 20 hospitals. Most of these indicators are random-denominator same-quantity ratios with a strongly correlated numerator and denominator, and the goal is the identification of outliers. A large simulation study was performed to assess the performance of three types of methods: common outlier detection tests for small samples (the Grubbs, Dean and Dixon, and Nalimov tests), applied both unconditionally and conditionally upon the result of the Shapiro-Wilk normality test; the boxplot rule; and the double-square-root control chart, for which we introduced regression-through-origin-based control limits. The PERT, Burr and three-parameter log-logistic distributions, which fitted the real data best, were used with no, one or two outliers in the simulated samples of sizes 5 to 30. Small (below 0.2, right-skewed) and large (above 0.5, more symmetrical) ratios were simulated. Performance of the methods varied greatly across the conditions. The formal small-sample tests proved virtually useless when applied conditionally upon a passed normality pre-test in the presence of outliers. The boxplot rule performed most variably but was the only useful method for tiny samples. Our variant of the double-square-root control chart proved too conservative in tiny samples and too liberal for samples of size 20 or more without outliers, but it appeared the most useful for detecting actual outliers in samples of the latter size. As a direction for future improvement and research, we propose pre-testing normality with a class of robustified Jarque-Bera tests.
COBISS.SI-ID: 29598681
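Of the methods compared in the abstract above, the boxplot rule is the simplest to state: flag any value outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR]. A minimal standard-library sketch follows; the ratio values are invented for illustration and are not data from the study, and the quartile convention (the "exclusive" method of `statistics.quantiles`) is an assumption, since the abstract does not specify one.

```python
from statistics import quantiles

def boxplot_outliers(values, k=1.5):
    """Return values outside [Q1 - k*IQR, Q3 + k*IQR] (Tukey's boxplot rule).

    Quartiles are computed with the default 'exclusive' method of
    statistics.quantiles; other conventions shift the fences slightly,
    which matters for the tiny samples (n = 5 to 30) studied here.
    """
    q1, _, q3 = quantiles(values, n=4)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < lo or v > hi]

# Hypothetical small right-skewed ratios for one indicator across
# hospitals; the last value mimics a single reporting outlier.
ratios = [0.11, 0.13, 0.12, 0.14, 0.10, 0.12, 0.45]
flagged = boxplot_outliers(ratios)
```

Unlike the formal small-sample tests, the rule needs no normality pre-test, which is consistent with the abstract's finding that it remained usable for tiny samples.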