This paper addresses the problem of identifying low-quality speeders in web surveys. There are many reasons to eliminate extreme speeders from a survey, particularly when they exhibit low response quality. However, unambiguously identifying low-quality speeders is difficult, and this difficulty is often ignored by the various approaches developed for eliminating speeders. These approaches very often rely on arbitrary technical criteria that are only presumed to relate to response quality, a relation that is frequently ambiguous or nonexistent. The presented paper therefore approaches the issue from the perspective of response quality rather than response time. In an empirical study (n = 1,440) we examined how units with low response quality relate to their response times and to speeding. For this purpose, we defined eight response quality indicators and observed the consequences for overall response quality when different shares of the fastest units were removed. The results showed that eliminating the fastest 0.5% of units was optimal: this removed the units with the lowest (and often unacceptable) response quality, while also avoiding the removal of fast units (speeders) whose response quality was still acceptable, a typical mistake of other approaches. This specific share (0.5%) may, of course, be survey specific. The results show that in order to identify the optimal share of speeders in any specific survey, i.e. the share whose removal would increase overall response quality, researchers should combine response times with response quality indicators (e.g., item nonresponse, straight-lining).
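The trimming procedure described above can be illustrated with a minimal sketch. All names, data, and the single aggregated quality score are hypothetical simplifications; the paper itself uses eight separate response quality indicators.

```python
# Hypothetical sketch of the speeder-trimming idea: drop the fastest
# `share` of units and inspect the response quality of what remains.
# Data and function names are illustrative, not from the paper.

def trim_fastest(times, quality, share=0.005):
    """Drop the fastest `share` of units; return remaining quality scores."""
    n_drop = int(len(times) * share)
    # indices of units ordered from fastest to slowest response time
    order = sorted(range(len(times)), key=lambda i: times[i])
    keep = set(order[n_drop:])
    return [quality[i] for i in range(len(times)) if i in keep]

# toy data: 10 units, unit 0 is an extreme speeder with poor quality
times = [12, 110, 95, 130, 120, 105, 98, 140, 115, 125]
quality = [0.2, 0.9, 0.8, 0.95, 0.85, 0.9, 0.8, 0.9, 0.85, 0.9]

kept = trim_fastest(times, quality, share=0.10)  # drop fastest 10% here
print(len(kept))        # 9 units remain
print(min(kept) > 0.5)  # the low-quality speeder was removed
```

In practice the share would be varied (as in the study) and the resulting quality indicators compared, rather than fixed in advance.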
B.03 Paper at an international scientific conference
COBISS.SI-ID: 36158557

This paper contributes a critical elaboration of specific error sources in data collection using smartphone sensors to complement survey data. It focuses predominantly on technical aspects that may have important methodological implications for social science research and addresses three main research questions: 1) How can the technical characteristics of smartphones and the behaviour of research participants in interacting with their devices affect the quality of data relevant for social science research? 2) How can these error sources be placed into the conceptual framework of the Total Survey Error? 3) How can device paradata contribute to a better understanding of the potential influence of these factors on data quality?
B.03 Paper at an international scientific conference
COBISS.SI-ID: 36264285

Based on the project, 1KA feature extensions related to data quality and paradata were implemented. The extension of 1KA's web panel interfacing function enabled the collection of survey data in several languages, and of paradata, in integration with an individual web panel. The set of collected data specific to mobile devices, and of data related to the properties of different device types (e.g. PC, mobile phone), was expanded. Various nonresponse indicators were developed at the level of the respondent and at the level of the questionnaire. These include indicators of breakoffs (e.g. introduction breakoff rate, gross and net questionnaire breakoff rate) and indicators of respondent usability (e.g. unit completeness level, unit missing-data level). Extensions were also implemented in the 1KA mobile application, which enables monitoring the progress of an online survey connected to the web panel.
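Two of the indicator types mentioned above can be sketched as follows. This is an illustrative approximation, not 1KA's actual implementation; the function names and exact formulas are assumptions.

```python
# Illustrative sketch of two indicator types named above (assumed
# formulas, not 1KA's implementation): a questionnaire breakoff rate
# and a per-unit completeness level.

def breakoff_rate(started, finished):
    """Share of units that started the questionnaire but did not finish."""
    return (started - finished) / started if started else 0.0

def unit_completeness(answers, n_items):
    """Share of questionnaire items a respondent actually answered."""
    answered = sum(1 for a in answers if a is not None)
    return answered / n_items

print(breakoff_rate(200, 170))                      # 0.15
print(unit_completeness([1, None, 3, None, 5], 5))  # 0.6
```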
F.16 Improvements to an existing information system/databases
COBISS.SI-ID: 36677725