Conference Agenda

Overview and details of the sessions of this conference. Select a date or location to show only the sessions on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, if available).

 
Session Overview
Session: RN21_02: Comparing Survey Modes
Time: Wednesday, 21/Aug/2019, 2:00pm - 3:30pm

Session Chair: Benjamin Baisch, University of Salzburg
Location: GM.326
Manchester Metropolitan University, Geoffrey Manton Building, Third Floor, 4 Rosamond Street West, off Oxford Road

Presentations

Online, Face-to-Face Or Mixed-Mode? Findings From A Methodological Experiment For The Generations And Gender Survey

Almut Schumann1, Detlev Lück1, Robert Naderi1, Martin Bujard1, Susana Cabaço2, Tom Emery2, Vera Toepoel3, Peter Lugtig3

1Federal Institute for Population Research (BiB), Germany; 2Netherlands Interdisciplinary Demographic Institute (NIDI), Netherlands; 3Utrecht University, Netherlands

In a digitalised world, CAWI (computer-assisted web interviewing) is becoming an increasingly attractive option for surveys, owing to cost savings and its accessibility for younger generations. The Generations and Gender Survey (GGS) – an international panel study focusing on family demography, hitherto conducted in CAPI (computer-assisted personal interviewing) mode – therefore carried out an experimental study investigating the opportunities and risks of partly moving online for the next round of data collection. It tests a sequential mixed-mode (push-to-web) design combining the CAWI and CAPI modes.

The particular circumstances to consider are a rather long questionnaire of approximately 60 minutes, complex routing, and a high proportion of private and partly sensitive questions arising from items on family and relationships. The experiment was conducted in three GGS countries – Germany, Croatia, and Portugal – with more than 1,000 respondents in each. In all three countries a reference group was interviewed in CAPI mode, while an experimental group was interviewed in a sequential mixed-mode design (CAWI followed by CAPI). In each country an additional country-specific experiment was carried out: in Germany, incentive strategies; in Croatia, the timing of reminders; and in Portugal, two modes of selecting a contact person within the household.

We compare response rates, costs, representativeness, measurement accuracy, and further aspects of data quality across the modes of data collection and the experimental groups. The experiment identifies design characteristics that can be recommended for a push-to-web design for the GGS and other future surveys, and it provides evidence for the importance of the country context in that respect.
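For readers unfamiliar with push-to-web logistics, the assignment rule described above can be summarised in a few lines of code. This is a minimal illustrative sketch, not the study's fieldwork system: the even split between groups and the 45% web-response propensity are invented for the example.

    import random

    random.seed(42)  # reproducible illustration

    def assign_mode(respondent, completes_cawi):
        """Sequential mixed-mode (push-to-web) rule, as described above.

        Reference group: CAPI only.
        Experimental group: invited to CAWI first; CAPI follow-up
        only if the web interview is not completed.
        """
        if respondent["group"] == "reference":
            return "CAPI"
        if completes_cawi(respondent):
            return "CAWI"
        return "CAPI follow-up"

    # Hypothetical sample: half reference, half experimental (assumption).
    sample = [{"id": i, "group": "reference" if i % 2 == 0 else "experimental"}
              for i in range(1000)]

    # Invented web-response propensity of 45%, for illustration only.
    modes = [assign_mode(r, lambda x: random.random() < 0.45) for r in sample]

    for mode in ("CAPI", "CAWI", "CAPI follow-up"):
        print(mode, modes.count(mode))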



Measurement of Students’ Competencies: Effects of Group Testing and Individual Testing

Uta Landrock, Sabine Zinn, Timo Gnambs

LIfBi - Leibniz Institute for Educational Trajectories, Germany

The German National Educational Panel Study (NEPS) collects, among other things, data on the development of competencies throughout the life span. In general, competence assessments can be implemented as group tests or as individual tests. Performance tests of groups in institutional contexts are difficult to organize. This also applies to the NEPS study, which is based on voluntary participation: implementing the tests depends on the support of the institutions, which have to provide rooms and time slots. Furthermore, participating students must be present at given test times, which can be inconvenient for them and may negatively affect their motivation to participate. In the NEPS study, competencies were therefore also tested individually as computer-based web assessments (CBWA). Individual testing is easier to organize and more flexible for the participants: students can decide for themselves when and where they want to participate. However, it is unclear whether group tests and individual tests affect the results of competence assessments differently.

As part of the NEPS, a “test delivery study” was implemented in the students’ subsample. Applying a split-half design, participants were randomly assigned to group or individual tests. Competencies in science, information and communication technology (ICT literacy), and domain-general cognitive functions were measured. To answer the research question, the performance rates of the two test conditions were compared and examined for potential intervening factors. Preliminary results indicate that the results of the two test conditions are comparable.
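The split-half comparison amounts to randomly halving the sample and comparing mean performance rates between the two test conditions. A minimal sketch of that logic, using simulated scores rather than NEPS data:

    import random
    from statistics import mean

    random.seed(1)  # reproducible illustration

    # Hypothetical sample of 500 participants, randomly split in half.
    participants = list(range(500))
    random.shuffle(participants)
    half = len(participants) // 2
    group_test, individual_test = participants[:half], participants[half:]

    # Invented performance rates (share of items solved) per participant.
    scores = {p: random.betavariate(6, 4) for p in participants}

    print("group testing:     ", round(mean(scores[p] for p in group_test), 3))
    print("individual testing:", round(mean(scores[p] for p in individual_test), 3))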



The Quality Of ICT Skills Indicators - Comparing Self- With Direct-Assessment

Maja Rynko1, Marta Palczyńska2

1SGH Warsaw School of Economics, Poland; 2Institute for Structural Research, Poland

Basic ICT skills are a prerequisite for social inclusion in the information age. The level of ICT skills in the population, as well as computer and Internet usage, is of interest to the European Commission and national governments, as reflected in numerous strategic documents (e.g. the Digital Agenda for Europe, A New Skills Agenda for Europe, the Long-term National Development Strategy for Poland). The EU survey on ICT usage provides a proxy for the level of ICT skills among individuals in Europe. However, it is based on individuals’ self-assessments, which may be prone to misjudgements (e.g. Dunning et al. 2004) and which have been shown to differ from the results of the direct assessment of ICT skills in the PIAAC survey.

Using data from the Polish follow-up study to the Programme for the International Assessment of Adult Competencies (postPIAAC), we compare the self-assessment of basic ICT skills to the direct assessment. The postPIAAC study was conducted between October 2014 and February 2015 on over 5,000 respondents aged 18-69. The questionnaire included the Eurostat questions on computer and Internet usage. Respondents also performed several tasks directly comparable to the Eurostat questions: copying files to a folder, using copy/cut/paste tools for text editing, and using basic spreadsheet formulas. Among those who did not complete these tasks in the direct assessment, the majority declared that they had carried out these tasks before (54%, 50% and 59%, respectively). A preliminary multivariate analysis suggests that younger and more highly educated people are more likely to overrate their skills in the self-assessment. The discrepancy between the actual and the declared level of ICT skills raises questions about the reliability of ICT indicators based on self-assessment.
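The headline figures above are conditional shares: among respondents who failed a task in the direct assessment, the proportion who nevertheless declared prior experience with it. A small sketch of that computation, using invented records rather than postPIAAC data:

    # Each record: (task, passed_direct_assessment, declared_prior_experience).
    # The records below are invented for illustration.
    records = [
        ("copy_files", False, True),
        ("copy_files", False, False),
        ("copy_paste_text", False, True),
        ("spreadsheet_formula", False, True),
        ("spreadsheet_formula", True, True),
    ]

    for task in ("copy_files", "copy_paste_text", "spreadsheet_formula"):
        # Restrict to respondents who failed this task in the direct assessment.
        failed = [r for r in records if r[0] == task and not r[1]]
        if failed:
            over = sum(r[2] for r in failed) / len(failed)
            print(f"{task}: {over:.0%} of failers declared prior experience")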



Experimental Evidence On The Effect Of Using Rating Scales With Numeric Instead Of Semantic Labels

Tobias Gummer, Tanja Kunz

GESIS Leibniz Institute for the Social Sciences, Germany

Web surveys are increasingly completed on smartphones. This trend heightens the need to optimize question design for smaller screens and thereby provide respondents with a better survey experience, lessen their burden, and increase response quality. In this regard, it has been suggested to replace the semantic labels of rating scales (e.g., “strongly like”) with numeric labels (e.g., “+5”). However, research on the applicability of these scales is sparse, especially with respect to interactions with other scale characteristics. To address this research gap, we investigated the effects of numeric labels on response behavior in comparison to semantic labels. Moreover, we tested how these effects varied across scale orientations (positive-negative vs. negative-positive) and scale formats (agree-disagree vs. construct-specific). To answer these research questions, we implemented a web survey experiment. The web survey was fielded in November 2018 (N=4,200) and quota-sampled from an access panel. Respondents were randomly allocated in a 2x2x2 between-subjects design in which we varied the scale labels (numeric vs. semantic), scale orientation (positive-negative vs. negative-positive), and scale format (agree-disagree vs. construct-specific) of a rating scale comprising 10 items presented item-by-item. The experimental variations were assessed using several response quality indicators (e.g., agreement, primacy effects, response times, straightlining).
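The 2x2x2 allocation and one of the quality indicators (straightlining) are simple to express in code. A minimal sketch, assuming simple random assignment to the eight cells; the condition names follow the abstract, everything else is illustrative:

    import itertools
    import random

    random.seed(7)  # reproducible illustration

    # The eight cells of the 2x2x2 between-subjects design described above.
    conditions = list(itertools.product(
        ("numeric", "semantic"),            # scale labels
        ("pos-neg", "neg-pos"),             # scale orientation
        ("agree-disagree", "construct"),    # scale format
    ))

    def allocate(respondent_id):
        """Assign a respondent to one of the eight cells at random
        (simple random allocation is an assumption for this sketch)."""
        return random.choice(conditions)

    def straightlined(answers):
        """Crude straightlining flag for the 10-item battery:
        identical answer to every item."""
        return len(set(answers)) == 1

    print(allocate(1))
    print(straightlined([3] * 10), straightlined([3, 4, 3, 5, 2, 3, 4, 3, 2, 5]))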



 