Session Chair: Tobias Gummer, GESIS Leibniz Institute for the Social Sciences
Location: GM.326, Manchester Metropolitan University
Building: Geoffrey Manton, Third Floor
4 Rosamond Street West
Off Oxford Road
Doing research on and with youth in the era of the General Data Protection Regulation
Valentina Mazzucato, Karlijn Haagsman
Maastricht University, The Netherlands
‘Youth hold the future’ is an often-cited phrase. Organizations such as the UN repeatedly state that it is important to listen to youth and for youth to be involved in decisions affecting their lives. However, youth are rarely provided a platform in which they can speak or be listened to. One way to give youth a voice is through research, yet most research on youth continues to be based on adult assessments, be they by teachers, caregivers, health professionals or parents. Additionally, in migration research, migrant youth are often seen as a vulnerable group needing protection, a stance that takes away from youth’s own agency. Finally, the recently introduced General Data Protection Regulation (GDPR), and the way it is being interpreted by governments and institutions, makes it hard to collect data on youth. Using the experience of the Mobility Trajectories of Young Lives (MO-TRAYL) project, this paper discusses the challenges that formal ethical procedures present for youth-centred methodologies in the era of the GDPR: from strict interpretations of the GDPR to protective gatekeepers, from institutional barriers to ethical guidelines and parental consent. Finally, it discusses ways in which a more youth-centric and inclusive approach can be used to study youth mobility.
Mixing Pretest Methods of Structured Questionnaires
Marco Palmieri, Fabrizio Martire, Maria Concetta Pitrone
Sapienza University of Rome, Italy
“Mixed Methods” is considered a new paradigm offering new answers to the never-ending debate between quantitative and qualitative sociology (Morgan 2007; Tashakkori and Teddlie 2010). Nowadays, social researchers use this methodological approach in many research stages: theoretical framework design, data collection, data analysis, etc. (Johnson and Onwuegbuzie 2004; Feilzer 2010; Small 2011), but few scholars have explored the usefulness of Mixed Methods for pretesting questionnaires (Mauceri 2018). In this paper, we present the results of a pretest strategy based on a mixed-methods approach. Two research questions guide the study:
• Does a mixed-methods pretest increase the number and types of problems usually diagnosed by a mono-method pretest modus operandi?
• How can quantitative and qualitative pretest methods be mixed?
In this study, three methods were employed to pretest a questionnaire designed to investigate attitudes towards immigrants: expert review, cognitive interviewing, and verbal interaction coding. First, we used the expert review to locate the main problems afflicting the questionnaire. Then, cognitive interviewing was adopted to investigate the cognitive processes respondents engage in when answering closed-ended questions. Finally, we used verbal interaction coding to analyse the verbal interaction between interviewer and respondent in a real interview context. By offering different points of view on the problems affecting the questionnaire, the three pretest methods allowed us to devise solutions to questionnaire biases that take into account different aspects of the interview process.
Potentials and Limits of Administrative Data to Adjust Internet Panel Surveys
University of Salamanca, Spain
In recent years, the use of internet panels to collect survey data has been expanding around the world (ESOMAR 2017). However, this method is affected by two main issues: undercoverage and non-response. Undercoverage occurs when part of the target population does not have access to the internet, while non-response error refers to the lack of response from a sample unit. These errors can affect the representativeness of the sample and bias the survey estimates. Administrative data collected from open sources can be used to compute survey adjustments that, if related to the likelihood of response and to the survey target variable, can reduce the bias of the survey estimates. This paper examines the potential of aggregate administrative data to adjust survey data collected from an internet panel in Spain.
To address this research question, I use statistical simulations and data from an internet panel. The simulations assess the potential of aggregate data, compared to individual-level data, for adjusting surveys. Then, two surveys from an internet panel based in Spain are used to implement adjustments using the aggregate administrative data. The estimates from these surveys are compared to those of a major face-to-face study in order to assess the effect of the weights on survey bias. The aggregate administrative data were collected from Spanish government data repositories, provided the data were available at the municipality level or lower.
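The kind of adjustment described here can be illustrated with a minimal post-stratification sketch in Python. Every number, group label and outcome value below is hypothetical and invented for illustration only; the paper's actual auxiliary variables, data sources and estimators may differ.

```python
# Minimal post-stratification sketch (illustrative only, not the paper's code):
# weight an online-panel sample so that the distribution of one auxiliary
# variable matches aggregate administrative margins.
from collections import Counter

# Hypothetical population shares for an auxiliary variable (e.g. age group),
# as they might come from aggregate administrative data.
population_shares = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}

# Hypothetical panel respondents, labelled by the same variable
# (younger groups deliberately overrepresented, as in many online panels).
sample = ["18-34"] * 50 + ["35-54"] * 30 + ["55+"] * 20

counts = Counter(sample)
n = len(sample)

# Post-stratification weight per group: population share / sample share.
weights = {g: population_shares[g] / (counts[g] / n) for g in population_shares}

# Hypothetical group means of a binary survey outcome y.
y = {"18-34": 0.6, "35-54": 0.5, "55+": 0.4}

# Weighted vs. unweighted estimates of the population mean of y.
weighted_mean = sum(weights[g] * (counts[g] / n) * y[g] for g in y)
unweighted_mean = sum((counts[g] / n) * y[g] for g in y)
```

Because the weights pull each group's sample share back to its population share, the weighted estimate (0.50 here) corrects the overrepresentation of younger respondents that inflates the unweighted estimate (0.53). The same logic extends to weights built from several municipality-level margins, e.g. via raking.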
Offering Versus Omitting An MC And/Or A DK-O: The Consequences Linked To The Respondents’ Approach To The Topic
University of Applied Sciences Upper Austria, Austria
The effect of different numbers of response categories is recurrently examined. Although opinions diverge, offering a five-point scale plus a “don't know” option might cover all respondents' needs. However, some respondents might tend to choose the middle category or the “don't know” option indiscriminately, even when a more accurate answer would be possible. This paper addresses the consequences of offering versus omitting a middle category (MC) and/or a don't-know option (DK-O), linked to the respondents' approach to the topic. Data were collected by means of an online survey dealing with student participation at a university in Austria. The analyses are based on the 7.3% (n=1,282) of invited students who completed the survey. Four types of respondents were classified: “inexperienced”, “indifferent”, “irresponsible” and “regular”. The results show that the proportion of MC choices among inexperienced respondents decreases significantly when a DK-O is available. Vice versa, indifferent respondents choose the DK-O more often when an MC is not provided. Including both an MC and a DK-O decreases the consistency of the answers of irresponsible respondents, but excluding both does not substantially improve the result. In the end, only the consistency of answers of regular respondents is independent of the response scale format provided.