Creator: Mayer, Anne-Kathrin; Rosman, Tom; Birke, Peter; Gorges, Johannes; Krampen, Günter
Contributor: Mayer, Anne-Kathrin; Rosman, Tom; Birke, Peter; Gorges, Johannes; Krampen, Günter
Funding: Joint Initiative for Research and Innovation; Leibniz Competition (SAW 2013)
Title: Development of novices’ professional knowledge networks within the contexts of classroom teaching and information searches on the internet. Research data from a longitudinal study 2013-2015
Year of Publication: 2016
Citation: Mayer, A.-K., Rosman, T., Birke, P., Gorges, J., & Krampen, G. (2016). Development of novices’ professional knowledge networks within the contexts of classroom teaching and information searches on the internet. Research data from a longitudinal study 2013-2015 [Translated Title] (Version 1.0.0) [Data and Documentation]. Trier: Center for Research Data in Psychology: PsychData of the Leibniz Institute for Psychology ZPID. https://doi.org/10.5160/psychdata.mrae15ent24
The study aims at describing and analyzing the development of professional knowledge networks in psychology and computer science students (first to fourth semester). The study primarily focuses on processes of restructuring knowledge (conceptual change) following the transition from secondary education (school) to tertiary education (university). Three domains of knowledge are investigated: (1) domain-specific knowledge, (2) information literacy, and (3) epistemic beliefs.
The study employs a quantitative four-wave longitudinal design. To gain empirical data on knowledge development, both psychology (N = 137 at the first wave) and computer science students (N = 89 at the first wave) were investigated by means of standardized tests. The first wave took place right at the beginning of students’ first semester, followed by three consecutive waves at the beginning of the second, third, and fourth semesters. Additionally, data on several covariates likely to influence knowledge development (e.g., cognitive ability, academic self-concept, learning and achievement motivation) were collected.
Within the project, three guiding research questions were addressed:
- Development of methods to capture knowledge networks in the disciplines of psychology and computer science
- Descriptive analysis of changes in domain-specific knowledge, information literacy, and epistemological beliefs over the first three semesters, and explanation of these changes by cognitive and motivational variables
- Exploration of ways to promote domain-specific knowledge, information literacy, and epistemological beliefs in students
Research Design: Partially Standardized Survey Instrument (provides question formulation; open answer format); repeated measurements
The individual measurement time points (t1, t2, t3, and t4) each comprised two data collections: first, the subjects completed an online questionnaire battery (approx. 30 to 60 minutes) in a setting of their choice (e.g., at home); subsequently, they completed a test battery with the various performance tests in standardized group sessions of approx. 2 hours in PC pools of the respective university (Trier University, Trier University of Applied Sciences, Saarland University).
Central procedures (recorded at all 4 measurement time points [t1-t4]):
Specialized knowledge in psychology: Here, a specially developed procedure was used to assess specialized knowledge in memory psychology. The test measures the extent to which students understand that information is not stored statically in human memory, but is processed and changed by processes such as interference, chunking, and source monitoring. For this purpose, subjects are given k = 9 descriptions of classical memory experiments. For each of the nine situation descriptions, subjects are given six possible outcomes of the experiment, each with a different rationale. On a seven-point rating scale, subjects indicate how strongly they believe that each described outcome and its rationale are correct or incorrect. In each case, only one of the six statements describes the actual outcome of the experiment together with the explanation accepted in the research literature. Details on test construction and evaluation can be found in Gorges, Schneider, and Mayer (2015).
Expertise in computer science: The specially developed procedure for assessing algorithmic understanding measures the extent to which students can evaluate algorithmic strategies for the systematic solution of “classical” problems from computer science. The strategies covered are, on the one hand, the two common strategies “divide and conquer” and the so-called “greedy strategy”; on the other hand, the test assesses the ability to recognize when, assuming P ≠ NP, no efficient strategy is likely to solve the problem exactly (“NP-completeness”). The short form of the procedure comprises k = 6, the long form k = 9 exemplary descriptions of classical problems, distributed equally among the three strategies, i.e., two (or three, respectively) descriptions per strategy. Evaluation is based on a seven-point rating scale in each case.
For the problems to be solved using the “divide and conquer” strategy, the subjects are given six algorithms whose efficiency with respect to runtime is to be evaluated using O-notation. The algorithms presented fall into three efficiency levels: two solve the problem efficiently, two very inefficiently, and the remaining two have a runtime that can still be considered acceptable. For the “greedy strategy” descriptions, six solution proposals are given together with the problem definition and are to be evaluated with respect to the correctness of the procedure; in each case, only two of the six proposals are proven correct in the technical literature. In the situation descriptions for “NP-completeness”, six heuristics for solving the problem are given in each case, and subjects are likewise asked to rate their correctness on a seven-point rating scale. Again, only two of the six heuristics represent statements that are assumed to be correct in the literature, assuming P ≠ NP.
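To illustrate the three strategy classes covered by the test, the following sketch shows one textbook example per class. These are illustrative examples only, not the actual test items, which are documented with the data set:

```python
# Illustrative examples (not actual test items) of the three strategy
# classes assessed by the algorithmic-understanding test.

def binary_search(sorted_xs, target):
    """Divide and conquer: halve the search interval each step, O(log n)."""
    lo, hi = 0, len(sorted_xs) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_xs[mid] == target:
            return mid
        if sorted_xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # target not present

def greedy_coin_change(amount, coins=(200, 100, 50, 20, 10, 5, 2, 1)):
    """Greedy strategy: always take the largest coin that still fits.
    Optimal for canonical coin systems such as euro cents."""
    result = []
    for coin in coins:
        while amount >= coin:
            result.append(coin)
            amount -= coin
    return result

def vertex_cover_heuristic(edges):
    """Heuristic for the NP-complete Vertex Cover problem: repeatedly pick
    an uncovered edge and add both endpoints (a 2-approximation).
    Assuming P != NP, no exact polynomial-time algorithm is known."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover
```

The contrast mirrors the test’s structure: an exactly solvable problem with an efficient divide-and-conquer algorithm, a problem where a greedy procedure is provably correct, and an NP-complete problem for which only heuristics are available.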
Information Literacy: The PIKE test was constructed according to the principle of Situational Judgement Tests: each item describes a problem situation in the research process in one or two sentences and then lists four possible procedures, which are to be assessed in terms of their instrumental suitability for solving the problem on a scale from 1 = not at all suitable to 5 = very suitable. The options include both unsuitable procedures and procedures that are suitable in principle but differ in quality or efficiency. Due to different research cultures and approaches in psychology and computer science, the test was adapted to each discipline (PIKE-P for psychology and PIKE-CS for computer science). Details on test construction and scoring can be found in Rosman and Birke (2015) and Rosman, Mayer, and Krampen (2015).
Epistemological beliefs: Epistemological beliefs were assessed using two different, discipline-specific procedures. The EBI-AM questionnaire is based on established questionnaires on epistemological beliefs; for the questions of the present study, however, it has the advantage of representing absolute and multiplistic beliefs on separate scales. The instrument contains 23 epistemological statements; subjects rate their degree of agreement with reference to their respective scientific discipline (psychology or computer science) on a five-point Likert scale (e.g., “The only certain thing in this discipline seems to me to be uncertainty”). Details on test construction and scoring can be found in Peter, Rosman, Mayer, Leichner, and Krampen (2015).
In addition, the established CAEB was used. On a semantic differential, subjects are asked to assess knowledge in their discipline (psychology or computer science). The instrument maps the dimensions of texture and variability of knowledge. Details on test construction and evaluation can be found in Stahl and Bromme (2007).
In addition, a large number of covariates were recorded at different survey time points, including epistemic curiosity, essentialism, intelligence, tendency to drop out, study satisfaction, certainty of study choice, personality, self-concept, and others.
Data Collection Method:
Data collection in the absence of an experimenter (Online questionnaire):
– Individual Administration
Data collection in the presence of an experimenter
– Group Administration (2-30 subjects per group)
– Paper and Pencil (Raven’s APM intelligence test only)
– Individual Administration (exceptional cases only)
Population: College students (psychology and computer science)
Survey Time Period:
Psychology:
1st wave (baseline): October – November 2013
2nd wave: March – May 2014
3rd wave: October – November 2014
4th wave: March – May 2015
Computer science:
1st wave (baseline): October 2013 – January 2014
2nd wave: April – June 2014
3rd wave: October – November 2014
4th wave: April – May 2015
Sample: Convenience sample
Psychology (t1):
82% female subjects
18% male subjects
Computer science (t1):
22% female subjects
78% male subjects
Age Distribution: Psychology: 18-31 years; computer science: 17-32 years
Spatial Coverage (Country/Region/City): Germany/Rhineland-Palatinate & Saarland/Trier & Saarbrücken
Subject Recruitment: Recruitment was done through emails, flyers, notices on each campus, and advertising in classes. Participants received an expense allowance at the end of each measurement time point. Subjects who participated in all four measurement time points were paid a bonus after completion of the fourth measurement time point.
Sample Size: Psychology: 137 individuals (t1); Computer science: 89 individuals (t1)
Psychology: approximately 70–80% response rate and 84% survival rate from t1 to t4 (t2: 126 individuals; t3: 116 individuals; t4: 115 individuals).
Computer science: approximately 20–30% response rate and 64% survival rate from t1 to t4 (t2: 68 individuals; t3: 62 individuals; t4: 57 individuals).
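The survival rates follow directly from the per-wave sample sizes reported above; a quick sketch of the arithmetic (counts taken from this record):

```python
# Wave-by-wave sample sizes from this record (t1..t4)
psychology = [137, 126, 116, 115]
computer_science = [89, 68, 62, 57]

def survival_rate(counts):
    """Percentage of the t1 sample still present at the last wave."""
    return round(100 * counts[-1] / counts[0])

print(survival_rate(psychology))       # 84
print(survival_rate(computer_science)) # 64
```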