Can my students make a graph? Measuring graphing practices with an auto-scored performance-based assessment
Monday, August 2, 2021
ON DEMAND
Link To Share This Presentation: https://cdmcd.co/QM89EM
Eli Meir and Susan Maruca, SimBio, Missoula, MT; Stephanie Gardner, Biological Sciences, Purdue University, West Lafayette, IN; Joel K. Abraham, Biological Sciences, California State University, Fullerton, Fullerton, CA
Background/Question/Methods
Constructing graphs is a common way for researchers to explore and share ecological data, but it is a skill that many students are still developing at the college level. While there is much research on graph interpretation, less is known about learning to construct graphs. To facilitate this research, we are developing GraphSmarts, a digital, performance-based, auto-scored assessment of graphing skill.
GraphSmarts presents students with a hypothesis about food chain dynamics in a threatened ecological community, along with three predictions that follow from that hypothesis. The student receives a data set and a simple graphing interface and is asked to construct a graph relevant to each prediction through selections in the interface, such as which variables to plot, the type of graph, and whether to display descriptive statistics. The graphing tool is surrounded by questions in intermediate-constraint formats. In this talk we discuss our preliminary validation data and initial findings about student graphing abilities.
Results/Conclusions
We have validated GraphSmarts in a number of ways. Through 14 think-aloud interviews we found the interface to be clear and usable. We conducted 43 interviews with biology undergraduates, split between GraphSmarts and an equivalent pen-and-paper assessment. On three measures the graphs students constructed were similar across formats, while on two measures students graphed differently depending on the format of the task. We tested GraphSmarts as a stand-alone assessment in seven undergraduate biology classes, as well as with 10 faculty members whom we judged to be more experienced graphers. Students in lower-level classes tended to make less expert-like graphs than those in higher-level classes, and faculty made better graphs than undergraduate students. GraphSmarts scores were similar from one term to the next within a class, providing evidence of reliability. Despite multiple lines of evidence that GraphSmarts validly measures graphing skill, we also saw a need for further refinement. Correlation between the question responses and the performance-based graphing data was low, as was correlation among the graphs students constructed for the three predictions. We also saw evidence of learning from one graph to the next, which makes interpretation as a summative assessment more difficult.
Across the classes we tested in, we saw the most deviation from expert-like graphing practices in the graph types students chose, in following axis conventions, and in making fully thought-out scientific claims based on their graphs. We will discuss both the validation evidence and these patterns across classes in our talk.