Research
As the adage goes, “practice makes perfect.” We tell our children to keep practicing so they get better at sports, hobbies, and school, and we ourselves practice many skills every day: we write, we read, we drive. Yet despite our intuitive experience of how practice improves performance, the computational mechanisms behind this common phenomenon are not well understood. Current research has instead focused on describing when practice makes perfect: space it out, alternate topics, and test yourself as you go (but not too much!). Because we lack the how, considerable evidence from laboratory studies fails to generalize beyond constrained situations, fails to reach applied settings, or does not replicate. My research agenda fills this gap by specifying the computational mechanisms of how practice makes perfect. This work supports broader generalization, replicability, and applicability by testing detailed learning models, and their boundaries, in carefully controlled experiments and in interventions embedded in educational practice.
Practice trains attention
Although to a non-native English speaker “cheap” and “sheep” often sound similar, for native speakers they are clearly distinguishable. I am interested in understanding how this change emerges from the accumulation of discrete learning opportunities. One hypothesis is that with each practice opportunity our attentional system adapts to the sequence of information presented across opportunities thus far, weighting different stimulus properties differently depending on the nature of previous opportunities.
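One way to make this hypothesis concrete is a toy, error-driven attention-reweighting rule: when consecutive items belong to different categories, attention shifts toward the features that changed; when the category repeats, toward the features that stayed the same. This is a minimal illustrative sketch, not the specific model from my publications; the update rule, learning rate, and feature coding are all assumptions made for the example:

```python
import numpy as np

def update_attention(attn, prev_stim, stim, prev_label, label, lr=0.2):
    """Toy attention update (illustrative only): shift weight toward
    features that distinguish categories when consecutive labels differ
    (interleaving), and toward features the items share when the label
    repeats (blocking)."""
    changed = np.abs(stim - prev_stim) > 0  # which features differ between trials
    if label != prev_label:
        target = changed.astype(float)       # attend to what discriminates
    else:
        target = (~changed).astype(float)    # attend to what is common
    attn = attn + lr * (target - attn)       # move weights toward the target
    return attn / attn.sum()                 # renormalize

# Interleaved toy sequence: feature 0 is diagnostic of category,
# feature 1 is constant across all items.
stims = [np.array([0., 1.]), np.array([1., 1.]),
         np.array([0., 1.]), np.array([1., 1.])]
labels = ["A", "B", "A", "B"]

attn = np.array([0.5, 0.5])
for i in range(1, len(stims)):
    attn = update_attention(attn, stims[i - 1], stims[i],
                            labels[i - 1], labels[i])
# After an interleaved sequence, attention concentrates on the
# diagnostic feature (attn[0] > attn[1]).
```

Under this toy rule, interleaving category labels drives attention toward diagnostic features, while blocking drives it toward within-category commonalities, matching the intuition that schedule shapes what is attended.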
This line of research informs how attentional changes result from repeated practice. It suggests that human learning takes place at a local, opportunity-by-opportunity timescale that accumulates into larger-timescale effects (well captured by existing models of learning). It also informs applied work by enabling evidence-based suggestions for practice that go beyond the crude one-size-fits-all recommendations common in the literature. For example, instead of suggesting that students interleave their study, my work demonstrates that students should interleave study of materials that require discrimination and should block study of other materials.
Representative publications:
Carvalho, P.F. & Goldstone, R.L. (2014). Effects of Interleaved and Blocked Study on Delayed Test of Category Learning Generalization. Frontiers in Psychology, 5 (936), 1-10.
Carvalho, P.F., & Goldstone, R.L. (2014). Putting category learning in order: category structure and temporal arrangement affect the benefit of interleaved over blocked study. Memory & Cognition, 42(3), 481-495.
Carvalho, P.F., & Goldstone, R.L. (2015). What you learn is more than what you see: What can sequencing effects tell us about inductive category learning? Frontiers in Psychology, 6(505), 1-12.
Carvalho, P.F., & Goldstone, R.L. (2015). The benefits of interleaved and blocked study: Different tasks benefit from different schedules of study. Psychonomic Bulletin & Review, 22, 281-288. DOI:10.3758/s13423-014-0676-4
Carvalho, P.F., & Goldstone, R.L. (in press). The most efficient sequence of study depends on the type of test. Applied Cognitive Psychology.
Practice trains forgetting
Since Ebbinghaus, we have known that memories are strengthened when they are reactivated, and it is often said that repetition is the route to durable knowledge. However, continuous knowledge also involves forgetting irrelevant information and not remembering every aspect of each learning opportunity (which would emphasize discontinuity). My research agenda explores the nature of repetition that allows us to connect, remember, and, importantly, forget information across discrete learning opportunities in order to create continuous knowledge. The main insight from this line of research is that exact repetition improves memory but not selectivity, and knowing what to forget is key to learning: to learn to solve a type of problem, one needs to forget each problem’s idiosyncratic details.
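The idea that selective forgetting supports generalization can be illustrated with a standard exponentially decaying memory trace, a textbook simplification rather than the specific model tested in the publications below; the decay rate and presentation schedule are arbitrary choices for the example:

```python
import math

def trace_strength(presentation_times, now, decay=0.5):
    """Summed strength of exponentially decaying traces, one per
    presentation (a common simplifying assumption, not a specific
    published model)."""
    return sum(math.exp(-decay * (now - t)) for t in presentation_times)

# Structure shared across problems is re-encountered on every learning
# opportunity (times 0..4); each problem's idiosyncratic details are
# encountered only once (time 0).
shared_structure = trace_strength([0, 1, 2, 3, 4], now=5)
one_off_detail = trace_strength([0], now=5)

# The repeated structure stays strong while unrepeated details fade,
# which is the sense in which knowing what to forget aids learning:
# shared_structure is much larger than one_off_detail.
```

On this simplification, forgetting does the selection work: whatever recurs across opportunities survives decay and accumulates, while the details unique to any one problem fade away.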
Representative publications:
Carvalho, P.F., Sana, F., & Yan, V.X. (2020). Self-regulated spacing in a Massive Open Online Course is related to better learning. npj Science of Learning, 5, 2.
Carvalho, P.F., McLaughlin, E.A., & Koedinger, K.R. (2017). Is there an explicit learning bias? Students' beliefs, behaviors and learning outcomes. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (pp. 204-209). Austin, TX: Cognitive Science Society.
Thaker, K., Carvalho, P.F., & Koedinger, K.R. (2019). Comprehension factor analysis: Modeling students' reading behaviour. Proceedings of the 9th International Learning Analytics and Knowledge (LAK) Conference.
Chounta, I.-A., & Carvalho, P.F. (2019). Square it up! How to model step duration when predicting student performance. Proceedings of the 9th International Learning Analytics and Knowledge (LAK) Conference.
Large-scale experimentation and models that generalize
Current research in cognitive psychology is faltering: it emphasizes specific tasks and laboratory paradigms whose results do not replicate or fail to generalize. In part, this is the result of incomplete theories that do not extend beyond very specific conditions that may be hard to reinstate. My research studying learning theory in natural environments (classrooms, educational technology, and online courses) has resulted in better and more general theories and has contributed better suggestions for practice that work across a wide array of situations. My proposal is that better models of learning, built on a broader empirical base, have the potential to improve educational practice. Applying learning theory to educational practice can in turn inform theories of learning, improving our understanding of cognition.
A larger empirical base is also achieved by testing models against multiple data sources. I am a strong advocate of open science and data sharing. As described above, my work relies on existing, freely available large datasets. I also contribute by making all data I collect freely available through OSF or DataShop, and by promoting data-sharing best practices through contributing to the development of tools that allow a wider range of scientists and practitioners to use existing data.
Representative publications:
Carvalho, P.F., Sana, F., & Yan, V.X. (2020). Self-regulated spacing in a Massive Open Online Course is related to better learning. npj Science of Learning, 5, 2.
Motz, B.A., Carvalho, P.F., de Leeuw, J.R., & Goldstone, R.L. (2018). Embedding Experiments: Staking Causal Inference in Authentic Educational Contexts. Journal of Learning Analytics, 5(2), 47–59. http://dx.doi.org/10.18608/jla.2018.52.4.
Carvalho, P.F.*, Braithwaite, D.W.*, de Leeuw, J.R., Motz, B.A., & Goldstone, R.L. (2016). An in vivo study of self-regulated study sequencing in Introductory Psychology courses. PLoS ONE, 11(3), 1-16.
Carvalho, P.F., Manke, K.J, & Koedinger, K.R. (2018). Not all Active Learning is Equal: Predicting and Explaining Improves Transfer Relative to Answering Practice Questions. In T.T. Rogers, M. Rau, X. Zhu, & C. W. Kalish (Eds.), Proceedings of the 40th Annual Conference of the Cognitive Science Society (pp. 1458-1463). Austin, TX: Cognitive Science Society.