IMPS 2019 Keynote Speakers


Susan Embretson

Susan Embretson, Georgia Institute of Technology, Atlanta, Georgia, USA

Susan E. Embretson is Professor of Psychology at the Georgia Institute of Technology. Previously she was Professor of Psychology and Director of the Quantitative Psychology Program at the University of Kansas, before moving to Georgia Tech in 2004. She has served as President of the Psychometric Society (1999), the Society of Multivariate Experimental Psychology (1998) and the American Psychological Association's Division of Measurement, Evaluation and Statistics (1991). She has received several awards for her research on interfacing cognitive psychology with psychometric models, including the Saul Sells Award for Distinguished Multivariate Research (Society of Multivariate Experimental Psychology, 2019), the Career Contribution Award (National Council on Measurement in Education, 2013), the Distinguished Scientific Contribution Award (American Psychological Association, Division of Measurement, Evaluation and Statistics, 2001) and the Technical and Scientific Contribution Award (National Council on Measurement in Education, 1994-1996). Her current research interests include explanatory item response theory models, automatic item generation and dynamic measurement.

Modeling Cognitive Processes, Skills and Strategies in Item Responses: Implications for Test and Item Design

Interpretations of test scores typically involve the assumption that examinees apply the same cognitive processes, skills and strategies in their item responses. If so, test scores indicate examinees' standing on a single latent trait. Results from several studies using mixture and explanatory item response models will be presented to show that this assumption is often not met. The implications of these results for test scoring, interpretation and external correlates, as well as for item and test design, will be discussed.
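
For readers less familiar with the mixture item response models mentioned above, here is a minimal Python sketch of the core idea: each latent class of examinees responds according to its own item parameters (here a hypothetical two-class two-parameter logistic model), so the marginal response probability is a class-weighted average. All parameter values are invented for illustration.

```python
import numpy as np

def p_correct_2pl(theta, a, b):
    """Two-parameter logistic (2PL) item response function for one latent class."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def p_correct_mixture(theta, class_weights, a_by_class, b_by_class):
    """Marginal response probability: class-specific 2PL curves,
    weighted by the latent class proportions."""
    return sum(w * p_correct_2pl(theta, a, b)
               for w, a, b in zip(class_weights, a_by_class, b_by_class))

# Hypothetical item solved with different strategies by two latent classes,
# so discrimination (a) and difficulty (b) differ between classes.
theta = np.linspace(-3, 3, 7)
print(p_correct_mixture(theta,
                        class_weights=[0.6, 0.4],
                        a_by_class=[1.8, 0.7],
                        b_by_class=[0.0, 1.2]))
```

When the class-specific curves differ, the marginal curve is generally not itself a 2PL curve, which is what makes the single-latent-trait interpretation of the score questionable.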


Burr Settles

Burr Settles, Duolingo, Pittsburgh, Pennsylvania, USA

Burr Settles leads the research group at Duolingo, an award-winning website and mobile app offering free language education for the world. He also runs FAWM.ORG, a global annual songwriting experiment. He is the author of Active Learning — a text on machine learning algorithms that are adaptive, curious, and exploratory (if you will). His research has been published in Cognitive Science as well as all the major artificial intelligence venues such as NIPS, ICML, AAAI, ACL, and EMNLP. His work has been covered by The New York Times, Slate, Forbes, WIRED, and the BBC among others. In past lives, Burr was a postdoc at Carnegie Mellon and earned a PhD from UW-Madison. He currently lives in Pittsburgh, where he gets around by bike and plays guitar in the pop band delicious pastries.

Improving Language Learning and Assessment with Data

As scalable learning technologies become more ubiquitous, student data can and should be analyzed to develop new instructional technologies, such as personalized practice schedules and data-driven assessments. I will describe a few projects at Duolingo, the world's largest language education platform with more than 200 million students, where we combine learner data with machine learning, computational linguistics, and psychometrics to improve learning, testing, and engagement outcomes.
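
As one concrete illustration of the "personalized practice schedules" mentioned above, the sketch below implements an exponential forgetting curve in the spirit of the half-life regression model Duolingo researchers have published for practice scheduling; the feature set and weights here are invented for illustration and are not Duolingo's production model.

```python
import numpy as np

def predicted_half_life(weights, features):
    """Predicted memory half-life (in days) as 2^(w . x),
    so each unit of feature evidence doubles or halves retention."""
    return 2.0 ** np.dot(weights, features)

def recall_probability(days_since_practice, half_life):
    """Exponential forgetting curve: p = 2^(-t / h)."""
    return 2.0 ** (-days_since_practice / half_life)

# Hypothetical features for one word: [bias, times_seen, times_correct, times_wrong]
weights = np.array([0.5, 0.1, 0.3, -0.4])   # invented values
features = np.array([1.0, 6.0, 5.0, 1.0])

h = predicted_half_life(weights, features)
for t in [1, 3, 7, 14]:
    print(f"day {t:2d}: p(recall) = {recall_probability(t, h):.2f}")
```

A scheduler built on such a model would queue for review the words whose predicted recall probability has dropped below some threshold.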


Eric-Jan Wagenmakers

Eric-Jan Wagenmakers, University of Amsterdam, Amsterdam, Netherlands

Eric-Jan Wagenmakers is a mathematical psychologist and a dedicated Bayesian. He works in the Psychological Methods unit at the University of Amsterdam, where he coordinates the team that develops JASP, an open-source software program for statistical analyses (www.jasp-stats.org). Wagenmakers is also a strong advocate of open science and the preregistration of analysis plans. For more information, see ejwagenmakers.com.

Bayesian Multi-Model Inference for Practical and Impractical Problems

Whenever a set of models is applied to data, uncertainty surrounds both the selection of the best model and the estimation of the model parameters. The coherent Bayesian answer to the model selection question is to avoid all-or-none selection altogether and instead retain model uncertainty, employing it for purposes such as prediction and parameter estimation. Advantages of this multi-model approach include a reduction of overconfidence, improved predictive performance, and an increased robustness against model misspecification. Moreover, the multi-model framework can be seamlessly integrated with the recent open-science ideals of multi-team inference and multiverse analyses. We debunk several philosophical objections to Bayesian multi-model inference and demonstrate its practical use for problems ranging from the simple to the complex.
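
To make the model-averaging idea concrete, here is a minimal Python sketch: competing polynomial regressions are fit to simulated data, their BIC values are converted into approximate posterior model probabilities (assuming equal prior model probabilities), and predictions are averaged by those weights rather than taken from a single "winning" model. This is a toy stand-in, using the BIC approximation to the marginal likelihood instead of the fully Bayesian machinery the talk addresses.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-2, 2, 40)
y = 1.0 + 0.5 * x + rng.normal(scale=0.5, size=x.size)  # simulated data

def fit_polynomial(degree):
    """Least-squares fit; returns predictions and the Gaussian log-likelihood."""
    coefs = np.polyfit(x, y, degree)
    pred = np.polyval(coefs, x)
    resid = y - pred
    sigma2 = resid @ resid / x.size
    loglik = -0.5 * x.size * (np.log(2 * np.pi * sigma2) + 1)
    return pred, loglik, degree + 2  # parameters: coefficients + error variance

models = [fit_polynomial(d) for d in (1, 2, 3)]
bics = np.array([k * np.log(x.size) - 2 * ll for _, ll, k in models])

# BIC -> approximate posterior model probabilities (equal prior odds)
weights = np.exp(-0.5 * (bics - bics.min()))
weights /= weights.sum()

# Model-averaged prediction: a weighted mix rather than winner-takes-all
averaged = sum(w * pred for w, (pred, _, _) in zip(weights, models))
print("posterior model probabilities:", np.round(weights, 3))
```

The averaged prediction inherits contributions from every plausible model, which is the source of the robustness and reduced overconfidence described in the abstract.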


Dylan Molenaar

Dylan Molenaar, University of Amsterdam, Amsterdam, Netherlands
Early Career Award

Dylan Molenaar is an assistant professor at the University of Amsterdam, The Netherlands. He obtained his PhD in 2012 from the same university. His dissertation, entitled “Testing distributional assumptions in psychometric measurement models” and supervised by Conor Dolan, received the Dissertation Prize from the Psychometric Society in 2013. In 2014, he was a visiting scholar at the Ohio State University. His current research is funded by a personal grant from the Netherlands Organization for Scientific Research and focuses on psychometric models for responses and response times. Other research interests include factor analysis and item response theory in general, and the modeling of intelligence and personality test data.

Beyond Simple Main Effects: Challenges to the Substantive Interpretation of Higher-Order Statistical Effects

In traditional psychometric models like the linear factor model and the two-parameter logistic item response theory model, the simple main effects of items and subjects can be inferred from the first two moments of the matrix of observed scores. The resulting parameters are statistically useful for various practical issues, including item calibration, test equating, and adaptive testing, and for various substantive issues, like establishing group differences in IQ and personality. However, some substantive hypotheses predict statistical effects that go beyond the first two moments of the data. Examples of these higher-order effects include non-linear effects, non-normal effects, heteroscedastic effects, and mixtures of different effects. Studying such phenomena is substantively interesting but statistically challenging, as the distributional assumptions underlying common psychometric models may, when violated, distort the modeling results. In this presentation, the challenges of studying higher-order statistical effects are illustrated using cases from the fields of intelligence, personality, and response time research.
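
As a concrete illustration of one of the higher-order effects discussed above, the sketch below simulates a linear factor model with constant residual variance alongside a heteroscedastic variant in which the residual standard deviation is log-linear in the latent trait (one common way to parameterize trait-dependent error variance). All parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000
theta = rng.normal(size=n)            # latent trait
nu, lam = 0.0, 0.8                    # intercept and factor loading

# Linear factor model: residual variance is constant across the trait range
x_homo = nu + lam * theta + rng.normal(scale=0.6, size=n)

# Heteroscedastic variant: residual SD is log-linear in theta,
# e.g., measurement gets noisier at higher trait levels
beta0, beta1 = np.log(0.6), 0.25
x_hetero = nu + lam * theta + rng.normal(scale=np.exp(beta0 + beta1 * theta), size=n)

# Comparing residual spread in low vs. high trait groups reveals the difference
for label, x in [("homoscedastic", x_homo), ("heteroscedastic", x_hetero)]:
    resid = x - (nu + lam * theta)
    low, high = resid[theta < -1].std(), resid[theta > 1].std()
    print(f"{label:>16}: SD(resid | low theta) = {low:.2f}, "
          f"SD(resid | high theta) = {high:.2f}")
```

Fitting the standard homoscedastic model to data like the second sample would leave this trait-dependent spread unmodeled, which is the kind of assumption violation the abstract refers to.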