My research deals with Human Systems Integration and the modelling of systems in which humans, as users or operators, affect system performance. I am particularly interested in aspects of decision making and decision support.
In my research I try to understand how people use and respond to information from technological systems, and how this use changes over time as they gain experience with a system. I conduct much of my research in collaboration with researchers and students from Israel and from around the world. My academic training is originally in psychology, but I am very interested in quantitative models from engineering and economics and try to use them in my work. My recent research also deals with more general issues, related to the legal and ethical aspects of systems. For instance, I study the responsibility humans bear when operating highly automated systems (and we developed a model to quantify this responsibility). Publications on these topics appear in my list of publications.
Quantitative models of human decisions and performance
Although my background is in psychology, in my research I try to develop quantitative models, resembling those used in engineering, that predict user decisions and performance from properties of the system, the user, and the usage situation. These models should eventually serve as design tools that allow us to develop better systems with less trial and error. Developing these models also furthers our understanding of the cognitive processes involved in using the systems.
Human responsibility when using advanced automation
We developed a model to quantify the responsibility a human has when interacting with or using a system with advanced automation (Douer & Meyer, 2020, 2021). We show that as automation improves, human responsibility diminishes. This needs to be considered when designing systems or evaluating events that involve automation.
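The intuition behind this result can be illustrated with a toy information-theoretic sketch. This is not the published ResQu model of Douer & Meyer, only an illustration in a similar spirit: the human's "responsibility share" is taken here as the fraction of the information about the true state that the human uniquely contributes beyond the automation, and all parameters (automation accuracy q, human accuracy h) are illustrative assumptions.

```python
import itertools
import math

def entropy(dist):
    """Shannon entropy (bits) of a distribution given as {outcome: probability}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def joint(q, h):
    """Joint distribution over (state S, automation output A, human cue Hm).

    S is uniform over {0, 1}; A matches S with probability q; Hm matches S
    with probability h; A and Hm are conditionally independent given S.
    """
    d = {}
    for s, a, hm in itertools.product([0, 1], repeat=3):
        pa = q if a == s else 1 - q
        ph = h if hm == s else 1 - h
        d[(s, a, hm)] = 0.5 * pa * ph
    return d

def human_share(q, h=0.7):
    """Fraction of the total information about S contributed uniquely by the human:
    I(Hm; S | A) / I((A, Hm); S), computed by exhaustive enumeration."""
    d = joint(q, h)

    def marg(keys):
        # Marginalize the joint onto the variables named in `keys` ('s', 'a', 'h').
        m = {}
        for (s, a, hm), p in d.items():
            k = tuple({'s': s, 'a': a, 'h': hm}[x] for x in keys)
            m[k] = m.get(k, 0.0) + p
        return m

    # I((A,Hm); S) = H(S) + H(A,Hm) - H(S,A,Hm)
    total = entropy(marg('s')) + entropy(marg('ah')) - entropy(marg('sah'))
    # I(Hm; S | A) = H(Hm,A) + H(S,A) - H(A) - H(S,A,Hm)
    unique = (entropy(marg('ha')) + entropy(marg('sa'))
              - entropy(marg('a')) - entropy(marg('sah')))
    return unique / total

# As automation accuracy q grows, the human's unique share shrinks.
print(human_share(0.6), human_share(0.95))
```

With these toy numbers, the human's share drops sharply as the automation's accuracy rises from 0.6 toward 0.95, mirroring the qualitative claim that better automation leaves the human with less responsibility.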
Responses to information from decision aids and warning systems
We develop models and conduct laboratory experiments and field studies on the use of information from decision support and warning systems. We study the factors that affect responses to warnings and the reasons people choose to ignore them. The ultimate purpose of this line of research is to develop and validate a model that predicts how people respond to a warning system, given the warning characteristics, the circumstances in which it is used, and the characteristics of the user.
One major outcome of this line of research was the distinction between reliance and compliance as two different responses to a warning indicator, an alarm or an alert (see Meyer, 2004, Human Factors, for an initial description of this distinction and Vashitz et al., 2009, Journal of Biomedical Informatics, for a conceptual and methodological extension).
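The distinction can be made concrete: compliance is the tendency to act when the warning sounds, while reliance is the tendency to refrain from acting when it stays silent, so the two are estimated from disjoint subsets of trials. A minimal sketch (the function name and trial data are illustrative, not from the cited papers):

```python
def compliance_reliance(trials):
    """Estimate compliance and reliance from (alarm_sounded, user_responded) pairs.

    compliance: P(respond | alarm on) -- estimated from alarm trials only.
    reliance:   P(no response | alarm off) -- estimated from quiet trials only.
    """
    alarm_trials = [responded for alarm, responded in trials if alarm]
    quiet_trials = [responded for alarm, responded in trials if not alarm]
    compliance = sum(alarm_trials) / len(alarm_trials)
    reliance = 1 - sum(quiet_trials) / len(quiet_trials)
    return compliance, reliance

# Illustrative trial log: (alarm_sounded, user_responded)
trials = [(True, True), (True, True), (True, False), (True, True),
          (False, False), (False, False), (False, True), (False, False)]
c, r = compliance_reliance(trials)
print(c, r)  # 0.75 0.75
```

Because the two measures come from different trials, a user can be highly compliant yet show little reliance (or vice versa), which is why treating them as a single "trust in the warning system" score loses information.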
Interaction with automation, adaptive systems and personalization
One of my main research foci in recent years has been the interaction with automation and with systems that have some degree of autonomy. We tied this topic to the issue of adaptivity and personalization, arguing that a system that adjusts its functioning to the individual user is actually a special case of automation (with the system automatically changing itself to suit the user's needs or preferences).
Security and privacy in computer systems
The security of computer systems (including devices such as smartphones) is becoming increasingly important. In recent years there has been growing awareness that user behavior is a crucial determinant (and often the weakest link) in securing systems. We study security-related behavior in both laboratory settings and surveys of actual users, aiming to develop models that predict the effect of different user and system properties on the user's tendency to use security features.
Information visualization and visual analytics
This line of research deals with the use of information visualization in exploring data and interpreting analytics results. The topic is an important part of the study of the human side of data science and machine learning, as it affects the interpretation of the results of the analytics process and decisions regarding the next steps in the process.