Modelling Learning of New Keyboard Layouts

In Proceedings of the ACM Conference on Human Factors in Computing Systems, CHI 2017.
How long does it take you to learn a new keyboard?
  • Going from novice to expert with a completely new keyboard can take as long as 50 hours.
  • Even after two hours of learning a new layout, visual search has a major impact on typing performance.
  • Even a modest deviation from Qwerty to a more efficient keyboard layout decreases typing speed by 2-7 WPM.
What the Model Does
  • The model can serve as a tool for anticipating users’ learning experience when practitioners tackle design problems such as:
    • Comparison: Which of the given layout alternatives has the lowest learning costs?
    • Immediate cost: What is the impact of layout changes on visual search performance immediately after the change?
  • The model can assist designers and decision-makers by predicting
    • how changes in layouts influence relearning times;
    • how large a layout change is possible under a given maximum acceptable relearning cost;
    • how one layout can be changed into another gradually.

Predicting how users learn new or changed interfaces is a long-standing objective in HCI research. This paper contributes to the understanding of visual search and learning in text entry. With the goal of explaining the variance in novices' typing performance that is attributable to visual search, a model was designed to predict how users learn to locate keys on a keyboard: initially relying on visual short-term memory, then transitioning to recall-based search. This allows predicting search times and visual search patterns for completely and partially new layouts. The model complements models of motor performance and learning in text entry by predicting how visual search patterns change over time. Practitioners can use it to estimate how long it takes to reach a desired level of performance with a given layout.
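The core idea — key search shifting from slow visual scanning to fast memory recall as practice accumulates — can be caricatured in a few lines. This is a toy sketch only, not the paper's Common Lisp model: the fixation cost, recall time, learning rate, and the saturating recall-probability curve are all invented illustration parameters.

```python
def expected_search_time(trials_practiced, n_keys=30,
                         t_fixation=0.25, t_recall=0.3, learning_rate=0.15):
    """Toy estimate of the time (in seconds) to locate one key.

    With practice, the probability of recalling the key's location grows
    (illustrative saturating curve, not the paper's memory equations);
    otherwise the key is found by visual search, which on average
    inspects half of the keys on the layout.
    """
    p_recall = 1.0 - 1.0 / (1.0 + learning_rate * trials_practiced)
    t_visual = t_fixation * n_keys / 2.0  # average scan over half the layout
    return p_recall * t_recall + (1.0 - p_recall) * t_visual

# Search time shrinks from the full-scan cost toward the pure-recall floor.
novice = expected_search_time(0)     # no practice: pure visual search
expert = expected_search_time(500)   # heavy practice: mostly recall
```

With these made-up parameters, a novice's expected search time is the average visual-scan cost (3.75 s), while after 500 practice trials it approaches the 0.3 s recall floor — a qualitative shape similar to the novice-to-expert transition the model captures.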


All model code and data are open for anyone to use.

  • Model
    • Model binaries (Linux, Mac, Windows) for easily producing simulated learning results.
    • Model source code in Common Lisp for closer inspection and modification of the model.
  • Data from 33 participants, who trained with a new layout for 2.5 hours (visually searching for cued keys).
    • Reaction time data, describing how long it took to find cued keys.
    • Eye tracking data, describing fixations while searching for the keys.
    • Keypress data from a final-session typing task, where the participants typed on a smartphone with the layout that they learned.
Please contact Jussi P.P. Jokinen to receive the data.

Press Releases
The paper was given a best paper award at the conference.

PDF, 0.7 MB
Jokinen, J. P. P., Sarcar, S., Oulasvirta, A., Silpasuwanchai, C., Wang, Z., & Ren, X. 2017. Modelling Learning of New Keyboard Layouts. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17).

@inproceedings{Jokinen2017ModellingLearning,
  author = {Jokinen, Jussi P P and Sarcar, Sayan and Oulasvirta, Antti and Silpasuwanchai, Chaklam and Wang, Zhenxin and Ren, Xiangshi},
  booktitle = {Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI '17)},
  title = {{Modelling Learning of New Keyboard Layouts}},
  year = {2017}
}
Frequently Asked Questions
Q: Does the model simulate learning only on keyboard layouts?
A: The model was designed to handle any 2D layout. However, it is currently best suited to grid layouts in which every element has the same shape, size, and colour.

Q: Can the model distinguish between keys of different colour, shape, or size?
A: Currently, no. However, there is a prototype that uses top-down information, such as colour and shape, to guide vision, and thus learns certain layouts faster.

Q: How does the model compare to existing production-system models, such as ACT-R and EPIC?
A: The model shares many computational aspects with other cognitive models, but it is not designed to be as general as cognitive architectures, such as ACT-R and EPIC. For more information, see the paper.

For questions and further information, please contact:

Jussi P.P. Jokinen

jussi.jokinen (at)

+358 45 196 1429

This work has received funding from the joint JST–AoF project "User Interface Design for the Ageing Population" (AoF grant 291556) as an activity of the FY2014 Strategic International Collaborative Research Program (SICORP), and from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement 637991).