TY - JOUR
T1 - Assessing the performance of short multi-item questionnaires in aesthetic evaluation of websites
AU - Papachristos, Eleftherios
PY - 2019
Y1 - 2019
N2 - In recent years, website aesthetics has received a fair amount of attention from the HCI community. This has led to the creation of a variety of multi-item questionnaires aimed at capturing users’ aesthetic judgments. Researchers have used these questionnaires in several HCI studies to investigate the relationship between aesthetics and other evaluative constructs such as usability. However, their usefulness as evaluation tools in visual design practice remains underexplored. Lengthy multi-item questionnaires can be particularly problematic in studies where participants must evaluate multiple designs or give responses repeatedly at predefined time intervals. Although single-item scales have drawn criticism, they have been used in many past studies in which questionnaire length could be problematic. Another alternative available to practitioners and researchers is short versions of standardised multi-item questionnaires created for the aesthetic evaluation of websites. In this paper, we present a study comparing the performance of three such condensed aesthetic questionnaires (i.e. the aesthetics scale, AttrakDiff, and VisAWI) during a website redesign project. The short versions of these questionnaires were used by 187 users in an evaluation of 7 alternative website designs. The questionnaires were compared on performance criteria such as reliability, validity, and predictive ability. Data analysis showed that although AttrakDiff’s overall performance was better, a considerable amount of variance in aesthetic judgment could not be accounted for by any of the questionnaires.
AB - In recent years, website aesthetics has received a fair amount of attention from the HCI community. This has led to the creation of a variety of multi-item questionnaires aimed at capturing users’ aesthetic judgments. Researchers have used these questionnaires in several HCI studies to investigate the relationship between aesthetics and other evaluative constructs such as usability. However, their usefulness as evaluation tools in visual design practice remains underexplored. Lengthy multi-item questionnaires can be particularly problematic in studies where participants must evaluate multiple designs or give responses repeatedly at predefined time intervals. Although single-item scales have drawn criticism, they have been used in many past studies in which questionnaire length could be problematic. Another alternative available to practitioners and researchers is short versions of standardised multi-item questionnaires created for the aesthetic evaluation of websites. In this paper, we present a study comparing the performance of three such condensed aesthetic questionnaires (i.e. the aesthetics scale, AttrakDiff, and VisAWI) during a website redesign project. The short versions of these questionnaires were used by 187 users in an evaluation of 7 alternative website designs. The questionnaires were compared on performance criteria such as reliability, validity, and predictive ability. Data analysis showed that although AttrakDiff’s overall performance was better, a considerable amount of variance in aesthetic judgment could not be accounted for by any of the questionnaires.
KW - aesthetic evaluation
KW - Aesthetic questionnaires
KW - design evaluation
KW - website aesthetics
UR - http://www.scopus.com/inward/record.url?scp=85055646546&partnerID=8YFLogxK
U2 - 10.1080/0144929X.2018.1539521
DO - 10.1080/0144929X.2018.1539521
M3 - Journal article
AN - SCOPUS:85055646546
SN - 0144-929X
VL - 38
SP - 469
EP - 485
JO - Behaviour & Information Technology
JF - Behaviour & Information Technology
IS - 5
ER -