Direct Integration: Training Software Developers to Conduct Usability Evaluations

Research output: Contribution to journal › Conference article in Journal › Research › peer-review

Abstract

Many improvements to the interplay between usability evaluation and software development rely either on better methods for conducting usability evaluations or on better formats for presenting evaluation results in ways that are useful for software designers and developers. Both approaches involve a complete division of work between developers and evaluators, which is an undesirable complexity for many software development projects. This paper takes a different approach by exploring to what extent software developers and designers can be trained to carry out their own usability evaluations. The paper is based on an empirical study in which 36 teams, with a total of 234 first-year university students in software development and design programmes, were trained in a simple approach to user-based website usability testing, taught in a 40-hour course. This approach supported them in planning, conducting, and interpreting the results of a usability evaluation of an interactive website. They gained good competence in conducting the evaluation, defining task assignments, and producing a usability report, while they were less successful in acquiring skills for identifying and describing usability problems.
Original language: English
Journal: CEUR Workshop Proceedings
Pages (from-to): 74-81
ISSN: 1613-0073
Publication status: Published - 2008
Event: 1st Workshop on the Interplay between Usability Evaluation and Software Development (I-USED 2008) - Pisa, Italy
Duration: 24 Sep 2008 → 24 Sep 2008
Conference number: 1

Conference

Conference: 1st Workshop on the Interplay between Usability Evaluation and Software Development (I-USED 2008)
Number: 1
Country: Italy
City: Pisa
Period: 24/09/2008 → 24/09/2008

Fingerprint

Software engineering
Websites
Software design
Education
Students
Planning
Testing

Cite this

@inproceedings{c892bf80e93211ddb0a4000ea68e967b,
title = "Direct Integration: Training Software Developers to Conduct Usability Evaluations",
abstract = "Many improvements to the interplay between usability evaluation and software development rely either on better methods for conducting usability evaluations or on better formats for presenting evaluation results in ways that are useful for software designers and developers. Both approaches involve a complete division of work between developers and evaluators, which is an undesirable complexity for many software development projects. This paper takes a different approach by exploring to what extent software developers and designers can be trained to carry out their own usability evaluations. The paper is based on an empirical study in which 36 teams, with a total of 234 first-year university students in software development and design programmes, were trained in a simple approach to user-based website usability testing, taught in a 40-hour course. This approach supported them in planning, conducting, and interpreting the results of a usability evaluation of an interactive website. They gained good competence in conducting the evaluation, defining task assignments, and producing a usability report, while they were less successful in acquiring skills for identifying and describing usability problems.",
author = "Skov, {Mikael B.} and Jan Stage",
note = "Volume: 407",
year = "2008",
language = "English",
pages = "74--81",
journal = "CEUR Workshop Proceedings",
issn = "1613-0073",
publisher = "CEUR Workshop Proceedings",
}
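As a usage note, here is a minimal sketch of how the BibTeX entry above could be cited from a LaTeX document. The citation key c892bf80e93211ddb0a4000ea68e967b is taken from the entry itself; the file name references.bib and the plain bibliography style are assumptions for illustration only.

\documentclass{article}
\begin{document}
% Cite the entry by the key from the BibTeX record above
Skov and Stage trained developers to run their own usability
evaluations~\cite{c892bf80e93211ddb0a4000ea68e967b}.

\bibliographystyle{plain}
% Assumes the BibTeX entry above was saved to references.bib
\bibliography{references}
\end{document}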

Direct Integration: Training Software Developers to Conduct Usability Evaluations. / Skov, Mikael B.; Stage, Jan.

In: CEUR Workshop Proceedings, 2008, p. 74-81.

Research output: Contribution to journal › Conference article in Journal › Research › peer-review

TY - GEN

T1 - Direct Integration: Training Software Developers to Conduct Usability Evaluations

AU - Skov, Mikael B.

AU - Stage, Jan

N1 - Volume: 407

PY - 2008

Y1 - 2008

N2 - Many improvements to the interplay between usability evaluation and software development rely either on better methods for conducting usability evaluations or on better formats for presenting evaluation results in ways that are useful for software designers and developers. Both approaches involve a complete division of work between developers and evaluators, which is an undesirable complexity for many software development projects. This paper takes a different approach by exploring to what extent software developers and designers can be trained to carry out their own usability evaluations. The paper is based on an empirical study in which 36 teams, with a total of 234 first-year university students in software development and design programmes, were trained in a simple approach to user-based website usability testing, taught in a 40-hour course. This approach supported them in planning, conducting, and interpreting the results of a usability evaluation of an interactive website. They gained good competence in conducting the evaluation, defining task assignments, and producing a usability report, while they were less successful in acquiring skills for identifying and describing usability problems.

AB - Many improvements to the interplay between usability evaluation and software development rely either on better methods for conducting usability evaluations or on better formats for presenting evaluation results in ways that are useful for software designers and developers. Both approaches involve a complete division of work between developers and evaluators, which is an undesirable complexity for many software development projects. This paper takes a different approach by exploring to what extent software developers and designers can be trained to carry out their own usability evaluations. The paper is based on an empirical study in which 36 teams, with a total of 234 first-year university students in software development and design programmes, were trained in a simple approach to user-based website usability testing, taught in a 40-hour course. This approach supported them in planning, conducting, and interpreting the results of a usability evaluation of an interactive website. They gained good competence in conducting the evaluation, defining task assignments, and producing a usability report, while they were less successful in acquiring skills for identifying and describing usability problems.

M3 - Conference article in Journal

SP - 74

EP - 81

JO - CEUR Workshop Proceedings

JF - CEUR Workshop Proceedings

SN - 1613-0073

ER -