Abstract
Usability evaluations provide software development teams with insights into the degree to which a software application enables users to achieve their goals, how fast those goals can be achieved, how easy the application is to learn, and how satisfying it is to use. Although usability evaluations are crucial in the process of developing software systems with a high level of usability, their use is still limited in the context of small software development companies.
Several approaches have been proposed to support software development practitioners (SWPs) in conducting usability evaluations, and my thesis explores two of these:
1) The first approach is to support SWPs by training them to drive usability evaluations.
2) The second approach supports SWPs through minimalist training of end users to drive usability evaluations.
In related work, a set of five quality criteria for usability evaluations is applied to measure the performance of usability evaluation efforts. These criteria cover thoroughness, validity, reliability, downstream utility, and cost effectiveness.
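For context, these criteria echo those proposed in the usability evaluation literature (Hartson, Andre, and Williges). The sketch below gives the common formalization of the first two criteria under the assumption that the thesis follows that literature; the abstract itself does not define them formally.

```latex
% A minimal sketch of the common formalization of thoroughness and
% validity from the usability evaluation literature (an assumption;
% these definitions are not stated in the abstract).
% P_found    : real usability problems detected by the evaluation
% P_real     : real usability problems present in the application
% P_reported : all problems reported, including false positives
\[
  \mathit{thoroughness} = \frac{|P_{\mathit{found}}|}{|P_{\mathit{real}}|},
  \qquad
  \mathit{validity} = \frac{|P_{\mathit{found}}|}{|P_{\mathit{reported}}|}
\]
```

Under this reading, thoroughness rewards finding a large share of the problems that actually exist, while validity penalizes reporting issues that turn out not to be real problems.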
This leads to my overall research question: Can we provide support that enables software development practitioners and users to drive usability evaluations, and how do they perform with respect to the quality criteria?
I studied the developer-driven and user-driven approaches by first conducting literature surveys related to each of these topics, then conducting research in artificial settings, and finally conducting research in natural settings. The four primary findings from my studies are: 1) The developer-driven approach shows a high level of thoroughness and downstream utility. 2) The user-driven approach performs better with respect to validity. 3) The level of reliability is comparable between the two approaches. 4) The user-driven approach will, arguably, outperform the developer-driven approach in terms of cost effectiveness.
| Original language | English |
| --- | --- |
| Print ISBNs | 1601-0590 |
| Publication status | Published - Jan 2013 |