Hoisl, Bernhard. 2014. Comparing Three Notations for Defining Scenario-based Model Tests: A Controlled Experiment. 9th International Conference on the Quality of Information and Communications Technology (QUATIC), Guimarães, Portugal, 23.09.–26.09.

BibTeX

@CONFERENCE{Hoisl2014,
title = {Comparing Three Notations for Defining Scenario-based Model Tests: A Controlled Experiment},
author = {Bernhard Hoisl},
year = {2014},
address = {Guimarães},
url = {http://2014.quatic.org/},
language = {EN},
booktitle = {9th International Conference on the Quality of Information and Communications Technology (QUATIC)},
abstract = {Scenarios are an established means to specify requirements for software systems. Scenario-based tests allow for validating software models against such requirements. In this paper, we consider three alternative notations to define such scenario tests on structural models: a semi-structured natural-language notation, a diagrammatic notation, and a fully-structured textual notation. In particular, we performed a study to understand how these three notations compare to each other with respect to accuracy and effort of comprehending scenario-test definitions, as well as with respect to the detection of errors in the models under test. 20 software professionals (software engineers, testers, researchers) participated in a controlled experiment based on six different comprehension and maintenance tasks. For each of these tasks, questions on a scenario-test definition and on a model under test had to be answered. In an ex-post questionnaire, the participants rated each notation on a number of dimensions (e.g., practicality or scalability). Our results show that the choice of a specific scenario-test notation can affect the productivity (in terms of correctness and time-effort) when testing software models for requirements conformance. In particular, the participants of our study spent comparatively less time and completed the tasks more accurately when using the natural-language notation compared to the other two notations. Moreover, the participants of our study explicitly expressed their preference for the natural-language notation.},
}

Abstract

Scenarios are an established means to specify requirements for software systems. Scenario-based tests allow for validating software models against such requirements. In this paper, we consider three alternative notations to define such scenario tests on structural models: a semi-structured natural-language notation, a diagrammatic notation, and a fully-structured textual notation. In particular, we performed a study to understand how these three notations compare to each other with respect to accuracy and effort of comprehending scenario-test definitions, as well as with respect to the detection of errors in the models under test. 20 software professionals (software engineers, testers, researchers) participated in a controlled experiment based on six different comprehension and maintenance tasks. For each of these tasks, questions on a scenario-test definition and on a model under test had to be answered. In an ex-post questionnaire, the participants rated each notation on a number of dimensions (e.g., practicality or scalability). Our results show that the choice of a specific scenario-test notation can affect the productivity (in terms of correctness and time-effort) when testing software models for requirements conformance. In particular, the participants of our study spent comparatively less time and completed the tasks more accurately when using the natural-language notation compared to the other two notations. Moreover, the participants of our study explicitly expressed their preference for the natural-language notation.

Publication profile

Status of publication: Published
Affiliation: WU
Type of publication: Paper presented at an academic conference or symposium
Language: English
Title: Comparing Three Notations for Defining Scenario-based Model Tests: A Controlled Experiment
Event: 9th International Conference on the Quality of Information and Communications Technology (QUATIC)
Year: 2014
Date: 23.09.–26.09.
Country: Portugal
Location: Guimarães
URL: http://2014.quatic.org/

Associations

Projects
Domain-Specific Languages for Model-Driven Security Engineering
People
Hoisl, Bernhard (Former researcher)
Organization
Information Systems and New Media (IN)
Research areas (ÖSTAT Classification 'Statistik Austria')
1108 Informatics
1140 Software engineering
1147 IT security
5367 Management information systems