MEDS: Methodology for Evaluation in Design Science
Richard Baskerville et al.
Abstract
Design Science Research (DSR) is a paradigm that centres on the development and evaluation of artefacts. Yet, while evaluation is central, a proper scientific evaluation can significantly increase the work required to complete a DSR project. The high costs, workload, and delays caused by evaluations have been shown to lead to poor or absent evaluations. These findings present two paradoxes: (1) evaluation is considered essential, yet it is commonly done poorly or not at all; (2) DSR promises to improve research relevance through timely artefact development and reliability through scientific evaluation, yet poor and lengthy evaluations often cause DSR to fail to deliver on this promise. To address these paradoxes, this paper presents MEDS (Methodology for Evaluation in Design Science), a four-step evaluation method that includes three component methods new to the evaluation guidance literature: MoSCoW (Must, Should, Could, Won't), the DSR Evaluation Selection Framework, and Short-Scoping of evaluation methods. MEDS aims to guide DSR researchers, especially novices, towards a programme of effective and resource-efficient DSR evaluations as an ongoing component of existing DSR methodologies. By carefully planning, orchestrating, and scoping evaluation activities, MEDS delivers a rigorous evaluation programme without overwhelming researchers with an excessive evaluation burden.