Call for Papers


Motivation


For software engineering research results to have a significant impact on modern, industrial software development and on how it is taught, the implementation and encapsulation of these results in sufficiently mature, effective, and usable tools appears to be a necessity. The more effective, usable, and available these tools and their supporting documentation are, the easier it is for prospective users to determine the scope, benefits, and limitations of the results in relevant contexts, thus maximizing the potential for adoption and impact.

While the implementation and maintenance of such tools is not trivial and typically cannot be accomplished without significant resources, many recent technological advances (e.g., frameworks, libraries, and meta-tools such as language workbenches) and social developments (e.g., the increasing trend towards open source software and the sharing of expertise via question-and-answer websites such as stackoverflow.com) can provide substantial help. Similarly, the creation of effective supporting documentation is often facilitated by more modern formats such as screencasts and video tutorials [6], which can easily be disseminated via services such as YouTube or Facebook.

Many other communities have recognized the importance of tools and have, for example, created workshops specifically designed to facilitate the evaluation and comparison of tools. Examples (all held in 2016) include:

  • the Language Workbench Challenge [2],
  • the Transformation Tool Contest [3],
  • the SAT Competition [4], and
  • the VerifyThis Verification Competition [5].

While efforts have been made to compare modeling approaches (e.g., in the Comparing Modeling Approaches Workshop [1]), the modeling research community does not appear to pay as much attention as some other communities to leveraging tools effectively for illustrating, evaluating, and disseminating research results, and for making a convincing case for a more widespread adoption of modeling and MDE. More specifically,

  • there is evidence suggesting that the quality of documentation of many MDE tools is too low [7],
  • there is insufficient support for determining and comparing the strengths and weaknesses of MDE tools, their suitability for specific tasks, and opportunities for interoperation and reuse, and
  • few repeatable tool evaluations and comparisons exist that use appropriate, publicly accessible use cases and that have been carried out by independent third parties.


Objectives


The high-level goal of the workshop is to support the effective development, maintenance, dissemination, and use of high-quality MDE tools and supporting documentation material. To this end, the workshop has the following objectives:


  1. Facilitate the determination of the state of the art in MDE tools and comparative evaluations of existing tools by identifying comparison criteria, use cases, and evaluation procedures.
  2. Identify strengths and weaknesses of tools, together with opportunities for improvement, reuse, and ‘cross-fertilization’.
  3. Encourage and help tool developers to create, maintain, and disseminate tools and supporting materials that demonstrate the benefits of modeling convincingly and encourage the adoption of modeling in academia and industry.
  4. Collect best practices for the development, distribution, and maintenance of MDE tools and supporting materials such as tutorials and comparative evaluations.

The workshop welcomes regular paper submissions on these topics. However, to facilitate the comparison of tools, it also solicits submissions that demonstrate the use and utility of a specific MDE tool in the context of one of two challenge problems. The first is from the embedded, real-time systems domain and involves a ‘rover’, i.e., a small, Raspberry Pi-based vehicle with electric motors, sensors, wireless communication, and actuators such as cameras. The second challenge problem is concerned with the Internet of Things and targets the design of an ‘intelligent house’. More detailed descriptions of these two problems are available here.

Moreover, the workshop features a video tutorials track, which challenges the community to create appealing video tutorials that describe the use of a tool for a specific purpose in a way that is informative and accurate, but also effective and attractive. Links to sample tutorials and tools, as well as some advice in the form of best practices, are available here.

Topics of interest


The workshop defines an MDE tool to be a tool that supports the creation and/or use of models for some significant development task (e.g., creation, manipulation, transformation, evolution, communication, generation, execution, testing, simulation, or analysis) and that benefits the user through the effective use of some or all of the three core principles behind MDE: abstraction, automation, and analysis. Tools can come from industry or academia and may be freely available, open source, or commercial. MDETools’17 welcomes submissions on all aspects related to the development and use of such tools and their supporting materials.

MDETools’17 is particularly interested in the following topics:


  1. Convincing, insightful descriptions of the state of the art in MDE tooling, both in general and in the context of the workshop challenge problems (see below).
  2. Criteria and approaches for objective, repeatable tool evaluations and comparisons.
  3. Proposals for facilitating the creation, maintenance, and dissemination (possibly using gamification) of high-quality tools and materials.
  4. Techniques and tools for the creation of attractive documentation material, both in general and in the context of the video tutorial track (see below).

Intended audience 


The intended audience consists of all people interested in advancing MDE in industry or academia through the development of better MDE tools and supporting materials.


References:

  1. Workshop on Comparing Modeling Approaches, 16th International Conference on Model Driven Engineering Languages and Systems (MODELS’13), 2013, http://cserg0.site.uottawa.ca/cma2013models.
  2. Language Workbench Challenge Workshop, SPLASH 2016, http://2016.splashcon.org/track/lwc2016.
  3. Transformation Tool Contest, STAF 2016, http://www.transformation-tool-contest.eu.
  4. SAT Competition, 19th International Conference on Theory and Applications of Satisfiability Testing, 2016, http://www.satcompetition.org.
  5. VerifyThis Verification Competition, ETAPS 2016, http://etaps2016.verifythis.org.
  6. H. van der Meij, J. van der Meij. A comparison of paper-based and video tutorials for software learning. Computers & Education 78:150–159. 2014.
  7. N. Kahani, M. Bagherzadeh, J. Dingel, J.R. Cordy. The problems with Eclipse modeling tools: a topic analysis of Eclipse forums. 19th International Conference on Model Driven Engineering Languages and Systems (MODELS’16). 2016.