Keynotes - Table of Contents
Arie van Deursen
Software Engineering without Borders

DevOps approaches software engineering by advocating the removal of borders between development and operations. It emphasizes operational resilience, continuous feedback from operations back to development, and rapid deployment of newly developed features. In this talk we will look at selected (automation) aspects of DevOps, based on our collaborations with various industrial partners. For example, we will explore (automated) methods for analyzing log data to support deployments and monitor REST API integrations, (search-based) test input generation for reproducing crashes and testing complex database queries, and zero-downtime database schema evolution and deployment. We will close by looking beyond the border between development and operations, to see whether there are other borders we need to remove in order to strengthen the impact of software engineering research.
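As a flavor of the kind of log analysis mentioned above, here is a minimal, hypothetical sketch, not the methods from the talk: it parses simplified access-log lines and computes a server-error rate per REST endpoint. The log format and function names are invented for illustration.

```python
import re
from collections import Counter

# Hypothetical, simplified access-log format: "METHOD /path STATUS".
# Real deployment logs are richer; this is only an illustration.
LOG_PATTERN = re.compile(r'(?P<method>GET|POST|PUT|DELETE) (?P<path>\S+) (?P<status>\d{3})')

def error_rates(log_lines):
    """Return the fraction of 5xx responses per (method, path) endpoint."""
    totals, errors = Counter(), Counter()
    for line in log_lines:
        m = LOG_PATTERN.search(line)
        if not m:
            continue  # skip lines that do not match the expected format
        endpoint = (m.group('method'), m.group('path'))
        totals[endpoint] += 1
        if m.group('status').startswith('5'):  # server error
            errors[endpoint] += 1
    return {ep: errors[ep] / n for ep, n in totals.items()}

logs = [
    "GET /api/orders 200",
    "GET /api/orders 500",
    "POST /api/users 201",
]
rates = error_rates(logs)
```

A monitoring pipeline could alert when such a rate spikes after a deployment, closing the feedback loop from operations back to development.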

The keynote will be based on joint work with former and current master's and PhD students from Delft University of Technology, and with co-workers in industry and academia.


Arie van Deursen is professor of software engineering at Delft University of Technology, The Netherlands, where he heads the Software Engineering Research Group (SERG) and chairs the Department of Software Technology. His research interests include empirical software engineering, software testing, and software architecture. He aims to conduct research that impacts software engineering practice, and has co-founded two spin-off companies based on earlier research. He serves on the editorial boards of Empirical Software Engineering, ACM Transactions on Software Engineering and Methodology, and the open-access journal PeerJ Computer Science, and is program co-chair of ESEC/FSE 2017, the joint meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering.

Jiawei Han
Mining Structures from Massive Text Data: Will It Help Software Engineering?

Real-world big data is largely unstructured, interconnected text. One of the grand challenges is to turn such massive text data into structured data and actionable knowledge. We propose a text mining approach that requires only distant or minimal supervision but relies on massive text data. We show that quality phrases can be mined from such data, that types can be extracted with distant supervision, and that entities, attributes, and values can be discovered by meta-path-directed pattern discovery. We show that text-rich and structure-rich networks can be constructed from massive unstructured data. Finally, we speculate whether such a paradigm could be useful for turning massive software repositories into multi-dimensional structures that help in searching and mining software repositories.
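To give a flavor of phrase mining, here is a deliberately crude sketch, not the speaker's method (which relies on statistical significance and phrase rectification rather than raw counts): it collects frequent adjacent word pairs from a corpus as candidate phrases.

```python
from collections import Counter

def candidate_phrases(docs, min_count=2):
    """Count adjacent word pairs across documents; pairs occurring at
    least min_count times become (very rough) phrase candidates."""
    counts = Counter()
    for doc in docs:
        words = doc.lower().split()
        counts.update(zip(words, words[1:]))  # adjacent bigrams
    return [" ".join(pair) for pair, c in counts.items() if c >= min_count]

# Toy corpus, invented for illustration.
docs = [
    "support vector machine training",
    "a support vector machine classifier",
    "training a support vector machine",
]
phrases = candidate_phrases(docs)
```

Frequency alone over-generates (common function-word pairs also pass the threshold), which is exactly why serious phrase mining adds statistical filtering on top of counting.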


Jiawei Han is Abel Bliss Professor in the Department of Computer Science, University of Illinois at Urbana-Champaign. He was a professor in the School of Computing Science, SFU, from 1987 to 2001. His research covers data mining, information network analysis, database systems, and data warehousing, with over 800 journal and conference publications. He has chaired or served on the program committees of most major international data mining and database conferences. He also served as the founding Editor-in-Chief of ACM Transactions on Knowledge Discovery from Data and as Director of the Information Network Academic Research Center supported by the U.S. Army Research Lab (2009-2016), and has been co-Director of KnowEnG, an NIH-funded Center of Excellence in Big Data Computing, since 2014. He is a Fellow of the ACM and of the IEEE, and received the 2004 ACM SIGKDD Innovations Award, the 2005 IEEE Computer Society Technical Achievement Award, and the 2009 W. Wallace McDowell Award from the IEEE Computer Society. His co-authored book "Data Mining: Concepts and Techniques" is widely used as a textbook worldwide.

Gerard Holzmann
Cobra - an Interactive Static Code Analyzer

Sadly, we know that virtually all software of any significance has residual errors. Some of these errors can be traced back to requirements flaws or faulty design assumptions; others are plain coding mistakes.

Static analyzers have become quite good at spotting these types of errors, but they do not scale very well. If, for instance, you need to check a code base of a few million lines, you had better be prepared to wait for the results, sometimes for hours.

Eyeballing a large code base to find flaws is clearly not an option, so what is missing is a static analysis capability that can answer common types of queries interactively, even for large code bases. In this talk I will describe the design and use of such a tool.
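To illustrate the idea of interactive structural queries over code (this is a toy sketch, not Cobra's actual query language), the following matches a token-level pattern over C-like source: an assignment inside an if condition, a classic coding mistake.

```python
import re

def tokenize(source):
    """Very rough C-style tokenizer: identifiers, numbers, and single
    punctuation characters (note: '==' splits into two '=' tokens)."""
    return re.findall(r'[A-Za-z_]\w*|\d+|[^\s\w]', source)

def find_pattern(tokens, pattern):
    """Return start indices where the token pattern matches;
    '*' in the pattern matches any single token."""
    hits = []
    for i in range(len(tokens) - len(pattern) + 1):
        window = tokens[i:i + len(pattern)]
        if all(p == '*' or p == t for p, t in zip(pattern, window)):
            hits.append(i)
    return hits

# Invented example source; only the first 'if' contains the bug.
src = "if (x = 0) { reset(); } if (x == 0) { run(); }"
tokens = tokenize(src)
# Query: if ( <any> = <any> ) -- assignment where a comparison was meant.
hits = find_pattern(tokens, ['if', '(', '*', '=', '*', ')'])
```

A query engine over tokens (or a lightweight parse) can answer such questions in seconds because it avoids the full semantic analysis that makes whole-program analyzers slow.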


Gerard Holzmann got his PhD from Delft University of Technology in The Netherlands in the deep dark days before there were PCs, iPhones, or even an internet. He joined Bell Labs in Murray Hill, New Jersey, to help fix some of these things, but others beat him to it. At Bell Labs he developed one of the first digital darkroom programs, and early versions of software analysis tools like Spin. After twenty-some years at Bell Labs he joined NASA/JPL to start a Laboratory for Reliable Software (LaRS). For unclear reasons he was later made a JPL Fellow and an ACM Fellow, and was elected to the National Academy of Engineering. He left JPL in January 2017 to start a new research and consulting company called Nimble Research.