Tue 6 Nov 2018 08:30 - 10:00 at Horizons 6-9F - Keynote I Chair(s): Corina S. Păsăreanu

In many areas, such as image recognition, natural language processing, search, recommendation, autonomous cars, systems software and infrastructure, and even Software Engineering tools themselves, Software 2.0 (= programming using learned models) is quickly swallowing Software 1.0 (= programming using handcrafted algorithms). Where the Software 1.0 Engineer formally specifies their problem, carefully designs algorithms, composes systems out of subsystems or decomposes complex systems into smaller components, the Software 2.0 Engineer amasses training data and simply feeds it into an ML algorithm that will synthesize an approximation of the function whose partial extensional definition is that training data. In Software 2.0 the artifact of interest is not code but data: compiling source code is replaced by training models on data. This new style of programming has far-reaching consequences for traditional software engineering practices. Everything we have learned about life cycle models, project planning and estimation, requirements analysis, program design, construction, debugging, testing, maintenance and implementation, … runs the danger of becoming obsolete.
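To make the idea of "a function whose partial extensional definition is the training data" concrete, here is a minimal sketch in plain Python, with hypothetical data: the data set only lists some input/output pairs of an unknown function, and "programming" consists of fitting the parameters of a model family to those pairs.

```python
# Partial extensional definition of an unknown function
# (here secretly y = 2x + 1; hypothetical toy data).
training_data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]

# Parametric model family: f(x) = w*x + b. "Training" searches this family
# for the member that best matches the listed pairs.
w, b = 0.0, 0.0
lr = 0.01
for _ in range(5000):
    for x, y in training_data:
        err = (w * x + b) - y   # prediction error on one example
        w -= lr * err * x       # gradient step for w
        b -= lr * err           # gradient step for b

# The learned parameters approximate the function the data only partially
# defines: w is close to 2.0 and b is close to 1.0.
print(round(w, 2), round(b, 2))
```

The same picture scales up: replace the two scalar parameters with the millions of weights of a neural network, and the hand-written gradient with automatic differentiation.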

One way to try to prepare for the new realities of software engineering is not to zero in on the differences between Software 1.0 and Software 2.0 but instead to focus on their similarities. If you look carefully at what a neural net actually represents, you realize that in essence it is a pure function, from multi-dimensional arrays of floating point numbers to multi-dimensional arrays of floating point numbers (tensors). What is special about these functions is that they are differentiable (yes, exactly as you remember from middle school calculus), which allows them to be trained using backpropagation. The programming language community has also discovered that there is a deep connection between backpropagation and continuations. Moreover, when you look closely at how Software 2.0 Engineers construct complex neural nets like CNNs, RNNs, LSTMs, … you recognize they are (implicitly) using higher-order combinators like map, fold, zip, scan, recursion, conditionals, function composition, … to compose complex neural network architectures out of simple building blocks. Constructing neural networks using pure and higher-order differentiable functions and training them using reverse-mode automatic differentiation is unsurprisingly called Differentiable Programming. This talk will illustrate the deep programming language principles behind Differentiable Programming, which will hopefully inspire the working Software 1.0 engineer to pay serious attention to the threats and opportunities of Software 2.0.
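The two ideas above can be sketched together in a few dozen lines of plain Python (this is an illustrative toy, not any particular framework's API): a network is a pure function built with ordinary higher-order combinators (zip, map, function composition), and reverse-mode automatic differentiation is a backwards sweep over a tape of recorded operations.

```python
import math

tape = []  # each entry: (output_var, [(input_var, local_gradient), ...])

class Var:
    """A scalar value that records every operation applied to it."""
    def __init__(self, value):
        self.value, self.grad = value, 0.0

    def __mul__(self, other):
        out = Var(self.value * other.value)
        tape.append((out, [(self, other.value), (other, self.value)]))
        return out

    def __add__(self, other):
        out = Var(self.value + other.value)
        tape.append((out, [(self, 1.0), (other, 1.0)]))
        return out

def tanh(v):
    out = Var(math.tanh(v.value))
    tape.append((out, [(v, 1.0 - out.value ** 2)]))  # local derivative of tanh
    return out

def backward(out):
    # Reverse-mode AD: sweep the tape backwards, accumulating adjoints.
    out.grad = 1.0
    for node, parents in reversed(tape):
        for parent, local in parents:
            parent.grad += node.grad * local

def sum_vars(vs):
    acc = vs[0]
    for v in vs[1:]:
        acc = acc + v
    return acc

# A dense layer is just zip/map/sum over weights and inputs; a deep net is
# ordinary function composition of such layers (hypothetical toy dimensions).
def dense(weights):
    return lambda xs: [tanh(sum_vars([w * x for w, x in zip(row, xs)]))
                       for row in weights]

def compose(f, g):
    return lambda x: f(g(x))

w1 = [[Var(0.5), Var(-0.3)], [Var(0.8), Var(0.2)]]
w2 = [[Var(1.0), Var(-1.0)]]
net = compose(dense(w2), dense(w1))   # a two-layer net as a pure function

y = net([Var(1.0), Var(2.0)])[0]      # forward pass records the tape
backward(y)                           # reverse pass fills in all .grad fields
print(round(w1[0][0].grad, 4))        # d(output)/d(weight) via reverse mode
```

Note that nothing here is specific to neural networks: any pure function assembled from these recorded primitives gets its gradients for free, which is exactly the point of Differentiable Programming.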

Erik Meijer received his Ph.D. from Nijmegen University in 1992. He was an associate professor at Utrecht University. He then became a software architect at Microsoft, where he headed the Cloud Programmability Team from 2000 to 2013, and subsequently founded Applied Duality Inc. In 2011 Erik Meijer was appointed part-time professor of Cloud Programming within the Software Engineering Research Group at Delft University of Technology. He is also a member of the ACM Queue Editorial Board. In October 2015 Erik Meijer joined the Developer Infrastructure organization at Facebook, where he works on leveraging ML to make developers more productive.

Meijer’s research has included functional programming (particularly Haskell), parsing, programming language design, XML, and foreign function interfaces. His work at Microsoft included C#, Visual Basic, LINQ, Volta, and the reactive programming framework (Reactive Extensions) for .NET. In 2009 he received the Microsoft Outstanding Technical Leadership Award, and in 2007 the Outstanding Technical Achievement Award as a member of the C# team.
