In a great new piece in Wired, Jason Tanz announces the end of coding. That might be a bit tendentious, but it captures the point: with machine learning built on neural networks, computers effectively program themselves, and we humans are reduced to providing training data sets:

If in the old view programmers were like gods, authoring the laws that govern computer systems, now they’re like parents or dog trainers.

A neural network takes lots of input data and creates its own internal wiring, matching the input data to the desired output. It is self-learning, so to speak. The neurons of a neural network effectively learn by adjusting their own numerical parameters as well as the weights of the connections between them, all of which are plain numbers. Since everything the network learns is just lots and lots of numbers, we may not be able to say for sure how a trained network works or how it will react to specific input data:
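To see how little magic is involved, here is a minimal sketch (my own toy example, not taken from the Wired piece) of a single artificial neuron: a handful of plain numbers, plus one crude adjustment step that nudges those numbers toward a desired output.

```python
import math

# A single artificial neuron: nothing but a handful of numbers.
weights = [0.2, -0.5, 0.8]   # one number per input connection
bias = 0.1                   # the neuron's own adjustable number

def neuron(inputs):
    # weighted sum plus bias, squashed into the range 0..1
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 / (1 + math.exp(-s))

# One crude "learning" step: if the output is too low for this example,
# shift every weight a little in the direction of its input.
inputs, target = [1.0, 0.5, -0.2], 1.0
error = target - neuron(inputs)
learning_rate = 0.1
weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
bias += learning_rate * error
```

After training, all that is left of the "knowledge" are the final values of those numbers.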

The code that runs the universe may defy human analysis. Right now Google, for example, is facing an antitrust investigation in Europe that accuses the company of exerting undue influence over its search results. Such a charge will be difficult to prove when even the company’s own engineers can’t say exactly how its search algorithms work in the first place.

While this is surely true, it is perhaps too early to be pessimistic.

I agree that, at the moment, a trained neural network is very difficult to understand. However, the idea behind neural networks is a simple algorithm that is just applied a great many times. It should be possible to find a representation that is more abstract than the raw numbers and, at the same time, less abstract than the bare algorithm.
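As an illustration of that claim, here is a toy forward pass (everything about it, the sizes, the weights, is made up): the entire "algorithm" is a weighted sum, a bias and a squashing function, repeated for every neuron in every layer.

```python
import math, random

def layer(inputs, weights, biases):
    # the whole idea of a neural network in one expression:
    # weighted sum, plus bias, through a nonlinearity - for every neuron
    return [
        1 / (1 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
        for ws, b in zip(weights, biases)
    ]

def forward(inputs, network):
    # the same simple step repeated once per layer
    for weights, biases in network:
        inputs = layer(inputs, weights, biases)
    return inputs

# a random, untrained toy network: 4 inputs -> 3 hidden -> 2 outputs
sizes = [4, 3, 2]
network = [
    ([[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)],
     [0.0] * n_out)
    for n_in, n_out in zip(sizes, sizes[1:])
]
print(forward([0.5, -1.0, 0.2, 0.9], network))
```

The algorithm fits on a napkin; what doesn't fit anywhere readable are the learned values it is applied to.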

An algorithm is not structurally different from code expressed in one of today's programming languages. Math is just another formal language; it differs from a "typical" programming language mainly in being more abstract. Python, for example, is abstract compared to a plain-English description of the same steps, while a mathematical formula is abstract compared to Python.
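To make that concrete: the layer computation sketched above collapses, in mathematical notation, into a single line (with sigma being the sigmoid used in the code):

$$ y_j = \sigma\Big(\sum_i w_{ji}\, x_i + b_j\Big), \qquad \sigma(s) = \frac{1}{1 + e^{-s}} $$

Everything the Python version spells out as loops and function calls is folded into a handful of symbols, which is exactly what makes the math more compact and, at the same time, harder to follow in action.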

Paradoxically, while the algorithms used in machine learning are too abstract to be understood in action, all the individual weights the neural network learns during training are not abstract enough to be understood. The problem is the sheer number of neurons in a neural network, which leads to a huge overall number of values.
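To get a feeling for the scale, here is a back-of-the-envelope count for a small, fully connected network; the layer sizes are invented for the sake of the example.

```python
# Count the learned numbers in a made-up, fully connected network.
layer_sizes = [784, 512, 512, 10]   # e.g. a small image classifier

total = 0
for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
    weights = n_in * n_out   # one weight per connection
    biases = n_out           # one bias per neuron
    total += weights + biases

print(total)   # 669,706 individual numbers for this tiny network
```

Even this toy ends up with close to 700,000 learned values; networks actually used for image recognition have many millions.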

Since today we don’t really understand how exactly a neural network identifies images of cats once it has been trained to do so, we need to tackle the two problems outlined above and make both the algorithm and all the numbers easier to understand. Inside a neural network trained to identify cat pictures, there will probably be a part responsible for recognizing cat eyes, and the perhaps 10,000 neurons doing this will be wired similarly to those of a different neural network trained to recognize images of human eyes.

Today we can look at all of those 10,000 neurons and wonder how they manage to spot eyes, whether a cat’s or a human’s. It reminds me of staring at 10,000 lines of machine code, trying to understand what all those numbers might do. I’m confident that there will be a more abstract language than what we have now to describe the structure, the rules and the data (in short: the knowledge) of a neural network. It will probably be a declarative language, and it will help us better understand and control machine learning. Or, as Tanz puts it:

We’re just learning the rules of engagement with a new technology. Already, engineers are working out ways to visualize what’s going on under the hood of a deep-learning system.

If that actually works, we will continue to program computers instead of merely training them. Abstracting away from raw machine code worked well in the past. So some declarative neural network assembler language would be a great place to start.
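What such a language might look like is anyone’s guess. As a purely hypothetical sketch (the format and the builder function below are invented for illustration, not an existing tool), the structure of a network could be declared as data and turned into concrete weights by a small interpreter:

```python
import random

# Hypothetical declarative description of the toy network from above.
NETWORK_SPEC = [
    {"layer": "input", "size": 4},
    {"layer": "dense", "size": 3, "activation": "sigmoid"},
    {"layer": "dense", "size": 2, "activation": "sigmoid"},
]

def build(spec):
    """Turn the declarative description into concrete (random) weight matrices."""
    layers = []
    for prev, cur in zip(spec, spec[1:]):
        weights = [[random.uniform(-1, 1) for _ in range(prev["size"])]
                   for _ in range(cur["size"])]
        biases = [0.0] * cur["size"]
        layers.append((weights, biases))
    return layers

network = build(NETWORK_SPEC)   # usable with the forward() sketch above
```

The point is not this particular format, but the separation it hints at: the structure is stated declaratively and can be read, while the learned numbers stay an implementation detail underneath.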