Programming languages are a user interface


I had the pleasure over the Christmas period of meeting up with some of my friends from my university days. At least two of them had gone on to do PhDs in physics, and they all had bad things to say about their experience: they hated programming.

As somebody who has always enjoyed programming, I find it hard to relate to the position these people are in. However, these are the very people we need to help the most. These are people who are trying to use their computer to do something new. These are people who need to be able to program, and to do so quickly, if they’re to achieve meaningful results from their experiments. They need the programming language to be invisible; they need it to be an enabler for what they want to get done rather than a horrible chore.

Why are types chosen for the machine and not the programmer?

I’ve always believed that the true power of the computer is the ability for you to program it. When I say you, I actually mean you - the reader. I don’t mean me as a professional software developer; I mean the wider public. The first step toward this programming revolution is to recognise that programming languages are a user interface. A modern programming language should not care about the hardware at all; it should care about making the language ergonomic. It needs to be easy to get stuff done.

To see what I mean, take a look at this article on basic C# types. Table three shows all the integer types, of which there are nine. Table four shows the three additional floating point types. That gives us twelve separate ways to represent a number.

Now, there are good reasons why you might want those types. The int type is matched to the size of a word on 32-bit CPUs, which means adding two ints takes only a clock cycle. However, we’re now at the point where we have so much computing power that premature optimisations like this are simply unnecessary. If I were a physics graduate I’d take one look at that and scream. I’m a professional programmer and I want to scream.

Rather than twelve types, I have a better idea. Give me three: integer, real and complex. You should be able to keep adding one to these values until you run out of memory; there should be no artificial ceilings on the numbers.

A physicist does not care that the computer I’m writing this post on uses 32-bit words for its instructions. He just wants his ultra-precise calculation to work without any fucking around. Make the language support large-integer mathematics transparently, out of the box. Every personal computer out there today can do these sorts of calculations without breaking a sweat. Even the machines of ten years ago could do large-number arithmetic at fairly handy speeds.
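Python is an existence proof that this can work. A minimal sketch of arithmetic sailing straight past the 32-bit word size, with no special library and no ceremony:

```python
# Python's built-in int is arbitrary precision: the only ceiling is memory.
word_max = 2**32 - 1            # the largest unsigned 32-bit value
beyond = word_max + 1           # silently grows past the machine word

factorial_100 = 1
for i in range(1, 101):
    factorial_100 *= i          # a 158-digit number, computed transparently

print(beyond)                   # 4294967296
print(len(str(factorial_100)))  # 158
```

The programmer never chooses a width; the runtime promotes the representation behind the scenes.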

We need to stop catering to the machine and make the language cater to the humans writing programs in it.

Should we force object orientation down people’s throats?

Part of me wonders whether forcing object orientation down the throat of these people is really what we want to do. Object orientation is a good tool for managing complexity in large programs. The problem is that in small programs, object orientation is more of an impediment than a saviour. I did not ask my friends about the size of the programs they worked on, but I’m going to guess it was probably less than 10,000 lines of code. On a project that size, object orientation is not automatically better than writing in a structured style. I’d say the cross-over point is much higher, probably somewhere around the 50,000-line mark.

Yet Java, C++ and C# eschew the structured style because they assume that the only things you want to write are large pieces of software. That assumption works for a typical desktop application or a business application. Most of these pieces of software are very large indeed and so you want to push the object orientated approach at the first available opportunity.

However, I think the future of programming is going to move away from building large software. Large software is hard to maintain and it requires a dedicated team of developers to do it. Maintaining a large program by yourself is too much work when you’re a professional who is not a programmer. Therefore, if there is to be a revolution in users developing their own software, then the programs they write are going to have to be small.

Is the pain worth it for these people? That’s still an open question for me, and the answer really depends on the kind of programming they do.

A well-built object orientated program is often easier to understand than an equivalent program done in the structured style, and that benefit is well documented. I would argue, however, that it is harder for a layperson programmer to write a program using object orientated principles than using structured programming principles.

This is why, on the whole, I think programming languages like Python and Ruby are better suited to this type of person. In fact, they’re better suited to every developer. If possible, a language should support structured programming, functional programming, aspect orientated programming and object orientated programming. Let the programmer decide which style to use.

The ideal programming language should be as independent of programming paradigm as possible. Programmers can then use whichever style they’re comfortable with, or change styles entirely where the problem demands it. For example, parsing data is especially suited to a functional programming style.
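To illustrate that last point, here is a small parsing job written functionally in Python; the key=value data format is invented for the example:

```python
# Parsing in a functional style: a pipeline of split, filter and transform,
# with no mutable state threaded through a loop.
raw = "mass=12.5; velocity=3.0; ; charge=-1"

pairs = dict(
    (key.strip(), float(value))
    for field in raw.split(";")
    if "=" in field                        # skip empty fields
    for key, value in [field.split("=", 1)]
)

print(pairs)   # {'mass': 12.5, 'velocity': 3.0, 'charge': -1.0}
```

The same language happily supports a structured loop-and-append version of this; the point is that the programmer, not the language, picks the style.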

So if Java, C++ and C# suck, what else can I use?

Python and Ruby in some ways already follow this philosophy. In fact, Yukihiro Matsumoto, the creator of Ruby, even said this:

“Often people, especially computer engineers, focus on the machines. They think, “By doing this, the machine will run faster. By doing this, the machine will run more effectively. By doing this, the machine will something something something.” They are focusing on machines. But in fact we need to focus on humans, on how humans care about doing programming or operating the application of the machines. We are the masters. They are the slaves.”

I really couldn’t say it any better myself. Python and Ruby are very much children of this philosophy. However, in my view Ruby and Python do not go far enough. Programming is a human problem, and humans don’t do a very good job of thinking precisely. Computers need incredibly precise instructions in order to do anything. If you get the instructions wrong, the program goes wrong, and more often than not it crashes.

So wouldn’t it be great if the computer told us when we wrote something inconsistent into our program? What do I mean by inconsistent? Well, say I had a Python function called QuackIt that calls Quack() on its argument:

    def QuackIt(n):
        n.Quack()

If I then write another function:

    def BreakQuack():
        badInput = 1
        QuackIt(badInput)

It is clear that this program contains an inconsistent assumption: QuackIt assumes its argument has a Quack() method, but an int has no such method.
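Today, Python only discovers the inconsistency when the broken path actually executes, as a quick sketch of the QuackIt example shows:

```python
def QuackIt(n):
    n.Quack()              # assumes the argument has a Quack() method

def BreakQuack():
    badInput = 1           # an int has no Quack() method
    QuackIt(badInput)

# The mistake is only reported at runtime, when BreakQuack() is called.
try:
    BreakQuack()
except AttributeError as error:
    print(error)           # 'int' object has no attribute 'Quack'
```

If BreakQuack() sits on a rarely-taken branch, the bug can hide for months before anybody hits it.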

It’d be nice if we could run Python in a special mode that statically analyses the source of your program for these sorts of problems.

How can you do this in a dynamic language? Python may be dynamically typed, but it is also strongly typed. In Python you don’t have to declare the type of a variable, but every value has exactly one type. This means you should be able to infer a whole range of pre- and post-conditions automatically. In a weakly typed language like VBScript, this kind of analysis is probably impossible.
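A minimal sketch of what "dynamically but strongly typed" means in practice:

```python
# Dynamic: a name may be rebound to a value of any type.
x = 42
x = "forty-two"

# Strong: a value never silently changes type to satisfy an operator.
try:
    total = "42" + 1
except TypeError as error:
    print(error)    # can only concatenate str (not "int") to str
```

Because values never shift type underneath the analyser, it can reason about what operations are legal at each point in the program.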

Isn’t this just what a statically typed language does? No, not at all. For example, we might assert at the top of QuackIt that the object it’s trying to quack is a certain weight and a certain age. If I wrote these conditions into asserts and my static analyser could prove the assertions always hold, then this sort of checking would be superior to anything you can achieve by carefully crafting your types.

Essentially, what I’m advocating for the future of programming is Design by Contract in a dynamic language. You’d write your code and litter it with asserts. Then you’d run your static analyser and check that none of the asserts can ever fail on any execution path.
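A sketch of that style in plain Python asserts; the function, its weight limits and the ration formula are all invented for the example:

```python
def daily_ration(weight_kg, age_years):
    # Preconditions: the contract callers must satisfy.
    assert weight_kg > 0, "weight must be positive"
    assert 0 <= age_years < 30, "age out of plausible range"

    ration = weight_kg * 0.05        # feed 5% of body weight per day

    # Postcondition: the guarantee made to callers. A static analyser
    # that proved these asserts can never fire would have verified the
    # contract before the program ever runs.
    assert 0 < ration <= weight_kg, "ration out of bounds"
    return ration

print(daily_ration(4.0, 3))
```

Today the asserts only fire at runtime; the missing piece is the analyser that discharges them ahead of time.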

This would allow people to develop features in a very quick, largely bug-free way.

It also puts the emphasis of the programming task where it needs to be. We need to let humans do what they’re best at and let computers do what they’re best at. Humans are good at being creative and thinking about features. Computers are good at mechanistically checking things. If we can get the computer to verify more of our program’s correctness before we even execute it for the first time, we should let it!

Debugging is the last thing any developer wants to do, especially a developer who is really a scientist. Code in every modern language needs heavy debugging before it is usable. I see this as a failure of the user interface that languages are meant to provide. When a programmer fails to check the bounds of a string, is that his fault? Yes, partly. But it is more the fault of the language designer who allowed such a construction.


We focus a lot on usability in user interface development. I think programming languages need the same attention. I find myself agreeing with Matsumoto that the principle of least surprise is part of the solution. The other part is to integrate strong static analysis into compilers. Only then can we unlock the revolution of the user programmer, where everyone can write or modify software to meet their own ends.

2007-01-13 13:12:23 GMT
#Programming