I make life easier; that is to say, I've been writing software for 9+ years. Eschew hype; focus on delivery and performance.
Living in Switzerland 🇨🇭 since 2017.
The oft-debated question.
I'll try to lay out my answer as I see it at this moment.
In my last blog post, about OOP and simplicity, I already talked about simplicity. But this is a different context, so I'll expand on it here.
In that post I shared the following:
Looking at multiple dictionaries I find the following common definitions:
- easily understood or done; presenting no difficulty
- plain, basic, or uncomplicated in form, nature, or design; without much decoration or ornamentation
- composed of a single element; not compound.
And looking at the etymology (the root of the word, where it came from and how it came to be), from Etymonline.com I find the following:
The sense evolution is from the notion of "without parts" or "having few parts," hence "free from complexity or complication."
Link to the full etymology of "simple".
For the computer, simple means precisely one thing: fewer operations.
Obviously the computer doesn't think, but if it could, this would be its definition of simple, since it is the one running the program.
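To make that concrete, here's a small sketch of my own (a hypothetical illustration, not a benchmark): two ways in C to sum the integers 1 to n. Both are easy enough to read, but to the computer the second is simpler in the most literal sense, because it executes far fewer operations.

```c
#include <stdio.h>

/* Sums 1..n by looping: roughly n additions, increments,
   and comparisons get executed. */
unsigned sum_loop(unsigned n) {
    unsigned total = 0;
    for (unsigned i = 1; i <= n; i++)
        total += i;
    return total;
}

/* Sums 1..n with the closed-form formula: one multiply, one add,
   one divide, no matter how large n is. */
unsigned sum_formula(unsigned n) {
    return n * (n + 1) / 2;
}

int main(void) {
    printf("%u %u\n", sum_loop(100), sum_formula(100)); /* 5050 5050 */
    return 0;
}
```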
What simple means for humans, though, is the tricky one.
One programmer might think 10 classes for one operation is simple, while another might think one function is complex enough.
Programmers confuse ease with simplicity.
Ease, or easy, is about familiarity.
So LISP will seem easy to a programmer who is used to it, while one who is used to C will initially find LISP very difficult.
Easy is about familiarity.
And therein lies the issue.
Processors are not familiar to us.
We don't think like processors do.
So we try to make things easier, confusing that with making them simpler: from machine code to Assembly, from Assembly to C, and so on as the languages get higher-level, and then with all kinds of abstractions on top.
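As a rough illustration of that ladder, here's the same one-line step at three levels. The assembly and byte encoding shown are one possible form; the exact output depends on the compiler and architecture.

```c
/* The same tiny step at three levels of abstraction. */
int increment(int x) {
    x = x + 1;                    /* C */
    /* add DWORD PTR [rbp-4], 1   ; a possible x86-64 assembly form */
    /* 83 45 FC 01                ; the machine-code bytes for it   */
    return x;
}
```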
Quite early on as a programmer, I learnt a particular maxim about the computers I use every day.
Computers are incredibly fast, accurate and stupid. — Unknown
And it came with a corollary: your job as a programmer is to drop down to the computer's level, to understand what it's doing and how to tell it what to do.
That's one style of programming.
The other is the opposite.
Assume that the human is the source of truth for how things should be.
Modern OOP, the most popular paradigm of programming, is in this category.
Modern OOP is obsessed with, well, object-orientation. And this lends itself to abstraction on top of abstraction.
How can I make the real world fit a computer representation and bend the computer to my will?
The problem here is that computers don't understand OOP.
So when you actually want your computer to do OOP, you still have to bend to the computer's will.
More specifically, you have to bend yourself to the data.
The computer understands only operations.
But what does it operate on?
DATA.
That is, information: the bits and bytes running through the wire, normally representing something that humans find useful, and otherwise something that computers find useful (as with API protocols).
So far, then, there are two things the computer cares about: the data to operate on, and the operations themselves.
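Here's a small hypothetical C sketch of that worldview; the names are mine, invented for illustration. A struct is just bytes laid out in memory (data), and a function is just a short sequence of loads, arithmetic, and stores over those bytes (operations).

```c
#include <stdio.h>

/* Data: two floats, 8 contiguous bytes in memory. */
typedef struct {
    float x;
    float y;
} Point;

/* Operation: a few loads, two adds, and the stores for the result. */
Point point_add(Point a, Point b) {
    return (Point){ a.x + b.x, a.y + b.y };
}

int main(void) {
    Point p = point_add((Point){1, 2}, (Point){3, 4});
    printf("%.0f %.0f\n", p.x, p.y); /* prints: 4 6 */
    return 0;
}
```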
We as programmers should probably realize our position.
We are programmers.
By definition, our job is to interface with the processor, in more or less direct ways.
Perhaps we should take that into account, and consider what would be SIMPLE FOR THE COMPUTER.
If we adopt this model, other programmers would likely have an easy time understanding what's happening too, because it's clear what's happening.
The computer is stupid. So we can easily figure out what a bit of code is doing if we keep it on the computer's level.
We probably shouldn't program in Assembly, but neither do we need to involve 10 classes/interfaces for 10 lines of procedural code.
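For instance, here's a hedged sketch (the domain and names are made up) of the kind of task that often ends up wrapped in repositories, services, strategies, and factories, written instead as plain procedural code:

```c
#include <stddef.h>

/* Data: what an order is, as the computer sees it. */
typedef struct {
    double amount;
    int    paid;   /* 1 = paid, 0 = unpaid */
} Order;

/* Operation: total of all paid orders. One loop, one branch,
   one accumulator; no interfaces or factories required. */
double total_paid(const Order *orders, size_t n) {
    double total = 0.0;
    for (size_t i = 0; i < n; i++)
        if (orders[i].paid)
            total += orders[i].amount;
    return total;
}
```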
We've seen the cost.
The cost of adding abstraction on top of abstraction, on top of yet another abstraction, instead of simply speaking the computer's language.
Software that doesn't do much more than it did 20 years ago, yet somehow performs way worse. Not just way worse, but orders of magnitude worse.
Moore's law has effectively been nullified by our obsession with not keeping things computer simple.
And did programmers become more productive as a result? I don't think so. We're fixing the same amount of bugs, spending the same amount of time arguing about solutions. And in fact now we spend a lot of time just churning, with new languages and new frameworks coming out so often.
Taking all of the above, and putting it together into a cohesive model, here's a proposal for a definition of simple.
The smallest set of instructions for a computer to operate on defined data, fulfilling the desired user or business requirements.
Usually, fewer instructions means less code, which means fewer operations. That means fewer bugs. Less code = fewer bugs.
Usually, fewer instructions means less to keep in your head, which means simpler and less complex. Which means easier to read and understand.
Less complex (interconnected) means easier to modify without breaking something else. Which means more maintainable.
Less complex (interconnected) means easier to just throw away, and rewrite. Which again means more maintainable.
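To see the definition in action on a toy requirement of my own (purely hypothetical, for illustration): say the requirement is to keep a sensor reading within calibrated bounds. The defined data is three doubles; the smallest set of instructions is two comparisons.

```c
/* Requirement: keep a reading within calibrated bounds.
   Data: the reading and its two bounds.
   Instructions: two comparisons, nothing more. */
double clamp(double value, double lo, double hi) {
    if (value < lo) return lo;
    if (value > hi) return hi;
    return value;
}
```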
I think the above is quite a workable definition.
If for business reasons the solution requires an event-based system, that's fair enough. Sometimes the essential complexity of a problem does require that. But then you're probably better served by the languages built for those use cases (e.g. Erlang in the telecom sector).
If, on the other hand, it requires no more than 10 lines of procedural code, then by not keeping it to those 10 lines, you'd be writing complex code.