In this article I want to examine problems and solutions, and discuss solving problems for the right outcome.
I see two kinds of problems – those that can be solved, and those that cannot. Having said that, let me add another, complicating dimension: computational cost, or “think time”.
Plot solvability against computational cost and you can derive a typical quadrant diagram.
Given you have a problem to solve, the happiest place to be is in the bottom left quadrant, with a solvable problem, and low computational cost. Unfortunately, not all problems are in this quadrant.
If we just look at problems that have a solution, the computational cost can still be an issue. A good example is encryption. Any public-key encrypted data can, in principle, be decrypted: throw enough computing power and time at the problem and you will eventually get an answer, but it may be decades away. This form of encryption has never claimed to be unbreakable, just hard to break, and it is the cost of computation that makes it so.
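To make that cost concrete, here is a back-of-the-envelope sketch. The trial rate of 10¹² keys per second is a hypothetical figure chosen purely for illustration, not a measured one.

```python
# Rough estimate of brute-force key search time.
# TRIALS_PER_SECOND is a hypothetical assumption, for illustration only.
TRIALS_PER_SECOND = 10**12
SECONDS_PER_YEAR = 60 * 60 * 24 * 365


def years_to_search(key_bits: int) -> float:
    """Expected years to find a key by trying half of a 2**key_bits space."""
    return (2 ** (key_bits - 1)) / TRIALS_PER_SECOND / SECONDS_PER_YEAR


for bits in (56, 128):
    print(f"{bits}-bit key: ~{years_to_search(bits):.2e} years")
```

Even at this generous rate, half-searching a 128-bit key space takes on the order of 10¹⁸ years: not impossible, just “hard to break”.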
Another example, this time from recreational mathematics, is the “Tower of Hanoi” puzzle. The time to complete the game grows astronomically as disks are added: a tower of n disks needs a minimum of 2ⁿ − 1 moves. A sixteen-disk problem takes 65,535 moves, so even moving one disk every second, without pause, you would be at it for around eighteen hours. Eminently solvable, but at a large computational cost. You should note that many adherents consider this sixteen-disk problem trivial. The sixty-four-disk problem is the gold standard. One apocryphal tale has some mystics working on that puzzle, thinking and moving one disk every daylight hour. They started at the dawn of civilisation, as I understand, and at that rate the 2⁶⁴ − 1 moves mean they have barely begun.
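The move counts are easy to verify with a minimal recursive solver; the peg names below are arbitrary labels of my own.

```python
def hanoi_moves(disks: int) -> int:
    """Minimum number of moves for the classic Tower of Hanoi: 2**n - 1."""
    return 2 ** disks - 1


def hanoi(n, src="A", dst="C", spare="B", moves=None):
    """Recursively build the move list: shift n-1 disks to the spare peg,
    move the largest disk, then shift the n-1 disks onto it."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, src, spare, dst, moves)
        moves.append((src, dst))
        hanoi(n - 1, spare, dst, src, moves)
    return moves


print(hanoi_moves(16))  # 65535 moves: about eighteen hours at one per second
print(hanoi_moves(64))  # 18446744073709551615 moves
```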
These types of problems could be deemed unsolvable because the solutions are beyond reasonable reach.
Inherently unsolvable problems are a different matter. Mathematicians have studied these for centuries, and have provided tools to quickly identify them. This is such an important discovery, because it helps us avoid that worst quadrant, and stops us “going down rabbit holes”.
The ability to recognise an unsolvable problem quickly will save you much time and anguish.
This study of unsolvable problems has spawned new types of mathematics. Two such branches are graph theory and topology, which have their ancestry in the “bridges of Königsberg” problem.
The city of Königsberg is set on both sides of the Pregel River, and includes two large islands connected to each other, and to the two mainland portions of the city, by seven bridges. The problem was to devise a walk through the city that would cross each of those bridges once and only once. The mathematical proof that it couldn’t be done was established in 1736 by the eminent mathematician Leonhard Euler.
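Euler’s insight reduces the walking question to counting: a connected multigraph has a walk using every edge exactly once if and only if zero or two vertices have odd degree. A sketch, with land-mass labels of my own choosing:

```python
from collections import Counter

# The seven bridges of Königsberg as edges between four land masses:
# N and S are the two river banks, A and B the two islands.
bridges = [("N", "A"), ("N", "A"), ("S", "A"), ("S", "A"),
           ("N", "B"), ("S", "B"), ("A", "B")]


def euler_walk_possible(edges) -> bool:
    """Euler's criterion for a connected multigraph: a walk crossing every
    edge exactly once exists iff 0 or 2 vertices have odd degree."""
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    odd = sum(1 for d in degree.values() if d % 2)
    return odd in (0, 2)


print(euler_walk_possible(bridges))  # False: all four land masses have odd degree
```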
It seems that mathematicians have spent as much time on unsolvable problems as ones that can be solved.
Interesting history, but what is the practical application? It is this: problems often seem unsolvable because of the constraints, either explicit or implicit, that we apply to them. There is also the matter of imagined constraints, which exist only in the mind of the person attempting the solution.
If the rules make the problem unsolvable, your best strategy is to change the rules. You certainly won’t change the problem.
But, before you get into solving the problem, there are two big questions you must be able to answer.
Why am I trying to solve this problem? And, what do I expect to achieve?
Taking the bridges of Königsberg problem as an example, the answer to the “why” question may be as simple as the fact that you are visiting, and have only limited time. The “what” question might also be simple, in that you just want to see and cross all the bridges.
The “how” part now becomes very simple. The constraint of not crossing each bridge more than once is probably now not relevant, and removing that constraint allows plenty of possible solutions. If your “what” says you don’t care about the bridges, but want to see both sides of the river and both islands, then another set of possible solutions emerges.
If you clearly know why you are solving the problem, and what you expect to achieve, then it is often straightforward to reduce the problem to a manageable matter. You relax the constraints or rules.
As the maxim often attributed to Einstein goes: everything should be made as simple as possible, but not simpler.
Changing the rules is popularly described as thinking outside the box. The phrase comes from the nine-dots puzzle, which dates back to at least the early twentieth century. You must connect the dots in a 3×3 matrix by drawing four straight, continuous lines that pass through each of the nine dots, without lifting the pencil from the paper. We know that the answer is to “think outside the box” and not limit our lines to the boundary of the 3×3 matrix.
What this tells us is that the constraint we first imagine – stay in the box – was wrong. We have made an incorrect assumption and applied an imaginary constraint. Your first strategy should be to examine all the assumptions you have made about a problem and either validate them or remove them.
Another classic problem is the “travelling salesman” scenario. This is stated as “given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city exactly once and returns to the origin city?”
The study of this problem, like the “bridges of Königsberg” problem, has introduced new areas of mathematics, particularly optimisation theory. If there is a point-to-point route between every pair of cities, this is a solvable problem, albeit an expensive one, and there are many algorithms and heuristics to help derive an answer. If, however, these point-to-point routes don’t exist, it becomes intractable, and potentially unsolvable. When the problem was originally posed, travel was by car, and point-to-point was the norm. In a more modern world, air travel is more common, and given the hub-and-spoke model used by airlines, the problem has actually become harder.
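The brute-force approach makes the cost visible: checking every tour is O(n!) in the number of cities, which is exactly the computational wall described above. A sketch over a small, hypothetical distance matrix:

```python
from itertools import permutations


def shortest_tour(dist):
    """Brute-force TSP over a complete distance matrix: try every ordering
    of the cities. O(n!) time, so only feasible for a handful of cities."""
    cities = range(1, len(dist))  # fix city 0 as the origin
    best_len, best_tour = float("inf"), None
    for perm in permutations(cities):
        tour = (0, *perm, 0)
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour


# Hypothetical symmetric distances between four cities.
d = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]
print(shortest_tour(d))  # (18, (0, 1, 3, 2, 0))
```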
Returning to my previous point, what you need to do is understand why you must solve the problem and what you are solving for.
For the travelling salesman, if you solve the problem for time, setting aside all the other constraints, a different problem emerges. What you are saying is “I don’t care how often she goes to Nowheresville, just get all the clients visited by Tuesday”. It then becomes a scheduling matter, and is readily resolved.
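Once the goal is only “everyone visited by the deadline”, a cheap heuristic such as nearest-neighbour ordering is often good enough. A sketch, with hypothetical travel times of my own invention:

```python
def greedy_schedule(travel, start=0):
    """Nearest-neighbour ordering: always drive to the closest unvisited
    client next. Not guaranteed optimal, but fast, and usually adequate once
    'shortest possible round trip' relaxes to 'visit everyone in time'."""
    unvisited = set(range(len(travel))) - {start}
    order, here, total = [start], start, 0
    while unvisited:
        nxt = min(unvisited, key=lambda c: travel[here][c])
        total += travel[here][nxt]
        order.append(nxt)
        unvisited.remove(nxt)
        here = nxt
    return order, total


# Hypothetical travel times (hours) between four clients.
hours = [[0, 2, 9, 10],
         [2, 0, 6, 4],
         [9, 6, 0, 3],
         [10, 4, 3, 0]]
print(greedy_schedule(hours))  # ([0, 1, 3, 2], 9)
```

The greedy pass runs in O(n²) rather than O(n!), which is the practical payoff of restating the problem as scheduling.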
The Tower of Hanoi problem becomes simple if you relax the rule about not putting larger disks on smaller disks. If all you want to do is move the tower to another point in the shortest possible time, relax this rule, and the sixty-four-disk problem is resolved within minutes. For the mathematically inclined, you will see I have restated the problem so the time taken varies linearly with the number of disks, rather than exponentially.
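The arithmetic behind that restatement: with the stacking rule dropped, you shift the top n − 1 disks onto the spare peg one at a time in any order, move the largest disk, and shift the rest back, for 2n − 1 moves in total.

```python
def relaxed_hanoi_moves(n: int) -> int:
    """Moves needed when larger disks may sit on smaller ones:
    n-1 moves to clear the top disks onto the spare peg, 1 move for
    the largest disk, n-1 moves to restack. Linear, not exponential."""
    return 2 * n - 1


print(relaxed_hanoi_moves(64))  # 127 moves, versus 2**64 - 1 under the original rules
```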
From a practical perspective, being cavalier with the constraints may not be possible. Rules and regulations may well stand in the way. The purpose of this example is to show the potential impact of restating constraints.
All we have done is relax constraints, informed by the outcome we seek, and that transforms the problem. We have replaced the original constraints with our own, namely the variables we think are more important.
I admit that not all problems lend themselves to restatement and simplification, but I do suggest that a careful examination of the constraints may well make a problem more manageable. What happens, though, if a problem is still unsolvable after you have relaxed such constraints as you can?
My advice is simple. Recognise it for what it is as soon as possible, then go to Plan B. If a problem is truly unsolvable you must not waste more time on it. Having a fall-back is important whenever you go into problem resolution, and you should not wait until Plan A fails, or is proven unsolvable, to start thinking about it.
That is the topic of the next article.