I think the mind is an interesting computer. Probably the best computer around, and the one with the highest error rate. Our minds are so bad that we had to build computers to formalise our logic. Our minds are so good that we still cannot understand or mimic them with computers.
Some of the major problems occur when the human mind talks to the computer. The battle ground for this war of logic versus association occurs in none other than the mind of a programmer.
Most programmers (including myself) cannot write long, drawn-out stretches of code without making some syntax errors, more runtime errors, and some nasty logic errors. When we see our mistakes they are obvious. After all, you cannot argue against the cold logic of electrons flowing through semiconductors.
Eventually we come to an agreement with our computer pals. Things work kinda... it goes well... for a while.
This is where paranoia sets in. We all indulge in occasional paranoia, and this is normal. Most of us set it aside and move on. Some of us move into padded one-bedroom apartments. The rest try to cover every possible outcome the universe has to offer with complicated programming.
"What if the database server is off?" "What if the DNS fails?" "What if this process is interrupted half way by a server crash?"
I will not deny that some of the questions above are valid. The context in which they are valid is worth noting, however. Most systems, for instance, assume that their database is there. Just let them crash.
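A minimal sketch of the difference, using a hypothetical `fetch_user` helper (the function names and the dictionary-backed "database" are mine, purely for illustration):

```python
# Hypothetical illustration: paranoid handling vs. just letting it crash.

def fetch_user_paranoid(db, user_id):
    # The paranoid version: swallow every failure and return a fallback.
    # The caller now cannot tell "no such user" apart from
    # "database down" or "bug inside this function".
    try:
        if db is None:
            return None
        return db.get(user_id)
    except Exception:
        return None

def fetch_user(db, user_id):
    # The fail-fast version: assume the database is there.
    # If it is not, the error crashes loudly at the real cause,
    # instead of surfacing later as a mysterious None.
    return db.get(user_id)
```

The fail-fast version is shorter, and when the database really is gone, the stack trace points at the actual problem rather than at whichever distant caller first trips over the fallback value.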
Why would I say such a thing?
If you write code for every conceivable outcome and only 5% of those situations ever actually occur, then 95% of that code is useless. I've seen this. I've done this.
A prime example of this psychotic behaviour presents itself when a programmer decides that external systems should not be trusted. These external actors call functionality in your system, but because of your lack of trust you validate every single byte of data they send. Your code swells to ridiculous proportions. Your paranoid delusions suck you into a world of confusion and madness, and you start to distrust your own code, and the code written by those after you. You write in checks for everything. You picture your code being discovered and studied by hyper-intelligent energy beings from the distant future. You've done it! You've written a bug-free program!
You release your rock-solid, unbreakable monster state machine into the world, and as you lie back in your chair and watch how it goes, something bad happens. One of your checks is causing perfectly valid system behaviour to be devoured as an error. Because checks are just logical branches and not crashes, you roll up into a little ball and suck your thumb while frantically wading through thousands of lines of code to find the culprit. You debug n levels deep and find nothing, because now you are sure it is one of your checks.
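The trap above can be sketched in a few lines. The `process_order` function and its quantity limit are hypothetical, invented here only to show the shape of the problem:

```python
# Hypothetical sketch: an over-strict "defensive" check silently rejects
# valid input. Because it is an ordinary logical branch rather than a
# crash, there is no exception and no stack trace pointing at it.

def process_order(quantity):
    # The author assumed any quantity above 100 must be garbage.
    # A perfectly valid bulk order of 250 is quietly devoured as an
    # error, and the hunt for the "bug" starts somewhere else entirely.
    if quantity <= 0 or quantity > 100:
        return {"status": "error", "reason": "invalid quantity"}
    return {"status": "ok", "shipped": quantity}
```

Nothing here ever raises, which is exactly why this kind of check is so hard to find: the failure looks like a legitimate business outcome.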
Days are spent in a caffeine-induced stupor searching for the smoking gun, and nights are spent tossing and turning while debugging strings of cheese in your dreams. You wake up with hunches and lie down at night with disappointment.
Finally, after blaming yourself, you find an incorrect configuration setting. The day is saved, but you will never be the same. You hang your head in shame as you sneak past people wanting to ask you if you found that difficult bug.
The moral of the story is now clear. ONLY do what is necessary. Accomplish this first. Plan for obvious outcomes, not for unlikely ones. A perfect parallel comes from a snippet of code that made its rounds on the web:
if (true == false) { panic(); }
I learned this lesson fairly early on by reading and debugging other people's code. I have made the same mistake myself, blaming my checking code when something completely unrelated caused the problem. I have also written checks that broke everything.