Programmers have an easier time scaling up than scaling down. You could call this foresight or over-engineering, depending on how things work out. Scaling is a matter of placing bets.
Experienced programmers are rightfully suspicious of claims that something only needs to be done once, or that quick-and-dirty will be OK [*]. They’ve been burned by claims that something was “temporary” and they have war stories in which they did more than required and were later vindicated. These stories make good blog posts.
But some things really do only need to be done once, or so infrequently that it might as well be once. And it might be OK for intermediate steps to be quick-and-dirty if the deliverable is high quality.
As a small business owner and a former/occasional programmer, I think about this often. For years I had a little voice in my head saying, “You really should script this.” And I have automated a few common tasks. But I’ve made peace with the fact that I often do things that (1) could be done far more elegantly and efficiently, and that (2) I will likely never do again [**].
Related posts
- Appropriate scale
- Scaling the number of projects, not the size
- Pareto’s 80-20 rule
- Objectives and constraints
- The bike shed principle
[*] “People forget how fast you did a job, but they remember how well you did it.” — Howard Newton
[**] I include in “never do again” things I might do in the future, but far enough in the future that I won’t remember where I saved the script I wrote to do the task last time, if I saved it. Or I saved the script and can find it, but it depends on a library that has gone away. Or the task is just different enough that I’d practically need to rewrite the script. Or …
https://xkcd.com/1205/
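That chart (“Is It Worth the Time?”) boils down to simple arithmetic: over its five-year horizon, the time you can afford to spend automating a task is the time shaved off per run times how often you run it. A minimal sketch of that calculation, with hypothetical example numbers:

```python
def automation_budget(seconds_saved_per_run: float,
                      runs_per_week: float,
                      horizon_years: float = 5.0) -> float:
    """Total hours you can spend automating a task before the effort
    costs more than it saves, assuming constant usage over the horizon
    (the five-year horizon is the comic's assumption)."""
    weeks = horizon_years * 52
    return seconds_saved_per_run * runs_per_week * weeks / 3600

# Hypothetical numbers: shaving 30 seconds off a task run 5 times a week
# buys roughly 10.8 hours of automation work over five years.
print(f"{automation_budget(30, 5):.1f} hours")
```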
Arrgh! For nearly a decade I specialized in moving physicists’ lab rigs to pre-production prototypes, generally coded in C/C++ to make them as fast (and impressive) as possible.
But over and over again, I found my slap-dash prototype code had made its way into the next product phase, instead of being harvested as a proof-of-concept.
I intentionally switched to Python to ensure my prototype code would *not* outlive the prototype hardware, forcing it to be treated more as research documentation than a program.
Then I got good at making Python go fast. Big mistake. The product engineers hated it when their optimized C/C++ code was slower than my prototype. Meaning I repeatedly got yanked over to the product side and away from my lab.
At least I kept them from switching to Python…
I read somewhere that it’s good practice to generalize gradually, little by little. If today you need to perform an intricate task (perhaps even in the GUI), bookmark the tutorial you followed (it only takes a second).
If you need to do it again, make a note of the commands you used and some gotchas. That’s not a script, but you can copy-paste the commands and adjust them manually the next time something similar comes up.
If it keeps coming up, find out how to do all the steps on the command line and put them together in a shell script with a few command-line parameters (sketched below).
If it becomes a major part of what you do, write a proper program.
Each time you only need to take a small step, but the work accumulates.
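To make the third step concrete, here is a sketch of what such a parameterized script might look like. I’ve used Python rather than shell, since that’s the language already in the thread, and the task (resize an image and copy it to a server), along with every name and path, is hypothetical:

```python
#!/usr/bin/env python3
"""Hypothetical step-three script: a once-manual task, now parameterized."""
import argparse
import subprocess

def main():
    parser = argparse.ArgumentParser(
        description="Resize an image and copy it to the web server.")
    parser.add_argument("image", help="path to the source image")
    parser.add_argument("--width", type=int, default=800,
                        help="target width in pixels")
    parser.add_argument("--dest", default="user@host:/var/www/img/",
                        help="scp destination (hypothetical)")
    args = parser.parse_args()

    # The same commands that once lived in the notes file, now with parameters.
    subprocess.run(["convert", args.image, "-resize", str(args.width), "out.png"],
                   check=True)
    subprocess.run(["scp", "out.png", args.dest], check=True)

if __name__ == "__main__":
    main()
```

The point isn’t this particular script; it’s that each rung of the ladder reuses what you wrote down on the previous one.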
I agree that generalizing incrementally is a good idea. Then you’re not investing effort too far beyond what you know is useful.