In my experience, software knowledge has a longer useful shelf life in the Unix world than in the Microsoft world. (In this post Unix is a shorthand for Unix and Linux.)
A pro-Microsoft explanation would say that Microsoft is more progressive, always improving their APIs and tools, and that Unix is stagnant.
A pro-Unix explanation would say that Unix got a lot of things right the first time, that it is more stable, and that Microsoft’s technology turn-over is more churn than progress.
Pick your explanation. But for better or worse, change comes slower on the Unix side. And when it comes, it’s less disruptive.
At least that’s how it seems to me. Although I’ve used Windows and Unix, I’ve done different kinds of work on the two platforms. Maybe the pace of change relates more to the task than the operating system. Also, I have more experience with Windows and so perhaps I’m more aware of the changes there. But nearly everything I knew about Unix 20 years ago is still useful, and much of what I knew about Windows 10 years ago is not.
Sometimes debugging is painful, sometimes it’s a fun problem-solving excursion, and I think the difference is due to exactly what you’re talking about here. If I just spent half an hour dealing with Windows network drivers, I feel like that’s a half hour wasted, because next time it’ll be entirely different (hopefully because it works out of the box). But after a half hour spent dealing with some tedious detail of the bash shell (and it has many), I feel as though I’ve learned something, and next time will be a little easier. I think this visceral sense that I’m building knowledge and not just learning tedious rules is what’s driven me to be such a UNIX geek to this day.
In UNIXland, the interface and the internals are separate, and both are exposed. If I want to configure vim or emacs to behave the way I want, I edit the necessary config file. This feels awkward and difficult to a Windows user, and might require consulting man pages or the Internet, but once I’ve figured it out, that knowledge carries forward. By contrast, I was trying to help my mother change a setting in Microsoft Word 2010, and I couldn’t do it. I’m pretty sure I still remember how to change that setting in Word 6.0; I relearned it for Word 95 and relearned it again for Word XP. By the time Word 2003 came out I had switched entirely to Linux and ceased caring about how to modify the default template, at which point apparently my knowledge and competence became obsolete.
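To make that concrete, here is roughly what a persistent editor setting looks like on the Unix side (a minimal sketch; the option names are real vim settings, but the particular choices are just for illustration):

    " ~/.vimrc -- read by vim at startup; plain text, survives upgrades
    set tabstop=4      " display tab characters as 4 columns
    set expandtab      " insert spaces when the Tab key is pressed
    syntax on          " turn on syntax highlighting

Copy that one file to a new machine and the editor behaves the same way there.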
When Windows or its ecosystem changes the interface, everything must be relearned. There is no .wordrc in the home directory, and if I opened a registry editor and went to the right node I could possibly change the setting correctly, or possibly make things even worse. And of course which node to modify and which value to put in change between major releases.
John,
I agree wholeheartedly. I have also noticed a couple of things that MS does pretty well with console apps. I still have programs that were originally compiled under Visual C++ 4.2, and I’ve had to do very little fiddling to move them to 64 bit and the latest MS compilers. I have to give Microsoft a thumbs up for some backwards compatibility in their C++ runtime. If you stay away from GUI programming on Windows, I’ve found my console apps just keep compiling and running.
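For what it’s worth, the typical fiddling when moving old C code to 64 bit is of this flavor (a hypothetical sketch, not the commenter’s actual code; it relies on the fact that on 64-bit Windows a long is still 32 bits while a pointer is 64):

    /* 64-bit portability: never store a pointer in a long. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int x = 42;
        /* long addr = (long)&x;  -- fine in 32-bit days, truncates on Win64 */
        uintptr_t addr = (uintptr_t)&x;  /* integer type guaranteed to hold a pointer */
        printf("pointer as integer: %llu\n", (unsigned long long)addr);
        return 0;
    }

Aside from a handful of fixes like this, well-behaved console code tends to recompile as-is.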
True, but don’t Linux distros suffer from the same problems as Windows? I.e., from an end-user perspective, when dealing with complex applications with a GUI, things change in ways that make your knowledge less useful across major releases.
For example, Gimp on any of these platforms changes in ways that make what you’ve learned useless beyond one or two major releases.
However, Win32 APIs and Unix system calls don’t change much at all. I guess the difference is that in Unix, the consistency of the APIs extends beyond the core to some of the early applications. I’m not sure that applies to many newly written applications, though.
An open source model is better positioned for change, and we see the results daily. This leaves only your Unix/Linux explanation standing: do one thing per application, and get it right the first time.
@Shiva: “True, but don’t linux distros suffer from the same problems as windows?”
Not really, but it depends on what you’re doing to some extent as well. For example, I’ve used the WindowMaker window manager across many years, multiple computers, and (at least) two different distros (Slackware and Ubuntu), and it’s stayed the same the whole time; I have one configuration that I migrate to any new machine or install and it always works. There’s no equivalent to this idea in Windows; you use the Windows UI or you use nothing, and the Windows UI changes with each OS upgrade.
If you are more dependent on the defaults and graphical configuration tools, you might have a worse experience. The point is you have a choice.
I wonder if it’s due to the more commercial nature of Windows; there has probably been little profit to be had in the Unix world from changing core platform APIs and the like.
Whereas Microsoft makes money with each new release of Windows, Office, etc.
Only recently have companies managed to spread Unix to many consumer devices (Android and Mac OS), so we may see more churn in the Unix world now too, though it would be harder, as there are many players in the Unix world.
Maybe that’s the other side of the story: it’s just harder to get many players to agree to change something fundamental in the Unix world, like, say, the driver model.
I agree with the earlier post describing Microsoft’s profit motive as being behind the constant changes to operating systems and applications.
Some of the change could be written off to evolving hardware, but not much. If application developers like me can “refactor” our apps, why can’t Microsoft?
I am smart enough to know that my customers do not want their app interface to change every two years, requiring extensive relearning and retraining. Why hasn’t Microsoft learned that yet?
Microsoft forces “new” technology on the users where it isn’t necessary. Take, for example, Word. When I was working in IT, we would be forced to upgrade to newer versions because we would receive documents from external companies in new formats that were unreadable by our current “old” version. By arbitrarily creating new formats that were not backward compatible Microsoft forced all users to upgrade (to guarantee a constant revenue stream). Because the profit motive in open software does not exist, Unix/Linux creates new formats only when there is a significant advantage to the user in doing so.
A consequence of what you’ve written is that, whereas Microsoft has a financial commitment to the way in which it architects its own software, the Unix crowd has an ideological commitment to the Unix philosophy.
The Unix crowd has essentially made a bet that system software is a solved problem, and that the answer is either Unix & C or something close enough as makes little practical difference (e.g., Plan 9 and Go). If they’re wrong — and the Lisp folks are pretty sure that they’re wrong — then the Microsoft way has the better upgrade path.
Even assuming that Microsoft’s changing technologies are the churning of standards, this churn still means that Microsoft’s technology is moving with respect to whatever The Right Thing happens to be for today’s platforms. The Unix crowd, on the other hand, stands still no matter where that Right Thing is. Therefore, Microsoft has a better chance of finding that optimum, if only by random walk.
And Microsoft’s changes aren’t entirely churn. Look at PowerShell. Check out Microsoft Labs. Microsoft has a lot of smart people, and they’ve made some progress against the sociopaths.
I agree.
I find Windows terribly frustrating to work with. Things shift so quickly that the only stable products are the ones produced by Microsoft itself.
After a couple weeks of hard work, I got a LaTeX document editing system and a C++ graphical development environment up and running on a box running Fedora. I started working on a scientific simulation program, and a paper about it. Then I upgraded Fedora. The absolute whizz-bang geniuses at Gnome had apparently decided that PCs should actually be iPads, or iPods, or something. Something you play with your thumbs, I guess. Kids today!
Anyway, none of it works any more, so I am back on my Windows box. Maybe I’ll have another couple weeks to dump into the Black Hole Of Unix later this year.
Things evolve more rapidly on the Microsoft side because this evolution is a business necessity for Microsoft. If they didn’t produce new software platforms at a regular rate, your need to purchase the latest update would be far less. Your knowledge of platform N-1 thus becomes valueless rapidly. The Microsoft ecosystem is a great place for people who love shiny new things. I’m two or three platforms behind now, and I’m not sure I shall make the effort to catch up.
Change comes slowly to the linux community, because it is full of amateur hackers who learned C because it is simple, and more experienced developers who got tired of the Microsoft knowledge treadmill. This first group hates change because it makes them think. They like the command line because it hasn’t changed much in 30 years. The second group moves linux along at the rate at which really useful new things come to their attention.
I wish I could say that either the MS or linux ecosystem put their users first, but I can’t.
Rudolf: I agree that Microsoft has created some innovative products, and PowerShell is a good example. PhotoSynth is another. Even some of the changes that are churn from my perspective make life easier for many developers.
I have almost totally migrated from Windows to Linux. I program on multiple platforms and in multiple languages. Microsoft cares not one whit about external applications and developers other than as another revenue stream. Do you want the developer network? $$ Do you want the little Windows Logo? $$$ Do you want really useful information on Windows development? $$$$ Etc., etc. And yes, their internal apps will always outperform, because they have set up both formal and informal means to promulgate changes internally.
And think about this… China’s government has full access to the Windows source code.
If you are a developer, think about back-porting. That means you can develop some kind of application, and its code can end up being back-ported into the higher-level system. Backdoor, anyone? Now I don’t know that this will happen, or that it is being done, but the capability is certainly there.
As to console apps, I have written many small apps, and a few large ones. Generally, if well written in generic C, they will port across Windows, Unix, Linux, Solaris, and a number of other OSes, as long as the base libraries are supported by the target OS. It may require a different header or a few different libraries depending on the configuration, but generally it is trivial to port such apps between systems, even to embedded systems.
Make the library links usable, add the required headers, and recompile. However, in Windows 7 you will find that many of the security restrictions will bite you, preventing the console app from running without some heavy-handed manipulation of the security features. I don’t know about Windows 8, but I suspect it will be even more restrictive.
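As an illustration of what “generic C with a different header here and there” can look like (a minimal sketch of my own, not anyone’s production code; the only platform-specific piece is the sleep call):

    /* Portable console app: ANSI C everywhere, one #ifdef for the OS-specific bit. */
    #include <stdio.h>

    #ifdef _WIN32
    #include <windows.h>
    #define sleep_seconds(s) Sleep((s) * 1000)  /* Win32 Sleep() takes milliseconds */
    #else
    #include <unistd.h>
    #define sleep_seconds(s) sleep(s)           /* POSIX sleep() takes seconds */
    #endif

    int main(void)
    {
        printf("waiting one second...\n");
        sleep_seconds(1);
        printf("done.\n");
        return 0;
    }

The same source compiles unchanged under MSVC, gcc, or clang; only one header and one macro differ per platform.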
To a degree it’s not just Microsoft; this is a difference between open source software and commercial software in general. If you release a new version of a piece of software, people are less likely to buy it if you do not make some easy-to-notice change: a new colour scheme (blue, it’s the new red), moved-around buttons, a feature checklist so you can show all the things that changed from the previous version. And if you can make some new format that is completely incompatible with the old version, all the better. This is an issue with the way people buy software as much as how people MAKE software. OSS is about making software to DO things, to get work done; the software is evaluated on its ability to solve a problem. Commercial software is about selling a product; the software is evaluated on its ability to get bought, and actually doing the job is just one of many possible strategies to that end.
We usually think about planned obsolescence in regard to physical things, but it applies to software as well. http://www.youtube.com/watch?v=251qoGOqpdk
I think planned obsolescence, while in some ways deplorable, is also one of the major things that drives progress and change, so it’s also a force for good. Though this is migrating into philosophical territory.
Nonetheless, good topic of conversation, John.