The hard sciences—physics, chemistry, astronomy, etc.—boasted remarkable achievements in the 20th century. The credibility and prestige of all science went up as a result. Academic disciplines outside the sciences rushed to append “science” to their names to share in the glory.
Science has an image of infallibility based on the success of the hard sciences. When someone says “You can’t argue with science,” I’d rather they said “It’s difficult to argue with hard science.”
The soft sciences get things wrong more often. Even sciences such as biology and epidemiology — soft compared to physics, but hard compared to sociology — often get things wrong. In the softest sciences, research results might be not even wrong.
I’m not saying that the softer sciences are not valuable; they certainly are. Nor am I saying they’re easier; in some sense they’re harder than the so-called hard sciences. The soft sciences are hard in the sense of being difficult, but not hard in the sense of studying indisputably measurable effects and making sharp quantitative predictions. I am saying that the soft sciences do not deserve the presumption of certainty they enjoy by association with the hard sciences.
There’s a similar phenomenon in computing. Computing hardware has made astonishing progress. Software has not, but it enjoys some perception of progress by association. Software development has improved over the last 60 years, but has made nowhere near the progress of hardware (with a few exceptions). Software development has gotten easier more than it has gotten better. (Old tasks have gotten easier to do, but software is expected to do new things, so it’s debatable whether, all told, software development has gotten easier or harder.)
Reminded me of this one: http://bits.blogs.nytimes.com/2011/03/07/software-progress-beats-moores-law/
I think the perception of infallibility is unwarranted, both in the case of the hard sciences (think astronomy before Einstein: hard, but wrong) and in the case of hardware (think of hardware bugs like the Pentium FDIV bug).
I have long felt that the terms “hardware” and “software” were naive at best.
Supposedly the “hardware” was fixed, or at least difficult to change, while the software was more fluid and (at least in some managers’ minds) easy to change. Yet today our hardware changes rapidly, jumping through hoops to make sure it works with 30-year-old (in the case of x86) or older (in the case of IBM Z-series, aka great-grandson of the 360) software.
Also, the FDIV bug was, essentially, a software bug, in the “compiler” for the HDL description of the divider, or a “typo” in the source of the table, if I recall correctly.
As such, software-style quality assurance tools should have prevented it.
A better example would be the 32-bit multiply bug, which was in fact caused by hardware issues (pattern sensitivity due to signals inadvertently coupling).
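To make the point about software-style quality assurance concrete, here is a minimal sketch of the kind of check meant: an exhaustive cell-by-cell comparison of a generated lookup table against its mathematical definition, which is cheap at these table sizes and would flag a missing or mistyped entry. This uses a made-up toy table and placeholder formula, not the real Pentium SRT quotient-digit table or Intel’s actual verification flow; all names here are hypothetical.

```python
# Sketch only: verify a generated lookup table against its reference definition.
# The formula below is a toy stand-in for the real SRT quotient-digit rule.

def reference_entry(p, d):
    """Hypothetical reference rule: the 'correct' value for cell (p, d)."""
    return (p * 7 + d * 3) % 5

def build_table(rows, cols):
    """Stand-in for the table produced by the table-generation step."""
    table = [[reference_entry(p, d) for d in range(cols)] for p in range(rows)]
    table[2][3] = 0  # inject a 'typo' analogous to the missing FDIV entries
    return table

def verify_table(table):
    """Exhaustively compare every generated cell with the reference definition."""
    errors = []
    for p, row in enumerate(table):
        for d, value in enumerate(row):
            expected = reference_entry(p, d)
            if value != expected:
                errors.append((p, d, value, expected))
    return errors

if __name__ == "__main__":
    for p, d, got, want in verify_table(build_table(8, 8)):
        print(f"cell ({p},{d}): got {got}, expected {want}")
```

Running the sketch reports the injected bad cell; the same brute-force idea scales to the few thousand entries of a real divider table.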
Many of us in the digital humanities have been riding this hobby horse for a long time. Those of us who take a formal approach to language consider the entire enterprise of understanding the humanities as an empirical one. Or should be. And can be.
Have you read this piece by chance? Hedges, L. V. (1987). How Hard is Hard Science, How Soft is Soft Science? The Empirical Cumulativeness of Research. American Psychologist, 42(5), 443-455. doi:10.1037/0003-066X.42.5.443
Adam: Thanks. I hadn’t seen that.