I have a conjecture regarding statistical methods:
The probability of a method being used drops by at least a factor of 2 for every parameter that has to be determined by trial-and-error.
A method could have a dozen inputs, and if they’re all intuitively meaningful, it might be adopted. But if there is even one parameter that requires trial-and-error fiddling to set, the probability of use drops sharply. As the number of non-intuitive parameters increases, the probability of anyone other than the method’s author using the method rapidly drops to zero.
John Tukey said that the practical power of a statistical test is its statistical power times the probability that someone will use it. If the conjecture above holds, practical power therefore drops at least exponentially with the number of non-intuitive parameters.
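Here is a back-of-the-envelope sketch of that arithmetic, assuming exactly the factor-of-2 drop per parameter; the function name and the starting use probability of 1.0 are mine, purely for illustration.

```python
# Rough illustration of the conjecture's arithmetic, not a claim about any real method.
# practical power = statistical power * P(someone uses the method),
# where P(use) is assumed to halve for each trial-and-error parameter.

def practical_power(statistical_power, n_nonintuitive_params, base_use_prob=1.0):
    """Practical power under the factor-of-2 conjecture."""
    use_prob = base_use_prob * 0.5 ** n_nonintuitive_params
    return statistical_power * use_prob

for k in range(5):
    print(k, practical_power(0.9, k))
# 0 0.9, 1 0.45, 2 0.225, 3 0.1125, 4 0.05625
```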
Related post: Software that gets used
This is a good point. Additive models come to mind as an example. A potential counter-example is a histogram (and similar non-parametric density estimators). It takes a bit of trial and error to set the bin width, but researchers use histograms all the time.
I would say that bin count or kernel bandwidth in density estimators is a fairly intuitive parameter, which is why it doesn’t impede adoption.
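To make those two parameters concrete, here is a minimal sketch using numpy and scipy on synthetic data; the particular values (30 bins, bandwidth 0.3) are arbitrary choices of mine, not recommendations.

```python
# The two parameters under discussion: bin count and kernel bandwidth.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.normal(size=500)

# Bin count: intuitive, and easy to adjust by eye.
counts, edges = np.histogram(data, bins=30)

# Kernel bandwidth: also interpretable, as a smoothing length scale.
# scipy falls back to Scott's rule if bw_method is not given at all.
kde = stats.gaussian_kde(data, bw_method=0.3)
grid = np.linspace(-4, 4, 200)
density = kde(grid)
```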
Maybe worth noting that sometimes a parameter can become not-a-parameter. If the method is “choose the L2 penalty term by cross validation”, then that term is no longer an input parameter.
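For example, scikit-learn's RidgeCV folds the choice of the L2 penalty into the fit itself, so the user never sets it by hand. A minimal sketch on synthetic data, with an arbitrary candidate grid:

```python
# "Choose the L2 penalty by cross validation": the penalty is selected
# internally, so it is no longer an input the user has to fiddle with.
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.0, 0.0, -2.0, 0.5, 0.0]) + rng.normal(scale=0.1, size=200)

# The grid of candidate penalties is arbitrary; cross validation picks one.
model = RidgeCV(alphas=[0.01, 0.1, 1.0, 10.0]).fit(X, y)
print(model.alpha_)  # the penalty chosen by cross validation
```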
A corollary is that whatever method you learned in school is likely to seem more intuitive than something you learn later — even if both have the same number of parameters.
The same goes for programming languages and their functions. If a function has three boolean parameters, which one is which?
Strong typing helps, especially if you enforce a rule, somehow, that a function should take no more than one argument of each type. Then calls are resistant to parameter reordering, since swapping two arguments produces a type error.
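A minimal sketch of that idea in Python, assuming a static checker such as mypy is run (plain Python won't stop you); the copy_tree function and its flag types are hypothetical, made up for illustration.

```python
# Replace three bare booleans with three distinct types, so that swapping
# arguments is a type error under a checker like mypy. The rule is applied
# only to the flags here; src and dst are still both strings.
from enum import Enum

class Overwrite(Enum):
    NO = 0
    YES = 1

class Recursive(Enum):
    NO = 0
    YES = 1

class FollowLinks(Enum):
    NO = 0
    YES = 1

def copy_tree(src: str, dst: str,
              overwrite: Overwrite,
              recursive: Recursive,
              follow_links: FollowLinks) -> None:
    ...  # hypothetical function, for illustration only

# copy_tree("a", "b", Recursive.YES, Overwrite.NO, FollowLinks.NO)
# mypy flags the swapped arguments above; three bare booleans would slip through.
copy_tree("a", "b", Overwrite.NO, Recursive.YES, FollowLinks.NO)
```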
I think DavidC is right. A method that has a non-intuitive parameter demands another method for selecting that parameter.
This conjecture also makes me think about Bret Victor’s Ladder of Abstraction (http://worrydream.com/LadderOfAbstraction/) which gives lots of ways to experience these parameters.
On the other hand, a parameter that has to be determined is also a parameter that can be tweaked until the model's predictions match your expectations. That sometimes helps a method's popularity.