Newcomb’s Problem is a very divisive thought experiment in decision theory. The tl;dr version is:
A supreme intelligence, Omega, comes down from the sky, and offers you two boxes, labeled A and B. In box A, Omega has placed $1000.
Omega makes a prediction about whether you will open only one box, or both boxes. If Omega predicts that you will open both boxes, then box B is empty. If Omega predicts that you will open only one box, then Omega places $1000000 in box B.
In over a million trials, Omega has never incorrectly predicted someone’s decision. (Supreme intelligence is supreme!)
Omega places both boxes before you, and flies away to torment some other decision theorist.
Do you open both boxes, or just box B?
Well, you figure, Omega is gone now, its prediction already in the past. At this point, if I open both boxes, then I’ll get either $1000 or $1001000. If I open only box B, then I’ll get either $0 or $1000000. Regardless of Omega’s prediction, I’m always better off opening both boxes.
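The tension here can be made concrete with a quick expected-value calculation. A minimal sketch, assuming the predictor is right with some probability p close to 1 (the exact figure below is made up; the story only says Omega has never been wrong in over a million trials):

```python
# Expected payoffs in Newcomb's Problem, assuming Omega predicts
# correctly with probability p. The value of p is a hypothetical
# stand-in for "never wrong in over a million trials".
p = 0.999999

# One-boxing: with probability p, Omega predicted it, so box B holds $1000000.
ev_one_box = p * 1_000_000 + (1 - p) * 0

# Two-boxing: box A's $1000 is guaranteed; with probability (1 - p),
# Omega wrongly predicted one-boxing and box B also holds $1000000.
ev_two_box = 1_000 + (1 - p) * 1_000_000

print(f"one-box: ${ev_one_box:,.0f}")  # one-box: $999,999
print(f"two-box: ${ev_two_box:,.0f}")  # two-box: $1,001
```

The dominance argument is still valid for each fixed state of the boxes; it just loses badly in expectation when the prediction is correlated with your choice.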
So you open both boxes, and find box B empty.
You lament that Omega seemingly rewards “improper” decision theorists, and treat yourself to $1000 worth of whatever. Since you’re a dedicated rationalist, you probably make a bunch of sound investments with it and end up doing ok anyway.
Good for you.
Another approach is to commit to opening only box B, and stick to that commitment. Decision theory be damned; you want a million bucks. Before doing anything else, you set box A on fire, just to be sure. You open box B, and get a million dollars.
What use is a decision theory if it doesn’t optimize outcomes? That’s not to say that it’s worthless to study the science of priors and statistics and possible outcomes. Certainly, in most situations, we are not dealing with supreme intelligences handing out money capriciously. But in the real world, there are occasionally situations where the ends matter more than being theoretically correct.
On the other hand, perhaps a purity of theoretical correctness is an important part of one’s self-image. Maybe it imparts emotional satisfaction that a million dollars just can’t buy.
It’s perfectly fine to be a two-boxer. In a way, I respect it, like a monk or an artist. There’s a beauty in that purity.
But still. Fuck that, I want the money. I can be a monk later.
Most programming languages are to some extent art projects. There’s a lot of subjectivity to them. They exist to seek a beautiful ideal.
Most programming languages are also to some extent practical. You can do real stuff with them. They exist to make stuff.
The difference is, when tradeoffs must be made between “beautiful ideal” and “make stuff”, what wins?
If our ideal is in fact beautiful, and if we are skilled in our pursuit of it, then one might expect that these tradeoffs will be infrequent. The most beautiful programming language is the one that is the most pragmatic.
But like boxes of money, programming languages do not exist in a vacuum. We have imperfect vision, and may find ourselves heading down a blind alley to explore what we hope will be beautiful. In the making of things, we may find that it was a mistake, and that the only way out is to choose to either make the language more beautiful (and lose pragmatic value for its current users) or preserve pragmatic value (and give up on some potential beauty).
Both choices are fraught with hazard. The first can alienate the community. The second can lead to a mongrel language with warts and bad parts.
Making software is the purpose of a programming language. That is the winning condition. Valuing something other than that seems to me akin to opening both boxes, because decision theory said so.
In box A, Omega has placed beauty. In box B, Omega has placed pragmatism and community, but only if Omega predicts that you will choose only box B. Omega has never been wrong.
Make your choice.
Except, in this version, you can open B, and get pragmatism and community. But once B is open, if you then choose A as well, the community will eventually be lost.
I can see the value in a beautiful thing. But for myself, willy-nilly, I like useful software and happy communities more than programming languages for their own sake.