Sunday, 8 January 2017

The case for informed wireheading

Imagine an optometrist giving you an eye exam. You're sitting in a chair looking through trial frames. The optometrist picks a lens, slots it into the frame and asks you to look at an eye chart. You decide whether your vision is better or worse with the new lens and the process repeats, potentially starting from a new baseline.

Now imagine a version of this exam that involves a "neurometrist": a neuroscientist adjusting your state of mind with a combination of drugs, transcranial stimulation and/or brain implants, and then alternately showing you what the world feels like with and without the adjustment. If you consider your altered state of mind an improvement, your neurometrist locks it permanently in place and the process is repeated.

People make bad choices with the mind-altering technology available today, so we should generally be wary of incremental hill-climbing when it comes to our psychological state. For some people (though surprisingly not everyone), taking heroin temporarily feels like an improvement. You could picture an evil neurometrist injecting people with heroin, slowly leading them into a life of misery and addiction.

So, for this thought experiment, assume that our neurometrist is not evil and, in addition, smart enough to avoid letting you slide into a local optimum. You can trust that the gains you make in the procedure will be permanent, and that there are no states you'd be likely to find that are substantially more pleasurable or subjectively meaningful than the one you find yourself in after the procedure.

But while our neurometrist is not evil, he is also thoroughly agnostic about what constitutes a meaningful life. If you would truly prefer to be a brain in a vat having its pleasure centers stimulated rather than living a life of meaningful toil, then that's where you'll end up.

So the question is: How should you choose whether to accept a change that the neurometrist presents to you? Should you simply go with what feels better in each step, or reflect whether a given change aligns with your values and notions of a meaningful existence?

I would argue that you should go with what feels better, and that you should not be concerned with whether the change aligns with your values. Your values may simply be parochial byproducts of human evolution; even our seemingly eternal values may be nothing more than particular strategies for survival and reproduction. In that case, why should the loss of your values be any more meaningful than the loss of, say, the physical body, or the leaving behind of traditional social structures?

If on the other hand, some values are intrinsically meaningful in the sense that sustained meaning can only be derived by instantiating those values in your life, then the process the neurometrist would put you through should steer you towards those values.

Thursday, 22 September 2016

It Frequently comes down to Bayes

Let's say you give a coin-flip a 50% probability of coming up heads. If you're a frequentist, that number means that given a large enough number of coin throws, you'd expect roughly half of them to come up heads. If you're a Bayesian, you're saying that if you had to bet, you would be willing to take anything better than even odds.
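The two readings can be sketched in a few lines of Python (a toy illustration of my own, not from any statistics library): the frequentist meaning as a long-run frequency over simulated flips, and the Bayesian meaning as the break-even point of a bet.

```python
import random

random.seed(0)

# Frequentist reading: "50%" describes a long-run frequency.
flips = [random.random() < 0.5 for _ in range(100_000)]
long_run_frequency = sum(flips) / len(flips)  # hovers around 0.5

# Bayesian reading: "50%" describes fair betting odds.
def expected_profit(p_heads, payout):
    """Expected profit of staking 1 unit on heads at the given payout."""
    return p_heads * payout - (1 - p_heads) * 1

break_even = expected_profit(0.5, 1.0)    # at even odds the bet is worth exactly 0
worth_taking = expected_profit(0.5, 1.1)  # anything better than even is positive
```

Both numbers are "50%", but one describes an ensemble of repeated throws while the other describes a single decision.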

When I first learned about this, I couldn't really see a difference between the two interpretations. I assumed that the schism between Bayesian and frequentist statisticians was some arcane conflict within statistics, likely driven by purely cultural and historical factors that don't really have a strong bearing on the actual content of the science. Since then I've come to wonder whether instead, the difference between Bayesianism and frequentism may reflect a much deeper paradigmatic difference in how people think about reasoning and knowledge.

Specifically, I hypothesize that a commitment to Bayesianism reflects a belief that we are scientific beings by nature, in the sense that the way we navigate the world is in principle not different from the way science proceeds. Something makes sense if and only if it makes scientific sense.

This doesn't mean that in order to convince a Bayesian of something, you need to have your idea published in a peer-reviewed journal. In fact, quite the opposite is true: Bayesians are fine with letting fuzzy evidence and vague gut feelings shape their beliefs. As a Bayesian, in order to evaluate a hypothesis you need a prior probability estimate, and any such estimate will eventually bottom out in guesswork anyway. We're heuristic creatures doing the best we can, and science is just a way of being a bit more accurate about the things we do naturally.

A frequentist will tend to draw a stronger distinction between the scientific and the non-scientific mode. They don't like using priors (since intuitive guesswork doesn't seem scientific), which means they can't actually compute how likely a given hypothesis is. Since the direct approach to hypothesis evaluation is not available, indirect methods are used, such as establishing p-values against a selected null hypothesis. The strength of a belief in a hypothesis may be informed by such results, but the process by which this happens is considered personal, social or, at the very least, beyond formalization (lest we end up in Bayesian territory).
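To make the contrast concrete, here is a small Python sketch using made-up data (60 heads in 100 flips, numbers chosen purely for illustration): the frequentist computes a p-value against the null hypothesis of a fair coin, while the Bayesian updates a uniform prior and reads off the probability that the coin favors heads.

```python
import random
from math import comb

n, k = 100, 60  # hypothetical data: 60 heads in 100 flips

# Frequentist: probability of seeing k or more heads if the coin were fair.
# The hypothesis itself never receives a probability.
p_value = sum(comb(n, i) for i in range(k, n + 1)) / 2**n

# Bayesian: start from a uniform Beta(1, 1) prior over the bias,
# update to a Beta(1 + k, 1 + n - k) posterior, then ask directly
# how likely it is that the coin favors heads.
random.seed(0)
draws = [random.betavariate(1 + k, 1 + n - k) for _ in range(50_000)]
p_heads_biased = sum(d > 0.5 for d in draws) / len(draws)
```

With these numbers the p-value comes out just under 0.03, while the posterior puts the probability of a heads bias at roughly 0.98: two answers to two different questions, since only the Bayesian one is a statement about the hypothesis itself.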

So for the frequentist, science is both more remote and more disconnected from our everyday lives and reasoning. It is a realm of pure transcendental knowledge, available through proper ritual, but ultimately infinitely remote and not directly accessible to individual and personal concerns. The connection between the truth of science and the truth of society is to be made via social means, i.e., the establishment of a social institution of science whose claims are taken to be authoritative, in the same way that claims made by strong or popular people may have been taken to be authoritative before the arrival of science. Being wrong then means disagreeing with this institution.

This stance introduces a difficult challenge for the frequentist, since science has clearly changed over time. If past science as a social institution was not trustworthy, it is hard to believe that the future will consider the science of our day trustworthy. Should we then not believe in today's science?

One possible answer to this challenge is to embrace epistemological anarchism and simply claim that the dynamics of paradigm change indicate limits to methodical knowledge that are pointless to talk about. This perspective is especially attractive because it reinforces the frequentist's original stance: that belief change is a personal or social matter beyond formalization. The same view is expressed less squeamishly in postmodernism, in which knowledge is treated as a social phenomenon. Here, taking knowledge seriously on its own terms is a dangerous mistake (lest you be like those horrible people in the past, who were wrong but convinced of their beliefs).

It does take a heroic effort to discount the common-sense evidence against this point of view, though. All intuition indicates that getting things done outside a primarily social arena (where convincing others is enough) requires at least a tacit commitment to a more classical notion of truth. The snake bites its own tail here, though: since decoupling intuition from formal reasoning was the initial thrust that led us down this path, completely disconnecting the two feels like the completion of a project rather than a descent into absurdity that should trigger a reevaluation of our premises.

The project of science ceases to be coupled to the project of rationality, a technological extension and formalization of the brain's natural capacity for epistemological evaluation. Instead, it becomes a kind of social game that must be played by its own rules, until those rules change by a process that cannot be formally understood. Doing science is replaced by doing "theory", which is justified not by real applicability, but by the sheer weight of the social institution that has produced "theory" in the past. Once formal thought is disconnected from intuitive understanding, having a social institution is a good enough reason for a language game.

If this process starts to seem too empty, the void can be filled by evoking pre-rational motivations. This is reflected by cynical perspectives that use power as the lens through which to view human behavior (which includes the creation of theory). Another attempt to fill the void is the embrace of social activism. This seems superficially positive, but positive social activism requires strong moral foundations. In the absence of such foundations, the opposition cannot distinguish a legitimate movement for social change from a tribe that's beating the war drums. Those with legitimate social concerns get attention and support from an intellectual elite in exchange for providing meaning to people who play empty language games, but the marginalized pay the price of increased difficulty in building strong moral foundations.

Saturday, 10 September 2016

I vs S

One of my favorite random tidbits of information on the internet is that your preference for analysis or algebra (assuming you have one) may predict how you eat your corn. Algebraists tend to follow the patterns inherent in mathematical structures and, so goes the theory, also tend to follow the structure of the corn while eating, which means they eat in neat, typewriter-style rows. Analysts don't care about the structure and simply turn the corn while eating, which is arguably more efficient.

Being the pattern-obsessed, algebra-loving, typewriter-style corn eater that I am, I think this observation can be explained in terms of the perception axis of the Myers-Briggs Type Indicator: Intuition vs Sensing, which reflects your preference for patterns vs ground-level data. The MBTI is a typology based on Jungian psychology. It's not as reputable as the Big Five and seems to be more popular in self-help and business circles than academic ones, but at least intuitively (hah!) I feel that it captures an important distinction between how people think and what they pay attention to.

Only about 30% of the population are I types. I don't know if IQ has any bearing on the distribution, but I would guess that the percentage of smart people who are I types is also around a third. I think because of this imbalance, quite a lot of S folks can get away with not taking I-style cognition seriously. In system-builder style professions, I think that Is who can keep their propensity for generalization in check have an edge, unless the time horizon for measuring success is short.

As I gain more professional experience, I find that my I-type skills turn out to be my most valuable strengths, primarily because there is a real need for them in my field and they are relatively scarce. It can be tricky though to find a work environment where such skills are both needed and appreciated.

Saturday, 27 August 2016

Real lives are at stake, this is no time for abstract thought!

Here's an archetype of a conversation I've had in variations throughout my life:

Friend: I believe X is the right thing to do so we can all be happy. 
Me: Ok, but what if X were a Y, or if the situation were Z? Or what if the people you disagree with were advocating a minor variation of X, would you still agree? 
Friend: I distrust and am personally insulted by your attempts at analysis. You sound like one of those people who doesn't want others to be happy. 
Me: No, no, no. I like the happiness part, I'm just not sure about your specific suggestion of X. I'm simultaneously trying to find out why you think X will be effective and trying to relate X to a universal moral framework to ensure that doing X would be fair.
Friend: Real lives are at stake, this is no time for abstract thought!

I'm fascinated (in the car crash sense) when people reject analysis because of the real that can't be captured by analysis. It's a fair enough point that experience is richer than the symbolic, but if not for concepts, how would you talk about the real? And if you're using concepts anyway, wouldn't it be better if they were at least marginally consistent?

Of course the tumblr activism take is that consistency is just The Man's way of trying to oppress you, and that demands for logic and argument are really just expressions of power by the privileged within a system of deep-seated and invisible oppression. Even if we were to grant that point, what's the answer? Abandoning logic to give completely free rein to your biases? Creating a new system not based on logic that nobody has thought of, but that works much better at modern scale? Please try that over there, and if I don't see ominous smoke clouds within a year, I may go check it out.

While I don't want to overinflate the importance of tumblr activism, it is useful in that it expresses, with unusual clarity, a common tendency that many people share to some extent. I'd like to dig deeper into that in future posts.


Saturday, 30 July 2016

Coalition building across ideological octaves

I think one of the problems in political discourse is that we have a tendency to oversimplify complex issues by mapping a wide variety of disparate viewpoints onto a small number of political factions.

I think this is the result of our tendency to be tribal creatures first and rational creatures second. The primary distinction is whether you're in my in-group or my out-group. Rational discourse is secondary and also optional. Especially in political interactions, people will generally scrutinize you for tribal markers to find out if you're one of the Good People. This may take precedence over other concerns, such as trying to accurately understand your point of view.

Expressing political ideas serves as a flag that signals group affiliation, in addition to being a contribution to public discourse. People may emphasize these two functions differently. A lot of politicking revolves around strategically choosing where to plant your flag, keeping in mind how other coalitions will react to you. For example, if your coalition is unpopular, you may choose to rally around an idea that most people will agree with. This forces your opponents to deny the stance, which goes some way towards delegitimizing them. Alternatively, if your cause is generally viewed as just, you may choose a slightly unreasonable idea as a flag, so you can more easily identify and target people who stray from the orthodoxy. (I think this is one of the reasons why social justice and nerds often don't mix.)

In some cases, the same flag can match coalitions that don't fundamentally have much in common, but that come to similar conclusions on highly visible issues. For a group, this creates the choice of whether to explicitly distance itself from nearby groups at the cost of losing coherence and visibility in public discourse, or to risk being associated with those groups.

I think an example of this can be found in libertarianism, where very different motivations can lead one to argue libertarian viewpoints, but the general public mostly sees a large ideological blob. On the one hand, libertarian ideas derive from a position that seeks freedom from coercion: a classical liberal project that aims to establish a society that is held together by win-win cooperation and only resorts to coercive methods in extreme cases. Such a viewpoint is compatible with welfarist ideas as long as they have an opt-in component. As an example, one may imagine a libertarian charter city that uses European-style welfare models, but where people are free to leave, limited only by contractual obligations they explicitly agreed to. On the other hand, libertarianism can also be an expression of the concrete desire to abolish existing coordination mechanisms and sharing arrangements. These two kinds of libertarianism, one which seeks to rebuild a strongly cooperative society on the basis of non-coercion and the other which would prefer to see society reshaped along less cooperative and more individualistically competitive lines, are very different in nature, yet look a lot alike when seen from outside.

I think a similar thing is happening with neoreaction and the alt-right. Neoreaction could be a post-postmodernist rediscovery of order and structure, informed both by the failures of naive, rigid modernism and by the self-serving and dangerously ineffectual dispersions of postmodernism. The alt-right, on the other hand, is a more old-school nationalist or ethnocentric movement. From the outside they sure look a lot alike.

Blog - Reactivate

I've decided to try and reanimate this blog.

I've been meaning to put my writing into a more organized and less transient format than Facebook rants and reddit comments. Given the sheer amount of written content I've produced online, I find it surprisingly hard to sit down and write something that is not in reply to someone else, so I'm going to take it easy and try to start out with 200 words a week.

Since I'm too lazy to find a new name for a blog, I've decided to reactivate this old blog of mine (which, amazingly, 40,000 people have visited in the meantime). The content will be different though: instead of torrenting tips, you will find my armchair musings on topics such as technology, futurism, consciousness, spirituality, game theory, video games, politics and whatever else comes to mind.

Tuesday, 3 January 2012

Buy steam games from abroad

I recently had trouble buying games on steam while traveling. The problem was that steam would automatically redirect me to the European store, where I was unable to buy games using my UK steam account.

I contacted support and they replied by giving me this link, which takes you directly to the UK steam store.

I assume that you can switch out "GB" for a different country code to go to a different country's steam store.