Projecting Conciliation

In some cases, getting someone to think the way you do – or changing your own mind – is especially important: besides the usual value for your own correct reasoning, there's an immediate decision at hand that depends directly on the outcome of a conversation or argument.

Being non-confrontational is really useful in these situations, but it requires particular sharpness and preparedness. You need to understand the issue as well as possible, plus a quality I can't quite define but have observed recently… it's a mixture of believing in the best cognitive intentions of your counterpart, trusting their smarts, and being genuinely curious. There may be other factors that further help you get into the right frame of mind.

The behavior I've observed in people who work like this – I'm not particularly good at this approach, although I've somehow pulled it off on some rare occasions – is an apparently open but directed curiosity that drives everyone into the meat of the matter and enlightens them… and is equally likely to change anyone's mind, except that the driver, the one projecting conciliation, understands the issue deeply enough to be especially likely to already be close to a good answer.

This kind of attitude, which also stays very calm throughout the conversation, is the best I've seen for steering important meetings in the right direction – whatever that might be. It costs a lot of effort, but practice makes perfect… perhaps trying it on non-crucial subjects from time to time is a good idea.

On Sharing Knowledge

We’re filled with many amazing abilities which we consider mundane because we’re so used to them. Some of them are mentioned and described here.

Sharing knowledge is one such ability – if only a few people were capable of doing it, that group would be inconceivably more powerful than the rest of humanity combined. In a matter of perhaps months, they'd accumulate experience beyond what anyone else could, apply it accordingly, and earn the capacity to outdo the rest of existence.

Now, that scenario is perhaps too extreme – and too far removed from our daily lives. But even a slight difference in the capacity to share knowledge can compound heavily over time; this is directly observable in the difference between teams whose members jealously guard their knowledge and position and teams whose members are more carefree with their information and techniques. It is also observable in the difference between someone dabbling blindly in a particular discipline (or just watching others practice it) and someone with one or more mentors.

Transferring knowledge through text, sound or performance – particularly with the more deliberate variants (dissertations, presentations, essays, tutoring, mentoring, demonstration) – pushes the learner's progress in ways that can be non-linear, helping people develop previously unsuspected insights.

The achievement of these insights is so pervasive among hackers – a knowledge-work-oriented population if there ever was one – that there are terms for different textures of insight: “zenning” and “grokking”. There are people who consistently help others achieve insights, and they attain folk-hero status among hackers. Brian Kernighan and Donald Knuth are two superlative examples of this breed.

We should, then, as a society, embrace, support and boost efforts aimed at sharing knowledge. We do try, really… the school system facilitates mass, intergenerational knowledge transfer with some success.

There's been a wave of internet users turned teachers, mentors and sharers that gives me hope. Systematic efforts like Open Source Ecology and Khan Academy fill me with hope. The sometimes pell-mell efforts by individuals sharing information on subjects as diverse as personal appearance and cooking make me ecstatic.

Yes, books and other media are already there. But this is a new kind of effort, one that takes the power of the internet and the services currently running on it (such as search and hosting) to reach an amazing number of people, who can remix, enrich and re-share knowledge with thousands of points of view from all walks of life. The potential for amazing results is huge, as the knowledge being shared grows exponentially.

There is also potential for things to go wrong, which I will write about in the future. As a rule, you should avoid sharing dangerous information: how to build explosives, break into computer systems, or otherwise do harm.

For everything else, please, do share. Teach, learn. Because these effects accumulate, our future can be exponentially better – or worse – depending on this.

Avoiding Arguments

Sometimes arguments are not crucial to your ends.

As a means to get people to understand you, correct your ideas, help you shape the lens through which you see the world, arguments are amazing. But sometimes you just need to get something done.

Sometimes, you're committed to a particular opinion, and are certain enough that you're right that you don't want to waste time arguing.

I put a high price on certainty – the more certain you are, the more you should be willing to bet, be it money, comfort or the possibility of winding up working twice as much if you're wrong. If you're really certain, sometimes you just have to put your money where your mouth is and commit. Offer to carry the burden.

“I'll have our backs if something goes wrong”

“I'll be responsible for this, if we do it this way”

“If we go down this path it will be so much easier that I'm willing to take a bigger chunk of the work”

At other times this won't work – mostly because someone else is equally invested in a way of doing things that is incompatible with yours. Offer the other person the chance to take responsibility; put them on the spot. If they don't step up – well, I sure hope you're right, because things will most likely go your way.

Hyperbolic Discounting

There are many ways to observe and explain the behaviors I described in the previous article. Hyperbolic discounting is one way to think about the issue.

On the one hand, we have the observation that people tend to push the things they don't like further into the future, which is roughly the insight that thinking in terms of “time preference” gets you.

On the other, thinking about hyperbolic discounting allows you to analyze the tendency for people to choose what they'd like now, even in exchange for things they won't want later. In other words, we tend to trade “good times now” for “bad times later”, such as partying hard in exchange for feeling beat, cranky and in pain tomorrow.
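
To make that concrete, here's a toy sketch in Python – the amounts, delays and rates are made-up assumptions for illustration, not data from any study – of the standard one-parameter hyperbolic model, V = A / (1 + kD), compared with exponential discounting. The hyperbolic curve is what produces the characteristic reversal: a smaller-but-sooner reward wins today, yet pushing both options equally far into the future flips the preference.

```python
# A toy comparison of hyperbolic vs. exponential discounting.
# All amounts, delays and rates are illustrative assumptions, not data.

def hyperbolic_value(amount, delay_days, k=0.01):
    """Perceived value today of `amount` received after `delay_days`,
    using the one-parameter hyperbolic model V = A / (1 + k*D)."""
    return amount / (1 + k * delay_days)

def exponential_value(amount, delay_days, daily_rate=0.002):
    """The same question under exponential discounting, V = A * (1 - r)^D."""
    return amount * (1 - daily_rate) ** delay_days

# Today, the small-but-soon reward looks better under both models.
print(hyperbolic_value(50, 1), hyperbolic_value(100, 365))      # ~49.5 vs ~21.5
print(exponential_value(50, 1), exponential_value(100, 365))    # ~49.9 vs ~48.2

# Push both options 300 days further out: the hyperbolic chooser flips to the
# larger-but-later reward, while the exponential chooser does not. That
# preference reversal is the "good times now, bad times later" pattern.
print(hyperbolic_value(50, 301), hyperbolic_value(100, 665))    # ~12.5 vs ~13.1
print(exponential_value(50, 301), exponential_value(100, 665))  # ~27.4 vs ~26.4
```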

This may yield useful insights when trying to calibrate time preference. Perhaps by reframing the future situation, or by doing a mental exercise that lets us feel as if the bad times were going to happen before the good times do, we could balance our decision-making tendencies.

There are some good reasons for hyperbolic discounting – such as increasingly lower certainty about outcomes when chains of events stretch over very long periods of time.

In any case, today I ran a tiny experiment to improve my productivity without varying my time preference – I killed all distractions and sat down to work. It was good for most of my work session, but I was filled with anxiety that important things might be happening that I wouldn't find out about because I was disconnected. I'll keep the experiment running for a few more days and see what happens; I expect I'll get used to the pattern and stop feeling anxious.

I still got more done than usual, which is nice. I hope the pattern continues.

On Time Preference

Time preference is a concept used to describe how willing a person is to postpone a gratifying outcome in exchange for an improved one.

If the time preference is “high”, it means a person is willing to trade away more future benefit in exchange for immediate results. A common example: “would you rather have 10 dollars now or 100 in a year?” People with a high enough time preference will choose the smaller amount of money now, while people with a low enough time preference will choose the larger amount later.

In other words, a higher time preference means a lower capacity for delayed gratification. The number and variety of situations where a low enough time preference leads to improved results is overwhelming: from optimized spending of money to optimized allocation of time across tasks, including following through with plans that require a lot of work before the reward arrives.
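
As a toy illustration of the 10-now-vs-100-later example above – the discount rates below are arbitrary assumptions chosen only to show the two cases, not measurements of anyone's preferences – you can model time preference as a yearly discount applied to future money:

```python
# A toy model of the "10 dollars now or 100 in a year?" choice.
# The discount rates are assumptions picked to illustrate both outcomes.

def present_value(amount, years, annual_discount_rate):
    """What a future amount is worth today, given how steeply the future is
    discounted (a higher rate corresponds to a higher time preference)."""
    return amount / (1 + annual_discount_rate) ** years

for rate in (0.05, 15.0):  # low vs. extremely high time preference
    now = present_value(10, 0, rate)     # 10 dollars today is just 10 dollars
    later = present_value(100, 1, rate)  # 100 dollars in a year, discounted
    choice = "wait for the 100" if later > now else "take the 10 now"
    print(f"rate={rate}: 100-in-a-year feels like {later:.2f} today -> {choice}")
```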

Knowing about this – being able to name the phenomenon and think about it – allows us to identify it and plan for it. If you lead a team where some members have a higher time preference, you may want to look for a way to introduce intermittent rewards which are not too far apart. This is, I believe, what “gamification” is all about.

If lower time preference teammates are present, make sure they understand the big picture, the end result of the work. As this is usually easier than gamifying processes, lower time preference team members can be easier to work with. Unless, for some unfathomable reason, you can't share the end goal – then do gamify, because however well people can delay gratification, if there's no light at the end of the tunnel, having some small gratifying moments mixed into daily work can serve as a motivator.

I have found that my time preference is too high for my taste, and that this is one of the reasons I have felt the need to build up my discipline. In hindsight, I might have noticed this sooner if I'd had the right information – the signs were everywhere – which is why I'm writing about the topic out here.

I’ll try to set up some experiments, with two goals:

  • To deal with my too-high time preference (gamifying stuff, most likely)
  • To lower my time preference

I've not seen any papers on these kinds of experiments, but I sorely need to do this, so I'll look it up. I especially don't have a clue about what to do to lower my time preference, so I'll need to think about the what and the why to try and get a clue about the how. If you have any ideas, don't hesitate to hit me up.

A brief comment on AlphaGo's victories

AlphaGo won 8 straight games against two of the top players of one of the most computationally complex perfect-information games out there: Go.

Its complexity stems from the sheer number of possible scenarios that can play out in the game – the branching factor is roughly 250 legal moves per position, compared with about 35 in chess.

I had thought that AlphaGo wasn't really a breakthrough over the usual playing techniques, although I failed to say so in public – maybe I should cultivate the habit of predicting things publicly.

In any case, these are the advantages I think AlphaGo has over other Go AIs:

  • More computing resources (memory, processing power)
  • Access to better players

The first one is obvious. The second one, maybe not.

So here's my take on the workings of AlphaGo: it combines the Monte Carlo Tree Search algorithm with a neural network or some other kind of pattern-matching mechanism.

The pattern-matching mechanism, particularly if it is a neural network, would benefit from playing a lot of games; it could learn to prefer analyzing a particular “branch” or sequence of moves – we could say, roughly, to pursue a train of thought – in a way that makes it likelier to win.

If a pattern-matching algorithm plays only poor players, it will learn to beat them, but it won't know how to beat good players. If it plays only a certain kind of game – say, opponents who always play in a similar pattern – then it will have gaps in its game: entire situations for which it has no good heuristic.

Playing good players means the tool explores many of the best techniques and possibilities frequented by good players, thus becoming better at choosing how to play against them.

Now, you may think: “well, then it won't be able to beat poor players, right?” But you'd be wrong. Playing well means thinking many turns into the future, the patterns for winning are already there, and there are techniques for securing your position… a poor player won't be able to foresee much of this, while a good AI will – and even if the AI couldn't bias itself towards the smartest plays, it can choose decent ones, which is enough to beat a poor player.

In essence, then, I think AlphaGo has two mechanisms: one biased towards plays that look good right now, and another biased towards plays that look good statistically over many games.
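
If that guess is right, a minimal sketch of how the two biases could be combined might look like the selection rule below – to be clear, this is my reading of the general shape, not AlphaGo's published method, and the node structure and exploration constant here are assumptions of mine:

```python
import math

class Node:
    """One board position in the search tree (a hypothetical structure)."""
    def __init__(self, prior):
        self.prior = prior        # how good this move looks to the pattern matcher
        self.visit_count = 0      # how many simulated games went through this move
        self.value_sum = 0.0      # summed outcomes of those simulations
        self.children = {}        # move -> Node

    def mean_value(self):
        # Average outcome of the simulations through this node:
        # the "statistically good-looking over many games" signal.
        return self.value_sum / self.visit_count if self.visit_count else 0.0

def select_move(node, c_explore=1.5):
    """Pick the child that balances the two biases: the pattern matcher's prior
    ("looks good right now") and the accumulated simulation statistics
    ("looks good over many games"). c_explore is an arbitrary constant."""
    total_visits = sum(child.visit_count for child in node.children.values())

    def score(child):
        exploration = (c_explore * child.prior
                       * math.sqrt(total_visits + 1) / (1 + child.visit_count))
        return child.mean_value() + exploration

    move, _ = max(node.children.items(), key=lambda item: score(item[1]))
    return move
```

Early in the search the prior dominates and the tree mostly follows moves the pattern matcher likes; as simulated games accumulate, the statistics take over.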

I didn't think all of this through before the fourth game of AlphaGo vs Lee Sedol; I just had a hunch that you wouldn't actually need a revolutionary AI to beat people at Go. I may still be proven wrong if and when a paper about AlphaGo describes its inner workings.

What makes me feel more certain about it is this:

Mistake was on move 79, but only came to that realisation on around move 87

Demis Hassabis, CEO of DeepMind

And he had access to the data.

After move 87 or so, AlphaGo went haywire.

While I was about to post this, I found that Demis Hassabis's tweets confirm my suspicions.

The neural nets were trained through self-play so there will be gaps in their knowledge, which is why we are here: to test AlphaGo to the limit

So, while it's amazing to see that a computer may outperform a person in a well-specified, perfect-information game… I think it is at an advantage! Lee Sedol's mind wasn't trained on playing computers, but on people. I think that by exposing himself to a dozen or so further games with AlphaGo, Lee Sedol could start routinely beating it.

This reminds me of the image-recognition neural nets that mistake static-like photos for all kinds of animals. You can find a set of images on which humans will routinely outperform them. And since Go is a game where the “picture” is the result of your plays and your opponent's, you can routinely set up such images yourself.

Let’s see what happens in today’s games, anyway. Fun times to be alive in.

There is some good in being ever curious

Or: a healthy dose of skepticism should be accompanied by a healthy dose of curiosity. And, in my experience, people are seldom curious enough, so raising the bar is hardly a danger.

I don't seem to have mentioned the Twelve Virtues of Rationality on this blog before, but I have talked about rationality under another, hopefully less loaded, name: cognitive calibration.

This is an invitation to be curious about the people around you and the decisions they make, especially if they impact you. Why does a teammate want to go in a certain direction? Why does a colleague think our choice is poor (or stupid, depending on their niceness)? Why, what, who, when, where, how?

Cognitive calibration is all about having a better model of reality – the better calibrated our cognition is, by definition, the better our capacity to predict the future and understand the present.

One of the things that lets a model give more accurate predictions is an abundance of data. So always wonder, always question, always try to find out. If your model is wrong, or the way you're gathering data is biased, then your understanding of the present and the future will drift ever further from the truth.

In order to correct for this: wonder. Wonder whether you’re truly in the right, whether others are truly in the wrong. Look for the answers.

If reality doesn’t fit your predictions or understanding: wonder. Wonder about how stuff really works, about what’s really going on, about where you went wrong.

If you've not changed your mind, or don't like questioning a particular subject: wonder. Wonder about when and why you became so protective, and about how to overcome that protectiveness in order to better understand yourself and be able to wonder, once again, whether you are right – or how you can be right.

Curiosity seeks to annihilate itself, writes Eliezer Yudkowsky.

That's true – but like a phoenix, it can be reborn time and again. For as long as I've been looking for answers, I've always found more questions. I think we're far from the point where the number of questions starts converging and shrinking, the point where we know it all.

So please your curiosity and crash it into the facts, that you may be wiser, and curiouser.

Try not to get into trouble, but always bear in mind… how much trouble are you in by not knowing now, not being able to see, even in a blurry way, tomorrow?