Hyperbolic Discounting

There are many ways to observe and explain the behaviors I described in the previous article. Hyperbolic discounting is a way to think about the issue.

On the one hand, we have the thought that people tend to push things they don’t like further into the future, which is basically what thinking about “low time preference” gets you.
On the other, thinking about hyperbolic discounting lets you analyze the tendency for people to choose what they’d like now, even in exchange for things they’d not want later. In other words, we tend to trade “good times now” for “bad times later”, such as partying hard in exchange for feeling beat, cranky, and in pain tomorrow.

This may yield useful insights for calibrating time preference. Perhaps by reframing the future situation, or by doing a mental exercise that lets us feel as if the bad times will happen before the good times do, we could help balance our decision-making tendencies.

There are some good reasons for hyperbolic discounting – such as increasingly lower certainty about outcomes when chains of events stretch over very long periods of time.
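To make the reversal concrete, here’s a minimal sketch in Java – the amounts and the discount parameter k are made up for illustration – using the standard hyperbolic curve V = A / (1 + kD):

    // Sketch of the preference reversal hyperbolic discounting predicts.
    public class Discounting {
        // Hyperbolic discount curve: V = A / (1 + k * D), D in days.
        static double value(double amount, double delayDays) {
            final double k = 0.2; // arbitrary steepness, for illustration
            return amount / (1 + k * delayDays);
        }

        public static void main(String[] args) {
            // Viewed from far away, the larger-later reward wins...
            System.out.printf("in 10 days: $50 -> %.1f, $100 in 20 days -> %.1f%n",
                    value(50, 10), value(100, 20));  // 16.7 vs 20.0
            // ...but once the smaller reward is immediate, it wins instead.
            System.out.printf("now:        $50 -> %.1f, $100 in 10 days -> %.1f%n",
                    value(50, 0), value(100, 10));   // 50.0 vs 33.3
        }
    }

The same pair of rewards flips order as it gets closer, which is exactly the “good times now” trade described above.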

In any case, today I ran a tiny experiment to improve my productivity without varying my time preference – I killed all the distractions and sat down to work. It was good for most of my work session, but I was filled with anxiety that important things might be happening which I’d not find out about because I was disconnected. I’ll keep the experiment running for a few more days and see what happens; I expect that I’ll get used to the pattern and stop feeling anxious.

I still got more done than usual, which is nice. I hope the pattern continues.

On Time Preference

Time preference is a concept used to describe how willing a person is to postpone a gratifying outcome in exchange for an improved outcome.

If the time preference is “high”, it means a person is willing to trade more future benefit in exchange for immediate results. A common example is: “would you rather have 10 dollars now or 100 in a year?” People with a high enough time preference will choose the smaller amount of money now, while people with a low enough time preference will choose the larger amount of money later.
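Just to put a number on that example – a back-of-the-envelope sketch, assuming simple yearly discounting (the arithmetic is the point, not the code):

    // At what annual discount rate are "$10 now" and "$100 in a year"
    // equally attractive? Present value of 100 in one year at rate r is
    // 100 / (1 + r); setting that equal to 10 gives r = 9, i.e. 900%.
    public class BreakEven {
        public static void main(String[] args) {
            double now = 10, later = 100;
            double r = later / now - 1; // solve now == later / (1 + r)
            System.out.printf("break-even annual rate: %.0f%%%n", r * 100);
        }
    }

Anyone taking the 10 dollars is discounting the future at more than 900% a year – a very high time preference indeed.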

In other words, a higher time preference means a lower capacity for delayed gratification. The number and variety of situations where a low enough time preference leads to improved results is overwhelming: from optimized spending of money to optimized allocation of time across diverse tasks, including following through with plans which require a lot of work before the reward arrives.

Knowing about this – being able to name the phenomenon and think about it – allows us to identify it and plan for it. If you lead a team where some members have a higher time preference, you may want to look for a way to introduce intermittent rewards which are not spaced too far apart. This is, I believe, what “gamification” is all about.

If lower time preference teammates are present, make sure they understand the big picture, the end result of the work. As this is usually easier to do than gamifying processes, lower time preference team members can be easier to work with. Unless, for some unfathomable reason, you can’t share the end goal. Then do gamify, because for all that people can delay gratification, if there’s no light at the end of the tunnel, having some small gratifying moments mixed into daily work can act as a motivator.

I have found that my time preference is too high for my taste, and that this is one of the reasons I have felt the need to build upon my discipline. In hindsight, I may have been able to notice this sooner if I’d had the right information – the signs were everywhere – which is why I’m writing on the topic out here.

I’ll try to set up some experiments, with two goals:

  • To deal with my too-high time preference (gamifying stuff, most likely)
  • To lower my time preference

I’ve not seen any papers on these kinds of experiments, but I sorely need to do this, so I’ll look it up. I especially don’t have a clue about what to do to lower my time preference, so I’ll need to think about the what and the why, to try and get a clue about the how. If you have any ideas, don’t hesitate to hit me up.

On programming productivity

Measuring programmer productivity is notoriously hard. It’s the topic of numerous publications of varying length.

Much of coding can be succinctly quantified and estimated; time should probably be spent automating those tasks, as those are the boring, repetitive, well-defined ones, like creating a CRUD or converting some programming-language-level construct into an interface-level representation such as JSON or XML.

The other part is hard to estimate, mostly because it combines several tasks, like getting to know the domain, figuring out what needs to get done and actually doing it in a polished manner.

Some things that are usually oversimplified in attempts to measure programmer productivity, sometimes to hilarious effect: number of lines of code, time spent sitting at the computer, and number of artifacts produced.

All of those means of measurement can backfire hideously by creating the wrong incentives (lots of boilerplate, woolgathering, overengineering, overestimating work length).

Here are a few important measurements that can be made to help track this elusive statistic:

  • Explain your work to your teams regularly. Have them rate it. Keep a history. State what’s being solved, why it was solved in this particular way, what tradeoffs were involved, any difficulties you ran into, and how you overcame them. Ratings on two indicators are crucial: problem complexity and performance. They should include justifications to help you home in on better practices.
  • Keep track of all the bugs in your code, the stage at which they were noticed, and the time that fixing them required (a minimal sketch of the data to capture follows this list).
  • Keep track of the references to your code, especially if you’re writing tools.
  • Have your peers rate you on helpfulness and knowledgeability.
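For the second item, here’s a minimal sketch of the data I’d capture per bug – the type and field names are my own, purely illustrative:

    // Minimal sketch of a bug-log entry; names are illustrative, adjust to taste.
    public class BugRecord {
        enum Stage { REVIEW, QA, STAGING, PRODUCTION } // where it was noticed

        final String id;        // ticket or commit reference
        final Stage noticedAt;  // the later the stage, the costlier the bug
        final long fixMinutes;  // time that fixing it required

        BugRecord(String id, Stage noticedAt, long fixMinutes) {
            this.id = id;
            this.noticedAt = noticedAt;
            this.fixMinutes = fixMinutes;
        }
    }

Even a spreadsheet with those three columns beats relying on memory.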

If you encounter any unintended side effects or incentives, please let me know. So far, the only bug I’ve found in this kind of process is the popularity-contest-like aspect it can sometimes take on. Hence the objective numbers I slid in there to help balance it. If you find other ways to improve on this, let me know.


PSA: Secure your build processes

I need to say this because there’s too much moaning and gnashing of teeth going on about npm packages that loads of projects depend on.

If you have a project with dependencies, do yourself a favor and have an in-house mirror for those. It’s even more important if you’re a software shop which works primarily with one technology, which I presume is a very common case.

I’m not too node.js savvy, but in the Java and Maven world, we cover our backs using either Nexus Repository (which appears to work with npm, too) or Apache Archiva.
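For reference, pointing Maven at such a mirror is a one-time entry in settings.xml; this is a sketch, and the URL is a placeholder for wherever your in-house Nexus or Archiva answers:

    <!-- ~/.m2/settings.xml: route every repository request through the in-house mirror -->
    <settings>
      <mirrors>
        <mirror>
          <id>in-house</id>
          <mirrorOf>*</mirrorOf>
          <url>https://repo.example.internal/repository/maven-public/</url>
        </mirror>
      </mirrors>
    </settings>

npm can be pointed the same way with a registry= line in .npmrc; the exact repository path depends on your server.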

That way, when we “clean install” the last checked-in code for final delivery to the QA and deployment teams, we don’t run into crazy issues like the build failing because someone decided to take their code down – or had it taken down by force.

In a Netflix chaos-monkey-like approach, try to foresee and forestall all causes of unreliability at go time, not only with this but with any other kind of externalized service. You, your family, significant others, pillow, boss, co-workers and customers will all be happier for it.

Use LetsEncrypt

I’ve successfully renewed the SSL certificate for this website and – this being the first repetition of the maneuver – automated the process.

It’s a really nice way to keep your site secure, and it pushes you towards automating renewals by keeping certificate life spans relatively short. Plus, it’s free.
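With the official client, the automation can be as small as a cron entry. A sketch – the schedule and the reload hook are assumptions, substitute whatever your web server needs:

    # /etc/crontab sketch: attempt renewal twice a day; certbot only acts
    # when a certificate is actually close to expiry.
    0 3,15 * * *   root   certbot renew --quiet --post-hook "systemctl reload nginx"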

Receiving the alert is very refreshing; I still had 19 days left to renew my certificate, which would’ve given me plenty of time to do it even if I’d not had a shell script handy, waiting only for a good chance to be tested and scheduled.

Keeping your client-server communications secure is a must; even though most of what I write here will eventually see the light of day, much damage could be done to my image if the site were compromised – and this is just a personal website. If you have a website where clients log in and entrust you with their data, and for some reason you do not have secure connections enabled, do yourself a favor and fix that problem.

Thoughts About History

History is a subject that usually leaves me dissatisfied. It may be that I have the wrong approach in the way I think about it, but it has consistently left me feeling uncertain over the years.

We learn history from many sources: oral stories told by our families, which usually cover anecdotes and interesting tidbits; written texts by historians; the news of the day and of other times; and textbooks.

Now, oral stories are notoriously unreliable. I know, because I’ve seen the deformation of anecdotes firsthand during my lifetime – which is still on the short end of the scale. Stories about grandparents and further back in time… I can only expect they retain no more than a passing resemblance to what was going on.

The books by historians are in some ways similar to the news: they go through a publisher’s hands, they are subject to all kinds of pressures and interests. The winners write history.

Textbooks, at least in my country, are increasingly regulated. In public education they are literally handpicked. This kind of history has the strongest, most viable path to being censored/edited by an interested party, because there’s a single bottleneck in an office in a government building.

How can we ever be certain of what happened? I’m constantly uncovering facts that contradict my earlier knowledge, in ways so blatant that they let me see it’s not a model I’ve built: it’s a model I’ve been handed, one that has been socially validated, and that may or may not have anything to do with reality – or with what someone wants me to think about myself and my environment.

Many aspects of history are subtly manipulated in ways I’ve learned to identify over time, and which make me react intensely.

Attributing intent to people is one of those; defending actions in hindsight is another. It makes me want to see proof – that a certain intent was there, that a certain datum was there, that people demonstrated thinking with a certain pattern or using certain tools. But when I think about what kind of proof that would require, it is then that I feel helpless. See, because of my (hopefully healthy) dose of skepticism, I understand that I shouldn’t treat much of history as more than fables and fiction.

On the other hand, the effects of history are real. The effects of perceived history are just as real, although maybe not as intense. I think, then, that there is use for understanding what the world thinks of its own history, because that allows us to have a working model, a framework from which to work and communicate.

But we should be careful about the way we extrapolate, the way we apply the model to our current situations. Historical data we use as input for the way we think must be tested and considered “possibly wrong”, and the truthfulness we assign to it made part of the model we’re working with.

So, am I skeptical and mighty, with an unbreakable vow never to trust history? Not quite. I’m gullible with historical information – we all are, as humans are attuned to stories in their patterns of thinking and remembering. But I do take care when I have the opportunity to make a decision based on the past. Even for events in which I’ve been involved, I try to get other versions, other sides of the story, to have a better chance of understanding what was truly going on. On more than one occasion I’ve been surprised.

I acknowledge that this position is not very elegant; it imposes a huge burden upon the people looking back and looking forward, trying to make good choices. It seems to question the validity of basically everything we think we know about the past, although it actually only questions the accuracy of most of what we assume we know from the data we see… which is, yes, not much better.

Nonetheless, I’d like to be proven wrong time and again. The way that would work is by having a decision taken on account of a model based on an understanding of what happened some time ago – the further back the better, since accuracy dies over time – and having the decision work out for reasons consistent with the model. I’ve seen this in many decisions taken from personal experience in management, software development and teaching, which tells me that many people really do understand what they’re about in their daily work. It may be hard to set up a large-scale experiment, but in the absence of data to validate our beliefs, we should acknowledge that lack instead of just defaulting to the most comfortable side.

Notes on Java 8

Enum types are constructs which represent sets of known values. They are useful, but are something of a kludge in a way that reminds me of the String class shenanigans.

Enum types are declared like other composite types, such as class or interface. They have their own keyword; unsurprisingly, it is enum.

Now, the use for them is clear: they avoid the need for a load of “public static final” fields lying around, and they come with common functionality, like a static “T[] values()” which returns all the possible values, or a static “T valueOf(String)” which returns the value whose name matches the String parameter.
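A minimal example – the Direction type is mine, for illustration:

    enum Direction { NORTH, EAST, SOUTH, WEST }

    public class EnumDemo {
        public static void main(String[] args) {
            // values() returns every constant, in declaration order.
            for (Direction d : Direction.values()) {
                System.out.println(d);
            }
            // valueOf(String) looks a constant up by name and throws
            // IllegalArgumentException when nothing matches.
            Direction n = Direction.valueOf("NORTH");
            System.out.println("looked up: " + n);
        }
    }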

They can be used to build finite state machines, and they work with the “switch” construct. They help avoid silly-in-hindsight but maybe-really-serious bugs by catching typos: all values must be declared, so where matching against a String literal would let a typo create a branch of never-used code, a mistyped enum constant simply won’t compile.
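For instance, a tiny transition function over the Direction enum from the sketch above – misspell a constant in a case label and the compiler rejects it, where a String-based match would happily keep a dead branch:

    // A minimal state machine: each state maps to its successor.
    static Direction turnClockwise(Direction d) {
        switch (d) {
            case NORTH: return Direction.EAST;
            case EAST:  return Direction.SOUTH;
            case SOUTH: return Direction.WEST;
            case WEST:  return Direction.NORTH;
            default:    throw new AssertionError("unknown: " + d);
        }
    }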

Another neat feature is that they can return their name as a String, and they can be compared to each other – the order in which they’re declared determines which value is “first” and which is “last”.
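Again with the same Direction type, assumed to be in scope:

    public class EnumOrder {
        public static void main(String[] args) {
            // name() is the constant's declared name, as a String.
            System.out.println(Direction.NORTH.name());       // NORTH
            // ordinal() and compareTo() follow declaration order.
            System.out.println(Direction.NORTH.ordinal());    // 0: declared first
            System.out.println(Direction.NORTH.compareTo(Direction.WEST)); // negative
        }
    }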

Now, enums are actually classes with shenanigans added. Even though it’s never spelled out in your source, an enum type is a class which extends java.lang.Enum<E> and gains some methods (which I suspect are injected at compile time – it would be nice to confirm by looking inside a .class file; for the record, clever use of reflection would make creating a generic method which is invoked for the special, shared functionality rather trivial).

As it’s a class, you can write other methods and declare fields in it.
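A sketch of what that looks like – the Planet type and its data are the usual illustration, not anything from the language spec:

    // An enum carrying per-constant data: each constant invokes the constructor.
    enum Planet {
        MERCURY(3.30e23), EARTH(5.97e24);

        private final double massKg; // a regular field, one value per constant

        Planet(double massKg) { this.massKg = massKg; }

        double massKg() { return massKg; } // a regular method
    }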

But you can’t build an enum by hand – extending java.lang.Enum<E> yourself is illegal, even though the class is not final. This is an annoying inconsistency and a lack of elegance. Was it necessary to implement it like this? Very likely, as there are many really smart people working on the Java language. It’s not pleasant, though.

Enums are treated like special citizens, and even have particular data structures and algorithms tuned to them (EnumSet and EnumMap), which further reminds me of the shenanigans that go on with the String class, what with special in-memory representation and all.
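Their use looks like this – Direction being the illustrative enum from earlier:

    import java.util.EnumMap;
    import java.util.EnumSet;

    public class EnumCollections {
        public static void main(String[] args) {
            // EnumSet: a Set of constants, internally a compact bit vector.
            EnumSet<Direction> horizontal = EnumSet.of(Direction.EAST, Direction.WEST);
            System.out.println(horizontal.contains(Direction.EAST)); // true

            // EnumMap: a Map keyed by enum constants, internally array-backed.
            EnumMap<Direction, String> labels = new EnumMap<>(Direction.class);
            labels.put(Direction.NORTH, "up");
            System.out.println(labels.get(Direction.NORTH)); // up
        }
    }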

These are not bad things in and of themselves – the ways in which Enums and Strings break out of the patterns of the language. They don’t fit the mental model that would arise from studying the rest of the architecture, though, so care needs to be taken that they won’t come back to bite you.

Low Energy

Perhaps like Bilbo missing his hat – not for the last time – I wonder now whether it was a good idea to commit to a full year of daily writing.

Although most of the content comes from personal experience, meaning I mostly consult sources to double-check rather than for original research, writing daily is a heavy drain on the time and energy I have available.

This poses a serious threat to my desire for sustained output (in writing) and increased output (in code), and I need to deal with it.

Playing around with my sleep schedule may help, as could playing around with my eating habits. The former is a bit hard to achieve due to a fixed daily work schedule and traffic.

In any case, I will run some experiments – I’ll try having a bigger breakfast, for instance, and see how it goes.

Today’s text was supposed to deal with Java enums, but my energy was too low for it. I increased my at-home work-to-leisure ratio, which accounts for this drastic drop, although I’ve been tolerating a mild beating these last few days anyway. Tomorrow will be a new day, I suppose, and I hope to cover the intended topic.

Thanks for reading.

Argument for learning to develop with emacs

You may have read my thoughts on why people should learn to code with a good CLI environment at hand.

I’ll one-up myself – it’s good to have emacs at hand.

You have all the features of the shell – well, you have the shell itself, if you want it. But you have even more extensibility, because you have more to work with than apps: you have text editing as a programmable activity.

So besides having, and being able to easily create, tools that work on data at one level (files), you have, and can easily create, tools that work on data at another level: content.
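A trivial taste of what that looks like, in a few lines of Emacs Lisp – the command name and key binding are my own picks:

    ;; Text editing as a programmable activity:
    ;; define a small command, then put it on a key.
    (defun my/insert-date ()
      "Insert today's date at point."
      (interactive)
      (insert (format-time-string "%Y-%m-%d")))

    (global-set-key (kbd "C-c d") #'my/insert-date)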

So, emacs is not necessarily the only environment which one-ups a good CLI, but it’s the one I know and can get behind.

What’s so special about having content editing be programmable is that it’s a layer of power beyond tweaking your environment and adding tools to your toolbox. This is like tweaking the toolbox itself, so that some tools work better.

The effect is so strong that people who work with emacs often find themselves working more and more from within it. So you don’t have an environment from which you call an editor to do some work. You have an editor from which you work, and as code is text, you have a powerful tool in which to build powerful tools to better build powerful tools.

This is highly desirable.

An argument for learning to code with CLI environments

There’s a difference between working in the command line and working in a desktop environment which I think is crucial to a software developer’s formation.

When you’re working on a desktop environment, there’s a particular flow that shapes the way you interact with your programs and files: you’re working on a physical space, and to use your tools, you go to them, then get your data into them, use them, then get your data out, back into your desktop. For instance, if you want to take some data from a text document and perform numerical analysis on it, you’d go about it thus:

  • Go to the place where you launch programs from, and open an editor
    • browse for the document with the data
    • get the data out, presumably by copying it into your system’s analogue of a clipboard
  • Go to the place where you launch programs from, and open a spreadsheet program
    • paste the data
    • use the spreadsheet on it
    • store the resulting document where you want it

The important part here is that you go to the tool then get the data in the tool.

In the command line, it would be a little different.

You would point the relevant tool at the document, extract whatever data you need, and point another tool at the resulting document if you still need to. It’s the difference between:

  • Navigate “Start menu -> programs -> accessories -> Notepad”
  • Navigate the menu “File -> Open”
  • browse for the document

And

  • Type “editor name_of_the_document” or “editor /path/to/the/document”, or in the worst-case scenario, “/path/to/editor --option-or-options /path/to/document”

There are some workarounds to this situation, like having default applications able to open a file, or right-clicking and selecting “Open With…”; but those actions feel very ad hoc.

Thus the command line feels more like you’re reaching for a tool and using it where you need it.

A consequence of this is that when you write an application for use in the command line environment, no matter how simple it is, it feels as if you’re changing your operating system to suit your needs. When you develop simple desktop applications, it feels as if you’re building environments to put your data into.

From this peculiarity of each environment stem patterns of use. Tool chaining and piping feel like “reach for this, apply on file/data, then reach for that and apply it on the results”, which helps you think about the whole process seamlessly. Building tools to put into a chain means that they could do only very simple things, and still be very useful. Thus are early developers encouraged.
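The classic illustration is a pipeline – the file name here is hypothetical, the tools are standard:

    # Count how often each value appears in the third column of a CSV,
    # most frequent first: five tiny tools, one seamless process.
    cut -d, -f3 data.csv | sort | uniq -c | sort -rn | head

Each stage does one very simple thing, and the chain as a whole extracts, sorts and summarizes in a single gesture.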

Customizing the whole system… is very encouraging, of course, and having “commands” or “actions” at your disposal which you’ve built yourself, as opposed to having “containers” into which to put the data for work, is very empowering.

There may be some way of chaining programs in a desktop environment, or creating simple software in such a way that it feels like you’re changing the whole system. I have not encountered either.

The most powerful feelings I’ve gotten around desktop environments stem from manipulating the PATH variable and having some daemons doing fun stuff around the UI – the kind of stuff that would give you pause and make you wonder whether your computer is behaving oddly or you’re imagining things. Doing this in a command line environment is par for the course, but feels no less empowering.

I am not talking from nostalgia here – I grew up using GUIs, not CLIs. Indeed, even though my first programs were CLI-based, as I was using them from a GUI, the feeling of changing my environment to suit my needs didn’t come along until I started deploying software in a Solaris environment and began automating things.

When I discovered the feeling of changing the PS1 variable, customizing the .bashrc or .profile, creating aliases… it was then that I started being prolific in my CLI-program writing for practical purposes. It seeped into my other operating systems and work environments; I routinely create a directory for shortcuts to the programs I run, I tend to launch programs with the “Run” option of the OS I’m using (Win+R in Windows, Win + program name in Ubuntu, Ctrl+Alt+T and running the program from the shell in most other Linuxes, Command+Space + program name in macOS), and I write UI gadgets and other empowering shenanigans.

I think that much of this would’ve started earlier if I’d had a powerful command line as a primary tool for using the computer when I started learning how to code. I don’t mean to disparage GUIs; I just think the kind of feeling a good CLI delivers, and the patterns of use it encourages, can be of great importance when starting to code – enough to make it at least a great complement to GUI-based computer usage and software development practices at the time of learning.