Retrospective: Clean Code – Boy Scouts, Writers, and Mythical Creatures

Yesterday I gave a talk on Clean Code, based on material by Uncle Bob and Geoffrey Gerriets.

I had some technical issues – so I had no access to my presenter notes, which dampened my performance somewhat… and after I’d taken such pains to learn from Geoffrey’s PyCaribbean talk on Code Review, Revision and Technical Debt.

The subtitle for my talk was: “Clean Code, Boy Scouts, Writers, and Mythical Creatures”.

It starts by discussing the features of clean code, as described by Uncle Bob and the experts he interviewed in his book Clean Code – comparing each group of traits to physical things we can look up to, like a Tesla Model X, a pleasant beach, or a bike… all of which share qualities desirable in our code.

Then it moves on to the maxim of “leaving the campground better than we found it”, with a nice example of some code taken from the IOCCC and how much more legible it became merely by reindenting it, putting into relief the long-term impact of little incremental changes.

The latter half of the talk was derived from lessons learned at Geoffrey’s talk: the process of a professional writer compared to the process of a professional coder, and how they’re alike. The lessons from the writers’ day-to-day can be applied to our coding: design, write, revise, rewrite, proofread. Some attention was also given to the way reviews may be given. The mythical creatures section – the creatures represent the different stages at which a developer may find themselves – aids this latter part of the talk by pointing out patterns of behavior that identify what may or may not be important for a certain developer at a certain point in their growth. The advice to treat things beneath a developer’s level as trivia and/or minutiae, as well as the advice on focusing on and choosing improvements to point out instead of “trouble”, may be the best of this part of the talk.

After realizing I’d burnt through the presentation and posing some questions to the audience, we discussed some interesting points:

  • How can code comments make code cleaner or dirtier?
  • How can rewrites alter our coding behavior?
  • How can we find time for a rewriting flow if management doesn’t know any better?

Mileage may vary, of course, so several people pitched in and we didn’t draw any firm conclusions – we only presented ideas to try, which was interesting.

In the end, we came out with some good ideas on how to keep code from stagnating… hopefully our future selves will have ever fewer messes to deal with :^)

Retrospective: Reasons why you should love emacs

Last Saturday I was at Dominican College O&M’s campus in La Romana, as one of the speakers in the “OpenSaturday” series of events.

This was a complete success: a full room, an engaged audience, excellent speakers.

I had the opportunity to give my first talk on emacs.

It was delivered using org-tree-slide-mode for one part — which was really cool for the audience and for me, too.

In the second half of the presentation, I used org-mode and demonstrated custom “ToDo” states and timestamps, org-mode HTML export (C-c C-e h H), syntax highlighting, Immediate mode, and Emmet mode. Of course, I demonstrated having multiple buffers open in several windows at the same time.

It was all in a hurry, because it was a 10-minute talk; I couldn’t demonstrate keyboard macros – which would’ve been nice, as I was going to demonstrate extracting HTML form item names and generating the PHP code to read them from the $_REQUEST superglobal. This makes use of emacs’s ability to use search and all other functions as part of a macro, which I know for a fact several editors can’t do.

The show-stealer was Emmet mode – I actually thought people would be more surprised at noticing that the presentation itself was running inside emacs, but they weren’t. As many were CS students learning HTML, seeing html>(head>title)+body>div#content>ul>li*5 grow into the corresponding tree blew them away.
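For readers who haven’t met Emmet: that abbreviation expands into roughly the following markup (exact indentation and cursor placement depend on your settings):

```html
<html>
<head>
    <title></title>
</head>
<body>
    <div id="content">
        <ul>
            <li></li>
            <li></li>
            <li></li>
            <li></li>
            <li></li>
        </ul>
    </div>
</body>
</html>
```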

I’m planning to enhance that presentation to fill a 45-minute slot featuring keyboard macros, elisp functions + keybindings, and select parts of my .emacs file. Perhaps the presentation will be accompanied by a “Reasons to love Vi” by one of my colleagues, which would be sweet.

In any case, a great Saturday – hopefully things will keep on being fun.

Notifications

I usually don’t have internet on my phone unless I’m home.

I started playing a recent freemium game aiming to be an e-sport.

The game has offline notifications.

This decimated my capacity to properly concentrate, little by little. I can now appreciate why many decry this as the era of distraction, of people looking down at their phones all the time. Not to misrepresent my stance: I’d noticed people walking around carelessly, too concentrated on their phones; it hadn’t really worried me, as once upon a time I did the same, only carrying books around instead of a phone.

What I hadn’t noticed is the barrage of notifications at seemingly random times; the concatenation of stimuli at near-random intervals that creates a tic-like habit of constantly checking the phone, fomenting a mental nag.

Any kind of worthwhile work – which is any work not yet ripe for automation – takes time and focus; it probably needs introspection and analysis. Everything else should be automated as much as possible, until we find the need to analyze and meditate anew.

In any case, I’ve since uninstalled the application, which had no option to deactivate the offline notifications. I’ve also uninstalled a few other applications, which I liked to use from comfortable places at my home. And I took some days to cleanse.

Now that I’m not twitchy with check-my-phone-itis, I feel a lot better, and will keep posting – because now I can think through stuff, and remember better.

In case there’s any doubt, I’m saying that constant nagging severely impaired my focus, productivity and overall quality of life. You may be affected, too. Run an experiment on living without constant notifications running you, and see what happens. As for me… well, it seems not to be my style at all.

Hyperbolic Discounting

There are many ways to observe and explain the behaviors I described in the previous article. Hyperbolic discounting is a way to think about the issue.

On the one hand, we have the observation that people tend to push things they don’t like further into the future, which is basically what thinking in terms of time preference gets you.
On the other, thinking about hyperbolic discounting allows you to analyze the tendency for people to choose what they’d like now, even in exchange for things they’d not want later. In other words, we tend to trade “good times now” for “bad times later”, such as partying hard in exchange for feeling beat, cranky and in pain tomorrow.
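A standard way to formalize this (not from the original post; the symbols are the conventional ones): let A be the reward amount, D the delay, and k a personal discount rate. Exponential discounting shrinks value at a constant rate over time, while the hyperbolic curve falls off steeply at first and then flattens – which is what produces “good now over better later” reversals as a reward gets close:

```latex
% Exponential (time-consistent) discounting
V_{\text{exp}} = A \, e^{-kD}

% Hyperbolic discounting (Mazur's form): steep early drop, long flat tail
V_{\text{hyp}} = \frac{A}{1 + kD}
```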

This may have useful insights to apply when trying to calibrate time preference. Perhaps by reframing the future situation, or doing a mental exercise that allows us to feel as if the bad times are going to happen before the good times do, we’d be able to balance our decision-making tendencies.

There are some good reasons for hyperbolic discounting – such as increasingly lower certainty about the outcomes when the chains of events are spread over very long stretches of time.

In any case, today I ran a tiny experiment to improve my productivity without varying my time preference – I killed all the distractions and sat down to work. It was good for most of my work session, but I was filled with anxiety that things important to me might be happening, and I’d not find out because I was disconnected. I’ll keep the experiment running for a few more days and see what happens; I expect I’ll get used to the pattern and stop feeling anxious.

I still got more done than usual, which is nice. I hope the pattern continues.

On Time Preference

Time preference is a concept used to describe how willing a person is to postpone a gratifying outcome in exchange for an improved outcome.

If the time preference is “high”, it means a person is willing to trade more future benefit in exchange for immediate results. A common example is: “would you rather have 10 dollars now or 100 in a year?” People with a high enough time preference will choose the smaller amount of money now, while people with a low enough time preference will choose the larger amount of money later.

In other words, a higher time preference means a lower capacity for delayed gratification. The amount and variety of situations where a low enough time preference leads to improved results is overwhelming: from optimized spending of money to optimized allocation of time across diverse tasks, including following through with plans that require a lot of work before the reward arrives.

Knowing about this – being able to name the phenomenon and think about it – allows us to identify it and plan for it. If you lead a team where some members have higher time preference, you may want to look at ways to introduce intermittent rewards that are not too far away from each other. This is, I believe, what “gamification” is all about.

If lower time preference teammates are present, make sure they understand the big picture, the end result of work. As this is usually easier to do than gamifying processes, lower time preference team members can be easier to work with. Unless, for some unfathomable reason, you can’t share the end goal. Then do gamify, because for all that people can delay gratification, if there’s no light at the end of the tunnel, having some small gratifying moments mixed into daily work can work as a motivator.

I have found that my time preference is too high for my taste, and that this is one of the reasons I have felt the need to build upon my discipline. In hindsight, I may have been able to notice this sooner if I’d had the right information – the signs were everywhere – which is why I’m writing on the topic out here.

I’ll try to set up some experiments, with two goals:

  • To deal with my too-high time preference (gamifying stuff, most likely)
  • To lower my time preference

I’ve not seen any papers on these kinds of experiments, but I sorely need to do this, so I’ll look it up. I especially don’t have a clue about what to do to lower my time preference, so I’ll need to think about the what and the why to try and get a clue about the how. If you have any ideas, don’t hesitate to hit me up.

On programming productivity

Measuring programmer productivity is notably hard. It’s the topic of numerous publications of variable length.

Much of coding can be succinctly quantified and estimated; time should probably be spent automating those tasks, as they are the boring, repetitive, well-defined ones, like creating a CRUD or converting some programming-language-level construct into an interface-level representation such as JSON or XML.

The other part is hard to estimate, mostly because it combines several tasks, like getting to know the domain, figuring out what needs to get done and actually doing it in a polished manner.

Some things are usually oversimplified in attempts to measure programmer productivity, sometimes to hilarious effect: number of lines of code, time spent sitting at the computer, and number of artifacts produced.

All of those means of measurement can backfire hideously by creating the wrong incentives (lots of boilerplate, woolgathering, overengineering, overestimating work length).

Here are a few important measurements that can be made to help track this elusive statistic:

  • Explain your work to your team regularly. Have them rate it. Keep a history. State what’s being solved, why it was solved in this particular way, what tradeoffs were involved, any difficulties you ran into, and how you overcame them. Ratings on two indicators are crucial: problem complexity and performance. They should include justifications to help you home in on better practices.
  • Keep track of all the bugs in your code, the stage at which they were noticed, and the time that fixing them required.
  • Keep track of the references to your code, especially if you’re writing tools.
  • Have your peers rate you on helpfulness and knowledgeability.

If you encounter any unintended side effects or incentives, please let me know. Up to now, the only bug I’ve found for this kind of process is the popularity-contest-like aspect it can sometimes take. Thus the objective numbers I slid in there to help balance. If you find other ways to improve on this, let me know.


PSA: Secure your build processes

I need to say this because there’s too much moaning and gnashing of teeth going on about npm packages that loads of projects depend on.

If you have a project with dependencies, do yourself a favor and have an in-house mirror for those. It’s even more important if you’re a software shop which works primarily with one technology, which I presume is a very common case.

I’m not too node.js savvy, but in the Java and Maven repositories world, we cover our backs using either Nexus Repository (which appears to work with npm, too) or Apache Archiva.
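For the Maven side, here’s a minimal sketch of what that looks like: a mirror entry in ~/.m2/settings.xml that routes all repository traffic through the in-house instance (the id and URL are placeholders for your own Nexus or Archiva setup):

```xml
<!-- ~/.m2/settings.xml -->
<settings>
  <mirrors>
    <mirror>
      <id>in-house-nexus</id>
      <!-- mirrorOf="*" sends every repository request through the mirror -->
      <mirrorOf>*</mirrorOf>
      <url>http://nexus.internal.example/repository/maven-public/</url>
    </mirror>
  </mirrors>
</settings>
```

The mirror proxies and caches the public repositories, so a dependency that vanishes upstream stays available locally.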

That way, when we “clean install” the last checked-in code for final delivery to the QA and deployment teams, we don’t run into crazy issues like failing to build because someone decided to take their code down – or had it taken down by force.

In a Netflix chaos monkey-like approach, try to foresee and forestall all causes for unreliability at go time, not only with this but any other kind of externalized source of services. You, your family, significant others, pillow, boss, co-workers and customers will all be happier for it.

Use LetsEncrypt

I’ve successfully updated my SSL certificate for this website, and automated the renewal now that I’ve been through the maneuver once.

It’s a really nice way to keep your site secure, and it pushes you towards automating the renewals by having a relatively short certificate life span. Plus, it’s free.

Receiving the alert is very refreshing; I still had 19 days left to renew my certificate, which would’ve given me plenty of time even if I hadn’t had a shell script handy, waiting only for a good chance to be tested and scheduled.
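For reference, the scheduling part can be as small as one crontab entry. This is a sketch assuming certbot as the ACME client and nginx as the server – the schedule and hook are illustrative, not my actual setup:

```
# Attempt renewal every Monday at 03:00; certbot only renews
# certificates that are close to expiring, so running often is cheap.
0 3 * * 1 certbot renew --quiet --post-hook "systemctl reload nginx"
```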

Keeping your client-server communications secure is a must; even though most of what I write here will eventually see the light of day, much damage could be done to my image if the site were compromised – and this is just a personal website. If you have a website where clients log in and entrust their data to you, and for some reason you do not have secure connections enabled, do yourself a favor and fix that problem.

Thoughts About History

History is a subject that usually leaves me dissatisfied. It may be that I have the wrong approach in the way I think about it, but it has consistently left me feeling uncertain over the years.

We learn history from many sources: oral stories told by our family, which usually cover anecdotes and interesting tidbits; written texts by historians; the news of the day and of other times; and textbooks.

Now, oral stories are notoriously unreliable. I know, because I’ve seen the deformation of anecdotes firsthand during my lifetime – which is still on the short end of the scale. Stories about grandparents and further back in time… I can only expect they retain no more than a passing resemblance to what was going on.

The books by historians are in some ways similar to the news: they go through a publisher’s hands, they are subject to all kinds of pressures and interests. The winners write history.

Textbooks, at least in my country, are increasingly regulated. In public education they are literally handpicked. This kind of history has the strongest, most viable path to being censored/edited by an interested party, because there’s a single bottleneck in an office in a government building.

How can we ever be certain of what happened? All the time I’m uncovering facts that contradict my earlier knowledge, in ways so blatant that they let me see it’s not a model I’ve built: it’s a model I’ve been handed, one that has been socially validated and may or may not have anything to do with reality – or with what someone wants me to think about myself and my environment.

Many aspects of history are subtly manipulated in ways I’ve learned to identify over time, and which make me react intensely.

Attributing intent to people is one of those; defending actions in hindsight is another. It makes me want to see proof – that a certain intent was there, that a certain datum was there, that people demonstrated thinking with a certain pattern or using certain tools. But when I think about what kind of proof that would require, it is then that I feel helpless. See, because of my (hopefully healthy) dose of skepticism, I understand that I shouldn’t treat much of history as more than fables and fiction.

On the other hand, the effects of history are real. The effects of perceived history are just as real, although maybe not as intense. I think, then, that there is use for understanding what the world thinks of its own history, because that allows us to have a working model, a framework from which to work and communicate.

But we should be careful of the way we extrapolate, the way we apply the model to our current situations. Historic data we use as input for the way we think must be tested and considered “possibly wrong”, and the truthfulness we assign to it is part of the model we’re working with.

So, am I skeptical and mighty, with an unbreakable vow not to trust history? Not quite. I’m gullible with historical information – we all are, as humans are attuned to stories in their patterns of thinking and remembering. But I do take care when I have the opportunity to make a decision based on the past. Even for events in which I’ve been involved, I try to get other versions, other sides of the story, to have a better probability of understanding what was truly going on. On more than one occasion I’ve been surprised.

I acknowledge that this position is not very elegant; it imposes a huge burden upon the people looking back and looking forward, trying to make good choices. It seems to question the validity of basically everything we think we know about the past, although it actually only questions the accuracy of most of what we assume we know from the data we see… which is, yes, not much better.

Nonetheless, I’d like to be proven wrong time and again. The way that would work is by having a decision taken on account of a model based on an understanding of what happened some time ago – the further back the better, since accuracy dies over time – and having the decision work for reasons consistent with the model. I’ve seen this in many decisions taken from personal experience in management, software development, and teaching, which tells me that many people really understand what they’re about in their daily work. It may be hard to set up a large-scale experiment, but in the absence of data to validate our beliefs, we should acknowledge that lack instead of just defaulting to the most comfortable side.

Notes on Java 8

Enum types are constructs which represent sets of known values. They are useful, but something of a kludge, in a way that reminds me of the String class shenanigans.

Enum types are declared like other composite types, such as class or interface. They have their own keyword; unsurprisingly, it is enum.

Now, the use for them is clear: they avoid the need to have a load of “public static final” fields lying around, and they provide common functionality, like a static “T[] values()” which returns all the possible values, or a static “T valueOf(String)” which returns the value whose name matches the String parameter.

They can be used to build finite state machines, and they work with the “switch” construct. They help avoid silly-in-hindsight but maybe-really-serious bugs by catching typos: all values must be declared, whereas matching against String literals means a typo would silently create a branch of never-used code.

Another neat feature is that they can return their name as a String, and they can be compared to each other – the order in which they’re declared determines which value is “first” and which is “last”.
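A minimal sketch of the shared functionality described above – the type and constant names are made up for illustration:

```java
// Declaration order defines ordinal() and the result of compareTo()
enum Direction {
    NORTH, EAST, SOUTH, WEST
}

public class EnumDemo {
    // Enums work naturally with switch; each case names a declared constant,
    // so a typo becomes a compile error instead of a dead branch.
    static String describe(Direction d) {
        switch (d) {
            case NORTH: return "up";
            case SOUTH: return "down";
            default:    return "sideways";
        }
    }

    public static void main(String[] args) {
        // values() returns every constant, in declaration order
        System.out.println(Direction.values().length);        // 4
        // valueOf() looks a constant up by its exact name
        System.out.println(Direction.valueOf("EAST"));        // EAST
        // name() returns the constant's identifier as a String
        System.out.println(Direction.NORTH.name());           // NORTH
        // compareTo() follows declaration order: NORTH comes before WEST
        System.out.println(Direction.NORTH.compareTo(Direction.WEST) < 0); // true
        System.out.println(describe(Direction.SOUTH));        // down
    }
}
```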

Now, enums are actually classes with shenanigans added. Even though it’s never spelled out in the declaration, an enum type is a class which extends java.lang.Enum<E> and has some extra methods (which I suspect are injected at compile time – it would be nice to confirm by looking inside a .class file; for the record, clever use of reflection would make it rather trivial to create a generic method invoked for the special, shared functionality).

As it’s a class, you can write other methods and declare fields in it.
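For instance, enum constants can carry per-value state via a constructor, and the type can declare ordinary methods – a sketch with illustrative names:

```java
// Each constant is constructed with its own data;
// enum constructors are implicitly private.
enum HttpStatus {
    OK(200), NOT_FOUND(404), SERVER_ERROR(500);

    private final int code;

    HttpStatus(int code) { this.code = code; }

    public int code() { return code; }

    // Ordinary instance methods work as usual
    public boolean isError() { return code >= 400; }
}

public class EnumFields {
    public static void main(String[] args) {
        System.out.println(HttpStatus.NOT_FOUND.code());  // 404
        System.out.println(HttpStatus.OK.isError());      // false
    }
}
```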

But you can’t build an enum by hand – extending java.lang.Enum<E> directly is illegal, even though the class isn’t final. That’s an annoying inconsistency and a lack of elegance. Was it necessary to implement it like this? Very likely, as there are many really smart people working on the Java language. It’s not pleasant, though.

Enums are treated like special citizens, and even have particular data structures and algorithms tuned for them (EnumSet and EnumMap); which further reminds me of the shenanigans that go on with the String class, what with the special in-memory representation and all.
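The tuned collections look like this in use (the enum and its values are illustrative). EnumSet is backed by a bit vector and EnumMap indexes an array by ordinal, so both are more compact than their hash-based cousins:

```java
import java.util.EnumMap;
import java.util.EnumSet;

enum Day { MON, TUE, WED, THU, FRI, SAT, SUN }

public class EnumCollections {
    public static void main(String[] args) {
        // EnumSet: membership checks become bit tests
        EnumSet<Day> weekend = EnumSet.of(Day.SAT, Day.SUN);
        System.out.println(weekend.contains(Day.SAT));  // true
        System.out.println(weekend.contains(Day.MON));  // false

        // EnumMap: keys must all belong to the one enum type
        EnumMap<Day, String> schedule = new EnumMap<>(Day.class);
        schedule.put(Day.MON, "standup");
        System.out.println(schedule.get(Day.MON));      // standup
    }
}
```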

These are not bad things in and of themselves – the ways that enums and Strings break out of the patterns of the language. They don’t fit the mental model that would arise from studying the rest of the architecture, though, so care needs to be taken that they don’t come back to bite you.