In real life, people struggle to hear, categorize, and remember stories that don’t fit one of the traditional narratives. A story of an open marriage that didn’t work out, where complete honesty wasn’t enough to bring acceptance for romantic love in the plural, where love died gradually until by the time the danger was recognized it was too late - that story becomes “he cheated and chose his girlfriend over his marriage.” Not the truth, but it fits the pattern; people can classify that and talk about that. We like simple models. Who cares about accuracy if it’s a nice simple model?
There are accepted narratives in software, too. The web application. The algorithm. The formal scrum iteration or the mostly-waterfall model. The agony of Release Week. When a project or framework or (what can we even call it when there are no clear application boundaries?) breaks out of established modes of computation, when User Acceptance Testing no longer makes sense, how do you even explain that to people?
"I don’t see how this could work. This doesn’t make sense. Stop it." If we can’t see the reasons for something, there must not be a reason. That person is stopped in the middle of the parking lot because they’re an idiot! There is no other explanation in my simple model!
I grow beyond myself when I am small in the world. Everything happens for a million reasons, and I might know a few, but usually I’m wrong. New ways to see, new ways to think, new ways to learn things I could not conceive before, this is the goal. In relationships and in computing.
A good novelist makes every piece of the story essential to the final resolution. Each event described in the book leads up to one big finale. That’s a good narrative.
We like to see the past in that light. We like to believe everything happens for a reason, and that reason is to get things where they are right now. “It was good this guy broke up with me, because I wouldn’t be who I am now otherwise.” Like the present circumstances are superior somehow to all the alternatives that don’t exist. It was all meant to be, we like to believe.
Kanban suggests we apply this model to the future.
"The first rule of kanban is that the later process goes to the earlier process to pick up products" ~ Taichii Ohno
That is, a pull model: start with the customer demand. Figure out what we must do to meet that (deploy), what is the prerequisite for that (testing), what is the prerequisite for that (code), and so on down a forking chain. Anything not in demand by the next step in the process is busywork. If repairing your development environment is needed to write the code effectively, then it is not yak shaving. If you’re distracted by all the code formatting options, that is the non-work that seeps in to distract us from the end goal.
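Here is a minimal sketch of that pull chain in Haskell, with made-up task names (the chain itself comes from the paragraph above, not from Ohno): start at the demand and walk the prerequisites backward. Whatever the demand never reaches is busywork.

    import qualified Data.Map as Map
    import qualified Data.Set as Set

    -- Each step lists the earlier steps it pulls from (its prerequisites).
    prereqs :: Map.Map String [String]
    prereqs = Map.fromList
      [ ("deploy",  ["testing"])
      , ("testing", ["code"])
      , ("code",    ["repair dev environment"])
      , ("fiddle with formatting options", [])  -- nothing downstream pulls this
      ]

    -- Everything a demand pulls, transitively (assumes no cycles).
    pulledBy :: String -> Set.Set String
    pulledBy step =
      Set.insert step (foldMap pulledBy (Map.findWithDefault [] step prereqs))

    -- pulledBy "deploy" includes "repair dev environment";
    -- "fiddle with formatting options" is never pulled. That's the busywork.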
This is harder than we think. It’s harder because it is not the way the real world works. The Gulf of Mexico doesn’t suck water from the Mississippi River, and the river’s tributaries don’t pull rain down from the sky. This is not how the past works either: all those “it was meant to be” warm fuzzies are a coping mechanism to help us accept the past. Causality runs forward in time. Things are the way they are because they got that way. Everything happens for a reason - usually a million reasons - and every one of those reasons existed before the result. Not after.
Yet, the first rule of Kanban is right. When we’re aiming for something, when we’re building our project’s narrative, we can set up the causalities. Plan backwards, work forwards. If we aren’t careful and deliberate, then our work will emanate from our current situation, as reality does. Water will flow down the path of least resistance. We’ll have pretty colors in our IDE and a feature that no one asked for. Conscious effort can direct our flow.
And be careful to check over and over that our destination is still where we most want to go. Check, re-aim, and proceed down the shortest path from the new-here to the new-there. Otherwise we’re back to Waterfall.
Our project history may never read like a good novel, because we should change our minds on the best ending, and we don’t go back and edit the first chapters. Yet, if we think about it that way - how can all the characters on my team be important to the final solution? - we might get somewhere faster, and be part of a story bigger than ourselves.
Looking at other cultures, finding differences, can teach us about ourselves.
One example from business: in the US we rate managers on results. Quarterly or annual metrics measure an employee’s value. In contrast, companies in many Asian countries give managers years to build a strong market position. Recognizing the distinction between short-term and long-term results, we can stop taking for granted that obviously the best way to evaluate a person is by this year’s numbers.
In programming, we can learn from other languages and their cultures. As a Java developer, I learned from Haskell that we can get way more out of the type system if we take the time to be more specific than String or Int. The Haskell community teaches that we can do better than testing a few examples. I learned from Ruby that code can be more expressive without curly braces. Ruby teaches the importance of careful, intuitive API design; in dynamically typed languages we have to remember, or guess, method names. The Ruby community teaches that the important part of coding is the people, both the people who use the programs and the people who write them.
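A tiny sketch of what “more specific than String or Int” buys us in Haskell (the names here are invented for illustration): wrap the raw type in a newtype and the compiler rejects swapped arguments, at zero runtime cost.

    newtype CustomerId = CustomerId Int deriving (Eq, Show)
    newtype OrderId    = OrderId Int    deriving (Eq, Show)

    lookupOrder :: CustomerId -> OrderId -> String
    lookupOrder (CustomerId c) (OrderId o) =
      "customer " ++ show c ++ ", order " ++ show o

    -- lookupOrder (CustomerId 42) (OrderId 7)   -- compiles
    -- lookupOrder (OrderId 7) (CustomerId 42)   -- type error, caught before it runs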
Experiencing diverse cultures can help us question what we thought we knew, and grow as people and programmers.
Back in the Smalltalk days when OO was birthed, the idea was to have a bunch of objects that pass messages to each other. The focus was supposed to be on the messages.
But people like objects. They are easy to draw on a white board. And they come with cool tools like inheritance and polymorphism. Pretty soon these tools became the focus. People learned how to use them, and used them willy-nilly. All that worry about messages was left in the dust.
It’s almost like wanting to be a surgeon, so I learn how to use all the tools that a surgeon uses. Get certified in scalpels and retractors and ultrasound tissue disruptors. And bam, I’m a surgeon! Hire me, I can operate on anybody!
Functional programming is still in its youth. Which tools of the trade will become synonymous with functional, as its principles are left in the dust? The point of FP is making each piece of code interchangeable with its return value. Expressions that can be evaluated in whatever order, or never if they aren’t needed. Whatcha wanna bet people learn how to pass functions around, and start considering themselves functional programmers?
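To make “interchangeable with its return value” concrete, here is a small Haskell sketch (names invented): pure expressions can be substituted for their values, reordered, or skipped entirely.

    total :: Int
    total = price + tax
      where
        price = 100
        tax   = price * 8 `div` 100

    -- Referential transparency: every occurrence of `total` can be replaced
    -- by 108, and 108 by `total`, without changing the program's meaning.

    expensive :: Int
    expensive = sum [1 .. 1000000000]

    choose :: Bool -> Int -> Int -> Int
    choose b x y = if b then x else y

    -- Evaluation order is free: `choose True total expensive` never
    -- computes `expensive`. An unneeded expression simply never runs.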
Objects that pass themselves and give access to all their innards are the opposite of the original intent of OO. When FP goes mainstream we’ll have side-effecting, environment-dependent functions passed around willy-nilly. “What do you mean I can’t pass this code over the network?” they’ll say, and RMI will be enhanced to Remote Method Passing, and committees will publish protocols.
OO was originally message-oriented. FP is more accurately expression-oriented. People get hung up on what they can do with the tools these paradigms offer. The essence of each is not what we can do, but what we don’t do. Objects don’t muck around with each other’s state, so we can be sure the code that sets, changes, and uses this data is right here with it. Functions don’t use or impact the outside world, so we can be sure this input entirely determines this output.
Reuse isn’t about typing. Characters are free. It’s about saving headspace: throw this task over a wall to some code I trust. Then I don’t have to worry about design, naming, or testing. But wait, can you trust that superclass? Only if you’re diligent about enforcing the Liskov Substitution Principle and the various other principles we aim for as we try to use OO tools without stabbing ourselves with the scalpel. Functional programming will become just as painful if we let go of principles and hold tight only to the tools.
As a surgeon, I can excise your kidney, snip off those tonsils, or cut that baby right out of your uterus without stretching your vagina. But should I? The essence of medicine is guessing the right procedure for each patient. Most of the time, the right surgery is no surgery.
Therefore: Abuse of inheritance is like cutting out your kidney to cure the flu. And returning any old function from another function is using radioactive iodine to cure one cancer and cause another.
Maybe we should all take an aspirin and go to bed.
The sage responds:
“The adoption of new paradigms leads to the formation of patterns of their misuse, and their replacement by other paradigms. Why should this discourage you? It is the way we learn.
(Shrugs shoulders.) Purity is a matter of perspective. Software is for people.”
3 + 2 = 5
Can there be two ways of looking at that?
Turns out there are. One can see = as equivalence, which means that
5 = 3 + 2
is the same statement.
Or, one can read = as “results in”, seeing “the operation of adding 2 to 3 yields 5.” In this case,
5 = 3 + 2
is meaningless and backwards.
It turns out that children learning arithmetic often think of equals this way, as a “results-in” sort of statement. Like:
flour + water + sugar + eggs + mixing + baking = CAKE
Buy 2 Pizzas = Get One Free
In most programming languages, the equals sign is closer to the “results in” meaning. The order is reversed, and we get
a = 3 + 2
meaning “the result of the operation of adding 2 to 3 is stored in a.” Then we can use one or more equals signs to test for equivalence:
a == 5
which is, again, an operation. Usually it’s the equals operator on a, with 5 as an argument.
Neither the child’s-arithmetic interpretation nor the imperative-programming interpretation has anything to do with the mathematical meaning of equals.
Why do we care? If you think back to algebra, where = very emphatically meant equivalence, that stuff was darn useful. You could figure out all kinds of puzzles with statements of equality. (Integrals were my favorite.) Order didn’t matter, and you could do whatever you wanted to both sides. This was deductive reasoning, real logic, in action.
This is where functional programming is trying to take us. “Don’t tell me what to do. Tell me what you know, and what you want to know, and we’ll get there.”
Moving from imperative to functional programming is akin to moving from arithmetic to algebra.
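In Haskell, for instance, the equals sign is an equation again, and substitution runs in both directions. A minimal sketch:

    a :: Int
    a = 3 + 2

    -- This is an equation, not an instruction. Anywhere `a` appears we may
    -- rewrite it as 3 + 2, and anywhere 3 + 2 appears we may write `a`,
    -- the way algebra lets us substitute equals for equals.

    b :: Int
    b = a * a   -- equals (3 + 2) * (3 + 2), equals 25, by pure substitution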
Like @runarorama said: “Code it functionally. Like an adult.” (That cracks me up.)
This comes from Surfaces and Essences, by Hofstadter and Sander.
The other day someone told the story of a mom and teenage kid on a long car trip. At some point the mom says, “I can’t talk anymore, I have to focus on driving.” The kid was like, “It’s no fair, you get to play the whole time.”
The kid perceived driving as a game, because their experience with driving was within video games. They’ll learn eventually that real driving is not so fun: it goes on and on, and it’s life & death.
What makes driving in video games so much more fun than driving a real car? Part of it is the stakes. What happens if I slam on the brakes and skid into this turn? In a game, worst case, you wreck your digital car and start over again. Perhaps a friendly cloud with a winch hoists you back onto the road. In real life, you and your whole family and another family on the road all die. Or are maimed for life. Those are some consequences.
In the video game, we’re free to experiment, because we can always start over. If you never go off the road as a beginner in Mario Kart, you’re not trying hard enough. In real life, we can’t afford to experiment. There is no undo.
That freedom to experiment makes games more fun. It also makes it possible to learn. Push your limits, find out how fast you can take that turn. If you’ve never failed, then you could do better.
In development, we can give ourselves that freedom. We can make it possible to undo what we tried. Version control can help with this. It gives us savepoints to fall back to. Automated testing helps, so we know we flipped the car immediately, not three laps later.
This last weekend I pulled some crazy stunts in git, rewriting project history and patching it together from multiple sources. This was fun, because I knew how to undo my mistakes. It was play, because I had the freedom to fail and learn and try again.
Learn a tool like git well enough, and it becomes the friendly cloud with a winch. Set yourself back on the road whenever you like, and then floor it.
The other day, Linda (age 5) and I went to the playground with the swings. She took 3 different shoes with her, to test which one goes the farthest when kicked off. I wore the same black flats that I kicked all the way across the playground last time.
On the way home, I asked, “So which shoe was the best?” She had a clear princess shoe, a black tie shoe, and a gray-and-purple running shoe. “The black one,” she replied. Then: “Your shoes went far, and they’re black too. It must be that black shoes go the farthest.”
That’s her working theory, that black shoes travel the farthest through air.
This is perfectly valid induction. In her observation, every black shoe went farther than every non-black shoe. Yet it makes no sense: there’s no causal link between color and distance traveled.
This illustrates the limitations of inductive logic. “It’s true there and there, this new situation is the same in some way, probably true here too.” It also illustrates the power of the human mind to construct categories and make judgements based on surface characteristics.
This is the power of the silent analogies we make in our heads, the recognizing of likeness and the inference of shared properties. This affects our code too.
One reason OO is so popular is that we can think about objects, we can form analogies to the familiar physical world. Yet that costs us, because that Customer is not a customer. Limitations of the physical world leak into our designs.
For instance, in real life, only one thing can be in each place at a time. A customer only has one name, and when it changes in real life we want to change it in the database. Yet, if we loosen the analogy between the real customer and the customer object, we don’t have to destroy the old name to assign the new name. And I don’t mean making a CUSTOMER_HISTORY table to store the old data. I mean rethinking the whole database. Make a customer record into the sum of everything we’ve learned about the customer over time. We are not limited to specific slots that map to boxes on a form.
Event sourcing, it’s called, the perception that data is a sum of prior events rather than the occupant of a place. It looks at the cause of things - how did it get this way?
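A sketch of event sourcing in that spirit (the event and field names are invented): the customer record is a fold over everything we’ve learned, and no event destroys an earlier one.

    data CustomerEvent
      = Registered String       -- the name we first learned
      | NameChanged String      -- a new name; the old one is not destroyed
      | EmailAdded String
      deriving Show

    data Customer = Customer { name :: String, emails :: [String] }
      deriving Show

    apply :: Customer -> CustomerEvent -> Customer
    apply c (Registered n)  = c { name = n }
    apply c (NameChanged n) = c { name = n }
    apply c (EmailAdded e)  = c { emails = e : emails c }

    -- The current record is the sum of prior events: a fold over history.
    current :: [CustomerEvent] -> Customer
    current = foldl apply (Customer { name = "", emails = [] })

    -- The event list still remembers every name the customer ever had;
    -- we never overwrote anything, we only learned more.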
If Linda looked at the cause of each shoe’s trajectory, she’d find it has a lot more to do with the flick of the ankle at the top of the swing than with the color of the shoe.
My husband likes to say, “You can tell me what to do or how to do it, not both.” If I ask him to clean the garage, he gets to clean it his way.
When we talk to customers, we ask them to explain the problem. What is the need that our software can help meet? If the customer tells us, “Put this field here and store it in the database at this time,” or various other details of the solution, we gently steer the conversation toward the root of the problem. We want them to tell us what problem to solve, and let us solve it.
So it is with declarative programming. Tell the computer what to do, and let it worry about the details of how. Don’t tell it how to filter the list; tell it that you want a filtered list, and let it worry about how or when to create it.
Why? Well, maybe the compiler or optimizer or underlying library knows a more efficient way to do it than we do. Or, because when we get all wrapped up in the “how,” we lose track of the “what” and the “why.” If not when we’re writing code, then when we’re reading it: the cognitive load of “how” takes up brain-space we could use for “why.”
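The filter example, sketched in Haskell: the first version spells out the “how”; the second just names the “what” and leaves how, and when, to the library.

    -- The "how": walk the list, test each element, build the result by hand.
    positivesByHand :: [Int] -> [Int]
    positivesByHand []       = []
    positivesByHand (x : xs)
      | x > 0     = x : positivesByHand xs
      | otherwise = positivesByHand xs

    -- The "what": a filtered list, please. Thanks to laziness, elements
    -- nobody consumes may never even be computed.
    positives :: [Int] -> [Int]
    positives = filter (> 0)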
Bret Victor mentions this - “give the computer high-level goals” - in “The Future of Programming,” a talk he delivered as if he were speaking forty years ago.
Kathy Sierra talks about cognitive load in her “Your app is making me fat” post. One quibble: I disagree with the assessment of lowered willpower in people who think hard and then eat chocolate - the brain uses a lot of sugar, and those people needed to replenish. Yeah. Chocolate for coders!
As @franklinchen points out, the word “pragmatic” has negative connotations because people use it as an excuse to do whatever is convenient to them at the moment.
"Pragmatic" does not mean "convenient." It means carefully considering the balance of time spent now vs time spent later. It means picking your fights, and fighting hard for the right hill. It means picking the fights most important to the particular situation.
In contrast, extreme idealism fights the same fight all the time, no matter what. The pragmatist is always improving. The particular improvements vary.
In politics, that means calling your US Congressperson to support free speech sometimes, and volunteering for a local candidate who supports midwifery at other times. It means voting by candidate rather than party.
In programming, pragmatism means writing wickedly detailed tests when it’s critical, and skeletal ones other days. It means hitting the points in the code review that help the whole team grow. It means sometimes refusing to release incomplete features that would screw up the product, and other times delaying the release until critical features are complete. It means using our brains to make decisions, taking responsibility for them, but not beating ourselves up when the outcome is one we could not have predicted. As Franklin put it, “a real pragmatist isn’t negative all the time about change or risk, but confronts it head on and looks at all sides.”
Pretend we know as much science as the ancient philosophers. (Philosophy was the closest thing to science they had in ancient times.) We might observe that rainwater in the parking lot moves toward the grates. “Why?” we ask each other. Does water like grates? Is it afraid of sunlight? Is each droplet shepherded in that direction by the Goddess of Rainwater? Or does each droplet feel a societally-imposed moral compulsion to plunge itself into darkness?
It is a long jump from water skimming across the concrete toward the sewer to the principle of gravity, to the mass of each water molecule drawn toward the mass of all the molecules in the Earth. This property of matter causes rainwater to move down, and the shape of the parking lot translates “toward the center of the Earth” into “mostly sideways toward the low point with the grate.” We need more observations and a great deal of reflection to recognize the observed behaviour as a logical consequence of a universal principle.
It took thousands of years to understand the principle of gravity, but it was worth it. Knowing the principle lets us predict the motion of water (and other things) on arbitrary surfaces that do not contain grates.
Move from observed behaviour of objects to established properties of types, and we gain this predictive power in our programs. Example-based testing establishes behaviour; property-based (aka generative) testing documents inherent properties. As humans, we find it easier to generalize from a few examples than from the careful mathematical expression of a property. Yet it’s also much easier to generalize inaccurately and reason poorly when looking at a few cases of rainwater in parking lots than when considering all abstract cases of matter.
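In Haskell, that move looks like QuickCheck. The property below is a stock example, not one from the text: instead of asserting a few observed cases, we state a law of the type and let the machine hunt for counterexamples.

    import Test.QuickCheck

    -- Example-based: a single observation, like one puddle and one grate.
    exampleReverse :: Bool
    exampleReverse = reverse [1, 2, 3] == [3, 2, 1 :: Int]

    -- Property-based: a universal principle. Reversing twice is the identity,
    -- for every list; QuickCheck generates hundreds of lists trying to break it.
    prop_reverseTwice :: [Int] -> Bool
    prop_reverseTwice xs = reverse (reverse xs) == xs

    main :: IO ()
    main = quickCheck prop_reverseTwice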
Property-based testing. It’s a thing. And it’s only the beginning.