
Remembering the human in the age of the algorithm
With advanced analytics the organization can discover new information, develop new mental models and explore new types of sense-making. It can understand the world differently and form new hypotheses about what leads to progress.
The promise of the algorithm is to help the organization make sense of the world in ways it never could before.
But the algorithm won’t do this on its own. Making a company data-driven doesn’t in itself make it anything. Getting the improvements we seek is up to us: the questions we ask, the decisions we make, and how carefully we choose and design the capabilities that take us there.
I believe our customers hold the key to our future. That making data BIG doesn’t make it any better — on its own. That the wrong data is more dangerous than no data, and that the measure of success is the questions we discover we can ask — not the volume of data we can churn.
In this talk I present some of the barriers and opportunities we face in the age of the algorithm, especially related to better understanding our customers, and how critical we become as human coaches and leaders in our newfound partnership with the machine.
Slideshow on Google Slides:

#Slide 2: The gap — thin data and bias
The presentation discusses two central gaps.
The first is the gap between thin/big (context-less) data and thick (contextual) data. The former can tell you what is happening, while the latter can help you better understand why it is happening. Without thick data we will not be able to understand the motivations and relationships that drive behavior; we will just record that behavior is happening.
The second gap is overestimating the computer, which is nothing more than a binary system operating on commands and training data. If we are not careful, any bias we have will be transferred into its calculations and risks being multiplied: a product-oriented organization, for example, will only get more product-oriented.
Becoming more customer-centric demands a change in how we think, work, measure and prioritize (1). What the data can do is help us better understand the complex nature of our customers and their networks, but we have to tell it to do that — that has to be the specific ask, and the computer needs the data to understand what that means.
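To make the multiplied-bias point concrete, here is a minimal, hypothetical Python sketch (the numbers and the `boost` mechanism are invented for illustration, not taken from the talk): a recommender retrained on its own click logs over-exposes whatever already dominates those logs, so a mildly product-oriented starting point compounds with every round.

```python
# Toy feedback loop (hypothetical numbers): a model retrained on its own
# logs amplifies whatever bias the logs already contain.

def next_click_share(share: float, boost: float = 1.2) -> float:
    """One retraining round: the model over-exposes whatever already
    dominates the logs (by `boost`), and new clicks follow exposure."""
    exposed = share * boost                    # product content gets extra exposure
    return exposed / (exposed + (1 - share))   # renormalize to a share of clicks

share = 0.6  # the organization starts out mildly product-oriented
for round_no in range(1, 6):
    share = next_click_share(share)
    print(f"after retraining round {round_no}: product share = {share:.2f}")
# Prints roughly 0.64, 0.68, 0.72, 0.76, 0.79: the model only ever sees the
# 'what' in its own logs, never the 'why', so the orientation is multiplied
# rather than questioned.
```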
Sources:
- Sensemaking, Christian Madsbjerg, https://www.redassociates.com/sensemaking
- You look like a thing and I love you, Janelle Shane, https://www.janelleshane.com/book-you-look-like-a-thing
- (1) Designing a customer way of working, Helge Tennø, https://everythingnewisdangerous.medium.com/designing-a-customer-way-of-working-d50c3e464ea1

#Slide 6: A model is only a simulation of the world
Models aren’t magic. They are like a novel: an author telling a story based on her own ideas, experience, insights and goals. A model is a narrative created by a human, using training data selected by other humans. It is always biased: it often relies on spurious (pretend) correlations, has blind spots, and reflects the goals and ideology of its creators. We need models, but for the right tasks and under close investigation and scrutiny.
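To make “spurious (pretend) correlations” concrete, here is a small, self-contained Python sketch on entirely synthetic data (invented for illustration): every feature is pure noise, yet if you go fishing across enough of them, one will always appear to predict the outcome.

```python
# Spurious-correlation sketch: among many random, meaningless features,
# one will correlate with the outcome purely by chance.
import random

random.seed(42)
n_samples, n_features = 30, 200
outcome = [random.gauss(0, 1) for _ in range(n_samples)]
features = [[random.gauss(0, 1) for _ in range(n_samples)]
            for _ in range(n_features)]

def corr(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

best = max(range(n_features), key=lambda i: abs(corr(features[i], outcome)))
print(f"feature {best} 'predicts' the outcome, r = {corr(features[best], outcome):.2f}")
# Every feature is noise, yet the best of 200 typically shows |r| > 0.4:
# a model that keeps it has learned a story, not the world.
```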
Sources:
- Weapons of math destruction, Cathy O’Neil, https://blogs.scientificamerican.com/roots-of-unity/review-weapons-of-math-destruction/

#Slide 7: The creative and obedient nature of the AI
Artificial intelligence will try to do exactly what you tell it to do. But it doesn’t know or understand the problem you give it. It will look for the fastest way to reach its goal (efficiency is key, which can lead to remarkable discoveries) and it won’t have the capacity to reflect on context (it is beyond literal in its interpretation of commands).
The results can get weird or funny: asked to find the best way to get a robot from A to B, it may simply build a tall robot and have it fall over. Because you only said “build a robot” and “get from A to B”, everything else is left to the machine’s unlimited creativity.
It will discover any loophole, and in doing so makes obvious how much context matters to a sensible solution: what counts as a good result in its thin, context-less world can be absolute nonsense in a context-rich one.
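Here is a toy sketch of that loophole-finding in Python (the setup, numbers and scoring rule are all invented for illustration, in the spirit of the examples in Shane’s book): the objective only measures how far along the A-to-B line any part of the robot ends up, so a naive search over designs happily picks a legless tower that just tips over.

```python
# Specification-gaming sketch (hypothetical setup): we ask for "get the
# robot from A to B", but the score only measures horizontal reach.

def horizontal_reach(height: float, legs: int) -> float:
    """Our badly specified objective: how far along the A->B line any part
    of the robot ends up. Walking covers legs * stride; falling over flat
    covers the robot's whole height in one go."""
    stride = 0.5
    return max(legs * stride, height)

designs = [
    {"name": "walker",   "height": 1.0,  "legs": 4},
    {"name": "sprinter", "height": 1.5,  "legs": 6},
    {"name": "tower",    "height": 10.0, "legs": 0},  # cannot walk at all
]
best = max(designs, key=lambda d: horizontal_reach(d["height"], d["legs"]))
print("the optimizer picks:", best["name"])  # -> tower: it simply falls over
# Nothing in the objective said "stay upright", so the loophole wins.
```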
Sources:
- You look like a thing and I love you, Janelle Shane, https://www.janelleshane.com/book-you-look-like-a-thing

#Slide 13: Context-less data mixed with context-less AI
An AI is incomplete. Its single set of parameters produces a kind of tunnel vision, lacking awareness of its surroundings.
According to anthropologists (2), what influences a human being is their environment, relationships, culture and situation (their context). Modelling answers for the real world needs to include some or all of these in order to find solutions that match and make sense.
The computer doesn’t know what it doesn’t know. It is not aware of the context surrounding and shaping the data it is looking at or how its models operate.
An AI doesn’t have ethics or understand culture; it doesn’t know what is important and what to ignore. It might produce decisions that make complete sense inside its artificial world, but in the real world those decisions can become problematic, impossible or irrelevant.
Sources:
- You look like a thing and I love you, Janelle Shane, https://www.janelleshane.com/book-you-look-like-a-thing
- Anthro-vision, Gillian Tett, https://www.penguin.co.uk/books/441314/anthro-vision-by-tett-gillian/9781847942890
- (2) Why the world needs anthropologists, Dan Podjed et al., https://www.routledge.com/Why-the-World-Needs-Anthropologists/Podjed-Gorup-Borecky-Montero/p/book/9781350147133

#Slide 17: Models are how we make sense of the world
“Mental models are how we understand the world. Not only do they shape what we think and how we understand but they shape the connections and opportunities that we see. ... A mental model is simply a representation of how something works. We cannot keep all of the details of the world in our brains, so we use models to simplify the complex into understandable and organizable chunks.” — Farnam Street (3)
We need models, and we use them all the time, consciously and subconsciously, to make sense of the world and inform our decisions.
In physics the theory of relativity is a model; in biology, natural selection is another. Even in design we use mental models of our customers, e.g. personas or customer journeys (both models), to filter and prioritize complex information into one framework that helps us apply artificial rules to the world and simplify it.
So what could possibly go wrong with models when we ask them to simulate the world?
- Every model is biased: it decides for you which information is important and which is not. We should carefully consider which models to use, to make sure we include the information needed going into them and coming out of them.
- The purpose of a model is not the building of the model, but the outcomes it creates when applied. Sometimes making e.g. a persona becomes the goal in itself because the project demands it. The model is then designed without the important context of the value it will produce when put to use.
Sources:
- (3) Mental models: the best way to make intelligent decisions, Farnam Street, https://fs.blog/mental-models/
- You look like a thing and I love you, Janelle Shane, https://www.janelleshane.com/book-you-look-like-a-thing
- The geography of thought, Richard Nisbett, https://www.simonandschuster.com/books/The-Geography-of-Thought/Richard-Nisbett/9780743255356

#Slide 22: Our technologies are not trying to achieve the same things as us
A technology is simple, uncomplex, designed for one goal in one context. It doesn’t have depth; it often has very simple relationships and only cares about a filtered version of its own environment.
A human, on the other hand, is remarkably complex. She steps into and out of multiple complex networks all the time. Any tiny influence can have effects we cannot even imagine. Any big influence can lead to nothing when competing against all the other influences impacting her at the same time.
Treating customers as if they were something as simple as a piece of code, a button or an algorithm removes what is human from them. This can interfere with our ability to understand them, build for them, or even improve ourselves.
There are plenty of examples where the effects of technology, when unchecked, have been severely negative for humans, because we focus on monitoring and measuring the simple technology and avoid the responsibility of understanding the real-world, human impact.
Source:
- Stand out of our light, James Williams, https://www.cambridge.org/core/books/stand-out-of-our-light/3F8D7BA2C0FE3A7126A4D9B73A89415D

#Slide 28: Even Google knows they know almost nothing
As our volume of data explodes it’s easy to think that we know what needs to be known, but we don’t. We know a thin sliver of what is out there, and we’ve packaged it neatly into predetermined categories and filing cabinets (databases).
Given all the information that is available it is remarkable how little we know about e.g. the brain, love, human decision making or even taste (4).
Advertising, design and art are still mysteries (and let’s hope they remain so), demonstrating that we need to be humble: with the current information and data we’ve only peeled back the first layer of the onion, with many mysteries left.
There has never been a better time to be curious. Because even if the current wave of algorithms and big data has its limitations, the opportunities are energizing. With the right questions and mindset we can go far beyond what we think is possible and beyond what we already know we know.
Sources:
- “the mind knows not what the tongue wants”, Howard Moskowitz, https://www.ted.com/talks/malcolm_gladwell_choice_happiness_and_spaghetti_sauce/transcript (4)
- Small Data, Martin Lindstrom, https://www.martinlindstrom.com/our-books/small-data/

#Slide 30: What we don’t know we don’t know
If we keep asking the same questions we’ll keep exploring the same space and coming up with similar results. New data doesn’t change the answer; it keeps us looking in the same directions. If we want to know something we don’t know, if we want to be surprised (to learn), it’s not the data that is the key (a small thing that can unlock an enormous change) (5); it’s the questions we ask.
If everyone asks the same questions to the same people in the same way, we will only see and act on the same world as our competition. Competitive advantage does not come from creativity alone (combining data), but from seeing data nobody else sees and then making a combination nobody else can make.
Being surprised means discovering something you didn’t know or expect; it is the best indicator of learning something new. How can we get to data, information and insights nobody else has, that most didn’t even know existed? That is how we change how we see the world and discover new opportunities.
Sources:
- Why the world needs anthropologists, Dan Podjed et al., https://www.routledge.com/Why-the-World-Needs-Anthropologists/Podjed-Gorup-Borecky-Montero/p/book/9781350147133
- (5) Malcolm Gladwell on an incentive: a small thing that can unlock a big change, like a key, latch or bullet, The Tipping Point, https://www.gladwellbooks.com/titles/malcolm-gladwell/the-tipping-point/9780316316965/