
Transcript
Welcome to this edition of Expert Interview from Mind Tools with me, Rachel Salaman.
Do you think technology is making your life simpler or more complicated? Either way, there's no doubt that it's having an impact, especially at work. So, how should we be adapting to make the most of all the incredible innovations around us?
That's what we'll be exploring today with David Weinberger, a leading thinker in all things tech-related. He's been an internet advisor to presidential campaigns, writer-in-residence at Google, and a Fellow at both Harvard and the U.S. State Department.
He's the bestselling author of the "Cluetrain Manifesto," and also a new book, "Everyday Chaos: Technology, Complexity, and How We're Thriving in a New World of Possibility."
David joins me on the line from Boston, Massachusetts. Hello, David.
David Weinberger: Hello, Rachel.
Rachel Salaman: Thanks so much for joining us today. What inspired you to write this book?
David Weinberger: I write about the effect of technology on ideas. I have a background in philosophy – I used to teach philosophy a long, long time ago. And part of it is a longtime interest in the way the internet, our experience of the internet, has been changing how we think about things, and a longtime interest in how we think about how the future happens.
And [also] a shorter-time interest, but really intense, in the sort of artificial intelligence known as "machine learning." And all those threads came together.
Rachel Salaman: And why did you call it "Everyday Chaos?"
David Weinberger: Because one of the themes of the book is that we are learning to embrace, and even just acknowledge, the chaos that's always been around us, throughout our history.
We're now coming to be able to acknowledge and embrace it, because we have technology that enables us to thrive with it.
Rachel Salaman: Did you choose to focus on the positive aspects of this new world, or do you just not see many negatives?
David Weinberger: No, this is a choice to focus on the positive. The book acknowledges (at least much of) the negative about the internet and about machine learning. There's so much negativity, and has been for at least the past ten years, about the internet – negativity it has earned.
[With] machine learning, some of the problems now are becoming quite apparent, I think, to most people. But I don't think we even know what all the problems are yet, and there are quite serious problems.
The book is, in many ways, unusual for what might be classified as a business book: it's concerned with broader historical, cultural, and social ideas than is typical.
Rachel Salaman: Let's get into some of those ideas now, and the book starts with your thoughts on prediction, which is how we estimate what's going to happen in the near or distant future, from the weather to market volatility. What is changing in how we predict things and what are the implications?
David Weinberger: Technically, we do have a pretty new way of predicting now. Traditionally, we've noticed general trends – that's pre-scientific, really – how things generally work and which things are associated. And we've had statistical prediction (which we still use, of course), where we amass data, look for correlations, and make predictions from them. That can work pretty well; we've been doing it with weather for hundreds of years.
There's a second way in which we predict, however, which is also pretty old, where you build a conceptual model of how the thing you're predicting works. So if it's weather, rather than simply gathering data about average rainfalls and temperatures and making predictions based on that, you say, "OK, I think I know how weather works." This was actually [the case] in 1900.
There were seven principles thought to determine what the weather will be – the density of the air, its warmth, and so on. Seven factors that, taken together, enable us to predict the weather.
This is a law-like approach, it fits in very nicely with Newton's view of the world, the universe, that everything is governed by these laws. And if you know the laws, and you have some data you can feed into it, you can then make predictions.
So, we use all these ways, depending on how well they work and what we want from a prediction. But we now have a new way, which is an elaboration of the statistical method – machine learning takes in vast amounts of data without us necessarily giving it a sense of how that data goes together, how the factors go together. You just give it massive piles of numbers and you let it make probabilistic correlations among all those pieces.
And in some instances, and this I think is the really startling thing and very upsetting to our Western ideas about how the world works, in some instances it will make predictions that are accurate (they're always probabilistic but they are probabilistically accurate) [and] we can't figure out how it made those predictions, and we can't derive general laws or rules from it.
It seems to be just this vast collection of particulars, and that's upsetting to our assumptions about how we think the world works.
Rachel Salaman: So what do you think the pros and cons of machine learning, or artificial intelligence, in predictions are?
David Weinberger: The pros are easy. We use machine learning to make predictions, or to classify things – what objects are in this photo, for instance – because it works; and when it doesn't work, we don't use it.
So, we get better predictions and better classification, where "better" can mean more accurate, or simply faster or cheaper. I think the really interesting thing is where it's more accurate than humans can be, including, for example, in weather prediction.
The dangers take a little longer. One of the ones that is more or less the original sin of machine learning, because it's connected to the very essence of machine learning, is that machine learning learns – hence the name, it's learning from data.
With normal computers, computer programming, if you want to write a program that will predict sales for your business, or whatever, you build a conceptual model. And you say, "That's going to be determined by the number of salespeople we have, the number of marketing leads we have," and so forth. And you know what the factors are and then you know what the connections among them are: more leads means more sales calls, or more effective sales, etc.
So, you know that stuff and you build a computer program that represents that, just exactly like building a spreadsheet, which is a form of computer programming. And then you feed numbers through it, and all that stuff works.
With machine learning, you just give it the buckets and buckets and buckets of numbers that you have, and it goes through and finds the relationships without knowing ahead of time that these numbers represent the number of salespeople and these numbers represent the number of leads – no!
Machine learning just looks for correlations among all these pieces of data. So, that means that when you feed in data it is learning from that data, and if the data reflects human stuff then very likely that data represents human biases and prejudices.
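The contrast David draws here – an explicit, spreadsheet-like model whose factors and coefficients you state up front, versus a relationship fitted from unlabeled columns of numbers – can be sketched in a few lines of Python. All names and figures below are invented for illustration:

```python
# 1) Explicit model: we *tell* the program how the factors connect,
#    exactly like formulas in spreadsheet cells.
def predicted_sales(salespeople, leads):
    # Hand-chosen coefficients expressing our conceptual model.
    return 4.0 * salespeople + 0.5 * leads

# 2) "Learned" relationship: fit a single coefficient from raw number
#    pairs, with no labels telling the fitter what the numbers mean.
def fit_slope(xs, ys):
    # Least-squares line through the origin: slope = sum(x*y) / sum(x*x).
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

leads = [10, 20, 30, 40]          # one unlabeled column of numbers
sales = [20, 40, 60, 80]          # another unlabeled column

slope = fit_slope(leads, sales)   # the relationship found in the data
print(predicted_sales(5, 100))    # 70.0 – from the stated model
print(round(slope, 2))            # 2.0 – discovered, never stated
```

Real machine-learning systems fit millions of such coefficients at once, which is exactly why the resulting correlations can be accurate yet resist being summarized as rules.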
So if you feed in employment information – because you're trying to train a machine learning system to be able to sort through job applications to pick out who should be seen by a human, to be interviewed by a human – it's very likely that that machine learning system is going to "learn" (and here I'm putting that in quotes) that being a woman does not correlate very well with being a senior manager, because that's the historic bias that the data represents.
It can be very hard to notice that bias, find all the factors that might reflect that bias in the data, to get rid of that bias, and so machine learning represents a genuine danger of not only reproducing but actually amplifying existing biases. That's a real issue, for sure.
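A tiny, fully invented dataset makes the mechanism of that bias concrete: if historical records associate one group with promotion far more often, any frequency-based screener trained on them simply encodes the old prejudice as a "prediction."

```python
# Invented history: (gender, was_promoted) records reflecting past bias.
history = [("m", True)] * 80 + [("m", False)] * 20 + \
          [("w", True)] * 20 + [("w", False)] * 80

def learned_rate(records, gender):
    # The "learning": just the historical promotion frequency per group.
    group = [promoted for g, promoted in records if g == gender]
    return sum(group) / len(group)

# A naive screener ranking candidates by these rates reproduces the bias.
print(learned_rate(history, "m"))  # 0.8
print(learned_rate(history, "w"))  # 0.2
```

In a real system the gender column is usually absent, but correlated proxies (job titles, gaps in employment, even word choice) let the same pattern back in, which is why the bias is hard to notice and remove.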
Rachel Salaman: And how seriously are the people who work in this field taking that, and what are they doing to mitigate it?
David Weinberger: There's a lot of work being done by computer scientists, often in conjunction with social scientists who are also very engaged in this.
As far as I know, all of the major software companies, the platform companies and the like, are hyper-aware of this. And I suspect there won't ever be a complete solution, any more than there is a complete solution to the problem of human bias. Nevertheless, the solutions are not simply technical.
Now the technical part of it is really important: cleaning the data, new tools to investigate how systems are making their "decisions," what factors are playing a role in it – all of that is really important, but there's a set of human things that need to be done, as well. And I think and hope that most of the large companies (and I hope many of the smaller ones) are investigating them.
For example, it's common now for there to be a call (which, I think, needs to be taken seriously) to make sure that a diverse and representative set of people are involved in every phase of the development of the machine-learning system. Including thinking through the data that's being collected and where there might be reflections of hidden bias, but also in the sort of outcomes that are desired.
So, for example, if you use machine learning as a city – you bring in a system to figure out the most effective bus routes – it can succeed at that brilliantly. The routes now carry more people, who get where they're going faster, and everybody is happy. Except you didn't consider the effect of those routes on what you want your city to be.
It may be that you achieve that effectiveness of the routes by cutting down on the number of stops in the poorer parts of town. The machine-learning system solved the problem that you gave it but that's not really the problem you want solved, because you've now made your city worse for many people in it, including some of the more vulnerable ones.
So, the human side of this needs to thoroughly surround the design, development and deployment of machine-learning systems.
Rachel Salaman: Do you think there are any situations where it's actually better to rely more on human-based traditional methods of prediction with no machine learning, where artificial intelligence just wouldn't be as accurate or as helpful?
David Weinberger: Yes, [but] I don't think we yet know where they are. Some of the more obvious ones are... For example, in the United States machine learning is frequently used in order to decide or recommend who should get bail and how much money they should pay, because we have a weird and inherently unjust bail system in the U.S., as well as at the other side of the process, helping to determine the sentence.
And this is the famous (in fact, it's the go-to) example for this sort of question, because some journalistic research showed that yes, those systems are really biased! They are perpetuating very pernicious biases in the U.S., and so at least you want to, I think, hold off on using them.
The judicial system is one in which, at least traditionally, we absolutely want to trust in the system and it is thought – and I think this might change, but at least for a long time it's been assumed – that to have trust in the judicial system citizens need to have evidence of how it's working and that it's working fairly.
There needs to be a lot of transparency in the system, and if you're not getting that from your machine-learning system then that's maybe a good place to hold off.
Rachel Salaman: In your book you also explore anticipation, which is related to prediction: it's about preparing for an expected outcome. How does anticipation contrast with your concept of "unanticipation"?
David Weinberger: So, I count prediction as one type of anticipation, and anticipation is a much broader thing.
Prediction – at least what we now count as prediction – is a fairly new idea in the world, but anticipation has been our fundamental strategy of strategies (it's literally a Paleolithic strategy).
The first time a human being knapped a flint ax to use the next day in the hunt, that person was anticipating the use of the ax. Clearly that has worked pretty well for us, since we are the dominant species on the planet – although I think there's probably some argument about whether that was a good thing or not.
Nevertheless, it seems to have worked and we will always do it. When you look both ways when you're crossing the street, you are doing this because of the very vivid sense of anticipation, and if you don't then you die pretty quickly. So, we're always going to anticipate.
The thing that's really interesting to me, especially about how the internet has trained us to think about the future, is the many, many ways on the internet [that] we purposefully refrain from anticipating as a strategy for success. I'll give you an example or two.
There's a thing called a minimum viable product that is a very popular way of launching a product on the internet. So, the traditional way that everybody knows, I think, is that you try to figure out what your market needs and you package it up and then you produce it, and you try to get it right on the first try, because you only get one chance to launch a product – at least that's what I was told when I was a marketing guy 20 years ago.
With a minimum viable product, you go the other way. Rather than trying to anticipate what your market needs, you have an idea for a core feature for a product. So, if you are Dropbox the core idea is: let's let people work on their stuff wherever they are, it doesn't matter, it's all up in the cloud.
And so you launch with that and pretty much only that. You go out with the absolute minimum set of features that people will pay for.
And then, rather than having focus groups beforehand and doing all this research, you watch what people do with it, you talk with them, you watch how people talk with one another on the net about the product and see what they actually want (not what they think they want when you go out and you ask them, but what they actually want) from your product.
And then you start adding features until it gets very feature-rich. As I say, [it's] a very common approach on the internet because it works; it's way less risky than trying to guess exactly what people need and it's a great business approach.
But the essence of it is the opposite of what we have done for thousands and thousands of years. The essence of it is to hold open possibilities, rather than trying to narrow them.
You're listening to Expert Interview from Mind Tools.
Rachel Salaman: Another key concept that you explore in your book is "interoperability" – a bit of a mouthful! Could you give us some examples of what that is?
David Weinberger: It's actually, I think, at the root of the unanticipation that we see all around us – not just in these MVPs, but in lots of different things on the net.
So, on the internet it's common, if you're a software developer, to release your software as open source, meaning that anybody can reuse it without asking permission. Or, in academia, there's open-access publishing, where you post your work on the web and anybody can reuse it without asking permission.
There's a pattern here: all of this is a form of unanticipation, and it's a form of interoperability. Interoperability means that what you have designed for your system can be reused by other systems – and it works really well, in unanticipated ways. It's put to unanticipated uses, often without anyone even asking permission.
Interoperability is obvious in the standards that enable the internet to work, whether it's one of the deep standards for the transmission of data or one of the higher-level ones we're generally more familiar with (such as JPEG, the graphics format) that we just accept, as we should.
I posted a JPEG of something and I put it out for open use, as is increasingly the case, and if you want to use it you just grab it and use it. Use it in whatever application you want, whether it's email, or you can edit it in a photo system, or you can use it in desktop publishing, you can use it as data to be input into your machine learning system. All of these different uses without me having to give you permission, and without you having to do any additional work to reuse it.
Without interoperability, the internet would just be discrete, closed lumps and clumps of data and content. But, because the internet, in its root and essence, is about the sharing of data and content, that makes interoperability at the very heart of the internet itself.
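JSON, like JPEG, is an open interchange format, and it makes for a minimal sketch of the interoperability David describes: one program writes the data, and any other program can consume it with no permission and no coordination. The record below is invented for illustration:

```python
import json

# Producer: writes data in an open, standardized format.
record = {"title": "Everyday Chaos", "author": "David Weinberger"}
shared = json.dumps(record)  # any JSON-aware system can consume this

# Consumer: a completely separate program, written without coordination,
# reads the same bytes back because both sides follow the standard.
parsed = json.loads(shared)
print(parsed["author"])  # David Weinberger
```

Nothing about the producer anticipated the consumer; the shared standard is what lets the unanticipated reuse happen.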
Rachel Salaman: You devote a chapter of the book to strategy, drawing a line from the beginning of the concept through to the present day. Could you just summarize that evolution?
David Weinberger: Strategy is a new idea. It seems like a very old one because it has an ancient Greek root, but it's actually a very new idea.
To have a modern idea of a strategy, you have to believe that the future is stable enough and knowable enough, that there are laws that are governing how the future unfolds and we humans can understand those laws and apply them.
So, military strategy really only came to be, as we think of it, in the 19th century, when we were in the mood to think that there are these large laws controlling even the behavior of armies and battle.
Business strategy is much newer than that. It's measured not in centuries but in decades, and it already is beginning to change its shape.
There are books like Taleb's "The Black Swan" and Rita Gunther McGrath's "The End of Competitive Advantage," both of which stress the volatility of your environment rather than its stability – that a small change can occur and wipe you out (as in "The Black Swan"), or that you cannot let your vision 10 years out blind you to the micro changes in the flow of information all around you, which are opportunities and risks you absolutely need to be paying attention to.
There's a trend away from focusing on a single 10-year strategy that you put all of your resources behind and commit to. I think we are already losing the sense of a stable environment that gave us the confidence that that sort of strategy, just by itself, is the right way to go forward.
Rachel Salaman: And you say that in the current phase business strategy is guided by interoperability, so what does that look like in practice?
David Weinberger: If it is the case that we are in a more chaotic environment than we used to like to think, that very little changes can have tremendous effects, that we need to be paying attention as much as we can to the changes that are happening all around us, all the time – because everything affects everything all the time, all at once – then one of the ways of thriving, in such an environment, is not to lock into a single source or a single set of expectations about what will happen.
Designing for interoperability, as opposed to designing for rigid, time-defying products, means that you are aware that you can increase the value of your products if you make them usable in ways that you did not anticipate, and you do that by making your products interoperable.
Rachel Salaman: The last part of your book looks at progress and how we understand that idea, what are your main points?
David Weinberger: I look at progress because it is one way that we measure success, so in some ways that chapter is a way of trying to rethink what we count as success and what the shape of success is.
Progress is, again, a pretty modern idea. It's around 1700 that we really became convinced that we could be smarter than the ancients – the ancient Greeks and the Romans! Unbelievable, but the idea took root pretty quickly.
And so when we think of progress, we think generally of a timeline that is inclined to the right, with some dots on it that represent the steps that we took. And that's a useful story to tell, but it's never exactly true, because each of those steps resulted from lots of failure, very likely, and each of those steps, again, very likely occurred because of an intersection of developments elsewhere.
It's a way more complex map than we like to think – we like to think achievable goals, climbing up a hill step by step and you succeed. But, if you look at what progress looks like on the internet in particular, it's a very different picture, and we're getting very accustomed to this new picture. I personally think it's healthy, and that's why in the book my attitude is pretty optimistic.
Rachel Salaman: Your book reads like a well-researched paper on technological change in the 21st century. What tips do you have for people who would like to find practical applications for what they learn?
David Weinberger: Ultimately, I think it comes down to seeing what can be done if we drop our very comforting and ancient assumptions that the future is relatively simple, that our job is to pick a single path and to succeed at it and that is what it means to succeed in business or to live your life successfully, because that's not in fact how it has ever been. Now we have tools that not only show us that but also enable us to take advantage of it.
We have, I think, passed through the phase in which we feel that the major, impending problem of the internet is that there is too much information and we're overloaded.
I think, as far as I can tell, we generally are at the point now where we don't feel overloaded with information – we have trouble finding the right information, but we want more and more and more of it, and we assume that the right approach is not to try to narrow the information down to the one little bit that we need, but to take in as much of it, and in as much complexity, as possible.
Machine learning helps us actually find the patterns in that complexity so we can succeed with it. But as it's doing that, it's also teaching us that there is truth in that complexity and that the reduction of complexity to simple ideas is something we had to do.
So, rather than to always filter information down to the minimum, we are instead learning (and properly so) that there is virtue in trying to open up as much information as we can, and look for the patterns in it, look for the small things that might have a huge effect on our business and our lives – so, getting away from information reduction.
Strategies that embrace interoperability also seem to me to be a very practical thing to do. Practically, that can mean opening up a platform for people (customers and others) to make more of your product than you can. But it can also mean recognizing that, in almost all cases, you're going to succeed because you are hyper-connected into the rest of the environment.
So, supporting industry standards, or working even with competitors to build new standards, especially if you're trying to build a new market. Tesla, and now Toyota, are both open-sourcing their patents, for example. It seems pretty extreme, but they are trying to build a new market – in some ways, a new world.
Rather than thinking that you are always in a zero-sum game (in which it's either you or your competitors), it means engaging with your competitors – when it makes sense; it doesn't always, but when it does – in order to make more things possible. And rather than thinking about your customers as consumers of your product, it means recognizing that they are full partners in its success: listening to them, and enabling them to add value to your product by adding features and transforming the way it works.
These are all very practical and important ways of approaching the future not as if it's a narrowing of possibility, but rather as an opportunity to make more possibility, or as the book says and more or less concludes, to make more future.
Rachel Salaman: Finally, I'm curious to hear how you view your "Cluetrain Manifesto" from a distance of 20 years. For people who aren't familiar with it, could you briefly describe it and then share your reflections on how well it's aged?
David Weinberger: Four of us wrote the original website in 1999 and then the book in 2000. The motive back then was... We had been on the web from the beginning and we felt that we were talking for many, many people on the web who were in despair at the way in which the media, and business in general, was portraying the web – as a way of publishing (if you're in media), or as a way of marketing (if you're in business).
And, of course, it is those things, for sure. But we thought that, more importantly, it was a place... And people were coming to it, and were so enthusiastic about it because we saw it as a place in which we could speak in our own voices to one another about what matters to us – that is that it was fundamentally a social thing. And I think that's right, I think the Cluetrain got that right.
It reads pretty obnoxiously at this point, and I think it got some things pretty much wrong, as well. The main thing that it got wrong is in thinking that the internet would, inevitably, win in its struggle for cultural values, that the "internet values," that we thought were inherent in the internet, inevitably would triumph.
And, clearly, that's not all that happened. I did not anticipate the many ways in which the internet has gone wrong and has turned out to be terrible for many classes of people, and economically devastating in some ways.
There's the dependence upon social software whose interests are not perfectly aligned with its users' – and Twitter can be a very different experience for people like me and for people who are being hated upon. It can be devastating to individuals. The ability to "dox" people – that is, to find people who, for whatever reason (usually very bad reasons), you choose to hate, and to publish their real-world identifying information and addresses – puts people's lives in jeopardy and lets them live under threat. That's pretty awful; we did not foresee any of that.
I do want to say one more thing really quickly! The negativity about the internet, I think, is totally warranted, but often leads us to forget about just how transformative the internet, in fact, was in some very important and very good ways. And I think it is worthwhile occasionally remembering that, as well as remembering the suffering that it has and continues to cause.
Rachel Salaman: David Weinberger, thanks very much for joining us today.
David Weinberger: Thank you so much for having me.
The name of David's book again is "Everyday Chaos: Technology, Complexity, and How We're Thriving in a New World of Possibility." I'll be back in a few weeks with another Expert Interview. Until then, goodbye.
Image of David Weinberger CC-BY Alberto Mingueza @AMingueza