Why Technologists Keep Guessing Wrong On the Outcome of Technologies

An article in the Wall Street Journal yesterday highlighted that Uber and Lyft haven’t lived up to their promises.  Ride sharing was supposed to make traffic better, but study after study has shown that it has made traffic worse.  The thing no one is talking about is that this is part of a bigger trend in tech: believing something will be good, and having it turn out bad.

Remember how the Internet was supposed to save democracy by making every citizen well educated and encouraging civil discourse?  I don’t feel like we live in that future.  Remember how social media and smartphones were going to connect us and help ease loneliness?  The data says people are lonelier than ever.  Remember how blogging was going to free the Internet from the control of big media companies and highlight the truth?  Instead, we live in a world of spam and clickbait, and we are now begging traditional media to fact-check everything for us.  And remember Wikinomics and The Wisdom of Crowds?  The trends they describe changed things, just not necessarily in the ways we thought.

How did tech get everything so wrong – and not just wrong, but almost entirely opposite of how things really turned out?  And why does it keep happening?

The simple answer is that human beings continually miss second-order effects.  The truth about ride sharing is that, all else being equal, it probably would indeed reduce congestion.  But all else isn’t equal.  When ride sharing started, it changed behavior patterns around public transit, walking, biking, and more.  Humans need to be trained in complex systems to understand that the world is dynamic, adaptive, and nonlinear.  I joined the Board of the New England Complex Systems Institute because I am passionate about this problem.  It’s one few people understand.

When it comes to AI, we are making many predictions, as a society, about the impact it will have on many areas of life.  The only thing I know for certain is that we are likely to be very, very wrong, particularly if we don’t consider second-order effects.  So how do you think about this appropriately?

When thinking about the question “will AI put us out of work?”, it isn’t enough to ask if AI will take jobs – it will.  You have to ask how people will respond when AI takes jobs.  Humans will continue to do what is in their best interest, so the way the job loss happens matters.  It’s path dependent.  Losing blue-collar jobs first versus white-collar jobs, or American jobs versus foreign jobs, may each have very different impacts.

There are lots of predictions about AI, and it isn’t helpful to go through them one by one.  The broader issue is that whatever we are predicting, we are most likely to be wrong if we don’t think about possible second-order effects.  Tech has a bad history of this, and it is to our benefit not to repeat it.
