In the early days of Talla, one of our data scientists, Daniel Shank, attempted to replicate a popular paper out of Google, on “Neural Turing Machines.” He spent a ton of time trying to get it implemented in the real world, and never could. He gave a public talk about it here, which is interesting to watch.
I was reminded of this today when I read about the real-world failures of Google’s diabetic blindness AI. While showing 90 percent accuracy in the lab, it did poorly in the real world. Why? Because the real world is messy.
If you aren’t technical, some things seem easy that actually aren’t. For example, a user needs to add their name to a form so you can store it in your database. Easy enough, right? All you have to do is add a field and a button. But what if the user puts in numbers, or weird symbols? What if they say their name is 6$#rt!? Should you accept that? What if it isn’t capitalized? Do you need capitalized names? This is called input validation. You have to make sure people don’t put bad input into the system, because every product manager who builds tech products knows that users will put things into your system you did not expect. Always. Now extrapolate that across all areas of data input and output.
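To make the point concrete, here is a minimal sketch of what validating that name field might look like. The rules (letters, spaces, hyphens, and apostrophes only; normalized capitalization) are hypothetical choices for illustration, not a standard; real products argue about every one of them.

```python
import re

def validate_name(raw: str) -> str:
    """Validate and normalize a user-submitted name.

    Returns the cleaned name, or raises ValueError on bad input.
    The rules here are illustrative: letters, spaces, hyphens,
    and apostrophes only, with capitalization normalized.
    """
    name = raw.strip()
    if not name:
        raise ValueError("name is empty")
    # Reject digits and stray symbols, e.g. "6$#rt!"
    if not re.fullmatch(r"[A-Za-z][A-Za-z' -]*", name):
        raise ValueError(f"invalid characters in name: {raw!r}")
    # Normalize capitalization so "daniel shank" is stored as "Daniel Shank"
    return " ".join(part.capitalize() for part in name.split())
```

Even this toy version makes judgment calls (what about accented characters, or single-letter names?), which is exactly the messiness the lab version of a problem never has to face.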
What you learn is that the real world is messy. Data is incomplete, inaccurate, or on media you didn’t expect, and then other things happen that aren’t data related. Power goes out. Wi-Fi goes down. There are a million things that could go wrong, and so inevitably some of them do.
But AI hasn’t been heavily impacted by these things yet because AI has been in the “research” phase these past few years. The AI celebrities we all listened to were researchers who could make new models or solve new problems, or deal with new types of data. But they are losing steam because so much of this doesn’t actually work in the real world.
It works in the lab where the world is tightly controlled, data is clean, and often generated just for the target application. In the real world, AI systems have to deal with the messiness, and they generally don’t do that well. It is a big problem.
The reason this is important is that the power dynamic in building AI is currently shifting from researchers to application engineers and product managers who can make stuff work. As tools and platforms abstract away more of the matrix algebra, hyperparameter tuning, and other things that once required a deep understanding of how neural networks work, the power is shifting instead to optimization and application. It’s less about whether you can build a model to do X, and more about whether you can get it to run within a certain latency, or power budget, or on a specific footprint. The mindset is different.
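That latency question, for instance, is an empirical one. A rough sketch of how an application engineer might check a model call against a latency budget (the budget number and the p95 threshold are assumptions for illustration; real deployments would also measure on target hardware, under realistic load, with warm-up runs excluded):

```python
import statistics
import time

def measure_latency(fn, runs: int = 100) -> dict:
    """Time repeated calls to fn; report median and p95 latency in ms."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()  # in practice, a model inference call
        samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    return {
        "median_ms": statistics.median(samples),
        "p95_ms": samples[int(0.95 * (len(samples) - 1))],
    }

def within_budget(fn, budget_ms: float, runs: int = 100) -> bool:
    """Check whether fn's p95 latency meets a given budget."""
    return measure_latency(fn, runs)["p95_ms"] <= budget_ms
```

A researcher asks whether the model is accurate; the engineer asks whether `within_budget(model_call, 50.0)` holds on the hardware the product actually ships on.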
A lot of my investing focus has changed over the past 9 months to figuring out what really works in applied AI. I’m more concerned with teams getting AI into the world for real applications than I am with teams that publish lots of papers, because building real-world AI is still hard, and requires a combination of AI understanding and real-world engineering that is still relatively rare.
While some say we are going into an AI winter, the truth is, the Covid crisis is accelerating a secular trend towards AI deployment in the real world, making the companies that can actually build and implement things very valuable.