Tectonic Forces That Will Change the World of Technology and Business

Grainger Engineering alumnus and CEO of IBM Arvind Krishna discusses the future of AI, quantum science, and hybrid cloud computing.

Spring 2021

When the company announced that he would become its next leader, Reuters called Arvind Krishna “IBM’s surprise CEO.” But with 30 years at the company, including time running IBM’s hybrid cloud and AI business, others knew it made all the sense in the world.

Arvind Krishna, CEO, IBM

CNBC said he was “a technologist with business chops.” Outgoing CEO Virginia Rometty called him “a superb operational leader, able to win today while building the business of tomorrow” and said he “built an outstanding track record of bold transformations and proven business results.”

Running a tech giant worth more than $100 billion and with more than 380,000 employees surely requires both – proven results and bold transformations.

In September 2020, not long after taking over, Krishna met with Grainger Engineering students and Dean Rashid Bashir to discuss where he sees transformations coming and where students and 100-year-old Fortune 500 companies alike can drive results in the future. He focused on hybrid cloud technologies that connect on-premises IT infrastructure with private and public clouds, as well as artificial intelligence and quantum computing.

Krishna earned a PhD in Electrical & Computer Engineering from The Grainger College of Engineering in 1991. The thoughts he shared with students have been edited and condensed.


Big Changes on the Horizon

I see three tectonic forces that are having a profound impact on the technology landscape: Hybrid cloud, AI, and quantum.

Let’s start with the first one. Cloud computing started about 15 years ago. But so far only about 20 percent of workloads have moved to cloud, so there’s a long way to go. Hybrid cloud is how you unlock the value of the other 80 percent.

With AI, estimates are we’re only four percent of the way in. The world of AI actually hasn’t even started. We’ve got to get some invention done. We’ve got to get a lot of work done in academia because that’s where the leading work always happens.

As for quantum, it’s still in the research stage. But when it happens – and I’ll put a timeline out there and say in less than five years we’ll see some practical applications – it’s going to make a huge difference.


Zero to 100

This stands in stark contrast to the fast pace of adoption that we have witnessed with other technologies. Take smartphones. 2007 was year one. By 2017, we were at 100 percent penetration. Take the internet. 1990 was year one. By 2000, it was at 100 percent. You can go on. DVDs. CDs. In 10 years, you go from zero to 100.

This raises the question: 15 years in, why is cloud only at 20 percent? Because the part that was easy to move has already gone. Everything separable, carved out, or greenfield has moved. But everybody else has constraints that are harder to overcome. Organizations have unique latency needs. They want choice and flexibility. They’ve made massive investments in their IT infrastructure. And they have to deal with regulations and data sovereignty issues.

Let’s take the example of a big bank. They might say: “I’ve got a big infrastructure. I have laws and regulations and regulators. I’ve done my retail banking. But I also have marketing apps, and I have ways I want to reach my customers via the cloud. How do I go about making life easier for that while at the same time dealing with the sovereignty of my retail banking application in-house? How do I ensure that everything I have across my multiple private and public clouds is working together?”

That’s exactly what a hybrid cloud is. And we believe that unlocks 2.5 times more value than relying on a public cloud alone. 

Huge Commercial Advantages

I believe Moore’s law is pretty close to running its course. By that I don’t mean semiconductors have run their course. Clever design, clever uses – a lot of this is going to carry on. But Moore’s law – just feature size reduction giving you a huge cost advantage – is running out of steam. We’re at about seven nanometers. You’ll get to five in maybe another five years. Maybe we’ll get to three, maybe not. But the cost advantage won’t always be there.

Quantum can help us solve that. It’s the biggest technology platform shift in decades. We’re at about – you can debate – 30 to 60 qubits right now. When you get to about 1,000, you begin to solve some problems in materials and maybe in risk that you couldn’t solve with classical computers. That’s when you start to see huge commercial advantages for the first movers.

There’s so much work to do. How do you write code to deal with the noise that might be in a quantum computation? What a field for researchers to go crazy on! I’m calling it coding, but I think classic coding is not going to work. You’ve got to think about, “Hey, each computation may have errors. How do I do something but still bound the error?” These are problems which hurt people’s heads.

When we got to quantum mechanics as undergrads, that’s when you saw the shift. Ninety percent left and said: “That’s not stuff I can deal with.” Well, here, you’re going to have to map problems onto energy fields. How do you think about that? How do you teach that? How do you lay it out in a way that is accessible to more people?


Why Quantum?

One of the reasons we work on quantum is this: Three percent of the world’s energy is used to make fertilizer. Why? You use a ton of electricity to fix nitrogen and then you give that out as fertilizer. We’re essentially still relying on a process that was invented about 100 years ago. Clearly there must be a chemical process that’s more energy efficient. Why can’t we work on finding that chemical process? Or take fossil fuels – let’s suppose they’re not going to go away completely – why is burning them only 25 or 30 percent efficient? Why can’t we get that to be 90 percent efficient? We could reduce emissions by two-thirds if we can get there.

There are fundamental chemistry problems that we could solve with quantum. To give you another point of comparison, think about this. You would need a classical computer with 10^86 bits to model the penicillin molecule, which has 41 atoms. That would require building a computer with more transistors than there are atoms in the observable universe. That’s just not possible. But with quantum computers, we will be able to simulate the behavior of matter down to the atomic level instead of doing guesswork. This could help us get a better flow of lithium ions inside an electrolyte. And if we get the physics right, batteries could be 100 times more efficient than what we have. Now that could really make a big difference to our global transportation systems.

These are the problems we do work on, and progress is, I’ll say, mediocre. But one of the reasons so many people, not just the quantum information science folks, are interested is that there’s a good chance you can solve these problems. When you ask, “Why would a Daimler be excited to work with IBM on quantum?” it’s actually because of battery technology. Or “Why would JSR, a Japanese chemical manufacturer, work with IBM?” It’s because of some of the other issues. Those are the areas that I believe are fundamentally important to solving climate change.

Life and Death

AI is going to create $16 trillion of productivity and impact on the world. But if you’re going to do that, you’ve got to apply it to every business process. One key area we have to think carefully about is explainability. For instance, the applications of AI in medicine are extremely promising. But that’s clearly an area where you’re making life-and-death decisions. If you’re going to use AI in areas like medicine, you have to ask yourself: Do I have a good way of understanding it? Do I know I’m not going to make a mistake? Is it explainable? Can the user understand why an algorithm is recommending something?

There’s an important difference between consumer AI and business AI. If an AI system suggests a wrong movie, it’s not life and death. But if I think about mission-critical systems, like an AI system that helps you decide whether you should remove four inches of a cancerous lump, that may be life or death. That’s why some areas are a little slower to adopt AI, in spite of the enormous potential it has.

Where AI has a reversible consequence, it could be truly autonomous. Some would argue that that’s not really autonomous, because I can intercede – it’s reversible, so I can step in later. But that’s probably a little philosophical. In those systems, it will run autonomously. Then you get to some systems where there are hard rules: whatever the system does, the physical system doesn’t go outside some boundaries that are determined. It can perhaps be autonomous in that case. Then I get to cases where the consequence is not reversible and/or the system is unbounded. I think for a long, long time those are going to have human intervention or a human watching them.

Think of a radiology example, just as a thought experiment. Obviously, it involves human intervention. If an AI system is guiding a radiologist and telling them, “Look for this or that thing,” you can make that radiologist a lot more productive – maybe improving productivity by two, three, five times. You’re operating closer to machine speed, but you are still relying on the ultimate judgment of the human.


An Extreme Optimist

When it comes to the topic of jobs being displaced by automation, I’m probably an extreme optimist. Let me give you two examples. Circa 1900, close to half of the population in the United States worked in agriculture. When growing food was that hard without machines, your complete time was taken up in those tasks. What do we have today? Only three percent of people work in agriculture, but between 10 and 20 percent work in food service. As it became easier to get food and costs came down, you got leisure time, so you say, “Hey, I’ll go to a restaurant.” Suddenly an industry came around that didn’t exist much in 1900.

And I’ll give you an example from the 1930s to 1950s, one that AT&T really worked on. When they looked at the number of long-distance calls happening – until the 1930s it was switchboards, people literally plugging cables and connecting cities – the math said you would need 20 million switchboard operators. They quickly realized that wouldn’t work, so electronic communication came about. But what was the flip side? Call centers, which didn’t exist before. And the call center could be remote from where the people were calling, instead of having people in every place.

When people get freed up from some tasks, they move on to other ones. Maybe I’m not actually going to be a store clerk ringing up a cash register; maybe I’m going to be giving you a great experience, so you love the product when you walk into a store. You’re a fan of that product because you had that great experience. You may buy it online. It may be a robotic warehouse with drones. But you still need the customer service experience.

The critical insight here is that humans are pretty ingenious in coming up with new forms of work that offer higher value and that cannot be done by machines.