Fast Solutions, A Long Time Coming

Machine learning techniques have been around since the 1950s. Recently, Grainger engineers have been applying them to electronics design. They hope to make chip development faster and cheaper – and, in the process, address some of the supply chain headaches we’ve all been experiencing.

Fall 2022

These days, we expect our cars to be smart spaces. The Bluetooth connects seamlessly. The GPS navigation boots as soon as you turn the key – if your car even has a key to turn anymore, rather than a wireless fob that stays in your purse.

But, for many manufacturers, delivering on that expectation hit a bump during the COVID-19 pandemic. In the spring of 2022, for example, BMW announced that it was being forced to switch chip manufacturers for the entertainment systems of many of its luxury models. As a result, Apple CarPlay and Android Auto would no longer come standard. The company promised that an over-the-air update would resolve the issue sometime in the summer.

Meanwhile, Nissan blamed missed profit estimates on supply chain problems, and Toyota said that because of semiconductor shortages it would produce 100,000 fewer cars monthly than it had expected.

“Semiconductor shortage is a new normal, same as pandemic, and we have to live with it because this is not going to finish tomorrow morning,” Nissan Chief Operating Officer Ashwani Gupta said during an investor briefing call.

Professor Xu Chen

Many pandemic-related supply chain issues have been driven by manufacturing and shipping delays, which will need to be addressed long-term. According to Grainger Engineering’s Professor Xu Chen, chip design improvements can be a means of keeping industry on track, as well.

“You have chips now in everything,” said Chen, who is part of the Department of Electrical & Computer Engineering. “People have come to realize that the amount of time and effort it takes to design computer hardware is one of the bottlenecks in the supply chain. If we can shorten that design cycle, it has so many positive impacts for industry – including supply chain issues. And using machine learning is a great way of helping that, because machine learning is something that makes everything faster.”

To that end, the college’s Center for Advanced Electronics through Machine Learning (CAEML) applies machine learning to the design and optimization of computer chips and microelectronic systems. Center research is intended to increase the efficiency of electronic design automation tools. That increased efficiency makes the work faster and cheaper, while also yielding designs with improved performance and reliability.

Professor Elyse Rosenbaum

As a National Science Foundation-sponsored Industry-University Cooperative Research Center, CAEML works closely with businesses that design and manufacture electronics and prepares students to work in the industry. It is led by the Department of Electrical & Computer Engineering’s Professor Elyse Rosenbaum and includes about 15 faculty members at UIUC, Georgia Tech and North Carolina State University.

Launched in 2017, the center recently received a second five-year grant that will take it through at least 2027. Its work during that period will impact any number of industries, including automakers.

Ever Closer to Optimal

Machine learning is having a moment, but it’s been a long time coming. In 1959, an IBM programmer wrote what many consider the first example of machine learning – a checkers application that improved its game the more it played. By the 1960s, nearest neighbor algorithms began allowing for basic pattern recognition. And by the 1990s, Deep Blue was beating human champions at chess.

“It became glamorous and everybody began working on it in, say, the last 10 years. For the most part, it’s not being used to solve the problems we’re interested in. The dominant uses of machine learning are for image recognition, for natural language processing and for those damn recommender systems that tell you what you want to buy next or what movie you want to watch next,” said Rosenbaum, who holds the Melvin and Anne Louis Hassebrock Professorship in Electrical and Computer Engineering.

CAEML is changing that by focusing on two types of problems. First, the center’s members are interested in improving the design automation tools that are used extensively by chip companies when they’re creating integrated circuits or entire microelectronic systems. Those tools are already incredible. Their development has allowed the electronics industry to go from a processor with three million transistors – when Deep Blue came on the scene – to processors with tens of billions of transistors today. But the electronics industry wants yet more transistors, and improved design automation tools are one of the ways to get there.

The other element of CAEML’s work is design optimization – creating new methods for hitting system specifications as efficiently as possible. Those specifications might be how much a component costs to manufacture, how well it performs, how long it lasts or how much power it consumes.

“We do good-enough design rather than optimal design, and we wanted to use machine learning to help people come closer to that optimal design,” Rosenbaum said. “You don’t necessarily – or ever – produce the optimal design, because there are so many components making up a given system.”
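To make the idea concrete, here is a toy sketch in Python – the design knobs and the cost function are invented for illustration, not any CAEML method – of searching a design space for a good-enough trade-off among signal quality, power and cost:

```python
import random

# Hypothetical design knobs: trace width (mm) and driver strength (arbitrary
# units). evaluate() stands in for an expensive physics simulation that
# scores a candidate design; lower is better.
def evaluate(width, drive):
    loss = 1.0 / (width * drive)   # signal-quality penalty: wider/stronger is cleaner
    power = 0.2 * drive ** 2       # power penalty: stronger drivers burn more
    cost = 0.5 * width             # manufacturing-cost penalty: wider traces cost more
    return loss + power + cost

def random_search(trials=5000, seed=0):
    """Plain random search over the design space -- the simplest possible
    stand-in for the smarter, learning-guided optimizers the center studies."""
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        width = rng.uniform(0.1, 2.0)
        drive = rng.uniform(0.1, 2.0)
        score = evaluate(width, drive)
        if best is None or score < best[0]:
            best = (score, width, drive)
    return best

score, width, drive = random_search()
print(f"best score {score:.3f} at width={width:.2f} mm, drive={drive:.2f}")
```

Even blind sampling finds a good-enough design after a few thousand evaluations; the machine learning angle is replacing that blind sampling with models that steer the search, so far fewer expensive evaluations are needed.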

Not Enough Computers in the World

Chen is looking at both types of problems with partners from Intel and IBM. In particular, they’re exploring ways of improving both the design tools for high-speed interconnects and the optimal design of the interconnects themselves. Those interconnects include everything from the three-foot USB cable that connects your external hard drive to your laptop, to the super-short copper traces that connect individual components on a circuit board.

Regardless of their application, these interconnects are difficult to model. You want to send ones and zeros from here to there, and they’re sent as a voltage waveform. But, unfortunately, the waveform that arrives at the far end isn’t necessarily the same as the one that was sent. You want to introduce as few errors as possible and meet the various power, cost and lifespan specifications.

“The root of the signal degradation is physics. The interconnect is subject to electromagnetic interference like noise from an external source. There’s attenuation. The materials that you use to make the physical link are going to be dispersive. That is, different frequency components propagate at different speeds. The structures have mutual capacitance and mutual inductance, so they will cross-talk. That means one signal will impact another signal unintentionally,” Chen said.

“It becomes a very complicated problem to predict: If I excite a waveform that looks like this at the transmitter, what exactly is the waveform at the receiver?”
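A toy simulation makes the problem visible. The sketch below uses illustrative numbers only – it is not one of Chen’s models. It sends bits as ±1-volt pulses through a channel that attenuates them and adds Gaussian noise, then counts how many bits the receiver misreads:

```python
import random

def transmit(bits, attenuation=0.6, noise_sigma=0.2, seed=1):
    """Toy channel: each bit becomes a +1/-1 volt pulse, scaled down by
    attenuation and perturbed by additive Gaussian noise."""
    rng = random.Random(seed)
    tx = [1.0 if b else -1.0 for b in bits]
    return [attenuation * v + rng.gauss(0.0, noise_sigma) for v in tx]

def receive(waveform):
    """Threshold detector: any voltage above 0 is read as a 1."""
    return [1 if v > 0 else 0 for v in waveform]

rng = random.Random(2)
bits = [rng.randint(0, 1) for _ in range(10_000)]
rx = receive(transmit(bits))
errors = sum(b != r for b, r in zip(bits, rx))
print(f"bit error rate: {errors / len(bits):.4f}")
```

Real links add dispersion and crosstalk on top of this simple attenuation-plus-noise picture, which is exactly why predicting the received waveform becomes so complicated.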

Decades ago, engineers would build a prototype and measure the outcome. Then they began modeling the outcome using computer simulations of the physics involved. Now, with the techniques that the CAEML team is developing, machine learning can guide them toward designs that are most likely to be the closest to optimal. Chen is also developing machine learning techniques to predict the performance of a high-speed interconnect based on information regarding the design, without having to run the physics-based models.
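That surrogate-modeling idea can be sketched in a few lines. In the toy below, the `physics_sim` function and its numbers are invented for illustration: an “expensive” simulator is run only five times, and afterwards a cheap interpolating model answers design queries instantly:

```python
# Stand-in for an expensive physics simulation: say, the eye-opening
# voltage of a link as a function of one design parameter (trace width, mm).
def physics_sim(width):
    return 1.0 - 0.5 * (width - 1.2) ** 2   # peaks at width = 1.2

# Run the expensive model a handful of times to collect training data...
train_x = [0.2, 0.6, 1.0, 1.4, 1.8]
train_y = [physics_sim(x) for x in train_x]

# ...then fit a cheap surrogate. Here it is simple linear interpolation
# between neighboring samples -- a minimal stand-in for the learned
# models CAEML researchers actually build.
def surrogate(x):
    pts = sorted(zip(train_x, train_y))
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    # Outside the sampled range, fall back to the nearest endpoint.
    return pts[0][1] if x < pts[0][0] else pts[-1][1]

# "What-if" queries are now essentially free -- no physics run required.
for w in (0.9, 1.2, 1.5):
    print(f"width {w}: predicted {surrogate(w):.3f}, true {physics_sim(w):.3f}")
```

Once the surrogate is trained, each what-if query costs a few arithmetic operations instead of a full physics run – the speedup that makes virtual prototyping practical.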

“In many fields, not just [electronics design], people are into this sort of virtual prototyping. That’s both to accelerate the design cycle and to lessen the cost. But, more importantly, it allows the engineers to do a lot of what-if analysis. If they have an idea, it’s very fast and very cheap for them to test it and get instant feedback on how that idea will materialize and see if it is viable or not,” Chen said.

This approach is also expanding beyond single components to systems of heterogeneous components – for example, multiple chips interconnected in a single silicon package. Rosenbaum is working with Professor Max Raginsky, a William L. Everitt Fellow in the Department of Electrical & Computer Engineering, on dynamic behavioral models of such systems.

“You want to use these individual models to interconnect them into an entire system. And then you want to simulate the behavior of that system and predict whether it meets the requirements or not. When you interconnect those components, there’s feedback between them, and those models may not be stable,” Rosenbaum said.

In addition to Chen’s signal integrity problems and the integrated circuit design being considered by Rosenbaum and Raginsky, CAEML researchers also study how machine learning can be used in security, reliability and data privacy.

Dale Becker, a senior technical staff member at IBM who is part of CAEML, said that innovations in all those areas can be applied by chip designers. “Somewhere you have to make approximations, and with machine learning you’re able to quantify the uncertainty in your predictions and [be] able to handle a large mass of data. You need to find a clever way to get past these hurdles of this escalating complexity as these multiple pieces need to be tied together.”

“If you brute force your way through all the calculations that would be involved in all of this, you don’t have enough time or computers in the world to do it.”

Corporate Connections

A monster example of this complexity is IBM’s own z16 system, which was released earlier this year. The platform includes an on-chip AI accelerator that lets clients in the banking industry analyze transactions for fraud at previously unheard-of scales – as many as 300 billion predictions per day.

During its announcement of the z16 platform, IBM pointed out that a system like this would reduce the amount of time consumers had to spend dealing with fraud alerts on their credit cards. It would also reduce the amount of revenue lost by merchants and banks as they cover fraudulent purchases.

According to Becker, IBM used elements of CAEML research in designing the interconnect for the z16 platform. Because CAEML and other Industry-University Cooperative Research Centers like it engage exclusively in pre-competitive research, member companies can be assured that the work they do together can be applied to massive, proprietary projects like the z16 platform. CAEML’s industrial partners select the projects that will be worked on by the university partners, share ideas and develop general methods together. Then they use those methods in a variety of ways.

“The more input you get, the more data you get, then the more insight you get and the more value you get out of it,” Becker explained. “The pre-competitive part is me and my competitors building the tools. Then we use the tools to design products. And the products are what compete. The algorithms themselves are not the competitive part of what we do. But having those algorithms available and the skills of the engineers to use them is very key.”

As he said, the skills of the engineers who use the tools are crucial. And CAEML delivers those too.

“On the research front, it’s a good opportunity for me as a faculty member, because we’re working on research that we know is going to be applicable. There’s no doubt about it. These projects are defined in very close collaboration with industry,” Chen said.

Of the three graduate students on Chen’s team, two interned with IBM last summer, and one was at Qualcomm. In the first five years of the program, 12 CAEML PhD students took jobs at member corporate partners upon graduation.

“The biggest challenge is always the talent,” Chen said. “There’s not a lot of people out there who understand both machine learning and hardware design. What industry wants from CAEML is the opportunity for us to train the next generation of, the next level of, talent that’s going to graduate from Illinois and enter the workforce, having a command of both machine learning and hardware design and how they come together.”


This story was published September 20, 2022.
