Transcript: Engineering for Good with Vilas Dhar

8/10/2021

Bill Bell: Hello. And welcome to Illinois Innovators, the podcast of The Grainger College of Engineering at the University of Illinois Urbana-Champaign. I’m Bill Bell.

Today’s guest is Vilas Dhar, President of the Patrick J. McGovern Foundation, a $1.5 billion global philanthropy advancing artificial intelligence and data solutions to create a thriving, equitable, and sustainable future for all.

Vilas is a technologist, lawyer and human rights advocate and a leading voice at the intersection of AI, technology, and social impact. He earned bioengineering and computer science degrees from The Grainger College of Engineering – and a JD from NYU and an MPA from the Harvard Kennedy School of Government along the way. Vilas has also been a leading contributor in the academic study of technology for good as the Harvard University Gleitsman Fellow on Social Change and Practitioner Resident on Artificial Intelligence at the Rockefeller Foundation’s Bellagio Center in Italy. His work has been published in the Harvard Business Review, Nature, Stanford Social Innovation Review, The Hill, USA Today, and Project Syndicate on how academia, CEOs, government officials, and civil society leaders can build a better, tech-enabled future.

Vilas, welcome and thank you for taking the time to talk with us today.

Vilas Dhar: Delighted to be here with you, Bill. Thanks for having me.

Bill Bell: Great. And I’m just going to dive right in. My first question is from right there in your bio that I just read. It talks about “advancing artificial intelligence and data solutions to create a thriving, equitable, and sustainable future for all.” From my seat, I don’t see a lot of foundations talking about AI period, and I don’t see a lot of tech companies talking about social impact in these ways. So it seems to be a fascinating, sort of uncommon space that you’re working in. So if you could lead off by telling us how the Patrick J. McGovern Foundation arrived here and why it’s important to be working in this space. 

Vilas Dhar: I’m so delighted to be talking to The Grainger College of Engineering. It’s a place that was really important in my formation, and it remains so now as I look toward the future.

Let me start by sharing my incredible optimism and enthusiasm for the future that lies before us. Artificial intelligence, data science – these tools are transforming almost every element of the human experience. And the choices we’ve made as a society reflect the incredible confidence we have in the idea that a better future is possible. We’ve taken incredible resources, equipped some really smart people with great, gleaming, shining tools, and asked them to build things that let us do everything from connecting with people across the world in ways that reflect our creativity and ingenuity, all the way to some of the less visible stuff – the things that build logistics supply chains that get a fidget spinner made in Taiwan delivered to US shores and into my hands 30 minutes after I click buy on a website.

This is a pretty incredible start to a future ahead. But you asked how we as philanthropists and foundations think about the question, and there’s a flip side to that point. Even as we’ve equipped some really smart people with these great tools, we’ve taken another set of people – social change makers – and we’ve asked them to address some really fundamental and immediate problems. Hunger, climate change, the lack of access to education and health care. Unfortunately, we haven’t given them access to the same sets of tools. We’ve given them plastic shovels and plastic hammers, the kinds of things you build sand castles with, and asked them to build the institutions that create equity in the world. 

This is a fundamental and deep challenge, but it’s one that we can address quite easily. The way we think about addressing it is by bringing technology into the philanthropic sector – asking institutions like foundations, which have for so long been asked to hold public capital and use it for public good, to add to the toolkit they have. Let’s bring technological expertise. Let’s bring an enthusiasm and excitement about what technology can create in the world, and let’s pair it with the passion of incredible change makers. We can bring the value of deep supply and logistics chains to people who are bringing health care to rural, last-mile communities. We can bring the capacity of social networks, learning agents, and conversational protocols to the people who help others through some of the greatest mental trials of their lives – giving them a place to talk through what they have gone through, to be triaged, to be supported and cared for.

If this is possible, then there is also an impatience. It’s an impatience I learned as an engineer at Illinois. If we know we can design products that can actually make the world a more equitable, more sustainable, more thriving place, then why aren’t we doing it now? And that’s where the Patrick J. McGovern Foundation sits: It’s to say ‘Let’s enable that kind of action in pursuit of a better future.’

Bill Bell: That’s great. Thank you. And that passion and that impatience that you described, we certainly still see it around here these days so that certainly resonates with us. But it also strikes me, in your description there, that these are big organizations and fields and areas that you are in the business of influencing, right? Whether it is on the technology side or on the civil society and nonprofit side. They’re used to doing most things in certain ways. 

So, as a relatively new foundation – I know y’all haven’t been up and running for that long – passion and impatience are important, but what else goes into the how of how the Patrick J. McGovern Foundation expects to do its work and is doing its work so far? 

Vilas Dhar: It’s a great question. We start from the fundamental conception that it’s not necessarily the wrong hands that are shaping our shared future, but we do think there are not enough of them. What I mean by that is this: We have given technology companies a great deal of independence in making decisions about the kinds of products we’re going to build for the future. That’s a great thing. It’s led to some incredible ingenuity and innovation. But we’re now stepping forward to say that when these decisions are no longer about a single website, a single product, a single platform, but affect our shared humanity, we need to bring the voice of the civil sector to the table.

You’re right in acknowledging that these institutions are large and they have a lot of inertia, but that doesn’t mean we can’t create a more inclusive conversation to make these decisions. We do it in a number of ways. First, we built an internal technology capacity – a robust set of engineers who sit inside a philanthropic institution and can work directly in partnership with our nonprofit partners. These are large organizations that are doing things like, in the field of climate change, aggregating geospatial intelligence and on-the-ground reporting from people to build a better, more granular view of how climate change is affecting vulnerable populations. With organizations like that, we’ve stepped in to support the creation of global data labs, where they’re not only becoming curators and aggregators of these data assets but also sharing the information and insights they’re getting from this work with the field at large.

This work, again, doesn’t happen in isolation. It’s not just philanthropies working with nonprofits. We need technology companies at that table as well, and we need academic institutions. Technology companies stepping in and opening up access to some of their APIs, allowing some of their tools to be used in ways that maybe were never envisioned by the creators of those tools but have been hacked by social innovators to actually provide deep support for the kind of change-making we need to do. And universities as well. Not just as fertile grounds for thinking about new applications but also as places where we’re training new generations of leaders who can bring those technical skills and match them against real-world problems as they come out of school.

Bill Bell: What are the opportunities for universities in particular? What should we be looking at? What should we be looking for? What should we be paying attention to?

Vilas Dhar: I start from just thinking about my own experience. I really had a set of formative experiences at the University of Illinois and at The Grainger College of Engineering. I saw very clearly how a technology education not only trained me to go find a job in technology but allowed me to understand large-scale problems and break them down into constituent parts. I was able to apply that same framework to my work in law and in policy and now in philanthropy. Universities play an incredibly important role, not just in teaching skills but in teaching creative frameworks for how to think about addressing problems at scale. So I start from there. Universities really are the place where we can train generations of decision makers to understand the contours of how technology works, certainly, but also how to apply these new and innovative approaches to large-scale problems.

Universities also have a role in making sure that as we’re training young leaders, we’re giving them a grounding not just in hard sciences or in the disciplines of building technologies but also in ethics – in a socially grounded framework that understands how the products we build, and the ways in which we build them, can promote more justice and equity in the world. As educational institutions, universities are right at the heart of the transformation that’s coming.

But there’s more that can be done. There are questions of how research going on inside universities can be applied to social challenges even in those formative moments. So much of the work that came out of the University of Illinois, for example, and was later commercialized was built first to solve a social problem. The World Wide Web and NCSA Mosaic began as a way of taking academic resources that were sitting on disparate servers and making them accessible to researchers, which then, of course, led to an incredible transformation. So one of the things we think and care a lot about is how to ensure faculty are supported in identifying research projects and proposals that have dual purposes – both advancing the frontier of the field of knowledge and, along the way, using social use cases to validate hypotheses and to build products that might support them.

And finally, we think about how faculty – this incredible resource of technological knowledge, who have enjoyed a deep partnership with the technology industry, spending time both in their labs and inside corporate R&D – might have the same privilege of working with social change organizations. Being able to spend a sabbatical or a secondment, or even stepping out of academia temporarily, to work inside a large organization that might be using the newest and most emergent frontiers of algorithmic machine learning to directly deal with problems like human trafficking, the misallocation of resources, or wildlife poaching, whether in the oceans or on land.

We’d love to partner with universities to think about how philanthropy might provide financial support, but also structural and tangible connections between faculty and the organizations that are working on similar problems.

Bill Bell: Do you have any cases or examples or best practices that you see in that area at this point?

Vilas Dhar: We’re seeing a transformation, I think, across academic faculties at all of the great institutions in America and across the world. I can give you a few point examples. At MIT, where we support a program called Social and Ethical Responsibilities of Computing, we’re seeing the creation of case studies and new cross-curricular programs that bring questions of ethics and philosophy right into the heart of questions of how we code.

At Illinois, I’ve seen student groups like ACM and Women in Computer Science building projects that display exactly how the implicit biases that might go into shaping programs actually lead to long-term effects and outcomes on the other side. And I understand faculty are beginning to put together clinical programs, where students in the computer science discipline are no longer just sitting down to do their machine lab problems on something as abstract as a better search algorithm. Instead, they’re being given real-world use cases and asked, let’s actually understand how those algorithms might apply to a question like tracking poachers’ shipping vessels.

So figuring out ways to continue to tie the advancement of the academic discipline to real-world use cases that address meaningful problems creates powerful learning experiences, and it also continues to drive the narrative that these technologies’ primary and best use is to advance human interests at large.

Bill Bell: Which is this tech for good concept that I’ve heard you talk about. Which, as I understand it, doesn’t just apply in your interactions with the university or in your vision for how students approach and come to understand these issues. Tech for good is much broader than that as well. So talk a little bit about what you mean by that attitude across all of the foundation’s enterprises and the opportunities you’re looking at.

Vilas Dhar: My passion in this space starts from a fundamental question: What would our digitally enabled future look like if we started with a totally different agenda? Instead of thinking about product development, we thought about social development? Instead of profit increases, we thought about equity increases? Instead of commercial potential, we thought about human potential? 

If we were able to reorient the way that we create technology, then we’d be in a place where we could look at technology innovation through the lens of people’s lives. I am super excited to share with you that I think the most innovative tech out there is actually the stuff we’re not hearing about in the news just yet. And I think it’s the kind of work that’s going to transform our world. Some of this is not happening in the traditional tech industry. It’s not happening with traditional actors. It’s instead a new generation of public-interest technologists. 

I’ll give you examples like the International Waukesha AI Alliance. This is an organization that brings together cutting-edge natural language processing algorithms and an understanding of how we map languages across multiple domains – not just to translate between English, Spanish, and French, but to apply that same methodology to dying and extinct languages of indigenous peoples across the world.

Now, I share this not just because it’s an interesting application of a technology but because of what it means for peoples whose entire languages have been stripped away, who might only have 50 or 100 native speakers left. Not just to capture those languages on tape or document them for future generations, but to build living models that capture how people communicated through thousands of years of human history and to make them accessible to an entirely new generation, so that they can understand how those languages map to their own.

I think of technologies like the ones being used by groups like I See Change, a small organization focused on the idea that climate change – as scary and as big a problem as it seems – is one that’s best addressed at an individual and a community level. It allows people to document how climate change is affecting them: when heavy rains on their streets back floods up into public sewers and drains, or when rivers overflow their levees. It allows individuals to report those events using their own cellphones, and then a robust AI infrastructure behind it pulls all of those data pieces together and creates useful policy recommendations for local city governments to act on.

These aren’t big romantic pieces that you might see on the cover of a major commercial magazine, but for the people they are affecting, they are transforming not just how they use technology but how they interact with power structures, how they interact with the effects of climate change, how they affect their own vulnerability in the face of transformation in the world around them. 

These are pretty small examples I’ve shared with you, but there are myriad examples like them, right? And they are happening all across the world, in pockets of innovation where college students are going out and supporting communities, bringing technical skills together with an interest in making a better future.

This to me is tech for good. It’s taking the same skills and the same knowledge we’ve applied to build all kinds of shiny things and bringing them right back down to the level of individuals, helping them live their best human lives.

Bill Bell: And that really opens a different view for me, and I suspect for others who might be listening. Often when I hear issues of equity discussed, they’re discussed as issues of economic opportunity or as issues that affect infrastructure or people or communities in different ways. Whereas, with this Waukesha language project that you describe, and the general description that you gave, it’s about a lot more than that. It is not simply economic opportunity. It is about human dignity. It’s about our human place in the world.

Vilas Dhar: You know, I started off this conversation with how optimistic I am, and I will tell you I think of technologists and engineers as the ones who hold not just our products and our machines and our technologies, but they also hold on to the direction our culture, our stories, our songs, our dreams will go. In many ways, we are all becoming architects of a shared future, and we have to do that together. 

Bill Bell: Absolutely. And these are democratizing forces across all of those different categories of human endeavor, I think I’m hearing you say.

Vilas Dhar: It is essential we put a fine point on the idea that technology can be a democratizing force. But that requires an intentionality. And it comes to this second part of our story. It’s not just that the potential exists but that we need to come together in a shared intention to say: If we decide to continue to allocate so many incredible social resources to the creation of technologies, how do we ensure that the technologies being built are the ones that actually serve all of us? 

And there, I think, a new conversation emerges. We’ve all seen the existing discussions between technology companies and the regulators who are trying to control or restrict or limit them, who are trying to put policy around them. As I often say, those two participants in the conversation are absolutely essential, but they can’t be the only ones in it. Where are the voices of students who are innovating, of faculty who are building the frontier of what’s possible, and of civil society and the individuals who are being affected by these technologies?

It brings us to a conversation about technology and equity that requires new social structures and institutions. I’ll give you one example: I’m fascinated by questions of data equity. We are creating data in all of our behaviors, our activities, our actions. That data is being collected by institutions – often data intermediaries or technology companies.

Well, we’re moving into a world and a future where it may be the case that we need an entirely new conception of data. Is it a matter of privacy? Is it a property right? Is it a thing that I own and transfer? Or do we need a new social conception of the individual’s interest in what that data represents to them? These are conversations that, at another point in history, might have belonged to the school of law or to government policy-makers. But now the technological complexity of what we’re talking about means we need to create leaders who both understand the technology and are capable of addressing the legal, societal, and ethical questions that are going to come out of it. When we think about tech and equity, that’s going to be a really essential part of our public conversation going forward.

Bill Bell: In other conversations, I’ve heard you mention the Data and Society Initiative that you’ve launched more recently, as I understand it, which must play a role in what you were just saying. So share a little bit about that if you could – why it plays a role in that paradigm shift you were just describing around AI, data, and everything that goes with them.

Vilas Dhar: Our Data and Society Initiative reflects a fundamental hypothesis: in order for philanthropy to be effective in conversations about technology, we first need to build an internal competency and expertise in delivering not just policy around tech but its actual products and use. In a very novel maneuver, we’ve gone out and merged with another foundation – one that was created by a technology company to help nonprofits use technology products and services. This is the first philanthropic merger of this size that we’ve seen in America in the last 75 years, and it underscores our institutional view on building an entirely new playbook for how we operate in this space.

With this merger, we brought on the core of a team – one we intend to grow – that essentially serves as both technology and strategy adviser to our significant partners in the nonprofit space. It means we can build an internal competency that takes an organization we work with from identifying its data assets, even when they might be quite messy or noisy; to cleaning and realigning those data assets into a workable environment; and then to providing a menu of options for how it might segment, serve, analyze, and use that data to directly drive its strategy.

This is not traditionally the work of philanthropy, and in this, I think we are really exploring a new frontier of what’s possible. I alluded to this a little earlier: think of institutions like foundations not just as pools of capital but really as holders of public trust, adaptable enough to deliver what’s most needed in the world right now. As we’ve learned, as nonprofits go out to solve these problems, one of the things they need most is direct access to a technological capacity that supports them and their ability to drive large-scale transformation.

Bill Bell: So it is not white papers and podium speeches and policy recommendations any more. There is an explicit part of your team that is working with nonprofits and civil society on the nuts and bolts of AI and data.

Vilas Dhar: That’s exactly right. To borrow from the great Martin Luther King Jr., we are facing the fierce urgency of now. It’s no longer a time or space to think about what policy solutions look like. We need to get these tools into the hands of the people who are addressing our greatest challenges as quickly as possible. 

Bill Bell: What an optimistic perspective to put on it, as you promised when you were describing yourself that way. And, you know, I hope, on my best days, I’m optimistic in that way about technology and our future, as well. But I get a little wobbly when we talk about AI sometimes, and I think that’s a relatively common experience for people who are looking at these issues. There’s all of that promise, but there are pitfalls that go with it as well. 

Vilas Dhar: The conversation that we focus on is not just how to build more ethical AI but how to build a more ethical society that is guided and served by AI and technology. That frame leads us to a couple of intuitions and hypotheses. The first is: The problems we’re facing with early implementations of AI today – around racial bias, around accountability and transparency, around explainability and auditability – these will not be problems that we will simply solve, put away, and move on from. These are fundamental questions that should be asked every time we deploy new iterations of these technologies, and they should be core to the development and production, the direction that we take with AI. 

If that’s the case, then it strikes me that what we should be investing in is not just taking the conversations we’re having today to their logical conclusions but also learning from them and building a cadre of professionals whose role is to continue to serve, to support, and, when necessary, to regulate and police AI.

We want to upend the whole process. We don’t want to think about these questions as obstacles that prevent the creation of new AI. Instead, we think they’re absolutely essential, additive building blocks that we need to build into the process. We have partnered with Arvind Krishna, a fellow alum of The Grainger College of Engineering, to launch something called the Global AI Action Alliance at the World Economic Forum, which brings together hundreds of participants – leading technology companies, leading social activists, academics, and engineers – to advise on a new frame for how we might think about the open-source, ethical creation of technology. But that’s just one part of it.

On an ongoing basis, we are also curious about how the development of AI will be informed not just by technologists or ethicists but by users and consumers. How do we make sure that vulnerable populations across the world are being heard when they talk about the implicit, unexpected consequences that the deployment of these systems is creating? There again, I think both philanthropy and academia have a role in making sure not just that we’re listening but that we’re providing amplification and microphones so those populations are heard everywhere.

Bill Bell: And that defines the questions we face really well. Help me understand – help us all understand – how we operationalize that. You talked about wanting these issues and considerations built into all development of all these tools and opportunities on an ongoing basis. That’s a tall order.

Vilas Dhar: Yeah. Luckily, this is by no means something we’re trying to do alone. So let’s reconceptualize this space. Institutions across the spectrum are recognizing the need to do this, and not just for commercial purposes – we’ve seen the blowback from consumers when Amazon, Microsoft, and others built bias into their facial recognition systems. So they will certainly always feel that commercial pressure to build more ethical systems. But this isn’t really a question about technology. This is a question about a fundamental reconceptualization of stakeholder participation.

Tech companies are already coming to the table, and they’re realizing that the ways in which they develop products need to be designed for long-term sustainability – designed to support and lift up the populations they are serving. Government regulators are realizing that they need to up-skill their own policies and programs to understand the kinds of issues these technologies are creating. And here too, we’re playing a role. We’re launching a program to help freshman members of Congress really gain a familiarity with the nuances and language of technology at scale, particularly the emerging frontier.

We think there’s probably a space for ongoing government intervention, not necessarily through regulation but through support, through the allocation of research and development dollars, through participation with academia in defining new research programs. 

And then I think we will continue to see an incredible popular grassroots movement that asks for technology creators, users, deployers, and distributors to be held to a high standard – to ensure that when they’re making decisions about pushing these technologies out, there’s real public accountability. The mechanisms for that are emerging today: creating new audit mechanisms and institutions for AI algorithms, and creating open-source technologies so that even as we deploy these algorithms, people know what’s behind them – they can look under the hood and understand how they work. I think we’re also seeing new products in the space that aren’t coming out of the traditional tech players. While that’s still quite nascent, it’s a great hope of mine. What 20 or 50 years ago was the hobbyist, hacker community – we’re just beginning to see that in the world of AI as well, along with some pretty incredible new developments in the space.

I’m struck by the fact that 70-odd years ago, even knowing that there were supercomputers at the University of Illinois required a top-secret nuclear clearance. Fifteen years ago, I could walk into the Beckman Lab, where we had built an immersive 3D environment called the CUBE, and use Cray supercomputers – some of the most advanced supercomputers of the time. And today, the average college student sitting in their dorm can use building blocks around AI to put together new algorithms and machine learning models, coming up with uses we haven’t even thought of yet.

You said earlier, and I agree, that I’m quite an optimistic and hopeful person, and it’s this kind of potential and these developments that give me that hope. At the end of the day, the answer to this problem doesn’t rest with one institution or one individual or even one sector. It’s the fact that democratized access to these technologies means we can all lend a hand in shaping the future.

Bill Bell: That’s a very broad spectrum of issues that you all are looking at. How do you go about it? And why do you go about it that way?

Vilas Dhar: This is maybe an overused phrase, but I think it has never been more applicable in human history than now: the idea that we sit at a moment of transformation. Not just individually, not just in the institutions we’ve created, but in some pretty fundamental questions of our society. If we’re going to deploy these technologies at scale – in ways that transform our economic preconceptions of how things work, and our political, social, and cultural ideas of how we interrelate with each other, because technologies can create some incredible new opportunities and surpluses for us – then we have to start from fundamental questions about what we value as a society.

Traditionally, the role of doing that has fallen in two places. First to our academics, particularly those who think about philosophy and ethics and the human sciences. And then to philanthropists and civil society, as the ones who attempt both to protect the most vulnerable and to inspire that best possible future.

Now, those are quite lofty principles that I’ve just shared with you. But if we translate them into the moment we live in today, it’s very clear that we’re on a path as a society that gives us a fork in the road. One of those paths leads us to a world where technologies drive the aggregation of capital, of opportunity, of power into the hands of the few. And the other opens up the real possibilities of these technologies to create a truly better and inspirational future for everyone.

Our foundation is one of many, but I think that as a sector, and really as a society at large, we’re going to be faced with that choice in the very near future, and I feel quite confident that we’re going to make the right one. Maybe in order to get there, we need to do some prep work.

That’s the frame in which I think about what the Patrick J. McGovern Foundation does today. It’s to go out and test those hypotheses to ensure that technology really can support better solutions to major challenges. That the voice of civil society, participating with technology companies and governments, can lead to a more robust conversation about how we want to make shared decisions about the future of tech.

And then at the end of the day, institutions like universities, like foundations and civil society, put at the very front of our decision making the interests of the most vulnerable and the interest of our shared good. 

Bill Bell: What a nice point to leave it at. Vilas Dhar, I appreciate you taking the time with us today. It’s always exciting to hear just what impact folks who have been touched by, and have touched, The Grainger College of Engineering are making out in the world, and just what broad, promising, and crucial issues they’re taking on. Thank you for spending this time with us today to describe the Patrick J. McGovern Foundation and your work and your team’s work there.

Vilas Dhar: Bill, thanks so much for this incredible opportunity, and I’m excited to share, not just some of our work, but continue to track and follow the incredible work of alums and current students and faculty members at the Grainger College. Thank you so much. 

Bill Bell: Thank you.


This story was published August 10, 2021.