SAS Data Ethics Director Reggie Townsend on Building Ethical AI and Why It Matters


What is ethical artificial intelligence (AI), why is it vital to the future of technology, and how can CIOs and business leaders help foster it? We spoke to Reggie Townsend, Data Ethics Practice Director at analytics software company SAS, for his thoughts. Reggie brings a unique perspective to these questions: He serves on the National Artificial Intelligence Advisory Committee, advising the US President on AI-related issues, and on the board of EqualAI, a nonprofit dedicated to reducing bias in AI.

Q: What is ethical AI?

A: Let’s start with this reality: no one knows what AI is yet. We can certainly talk about AI in terms of statistical techniques, algorithmic decision-making and its predictive abilities. However, I see that as a present-day reality. The term is still morphing as language does, and the broader society is still weighing in. So, when you start a conversation about AI with a layperson, you get everything from robots coming to destroy the Earth to weather forecasts. To me, that’s a firm indicator that we’re still figuring out what it means. At the end of the day, AI will be what society says it is.

Now, if we start with the idea that ethics is about societal consent, then ethical AI is the degree to which we accept how decisions are made for, with and around us digitally. This is why the concept of “ethical by design” is important. Our ethics must get baked into our AI systems so that they represent our values.

Q: Why is ethical AI vital to our technological society?

A: Our role as technologists is to build platforms that can be generally understood as useful technology that won’t hurt people. The ethical component here is one of ensuring adoption based on trust. Low/no adoption by broader society risks the technology finding its way primarily into the hands of people with nefarious intent.

Q: How can organizations develop and use ethical AI?

A: When building AI, you must inform it with data. The data comes from our past and is built on the ideas of the past. And when that data gets ingested into present-day systems, that effectively perpetuates the past. So, we’ve got to rethink some of the past structural thinking — or some of the data, more precisely — coming into new systems.

Technologically, that might look like introducing a positive bias into the data. For example, if you want to make sure women have access to capital, then on a loan application, it doesn’t make sense to be gender blind. You need to know that a woman is applying so you can inject a bit of positive bias into the data to accomplish that social goal. Digital twinning also lets us take a data set and simulate future possibilities. If we’re not comfortable with those futures, we can take another look at certain variables to create a more equitable one.
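The idea of introducing a positive bias into training data can be made concrete with a standard preprocessing technique known as reweighing (Kamiran & Calders). This is an illustrative sketch, not anything from SAS: the field names and toy numbers are hypothetical, and the point is only to show how group-aware weights can offset an imbalance in historical loan data rather than ignoring gender outright.

```python
from collections import Counter

def group_weights(rows, group_key="gender", label_key="approved"):
    """Compute per-(group, label) weights so each group's share of
    approvals matches what statistical independence would predict.
    Underrepresented (group, outcome) pairs get weights above 1."""
    total = len(rows)
    group_counts = Counter(r[group_key] for r in rows)
    label_counts = Counter(r[label_key] for r in rows)
    pair_counts = Counter((r[group_key], r[label_key]) for r in rows)
    weights = {}
    for (group, label), observed in pair_counts.items():
        # Expected count if group membership and approval were
        # independent, divided by the count actually observed.
        expected = group_counts[group] * label_counts[label] / total
        weights[(group, label)] = expected / observed
    return weights

# Hypothetical historical loan decisions: women were approved
# less often than men in this toy sample.
rows = [
    {"gender": "F", "approved": 1}, {"gender": "F", "approved": 0},
    {"gender": "F", "approved": 0}, {"gender": "M", "approved": 1},
    {"gender": "M", "approved": 1}, {"gender": "M", "approved": 0},
]
weights = group_weights(rows)
print(weights[("F", 1)])  # 1.5  — approved women are upweighted
print(weights[("M", 1)])  # 0.75 — approved men are downweighted
```

Feeding these weights into a model's training loop nudges it toward the social goal Townsend describes, which is only possible because the system knows the applicant's gender; a gender-blind pipeline could not apply the correction.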

Q: In your view, will the US follow the EU’s example on AI regulation?

A: I’ll stay away from whether we’ll follow the lead of the European Union (EU) and other things I’m not at liberty to speak about. But I can say that the US and EU share similar values, at least historically, and nations with similar values will find ways to work together, much as we have on things like the Internet. Because AI, at the end of the day, is a set of tools and instruments, like every other technology in history.


Copyright © 2022 IDG Communications, Inc.
