Ethical AI and the Power of Design

At Qindle we think it’s imperative to stay at the forefront of technology, and we believe in the power of data to complement our creative services. Artificial intelligence (AI) is a fascinating field that will bring new business opportunities; however, it also sparks (ethical) debate. We spoke to Dasha Simons about her work as Business Transformation Consultant at IBM and how she combines design and AI by bridging the gap between ethical AI principles and practice.

[IMAGE]

Dasha has always been interested in bringing the human heartbeat into technology development. An award-winning strategic design graduate from TU Delft, she puts her vision into practice daily – working at IBM as a business transformation consultant with a focus on AI and its ethical aspects.

Artificial intelligence is a fascinating field, bringing great business opportunities; however, it also stimulates much (ethical) debate. Companies and people have learnt from trial and error that AI systems can be terribly unfair. Dasha explores how the risk of unethical AI systems can be reduced by design, bridging the gap between ethical AI principles and practice.

You graduated from Delft University of Technology with the thesis “Design for fairness in AI”, for which you were named Best Graduate of Industrial Design Engineering. Can you briefly explain what your thesis was about and what design can do to prevent unethical AI systems?

As technology is becoming increasingly important in many industries, it simultaneously influences the design world. In particular AI, an emerging field that unleashes many business opportunities. What triggered my curiosity was the ethical debate around it and the tremendous social implications it can have, such as the case of the Apple Card, whose credit system was biased against women: it assigned women lower credit limits than men with the same characteristics and jobs. This is, of course, extremely undesirable. It shows that, although AI is based on mathematics and statistics, the human decisions made throughout the process, and the data itself, can be biased. As questions about ethics were becoming more and more relevant, companies started releasing beautiful principles and visions about ethical AI. But the translation into the day-to-day practice of data scientists in AI teams was often missing. My ambition was to explore how to translate those principles into practice, and how we could design for that.

“I think design and ethics are complementary; they both solve challenging problems, but have different tools and methods to do it.”

For example, when you create a chair, you need to think about how comfortable it is, whether you want a sustainable chair, and so on. Not all of these criteria can be satisfied at the same time. AI systems are similar; the trade-offs are just less evident.

You carried out your thesis in collaboration with IBM, where you are currently working as a business transformation consultant. How do you apply your passion for and expertise in AI ethics with a design background?

I’m a business transformation consultant in the cognitive and analytics department of IBM, where my focus is on the ethics of artificial intelligence. What we do is support clients with their AI scaling strategies or implementation, both during the explorative phase and when helping them move to actual products that are in deployment and at scale. One of my passions is using technology in service of social causes. One of the things we are working on is financial crime, where we use AI to find criminals and to combat money laundering. Using AI applications to combat money laundering is a very relevant topic, and of course ethical considerations are imperative here.

However, I do believe that in every single AI project, ethics should be integrated – it should be there by default. Clients are increasingly interested in the topic, although I see there is still a strong need for education on AI ethics and on how to solve ethical issues.

As a designer, you can be the link between business and technology, and you can help implement ethics throughout the process by asking the right questions, including users’ and stakeholders’ needs, visualizing complex problems, and so on. I think it’s really about teaching our clients to grow their own ethical AI capabilities.

Do the ethics of AI keep you up at night?

It does keep me awake during the day, for sure. I really enjoy working on it and I think there’s still a lot of work to do in this area. I’m very happy to see that so many people are increasingly interested in the topic. That’s very good. Currently AI bias is one of the biggest barriers to AI meeting its full potential, so I think it’s of crucial importance to resolve it. I would like to challenge you to implement it more in your daily work as well: ask yourself what indirect impact the product you are creating might have.

From your experience at IBM, what is the biggest challenge companies are facing today regarding AI ethics?

I think we can categorize these challenges into five key areas:

  • Fairness: what does it mean in different contexts?
  • Value alignment: how can you make sure that the values you embed into systems are the ones you actually want?
  • Accountability: who is responsible at the end of the day?
  • Explainability: how do we make AI easier to explain?
  • User data rights.

If you look forward 10 years, what does the world look like as AI is implemented in more aspects of our personal and professional lives?

I’m an optimist, so I believe that if we can develop and implement AI in a more ethical way, in the future we will have fewer repetitive tasks that take energy from us and we will be able to focus on what matters in life. If you think about sustainability, 20 years ago it was seen as a burden, not as an opportunity. In the same way, I hope that AI ethics will not be seen as an afterthought, but as part of the process. To reach this future, though, we should act now. Regulations will definitely play a big role in this; however, companies also need to take their own responsibility.

This is an era where people are getting more and more aware of their data privacy. How are companies dealing with data privacy while ensuring ethical AI systems?

Indeed, there’s a big tension around it. Luckily, we have the GDPR (General Data Protection Regulation), which is a step in the right direction. Companies need to have a reason to collect data, but also the consent of users to be able to use that data, and so on. But you are right, there can be a trade-off between making an AI system fairer and making it more private. For example, if you want to check whether a system discriminates on gender, you need to have the gender column in your dataset. Some people might say: how can you discriminate on gender if you do not use it? Well, it turns out that other data might reveal gender, so in the end the model can still discriminate on gender without the development team knowing about it. To prevent this, you will need to look at the gender data as well. This is something that needs to be resolved in a very context-specific manner and discussed with the project team as well as with the users whose data is being used.
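To make this proxy problem concrete, here is a minimal sketch in Python (the data, column names, and model are entirely hypothetical, not taken from IBM’s or Dasha’s work). It shows how a model that never sees the gender column can still produce gender-skewed decisions through a correlated proxy feature, and how auditing for that skew, for example via a simple demographic parity check, requires access to the protected attribute:

```python
# A minimal, purely illustrative sketch (hypothetical data and column names):
# a model trained WITHOUT the gender column can still discriminate through a
# correlated proxy feature, and auditing for that requires the gender data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
gender = rng.integers(0, 2, n)            # 0 / 1, illustrative protected attribute
proxy = gender + rng.normal(0, 0.3, n)    # e.g. an occupation code correlated with gender
income = rng.normal(50, 10, n)            # annual income (k), same distribution for both groups

# Historical decisions are biased: group 0 gets an extra boost despite equal incomes.
approved = (income + 8 * (1 - gender) + rng.normal(0, 5, n)) > 52

X = pd.DataFrame({"income": income, "proxy": proxy})   # gender itself is NOT a feature
pred = LogisticRegression().fit(X, approved).predict(X)

# Fairness audit: compare approval rates per group (demographic parity gap).
# This check is only possible because the protected attribute was kept around.
rates = pd.Series(pred).groupby(gender).mean()
print("approval rate by group:\n", rates)
print("demographic parity gap:", abs(rates[0] - rates[1]))
```

In a sketch like this, dropping the proxy column or retaining the protected attribute solely for auditing, under strict access controls, are typical ways to handle the tension; as Dasha notes, the right approach remains context-specific.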

Do you think there is a way to ensure data privacy by design?

Good question. Privacy by design means integrating privacy throughout the entire process: from the design itself to operations, management, deployment, and so on. Maybe you’re familiar with the privacy-by-design framework; what I find interesting about it is that it is proactive instead of reactive. That means you need to think upfront about all the different ways to ensure privacy and clearly set goals for it. But design can also help stimulate stakeholders to think out of the box and switch their mindset. One of the things that was part of my thesis is the “evil AI” exercise, which asks people to imagine the most evil AI system they can think of. These types of creative exercises might also help in privacy matters. Design can really help analyze the problem up front and think about the user and the stakeholders involved, but also about the process and the possible secondary effects.

We dived a bit into AI ethics, data ethics and data privacy, but you have a background as a designer. What if we turn the question the other way around: how can data science enrich design practice?

Yeah, I really love this question. I think design and data science could be integrated much more than they are right now.

“I believe data science and statistics can support design in multiple ways, for example in understanding human behavior and patterns better, but also in understanding stakeholders and users more, especially with many data points.”

Data science, of course, is very good at analyzing big chunks of information, much bigger than what we can do through qualitative user research. So, I think design and data science are very complementary in the user research part, but data science can also be helpful when measuring the success of a product. We can say that data science can be helpful throughout the entire design process, from research to ideation, through product development (for example when testing prototypes) and into the deployment phase.

In your daily life at IBM, how do you make use of data to improve your design processes?

So, as you said at the beginning, I am part of the cognitive and analytics team at IBM, and my daily work consists of embedding design, specifically a design mindset and design principles, in the data science process. I really believe that fusing data and design enhances innovation, combining quantitative data with qualitative research such as interviews or user research. At the same time, design can help portray data in the most understandable way, based on stakeholders’ needs and goals. I think that’s one of the starting points you see with companies that are not that data-savvy yet; reporting and dashboarding can really help a lot there. Specifically, this can act as a trigger to search for more relevant data across the organisation.

“Design needs data as a powerful resource, but the same is true the other way around with tech savvy companies needing design to implement qualitative aspects.”

Design companies should integrate data in their daily work. The future will be infused with data and AI, with AI ethics being an increasingly imperative element to consider proactively, throughout the entire process.

 

Interview by Anna Filippi & Andy Carrera for Qindle, Amsterdam, October 2020

Qindle is an innovation and data-driven design agency based in Amsterdam, working across the world. To find out more about how to leverage data in design while balancing the need for intuition and conceptual practice, reach out to us!