Dr. Eva-Marie Muller-Stuler on The Importance of Adopting Ethical AI and Data Science Practices

We thank Dr. Eva-Marie Muller-Stuler from EY MENA for sharing her remarkable journey as a data and AI pioneer, her insights on ethical AI and data science practices, and valuable advice for aspiring professionals. The interview sheds light on the critical importance of transparency and interdisciplinary collaboration, and on the potential risks of not adopting ethical AI and data science practices. Aspiring professionals seeking to incorporate ethics into their work will find valuable guidance from this visionary leader. Read on to discover her expertise and vision for the future of ethical AI.

Journey as a Data and AI Pioneer: Exploring the Path to Shaping the Future of Data and AI

Can you tell us about your journey as a data and AI pioneer and how you became interested in the field?

I studied Mathematics because I always loved logical thinking, discovering patterns, and seeing how many real-world problems can be solved by turning them into mathematical problems. But at the time, there weren’t many jobs for mathematicians, so I added Business and Computer Science to open up more job options after university.

I then started my career in Corporate Restructuring and Financial Modeling, where you forecast the financial impact of business decisions on corporate performance. This shaped the way I think about Data Science and AI for businesses. There are two separate questions:

a) Is Data Science used to improve the business (grow revenue, cut costs, or manage risk), or

b) is it research with long-term investment goals?

When companies set up Data Science and AI teams, they need to define clearly from the beginning the impact they want to generate. Too many Data and AI teams are cost centers that do not positively impact the business.

While working as a director of an investment fund in London, I noticed how simplistic the assumptions and input data built into financial models were. So I started to find more drivers and insights to support better decisions and to add more data and information to the models. I took this experience to KPMG, where we built Europe’s first Data & AI team, which created “Always-on Machines” to make better business decisions.

What drove my career was that I surrounded myself with some of the most brilliant people on the planet, and I found that I could connect the domains I had studied to data science projects.

There is no other job where you can combine the beautiful logic of mathematics with state-of-the-art technology development and apply it to every domain, such as human behavior and psychology, improving quality of life, or using assets more efficiently in retail, healthcare, energy, etc. 

What motivated you to focus on the importance of ethical AI and data science practices?

In 2013, we really got into building big data models. Because there was little understanding of data access and restrictions, we were able to get a lot of data (like personally identifiable phone data, as well as retail and health data) for free. We linked the data we gathered into a connected ecosystem to improve our models. The impact of connecting all that information was so powerful that it blew us away.

We also realized how biased our models could be and how difficult it was to explain the risk. There were many ways the models could go wrong, but there was little awareness of that, as Data Science and AI was still a very small field, mainly anchored in research. A few years later, more and more voices were warning about the point of singularity or about AI turning against humans. But I never saw that as the most pressing risk. Building bad AI is very simple; every high school student can do it. Building trusted, fair, transparent, and safe AI is a very different story. Too much of the AI coming onto the market was not of the quality to be released to the public. As an example, most of the data available comes from a white male demographic, so models built on this data are automatically biased in their favor.

Therefore, I decided to raise policymakers’ awareness that this is a risk we must watch out for to ensure that the adoption of Data Science and AI does not impact our lives negatively. That’s when I started working with governmental organizations and NGOs to raise awareness of the need for rules and regulations.

Leading Data Science and AI Initiatives: Unveiling Key Responsibilities at EY MENA

What are your key responsibilities in terms of developing and implementing complex data science and AI projects and transformations at EY MENA?

The role has two focuses. The first is internal practice and strategy, where I decide on our offerings, our go-to-market approach, and internal skills development. I’ve structured the practice around three main pillars: Data Governance & Strategy, Technology and Architecture, and Use Case Building. I also ensure that I have the right team composition and skills.

The second focus is understanding the client’s needs and where they are in their data transformation journey, and then bringing together the best of EY to move them forward. No matter how senior you become, you must stay hands-on and involved in delivery to ensure the quality of the work and to mediate if any issues arise.

How can organizations ensure that they are adopting ethical AI and data science practices throughout their operations?

Right at the start of a data science or AI team setup, there must be a clear understanding of the team’s different roles and responsibilities. With every project, the team has to get together and decide not just whether the project is ethical but also how to implement it responsibly. Ethical and trusted AI is not just about doing the right thing; it is also about doing things right.

Lack of trust and ethical concerns are the biggest hindrances we see to the adoption of AI. There are many different frameworks, and they are all very similar at their core. The fundamental development principles appear in every framework: human agency and oversight, technical robustness and safety, privacy and data governance, transparency, fairness, accountability, explainability, and sustainability. In every project, these principles must be ensured and monitored from the beginning and all the way through the deployment of the models. That’s what makes trusted AI a complex process. It starts right at the beginning of the team’s setup, when it’s essential to ensure the team has the right skills, clear roles and responsibilities, a code of conduct, and escalation processes.

Most companies today are still unclear about who is responsible for ethical or legal compliance. Often there are no processes in place for setting up projects responsibly, deploying them, and monitoring them going forward. But when we talk about impactful AI, the impact can always cut both ways: every action that makes you money can also cost you money if it goes wrong. Therefore, building trusted AI must cover the whole end-to-end process, from data governance to technical infrastructure to model monitoring and retraining. The outcomes of the models and their recommendations can only be trusted if every single step in the process can be trusted.
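
To make the monitoring and retraining step concrete, here is a minimal sketch of one widely used drift check, the population stability index (PSI), which flags when live input data has shifted away from the data a model was trained on. This is an illustration of the general technique, not EY's tooling; the feature, threshold, and numbers are assumptions for the example.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare a feature's live ('actual') distribution against its
    training ('expected') distribution. A PSI above roughly 0.25 is
    commonly read as drift that warrants investigation or retraining."""
    # Bin edges from the training distribution's quantiles
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range live values

    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)

    # Guard against log(0) in sparse bins
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)

    return float(np.sum((actual_pct - expected_pct)
                        * np.log(actual_pct / expected_pct)))

# Hypothetical example: an income feature drifts upward in production
rng = np.random.default_rng(0)
train_income = rng.normal(50_000, 10_000, 5_000)
live_income = rng.normal(58_000, 12_000, 5_000)
print(f"PSI: {population_stability_index(train_income, live_income):.3f}")
```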

Insights on Promoting Ethical Practices in the Industry

Could you share any specific examples where ethical considerations played a significant role in the implementation of AI or data science projects?

Every company taking data transformation seriously must tackle the challenges of ethical and trusted AI. The rule of thumb is that the more people are affected, the higher the impact, and therefore usually the higher the risk. AI solutions are increasingly used for high-stakes decisions, and once fully deployed, they can make many bad decisions at very high speed. Areas usually considered high-risk involve personally identifiable information (PII) or biometric data, critical infrastructure, education, employment, law enforcement, and the like.

For the development of AI solutions in these areas, it is essential to look at the potential risk and harm the solutions can cause and at how to mitigate that risk. If their ethical and trusted development cannot be ensured, then the solutions should not be released to the market. These are usually cases involving crime prediction, college admissions or recruitment, salary structures, or healthcare. Entities interested in leveraging AI solutions in these fields should also be aware that biased or inaccurate AI solutions can lead to very high financial, legal, or reputational risk.

How can organizations strike a balance between leveraging the benefits of AI and data science while maintaining ethical standards?

It’s not really a balance. Every solution that is not ethical and trusted is a mess and brings risk to the business. It might be cheaper to stitch together a quick model in the short run, but in the long run the models will fail, and their issues will become so big that companies end up spending more money on hiding the issues than on their original AI investment. We see more and more cases where mitigating things that have gone wrong has become extremely expensive.

What role does transparency play in ensuring ethical AI and data science practices, and how can organizations achieve transparency in their AI systems?

Transparency focuses on having the appropriate level of openness regarding the purpose, design, and impact of AI systems. This means both making people aware that AI solutions are making decisions and keeping clear documentation and fact sheets on what data went into training the model, its purpose, and its potential risks. The model description or fact sheet must be stored with the model so that if it is updated, changed, or reused for a different purpose, the original purpose, the training data, and potential issues are still available. A good example is a healthcare model built for the US market, which might not be trained on any data from women over 60 in the Middle East or Africa. When that model is sold and pushed out into other markets, it must be clear that it might not work in different regions.
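
To illustrate what such a fact sheet might look like in practice, here is a minimal sketch of a model card persisted next to the model artifact. The field names and example values are illustrative assumptions, not a published standard; the point is that purpose, training data, and known limitations travel with the model.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ModelFactSheet:
    """Illustrative fact sheet stored alongside the trained model."""
    model_name: str
    version: str
    intended_purpose: str
    training_data: str       # what data went into training
    known_limitations: list  # populations or regions not validated
    potential_risks: list
    owner: str

# Hypothetical healthcare model mirroring the example above
sheet = ModelFactSheet(
    model_name="readmission-risk",
    version="1.3.0",
    intended_purpose="Estimate 30-day readmission risk for US hospital patients",
    training_data="US claims data, 2015-2021",
    known_limitations=["Not validated outside the US",
                       "Little training data on women over 60"],
    potential_risks=["May underperform for demographics missing from training data"],
    owner="clinical-ml-team@example.com",
)

# Persist the fact sheet with the model so the original purpose and
# training data remain available if the model is updated or reused
with open("readmission-risk-1.3.0.factsheet.json", "w") as f:
    json.dump(asdict(sheet), f, indent=2)
```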

The best way to identify biases and fairness issues is to have a diverse team. The collective error of diverse teams is lower than that of homogeneous teams, and a diverse team is more likely to spot potential issues with the solution or the data right at the beginning of the project. During prototype development, the training data needs to be checked for biases to catch potential issues early. There are many tools and techniques for testing models for bias, and these tests need to run continuously from the beginning, all through the development and deployment of the models. It is essential to monitor all models during deployment, retraining, and updating.
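
As one concrete example of the kind of bias test described here, the sketch below compares positive-prediction rates across demographic groups, a metric commonly called the demographic parity difference. The data is hypothetical; open-source libraries such as Fairlearn and AIF360 implement this and many other fairness metrics for continuous use in development and deployment.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Gap in positive-outcome rates between groups; 0.0 means every
    group receives positive predictions at the same rate."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

# Hypothetical loan approvals (1 = approved) for two demographic groups
y_pred = np.array([1, 1, 0, 1, 0, 0, 1, 0, 0, 0])
group = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])
gap = demographic_parity_difference(y_pred, group)
print(f"Approval-rate gap: {gap:.2f}")  # 0.60 - 0.20 = 0.40, worth investigating
```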

What are the potential risks or consequences of not adopting ethical AI and data science practices?

The potential risks and consequences of unethical or untrusted AI can be very severe. When we say AI should have an impact, we should remember that everything that can have a positive impact can also have a negative one. As soon as AI is not ethical and not trustworthy, we don’t really know what the systems are doing. So the consequences can be anything from reputational damage, financial loss, legal risk, and loss of IP to risk to human life, depending on what the solution is used for.

The company is basically flying blind. For example, a car company that uses AI and image recognition to read speed limits on highways must ensure the system is robust and tamper-proof, for example, against someone altering the signs. There need to be measures and safeguards in place so that the car does not suddenly speed up to 130 in a 30 zone. Even in safe trial environments, every piece of negative news about autonomous vehicles damaged the car companies’ reputations. And it is the same in every single industry. Governments that used AI solutions to decide university admissions during COVID, banks with biased credit card or loan approval processes, and many more cases have led to financial and reputational damage.

How important is interdisciplinary collaboration, involving experts from various fields, in ensuring ethical AI and data science practices?

The development of ethical and trusted AI frameworks has always been very interdisciplinary. Many of the frameworks were developed jointly across different fields, with the political, legal, and ethical professions all working together to agree on the minimal requirements.

Unfortunately, the technical community has often been underrepresented in the working groups, so while there is agreement on what needs to be achieved, the technical interpretation, guidelines, and breakdowns of how to ensure compliance with the law are often missing. Companies are unsure about the steps they need to take to ensure compliance, for example with explainable or transparent AI.

Future Insights and Advice

How do you see the future of ethical AI and data science evolving, and what steps can organizations take to stay ahead in this rapidly changing landscape?

The recent developments in GenAI and the release of more sophisticated AI solutions have massively accelerated awareness of, and demand for, ethical and responsible AI, governmental rules, and frameworks. As more people are able to access, experiment with, and explore AI tools, they start to see the risks associated with its probabilistic nature. This has pushed ethical AI to the top of the agenda for many companies and governments. The direct link between ethical and trusted AI solutions, compliance with legal requirements, and the impact on a company’s financials and reputation has become much more visible over the last half year.

Therefore, I don’t see legal frameworks and requirements as a hindrance or a cost to companies. On the contrary, their implementation will reduce the risk and enable the success of a Data and AI transformation.

What advice would you give to aspiring data and AI professionals who are interested in incorporating ethics into their work?

I would start by reading up on the ethical frameworks and legal requirements that are important for their field and the country where they work. The core messages of the frameworks are very similar; they focus on understanding the risk, what can go wrong, and how to mitigate it.

The first step is understanding the difference between probabilistic AI solutions and traditional deterministic models. A lot of information and training is available from governmental and non-governmental organizations, for example from industry bodies like the IEEE. Once they have understood the intention and requirements of the frameworks, I would recommend that they work with senior mentors to do a current-state assessment of their projects, start classifying their risk, and draft and implement a framework and process to ensure that all projects remain compliant with ethical AI throughout their lifecycle.
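
As a starting point for that risk classification, here is a hypothetical sketch of a simple project triage. The tiers and criteria are illustrative only, loosely echoing the risk-based approach of frameworks such as the EU AI Act, not an official rubric.

```python
def classify_project_risk(uses_pii: bool, affects_people_directly: bool,
                          high_stakes_domain: bool) -> str:
    """Illustrative triage: the more people a system affects and the
    higher the stakes of its decisions, the higher the risk tier."""
    if high_stakes_domain and affects_people_directly:
        return "high"    # e.g., recruitment, credit, healthcare decisions
    if uses_pii or affects_people_directly:
        return "medium"  # e.g., personalization on identifiable data
    return "low"         # e.g., internal forecasting on aggregate data

# Hypothetical portfolio assessment
projects = {
    "loan-approval-model": classify_project_risk(True, True, True),
    "warehouse-demand-forecast": classify_project_risk(False, False, False),
}
print(projects)  # {'loan-approval-model': 'high', 'warehouse-demand-forecast': 'low'}
```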
