Webinar

Framework for Building Trustworthy AI

Tuesday, 30 January 2024 – 11 am ET

With the exponential growth in AI applications comes renewed scrutiny from the public and from AI users of systems’ fairness, robustness, and efficacy. Businesses are juggling the advent of a new industrial revolution while attending to risks such as: can these systems damage our reputation? What are the regulatory requirements? And are there financial risks to using such systems?

In this industry-centered fireside chat, we explore the intersection of two perspectives on how organizations can build frameworks that support trustworthy AI. In particular, we look at best practices for auditing and building trustworthy AI in the finance and accounting sectors. Our speakers:

  • Danielle Supkis Cheek, VP of Strategy & Industry Relations at MindBridge, a global leader in financial risk discovery.
  • Emre Kazim, co-CEO and co-founder of Holistic AI.

If you couldn’t make it live or want to revisit the topics we covered, you can access the recording on demand above and work through some of the questions posed during the event below.

Q&A


Finance and accounting, as highly regulated industries, have very thin thresholds for being wrong about the numbers, and severe penalties for mistakes. Put another way, there is a high need for trust in finance. That said, many of the logical steps AI applies within finance are tried and true in non-AI settings. Historically, SOC compliance has played a role in enabling trust and internal controls, but as firms have scaled their ability to process large amounts of information with automation, the control side has not kept pace.

Explainability is built when we can explain in plain language how each automated component works and walk through the tried-and-true logical steps the AI is applying. The execution behind the scenes can be very complex, so documentation and external communications should focus on simplifying it down to those basic logical steps. This translation from data science terms into logic and industry-specific terms plays a large role in explainability, as does a clear focus on the use case and business value.
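To make the translation point concrete, here is a minimal, purely illustrative Python sketch: it takes hypothetical feature contributions from a risk-scoring model and renders them as the kind of plain-language statements a finance or audit team would recognize. The feature names, scores, and wording are our own assumptions for illustration, not any vendor’s actual method.

```python
# Hypothetical sketch of the "translation" idea: map model-level signals to
# the plain-language, step-by-step logic that finance teams already use.
# Feature names, scores, and wording below are invented for illustration.

PLAIN_LANGUAGE = {
    "amount_zscore": "the transaction amount is unusually large for this account",
    "weekend_posting": "the entry was posted outside normal business days",
    "round_number": "the amount is a suspiciously round figure",
}

def explain(contributions: dict, top_n: int = 2) -> str:
    """Turn {feature: contribution score} into an audit-style sentence."""
    # Keep only the most influential signals, by absolute contribution.
    top = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))[:top_n]
    reasons = [PLAIN_LANGUAGE.get(name, name) for name, _ in top]
    return "This entry was flagged because " + " and ".join(reasons) + "."

print(explain({"amount_zscore": 0.61, "weekend_posting": 0.22, "round_number": 0.05}))
# -> This entry was flagged because the transaction amount is unusually large
#    for this account and the entry was posted outside normal business days.
```

The design choice here mirrors the advice above: the complex execution (how the scores were computed) stays behind the scenes, while documentation and communications surface only the basic logical steps.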

In the world of financial and accounting controls, receiving third-party reports on a system’s tests, processes, and results is normal. This extends across many finance and accounting use cases where teams are used to receiving reports and then assessing next steps based on them. Providing similar reports on AI systems generally works well for communicating with the larger team, particularly when there is an emphasis on translating complex data science terms into more fundamental concepts.

The latest political agreement on the EU AI Act outlines fines of up to €35 million or 7% of global annual turnover for breaches, with the amount depending on the nature of the non-compliance and the size of the entity, although these figures are not yet finalized.

Conservative, risk-averse industries are still looking to automate things that are hard to do at scale, and this is an opportunity to start small while still delivering the efficiency gains that build trust. You can begin by comparing options across the lifecycle of AI applications, from the less transformative to the more transformative. Starting with a point solution, or with efficiency gains from automating minute details of a process, can build trust before more dramatic AI-driven overhauls are attempted.

Transparency around testing, audits, results, and methodology is a core component here. “Good enough” often surfaces where there is a human in the loop. There are many methods and metrics for testing confidence in AI, so it is important to apply overlapping and differing views when testing the performance and safety of a system. Layering many tests over a system removes any single point of failure and allows humans to determine which tests are most predictive of confidence. Human judgment still plays a very large role in parsing test results and metrics to find the best proxies for explainability and trust.
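As a sketch of what layering tests can look like in practice, the Python below runs several overlapping checks over one set of predictions (accuracy, F1, a perturbation-stability check, and a simple parity gap across groups) and flags anything outside illustrative tolerances for human review. The metrics, thresholds, and toy data are assumptions for illustration only, not a recommended test battery.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

# Illustrative layered-testing sketch: several overlapping checks so that no
# single metric becomes a single point of failure. Thresholds are invented
# for illustration, not recommendations.

def demographic_parity_gap(y_pred, groups):
    """Gap in positive-prediction rates between the best- and worst-off group."""
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

def run_test_battery(y_true, y_pred, y_pred_perturbed, groups):
    results = {
        "accuracy": accuracy_score(y_true, y_pred),
        "f1": f1_score(y_true, y_pred),
        # Robustness: how often predictions stay fixed under small perturbations.
        "perturbation_stability": (y_pred == y_pred_perturbed).mean(),
        # Fairness: parity gap across groups.
        "parity_gap": demographic_parity_gap(y_pred, groups),
    }
    # Flag anything outside illustrative tolerances for human review.
    flags = {
        "accuracy": results["accuracy"] < 0.90,
        "f1": results["f1"] < 0.85,
        "perturbation_stability": results["perturbation_stability"] < 0.95,
        "parity_gap": results["parity_gap"] > 0.10,
    }
    return results, flags

# Toy data: a human reviewer still decides which flagged tests are the most
# predictive proxies for trust in this particular use case.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 200)
y_pred = y_true.copy(); y_pred[:10] ^= 1    # ~95% accurate predictions
y_pert = y_pred.copy(); y_pert[:4] ^= 1     # a few flips under perturbation
groups = rng.integers(0, 2, 200)

results, flags = run_test_battery(y_true, y_pred, y_pert, groups)
print(results)
print({name: flagged for name, flagged in flags.items() if flagged})
```

The point of the sketch is the structure, not the specific metrics: overlapping views of performance and safety, with humans interpreting the flagged results rather than trusting any one number.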

How we can help

Highly regulated industries like finance and accounting have lower thresholds for error than many others. This means efficiency gains from AI and automation need exceedingly well-thought-through frameworks and processes for building trust. Once you’ve explored the perspective of industry leader MindBridge through our fireside chat guest Danielle Supkis Cheek, reach out to the Holistic AI team to see what the world’s first 360-degree solution for AI trust, risk, security, and compliance can do for your enterprise.

Our Speakers




Emre Kazim, Co-founder and Co-CEO, Holistic AI


Emre Kazim is the co-founder and co-CEO of Holistic AI, the world's first 360-degree AI governance, risk, and compliance platform. Emre and Holistic AI have played a pivotal role in empowering large organizations to confidently embrace AI while avoiding regulatory, reputational, and financial risks.

Emre has a well-established track record of publishing peer-reviewed articles on AI ethics, AI governance, and policy, as well as facilitating conversations with state and industry leaders. An alumnus of UCL and King's College London, Emre holds an MSci, an MA, and a PhD in Philosophy.

Danielle Supkis Cheek, VP, Strategy & Industry Relations, MindBridge AI


Danielle Supkis Cheek is the Vice President of Strategy and Industry Relations at MindBridge.

She is a member of the Auditing Standards Board and an at-large member of the AICPA Council. Danielle also sits on the Technology Experts Group of the International Ethics Standards Board and on a similar task force for the Professional Ethics Executive Committee. In 2016 she became the first woman to receive the AICPA’s Outstanding Young CPA of the Year Award in Honor of Maximo, and she has been named among the Most Powerful Women in Accounting five times by CPA Practice Advisor and the AICPA.

See the industry-leading AI governance platform in action

Schedule a call with one of our experts

Get a demo