QuanCon2025

AI, Data, and Risk Insights: Cutting Through the Hype

Join industry leaders Clare Barclay (Microsoft), Dr. Nicola Hodson (IBM), and Bina Mehta (KPMG) as they delve into the realities of AI beyond the hype. This session explores the democratization of data, AI-driven risk management, and regulatory readiness. Discover how organizations are navigating the complexities of governance, risk, and compliance (GRC) with AI and data services.

Transcript

0:06 Get ready for a bold discussion on AI's true impact, the democratisation of data, and how organisations are navigating AI-driven risk management and regulatory readiness.

0:23 What a fabulous start to the evening.

0:25 It reminds me I probably have one of the best jobs in the industry, taking that technology to market with one of the best partner ecosystems.

0:33 So thank you, everyone.

0:35 I'm thrilled to moderate today's discussion on AI with industry trailblazers from Microsoft, IBM and KPMG.

0:48 Whoo, I hear you say, another panel discussion on AI.

0:53 Well, this is different.

0:55 What we're going to do is cut through the hype.

0:58 We're going to speak plainly and we're going to understand what's really behind AI data and risk.

1:09 So let's dive in and let's ignite some game changing insights.

1:19 Clare, you're warmed up.

1:21 I'm going to start with you.

1:24 AI is often positioned as a transformative force, but enterprise adoption remains stubbornly uneven across all industries.

1:35 A McKinsey report says that only 15 percent of AI projects are bringing meaningful bottom-line impact.

1:46 Given Microsoft's role in scaling AI for global businesses, where do you see the biggest disconnect between AI expectations on the one hand and real-world adoption on the other?

1:59 Yeah, I'll go there.

2:00 So I'm just going to start by saying how wicked it is to be on a panel of all females.

2:04 So I'll just start with that, you know; it's breaking the trend, and it doesn't happen very often.

2:13 So listen, I think if you roll back 18 months to the advent of this technology, many organizations, when they saw what some of the potential was going to be, went from "Oh my God, is that really possible?" to "Let me think of the hundreds and thousands of use cases that I can come up with."

2:36 And it sort of went into this smorgasbord of like every possible idea.

2:43 And you know, many of them would be good in there, but it was not done in a very coordinated way.

2:47 And so, to take the data point about 15% being successful: if you go about it with hundreds and hundreds of use cases, and you haven't really thought about what the outcomes are going to be while you're POC-ing it and playing around with it, it's not a surprise that only 15% of those were successful.

3:06 What I've seen, particularly in the last six months, is that organizations are becoming a lot more practical, and there's actually strong belief, beyond the hype, about what the potential is going to be.

3:20 And I think some of the demos that were shown by the team here are great examples of that.

3:26 And so organizations are now trying to think really carefully around what problems are we trying to solve?

3:33 Where in our business processes are we going to get that solved?

3:38 What's the change management that I'm going to put around it?

3:43 Have I got the right governance model for the choice of AI models that I will go after?

3:47 How do I think about enablement and readiness for people?

3:50 How would I think about security and compliance?

3:53 And they're running it as a very structured program.

3:57 An example of an organization that I've had the pleasure of working quite closely with is Vodafone.

4:04 I mean, Vodafone had a lot of chatbots, et cetera, that they were running for their customer service, as many organisations do.

4:13 But they've thought really holistically about all the different use cases.

4:17 They happen to have rolled out a copilot to all of their workers, 70,000 of them.

4:24 But that's not just for kind of summarization.

4:26 They've actually thought about how to get the business processes right.

4:30 So if they think about, you know, the finance department, they need API connections into the SAP system.

4:36 They've got to think about the process flow for their finance functions.

4:39 If they think about the marketing professionals and the work that they do with Adobe, how does that link in, in terms of the business flow?

4:45 If they're thinking about compliance and risk, how do they go about that?

4:48 So they've put in place a very structured program, and it's with that as an example that organisations are able to test and see whether a use case is going to yield a return or not.

5:01 And actually, if I just go back to the work that we talked about with RSA: when you're being really specific about what it is, you've got great innovation and technology that come from Quantexa, and then you've got a real return that comes from that.

5:15 So I think organisations need to take a step back.

5:18 They need to think about what problems they're solving.

5:20 They need to put the right framework in, and they've got to do that all-important change and adoption management.

5:25 Otherwise that statistic will stay at 15%.

5:27 It is a real ecosystem approach.

5:29 Thank you.

5:31 I want to follow up with you, Dr. Hodson.

5:34 IBM has been deeply involved in enterprise AI deployments, particularly in regulated industries.

5:40 An IBM CEO survey states that 71% of gen AI projects are stuck in piloting.

5:48 So from your experience, how do you see organisations successfully bridging the gap between their AI ambition and actual execution?

5:57 Just to say first that the survey is from May last year and there's a new one due any time now.

6:03 So watch out for that.

6:04 And we'll see if the stats have moved forward any.

6:08 But you know, there's no doubt we're moving from hype into reality.

6:12 In terms of deployment, there is $16 trillion of economic opportunity at stake by 2030.

6:21 There has been a 600% increase in the number of AI companies just in the UK in the last 10 years; very significant.

6:30 So there's no doubt that we're all under pressure to get AI deployed and quickly and for us in the tech industry to be able to help do that safely and ethically.

6:41 So I thought it would be worth just giving a few examples of where we're seeing AI deployments at scale.

6:48 Very much like Clare said, organisations have taken a step back and started to pick the one or two use cases where they can see real value quickly.

6:58 So, the ones I wanted to share: WPP, the advertising and marketing giant, have trained 1,000 of their people in AI, and they are using AI across the entire buying journey for B2B marketing: to identify who the buyers are, how to get to them in the most cost-effective way, and how to drive that sales performance for their clients.

7:22 The second: there are probably people from NatWest in the audience.

7:26 NatWest has deployed Cora+, a gen AI-enabled chatbot.

7:31 It's got three large language models backstage.

7:35 It's already delivering very significant benefits in terms of customer service: 94% digital in terms of people using their app, and a 150% improvement in the way their contact centre is deployed.

7:50 And then the third one is University Hospitals Coventry and Warwickshire; completely different space, really simple use case: a bit of process mining together with some AI has yielded, across their outpatient waiting lists, 700 extra doctor hours a week, and they've scanned in 18 hours the volume of patient letters it would previously have taken them four years to scan.

8:19 So very significant improvements all delivered in eight weeks.

8:23 And at IBM, we've delivered $3.5 billion of savings to the bottom line.

8:29 That's all underpinned by AI use cases, looking at everything as an AI use case, across customer services, HR, finance, supply chain, procurement and operations.

8:42 So real value delivered on use cases that you know can drop quickly to the bottom line.

8:48 So that's how we see the market unfolding, across a few different sectors.

8:53 So there is hope; there is definitely hope.

8:56 Let's move to our next topic, somewhat related: the democratisation of data.

9:02 I'll stay with you, Dr. Hodson, if you don't mind.

9:04 Organisations are increasingly pushing for broad access to data through AI tools.

9:11 AI is underpinned by data, and that access is there to drive innovation and decision-making.

9:17 However, balancing this democratisation with governance, security and compliance remains a real challenge, especially in industries like banking where regulatory requirements are stringent.

9:28 Given IBM's leadership in enterprise AI and cloud, how do you see organisations, again, striking the right balance between enabling broad data access while still maintaining robust governance and risk plans?

9:42 Yeah, look, it's like any other disruptive technology.

9:45 You've got to look at the trade-offs that come with adoption, and balance the value that AI can deliver and the investment it demands against the risks that you introduce in doing those deployments.

9:59 There are vast amounts of data being consumed from multiple sources to train AI models.

10:08 And that means business leaders increasingly have to think about AI and data governance as strategies for value creation, in terms of how they grow their business, and that they have to discuss at board level, i.e. not just in the IT department.

10:24 So that's thing one. Thing two: organisations have to ensure that the models and the use cases have transparency and fairness throughout.

10:35 So it's really important to think about AI ethics as a top topic for the board.

10:41 We have an AI ethics board.

10:43 I know many of the companies here today have the same.

10:46 We have principles of trust and transparency and a governance framework that underpins the development of AI all the way through to its deployment.

10:57 It's really important to have those in place and to have that transparency around the large language models and what data they've been trained on.

11:07 There are then some simple things we can all do as we deploy AI models.

11:11 Simple day-to-day principles, taking accountability for the AI solutions in the real world, no matter what role you're in.

11:19 Making sure you're sensitive to a wide range of cultures and cultural norms, not just the ones you know.

11:26 Making sure you work with your teams to address the biases and make sure you're inclusive.

11:32 Ensuring that humans can perceive, can detect and can understand the AI decision process, and then making sure that we enhance our ability as individuals to own and be responsible for our own data.

11:48 So they're just some simple things you can put in place today.

11:52 That means the whole executive layer in a business has to be pretty savvy around AI governance.

11:59 And that means a lot of training needs to go on.

12:02 You have to govern the whole system end to end, not just the bits.

12:06 And then you have to have an empowered leader who's going to take control of AI and data governance.

12:11 So that's the second thing.

12:13 The third thing very quickly is to make sure you've got a platform and use technology to provide an AI governance layer.

12:22 And that can make sure that you mitigate some of the AI risks that you're taking on as you deploy these systems.

12:30 So we built something called watsonx.governance.

12:34 It gives you governance across the entire AI life cycle.

12:37 It makes sure that you can manage risks, stay compliant with regulations, stay compliant with your own policies and procedures, and that you can explain the data: where it came from and what decisions have been made across that data.

12:53 So that gives you some trust and some transparency as you deploy those AI models.

12:59 So pretty much every day, I think, we're all reading on LinkedIn about new opportunities for AI.

13:05 We've got to make sure that when AI is deployed, we do it in a responsible way, so that we can preserve trust.

13:15 You mentioned board-level conversations, Bina, and I'd like to bring you in and get your guidance on this topic from a board governance and risk oversight perspective.

13:26 What are the key considerations that leaders should keep in mind as they work to scale AI and data access responsibly?

13:34 Yeah.

13:34 So I think, first of all, I would say that board governance is evolving too, as we're all beginning to really lean into it.

13:42 So I would say the very first thing: the boards and the leaders I speak to just want to be really clear where the responsibility and ownership of AI sits.

13:51 And it's easier in a smaller business; I'm really delighted that we're Growth Partner of the Year, but we work with smaller businesses all the way through to larger corporations.

14:00 So, having clear ownership. And the reason I say that is, as an accountant I can say this, we need to have an inventory of all the tools that we're using and the problems we're trying to solve.

14:12 And more importantly, the opportunities that those tools bring to an organization.

14:17 And so AI is everywhere in large organizations and the adoption rate in smaller organizations is still growing.

14:23 So we know that.

14:25 So that's the first thing: actually knowing who has that data and who actually owns the AI tools.

14:32 The second thing is, in an organization like ours, the power of AI in a knowledge business is in the hands of our colleagues.

14:40 So actually equipping users to use it safely, correctly and ethically is a really key piece.

14:46 And you talked about enabling, right?

14:47 So it's everything we've talked about.

14:49 But from a board perspective, you want to understand what you're dealing with, and you want to know how it cuts across your business.

14:56 So what does that mean?

14:57 So then you go down to policies and procedures.

14:59 Are they adapting?

15:00 Are they being creative to ensure that we're really clear what the guidelines are for AI development and deployment?

15:06 So we talked about ethical principles; we talked about data privacy, data access, data lineage.

15:11 There are so many layers, and that is about making sure that the board understands that the right mechanisms and controls are in place.

15:19 And talking about controls brings us to risks.

15:22 So risk, every organization monitors risk at different levels.

15:26 So you know, it can be strategic, financial, operational, technical, legal, etcetera.

15:30 AI cuts across all of those.

15:32 If we talk about multidisciplinary AI in our organization, for example, we bring our models into a closed environment called Ava so everybody can play around with it.

15:43 So from a risk perspective, if AI risks are integrated into the risk framework that we have for businesses, then they can be mitigated, managed and monitored alongside this tapestry of complex, interconnected risks that every business is grappling with right now.

16:00 So, just talking about the approach to AI from boards: what I would say is boards are adapting and evolving, and that's because we're evolving how we use AI in organizations, whether it's for ourselves or for our clients, and we're having to equip ourselves and we're learning.

16:18 The one thing I would say is: as boards, if our touch points aren't changing as we speak about AI throughout the year, then we should question why, right?

16:31 So we don't want the same people because it is evolving.

16:33 And then equally, what I hear from other boards is: if you're not progressing on AI, then the question is why?

16:40 Because where's the opportunity?

16:42 So it is actually looking at the opportunities as much as it is about managing risk, in a board context.

16:49 Thank you.

16:50 You did talk a lot about risk there, and I'm going to move on to the next topic of GRC and AI-driven risk management.

16:55 I'm going to stick with you if you don't mind and explore some of this.

16:59 So AI is increasingly being leveraged to strengthen GRC, but we're being told that it introduces new challenges, ranging from model bias and explainability to regulatory scrutiny.

17:13 As someone who advises boards and senior executives on risk oversight, what do you see as the real risk factors that can negatively impact organizations when applying AI?

17:25 And how does that differ from the status quo today without AI intervention?

17:29 So it's interesting you talk about GRC.

17:31 So just recently, only about four weeks ago, we had an event at KPMG with heads of internal audit.

17:36 And in advance of that, we did a bit of a survey.

17:39 74% of them are using AI already in their roles.

17:43 And that's only one element, but it's just interesting that, you know, people are really trying to adopt it and leverage it.

17:50 But from a board perspective, you've got to step back as business stakeholders.

17:55 I would say there are probably three things.

17:56 The first thing is awareness of how and where AI is being used.

18:01 It's really connected to what I just said earlier, but really understanding how and where it's used in an organization.

18:08 And the reason I say that is because there's a risk of duplicative investment and there's a risk that it's misaligned with objectives.

18:15 So let me give you an example.

18:16 One of the clients we work with is absolutely, critically clear, from their risk appetite, that they do not want to use AI models or data from a client lens.

18:28 They will only use it from an internal perspective.

18:31 And that's just a risk appetite thing.

18:33 So understanding that's really important.

18:35 And then actually, it also applies to third-party providers, Quantexa, right?

18:40 No, it is; it's about how our providers are using AI and how that impacts us, and that has legal and procurement implications.

18:47 So from a board-level perspective, you know, we're looking across the risk spectrum to see what that means and whether we're comfortable that it's been embedded.

18:56 It's not surprising that data quality and explainability are risks, and we've talked about them.

19:01 So I won't go into that one.

19:02 And then the third one, I think is the sheer pace of change.

19:07 We can manage risks for a particular outcome or a particular purpose.

19:12 But technological change and model change, we've just seen it today, right?

19:15 Two great models.

19:16 It's changing so fast that the nature of risks is shifting and the capabilities that you need in the organization to manage those is also changing.

19:25 So coming back to the point I made: you know, it's evolving, so you have to be on top of that.

19:29 So for me, that is a really big thing.

19:31 The second part of your question, I think, was about using AI versus not using AI. Yeah.

19:39 So, yeah, I'm not a technologist, but on AI, this is how we look at it on the board.

19:44 AI operates in a system; there is a risk of over-reliance on the product, which removes human skepticism, right?

19:53 So let's just accept that as a starting point: when you don't have AI, where there are gaps or inconsistencies, humans tend to fill them with experience and judgment.

20:04 Doesn't mean it's right, but that's how we tend to fill it.

20:07 So there is an increased risk of bias, but there's possibly transparency with that.

20:13 And there's also explainability.

20:15 We all understand the logic when we explain the logic we've used, whereas with AI you can't see that.

20:20 But one thing I would say is humans interact with many systems and we're also really good at gaming systems.

20:27 That's all.

20:31 Great answer.

20:33 Clare, I'm going to follow up with you if you don't mind on this one.

20:36 Microsoft has been at the forefront of responsible AI development.

20:41 How do you see other enterprises operationalizing AI principles to mitigate risk, again with the potential for much wider adoption of LLMs as cost models and new players become more favorable?

20:58 Yeah.

20:58 I mean, we are proud of the work that we've done on leading on responsible AI for many years.

21:04 I mean, I think actually Nicola did a nice job of summarizing many of the things that other organizations are doing.

21:10 And I think most of the tech companies will have, you know, frameworks in terms of principles of fairness and inclusion and security and all the rest of it.

21:20 When the evolution of AI started, what you've seen is a bifurcation of models, with many new models, whether that be Llama or Mistral or DeepSeek; you know, all these new models are coming.

21:36 And I think in a way that just shows the pace of innovation that's coming and will continue to come.

21:42 And so for organizations, when they started, when they were doing this kind of thousands-of-use-cases thing that I said earlier on, they were in the mindset of: which one am I working with?

21:54 You know: will I work with OpenAI, am I going to do another thing, etcetera.

21:57 Whereas increasingly we are now seeing organizations asking: have I got the right governance framework set up for the choice of models, and am I choosing the right model for the right purpose?

22:12 So we started with the principle of large language models.

22:15 You know, there are many instances where a large language model is just not appropriate.

22:20 You don't need to spend that level of money in order to do it.

22:23 And there's this iteration of small language models, which are right for the right task.

22:30 One of the investments Microsoft made and announced earlier this year was AI Foundry, which is principally an AI platform that puts the right security and governance around all of the language models; you know, after the DeepSeek announcement came, I think we announced it in our model catalogue 24 hours later.

22:50 So that just gives you the sense; but, you know, it's about ensuring that if you are wanting to take advantage of that kind of model, you're doing it with the right security, compliance and infrastructure wrapped around it.

23:01 So I think organizations now are again trying to focus on the things that they want to get done.

23:08 They've obviously got to have in-depth knowledge of which models they're using for what, but they've also got to have the right governance framework in order to make the right decisions for the use case that they're going after.

23:21 Are they deploying it in the right way?

23:23 Because otherwise it could become like the Wild West, you know, in terms of how they think about it.

23:28 And so, you know, similar to the advice I think that Nicola and I have given, I always encourage organizations just to really think about what it is they're trying to get done.

23:39 I would just maybe reiterate the points that Bina made.

23:42 I mean, this is definitely a conversation for the executive team, but it's increasingly a conversation for boards.

23:48 And if you imagine, even as an executive leader in Microsoft, it's quite hard to stay on top of everything that's going on.

23:55 So if you imagine a non-technical executive team, never mind a non-technical board, it's extremely hard to stay on top of it.

24:04 And therein lies the job of all of us, actually: to make sure that we educate them.

24:09 I've personally hosted many boards in our facilities in the US and in London, and actually they're asking me a lot of questions about what risks they should look at.

24:19 And I often talk about this balance of opportunity and risk and how they make sure that they're getting themselves equipped, maybe thinking about the more regulated industries particularly, given, you know, there's a bit of an FS flavor to tonight.

24:32 The other audience that I think is really important from an education standpoint is the regulators, because if you think about appropriateness of risk versus opportunity, and you're operating either for governments or financial services, etcetera, I think you've got to think about how you're working.

24:51 We are all working with the regulators to make sure that they stay on top of that and that's something that we take very seriously with all the regulators globally.

24:58 I won't get into what's going on geopolitically right now, but I think it's going to be an important thing as we think about, certainly for the UK, Europe and other global entities, how we make sure that we use AI to advance opportunity both in businesses and for countries and society.

25:17 You mentioned DeepSeek, and I would love to ask more questions about Microsoft's reaction when that model came out.

25:23 But that's one for the future.

25:24 OK, talking of the future: on the future of AI and regulation, I'll just ask one question because we're running short on time.

25:32 Dr. Hodson, I will go to you.

25:34 With AI regulations rapidly evolving, particularly frameworks like the EU AI Act and whatever emerging US policies come out, compliance is becoming a critical priority for enterprises.

25:47 Drawing from IBM's experience in helping clients navigate complex regulatory landscapes, what proactive steps should organizations take to future-proof their AI governance strategies?

26:01 Look, in the interest of time, I'll keep it brief. But certainly at IBM, we believe in smart regulation, which means you've got to provide the guardrails for society and at the same time create space for innovation.

26:13 And that means the regulation's got to be interoperable.

26:17 It's got to be flexible and it's got to regulate risk rather than the technology itself.

26:23 And Vish mentioned the World Economic Forum; regulation was a big, big topic there this year.

26:30 So the EU AI Act: absolutely critical.

26:35 We were very, very involved in that. But just to cut to the chase in terms of what you were asking around how organizations can future-proof their governance strategies:

26:48 I think there are three things that organizations can do today.

26:52 Firstly, make sure you understand the provenance of your data.

26:57 Is the data the right data?

26:59 Where did it come from?

27:01 How did it evolve?

27:02 Do you understand the data flows?

27:04 Are there any discrepancies in those data flows?

27:07 Secondly, make sure you've got explainability around your AI models.

27:13 We've heard it multiple times in this panel.

27:15 They need to be transparent.

27:17 They need to be explainable.

27:19 That helps us to reduce any harmful disinformation or deceptive content that might come up in the models.

27:28 They're not just a static thing.

27:30 They require maintenance throughout their life cycle, to make sure that you know how the decisions are being made.

27:37 And then thirdly, I mentioned it before, but make sure that you mitigate bias, so that you can then use AI on a broad basis with some level of confidence that the bias is not there.

27:50 You talked about it, Clare: in terms of regulated sectors, financial services, healthcare, we all need to make sure that any decisions taken are not creating unintended consequences, which can have fairly significant repercussions for people.

28:05 And tolerance, as we know, is very low for any misadventure in that space.

28:11 So it's really, really important that we take those three steps.

28:16 And I think for regulated industries, there are lots of software and infrastructure solutions now that are designed for those industries, with built-in controls that help you meet the regulatory requirements.

28:30 So with that, I think I will just wrap up by saying that trust is so, so important.

28:37 We've all talked about it tonight: making sure that we can comply with regulations and support regulators to be really smart, so that we've got the guardrails but we can all innovate at the pace we need to.

28:51 Thank you and thank you, everyone.

28:53 That wraps up our discussion for this evening.

28:55 So thank you to our esteemed panelists, Bina, Dr. Hodson and Clare, for your brilliant insights and for sharing them.

29:05 I hope everyone got as much out of that as I did.

29:09 It's a real honour for me to share the stage with industry leaders such as yourselves, especially as we look forward to celebrating International Women's Day later this week.

29:20 So thank you, thank you everyone.

29:22 Have a great evening.

About the speakers

Clare Barclay

President, Industry & Enterprise, Microsoft EMEA
Dr. Nicola Hodson

Chair, IBM UK
Bina Mehta

Chair, KPMG UK
Derek Brown

Chief Revenue Officer, Quantexa