QuanCon2025

How Advances in AI Could Transform the Public Sector

Dr. Laura Gilbert, Head of AI for Government at the Ellison Institute, explores how data and AI will reshape public services. Discover the potential of AI to grow the UK economy, improve public services, and ultimately enhance the lives of citizens. Dr. Gilbert shares practical insights on leveraging AI to drive efficiency, productivity, and better decision-making in the public sector.

Transcript

0:06 Good evening.

0:07 Thank you very much.

0:08 Can you hear me?

0:10 Wow.

0:10 Well, I tell you what, this is definitely one of the most beautiful audiences that I've ever seen at a tech conference.

0:16 But you're also one of the quietest and I am supremely jet lagged.

0:21 So I'm going to need you to help me out here.

0:24 I'd really love you to help me keep my energy up.

0:27 So I'm going to ask you to do me a favour, just a little favour, not a big ask.

0:32 And what it is.

0:32 I'm going to ask you a question and I'd like you to answer that question with a lot of energy.

0:37 It's going to look like this.

0:38 The answer to the question I'll tell you now is yes.

0:40 So I'm going to ask you the question.

0:42 Then you're going to throw your hand in the air and you're going to shout yes.

0:46 And everybody downstairs is going to wonder what we're up to.

0:49 Hope you're up for this.

0:50 So please just do me a favour, join in.

0:52 Right here comes the question.

0:53 Are you looking forward to my talk?

0:55 Yes, thank you so much.

0:58 Much better.

0:59 I feel immediately very energised and engaged.

1:02 Really appreciated.

1:03 My name's Laura.

1:05 I've just started a new job, very exciting.

1:07 I'm the head of AI for government for the Ellison Institute and the Tony Blair Institute.

1:11 We're doing a joint programme and it's really expanding on what I was doing before this, which is trying to get governments to make better decisions.

1:19 And we're doing that with AI, which is one of the really good use cases for that.

1:23 And I can't talk much about the work because it's just started.

1:26 So it'll be a little bit early, but I'm still an expert advisor to the British government and to my old AI team there.

1:32 So I'm going to talk a little bit about what we've been doing there, and we're going to carry on and do it bigger and better.

1:38 So I joined Downing St.

1:39 in September 2020, sort of mid pandemic as the first director of data science.

1:45 I didn't really know I was a data scientist.

1:47 That actually hadn't occurred to me.

1:48 I was originally a particle physicist.

1:51 I did that for about 8 years.

1:52 I then went into quantitative finance.

1:55 I was a quant working in hedge funds, three hedge funds actually, two of them collapsed entirely and one of them lost a lot of value in that time.

2:04 So curious statistical anomaly, it's not been explained.

2:07 And after that, I went off and started a med tech company with a colleague, and the two of us took it over nearly 10 years from SME to acquisition, and exited on the 2nd of March 2020, which was a very interesting time to start thinking about what you're going to do next.

2:27 So COVID, of course, immediately hit.

2:29 And a few months later, I was sitting down, I'd had a couple of glasses of wine, and I saw this job, Director of Data Science in Downing St.

2:36 and I thought, I'll chuck a CV at that, that'll be fine.

2:39 And I was really, really very surprised to get the job.

2:44 And so I went in and I'll never forget it, really.

2:47 I went in on my first day, black door, lovely, shiny.

2:50 And I went in to sit down with my new boss, the principal private secretary.

2:54 And he said welcome.

2:56 And I said thank you.

2:58 And he said, I've got one piece of advice for you.

3:00 And I said, oh, brilliant, what is it?

3:02 And he said, you have six months to have an impact.

3:04 And I said, oh, great, what would you like me to do?

3:08 And he said, oh, I don't know.

3:10 And so I had to figure out what to do.

3:13 The good news was I had a plan.

3:14 I was pretty sure that what we needed to do was to get some data and evidence into government decision making.

3:20 And what a great place to do it right next to the Prime Minister.

3:23 So I bounced in, you know, with a few young people to help me.

3:27 And there are big decisions coming up.

3:29 We're going to get you the data.

3:31 We're going to help inform you.

3:33 And decisions are going to get better.

3:35 And so we went up to people and said, we've got some lovely data, would you like it?

3:38 And they said, no, not really, we're very busy, go away.

3:42 Not true of everybody.

3:43 There were people who were really quite excited.

3:46 But it set me back slightly because I'd assumed everybody would be as excited about this as I was.

3:51 And they're not, because they're normal human people.

3:53 And not everyone's quite as keen on data as I am.

3:56 So I started thinking, well, why do I like data as much as I do?

4:00 And the reason I like it is because it helps you to make decisions.

4:03 And researchers tell us we make about 35,000 decisions a day, which is a lot.

4:09 So having evidence to do that well can be really, really impactful.

4:13 It also taught me that data isn't really just about those numbers.

4:16 A lot of what we're interested in with data and evidence is actually how people operate and how they interact with evidence and information.

4:24 How do you get somebody that isn't necessarily that invested in your evidence to listen to it?

4:31 And I started looking at the evidence around that, and I learned a lot.

4:34 And I won't go into it in great detail now because I'm going to talk to you about AI, but I will tell you one trick that's really useful.

4:40 If you want people to do you a favour, you give them a compliment.

4:46 You beautiful people, lovely.

4:48 And what's really good about that is it works very, very well if the person you're complimenting likes you and thinks the compliment's very sincere.

4:57 It also works if they don't like you and they don't think the compliment's sincere.

5:02 It's a heuristic.

5:03 It's an almost automated response in people: if you give them a compliment, they feel like they have to do you a favour.

5:11 It's interesting, isn't it?

5:12 And we've all sat at our desk when someone's come by going, Oh my God, I love your shoes.

5:15 Could you just see this report?

5:19 Yeah.

5:19 And as they walk away, you think, I hate that guy.

5:21 I don't know why I've agreed to that.

5:26 The other really good thing is that if you can get people to commit to something, even a really small commitment, they'll think that it has more value.

5:38 So if I can get you to throw your hand in the air and shout yes for my talk rather than just sitting there, you'll think it's better than it really is.

5:45 Isn't that good?

5:47 So I learned a lot doing this job, and I'd been there for a couple of years when ChatGPT hit.

5:52 And of course, we're data scientists, so we were doing machine learning and, you know, varieties of AI.

5:59 Anyway, this wasn't particularly news.

6:02 We were already using sort of basic versions of large language models.

6:05 We were looking at maternal deaths.

6:08 Actually, I think the number of women that die in childbirth is still unacceptably high.

6:13 And we were trying to use large language models to read incident reports and try and work out why that was, and also why five and a half times as many black women die in childbirth as white women in the UK.

6:24 So it wasn't a completely new topic, but all of a sudden it was cool.

6:30 We started getting invited to the parties.

6:33 Lovely.

6:34 So you know, what happened was everybody suddenly became interested. Initially it was very exciting, but very quickly it started to really worry people.

6:46 And I, you know, had a very frightening conversation with Geoff Hinton actually, who explained his views on the risks, and I started to really be quite concerned about that, started to internalise the way this could really work and the kind of misinformation and disinformation.

6:59 And a group of us was sort of a little bit worried about all this.

7:03 We thought we'd better do something.

7:04 And so the government sort of got itself together, and there was the Safety Institute set up, and there was a summit, and there was a new resilience directorate that were, you know, going to put mitigations in place.

7:16 And I was invited to a lot of conferences, a lot of meetings, a lot of round tables.

7:20 And everybody's asking over and over and over, you know, what are the risks?

7:24 What's the biggest risk?

7:25 What are these terrible things that could happen?

7:27 And some of them really did look really terrible.

7:29 But the more I sat on those panels and listened to people, the more I started to think very much as you've probably been thinking, that the biggest risk might be that we just don't do anything and we don't grasp any of the opportunities.

7:43 And that is very, very harmful.

7:48 So it sort of comes in two spheres.

7:50 One, one of them is sort of a security aspect.

7:53 And of course, if you're particularly if you're in finance, but if you are in an organization where you've got to have data security and you're at risk of being hacked and it would have really bad outcomes, what you do is you go and find the best hackers and you try and get them to work for you, don't you?

8:07 Let's get the best hackers, bring them into the organization, point them at the problem, and we will therefore prevent anyone else getting in, because they're the experts. And government needed experts.

8:18 We needed the best AI people who could come in and tell us and inform us and build around the kind of threats that we were facing.

8:25 But we're also in a world where public service is very important, the National Health Service particularly.

8:31 It's incredibly expensive.

8:33 We have an ageing population, and, you know, the process by which you give people benefits and manage the population and give people public services

8:41 often runs on infrastructure that's ageing, and maybe it wasn't built for exactly that purpose.

8:47 That is something that, if we don't really grip and embrace and power forward with data and AI, we will not be able to sustain, and we won't have the level of public services that we want people to be able to expect.

9:00 And so therefore I went to build this team, the Incubator for AI: 90% engineers.

9:07 That's very unusual in government.

9:09 Even the Safety Institute has a staff of 200, and I think about 30 of them are engineers.

9:13 So this is very tech heavy.

9:15 And we went and stole them from all of your companies.

9:18 Quite a lot of the people on stage, sorry.

9:20 And we got them to come and build and we knew they knew how to do that because they've been doing it in, you know, in industry and doing it really, really well.

9:27 And it was very exciting.

9:28 It's an exciting mission.

9:30 We decided to go a little bit differently to the normal civil service, ignored a lot of the red tape if we're honest, but we adopted a mantra, which is radical transparency.

9:40 Tell everyone what we're doing, put the code out into the open.

9:44 And by doing that you get two things.

9:47 First of all, you know, people can come and look at it.

9:49 They can tell us if we're doing something wrong.

9:51 They can sort of challenge and there's also this big piece about trust that becomes very, very important.

9:57 And one of the first things that happened when we announced it, it was actually the then Deputy Prime Minister who announced the Incubator, was that everybody pitched up on Twitter and they largely said two things.

10:06 One of them was government's not competent to do this.

10:10 And the other one was, oh, we bet you're going to do bad things with it for us.

10:14 You're going to take our benefits or reduce our services, and it's going to be not good for us.

10:20 So we wanted to put everything out in the open and say, no, we really are not trying to sort of make things worse for you.

10:27 We're trying to make things much better.

10:29 ai.gov.uk, you can go there now.

10:31 You can download code, you can see what's being built.

10:34 You'll see interviews and blogs and all sorts of things.

10:37 And we wanted to tell everybody that this is what we're trying to do.

10:41 We're trying to make government more human using artificial intelligence, not less.

10:47 And that's still my mission now.

10:49 I think it's incredibly important and it's incredibly exciting and it's across the public sector.

10:55 We need to use this to actually, and we've had examples earlier to make things more human.

11:00 A lovely example in the Department for Work and Pensions.

11:04 A lot of people write in thousands and thousands of handwritten letters when they need help.

11:09 And you know, a lot of those people maybe aren't super tech enabled and a lot of them really need help.

11:15 And a lot of them are desperate.

11:16 Not a majority, but some.

11:20 And it takes 50 weeks for somebody to read that letter and be able to do something about it.

11:25 There's not the resource there.

11:27 And in 50 weeks, somebody who's desperate may not still be here.

11:31 So now the AI will read it immediately when it arrives: it will obviously transcribe it, and it will do a sentiment analysis and try and figure out whether the person who wrote that letter might be vulnerable and might really need help.

11:44 And then someone will phone them.

11:46 You've made a service more human.
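As a rough illustration of the pattern described here, the sketch below shows how transcribed letters might be triaged with an off-the-shelf sentiment model plus simple keyword flags. It is a minimal sketch only: the model, the phrase list and the triage rule are illustrative assumptions, not the DWP's actual system.

```python
# Minimal triage sketch (illustrative only, not the real DWP pipeline).
# Assumes the handwritten letter has already been transcribed to plain text,
# and uses an off-the-shelf sentiment model plus crude keyword flags to decide
# who should get a phone call first.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # generic pre-trained model; an assumption

# Hypothetical phrases that might indicate vulnerability; a real service would
# use a properly validated, safeguarded classifier.
VULNERABILITY_PHRASES = ["can't cope", "nothing left", "evicted", "no food", "desperate"]

def triage(letter_text: str) -> dict:
    """Return a rough priority signal for one transcribed letter."""
    result = sentiment(letter_text[:512])[0]          # truncate long letters for the model
    negative = result["label"] == "NEGATIVE"
    flags = [p for p in VULNERABILITY_PHRASES if p in letter_text.lower()]
    return {"needs_urgent_call": negative and bool(flags),
            "sentiment": result,
            "flags": flags}

print(triage("I am desperate, I have been evicted and I can't cope any more."))
```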

11:48 And we need to do that across the system.

11:51 Teachers, 50% of all teachers leave within four years.

11:56 They don't feel they can do the job.

11:57 It's not the job that they thought it was.

11:59 There's paperwork, there's admin, it's, you know, it's difficult, it's challenging.

12:03 Can we give them tools where they can do that job and spend a lot more of their time working with the children, which is, I would suggest, why most of them do that job?

12:13 Similarly with nursing, lots of public sector work, we really want to be working towards a place where people are safer and they get that human touch and you get more of it, not less.

12:22 And I've got one example of that I brought in, which is my favourite, called Caddy.

12:27 We have lots and lots of tools.

12:28 You can go and read about them.

12:29 This one is very, very simple and we built it with Citizens Advice.

12:33 And the question you're trying to answer is, can you make the advice better for the public?

12:37 We don't want to give an AI to the public to replace that human supporting them because people who call Citizens Advice, they might be in trouble.

12:47 They need some human support, but we do want to help the advisers to do a better job, to give out the help and be more successful.

12:55 And so what Caddy does is it's integrated in your Teams or your Meet or your Slack, and you can chat to it the way you would a colleague.

13:05 Caddy, you know, my member of the public has had a Section 21 notice.

13:09 The landlord says this, is it legal to do that?

13:13 And Caddy will reply and give you a fully referenced answer.

13:16 No hallucinations, linked to the source, etcetera.

13:19 It runs past the supervisor just to double-check, because it's not something you want to mess up.

13:24 It's very, very important.
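To make the shape of that concrete, here is a deliberately toy sketch of the same pattern: retrieve relevant guidance passages, build an answer only from them, attach the references, and hold the draft for supervisor approval. The corpus, URLs and retrieval method are invented for illustration; the real Caddy code is published openly and differs in detail.

```python
# Toy sketch of the Caddy pattern: answer only from retrieved guidance,
# cite the sources, and queue the draft for a human supervisor.
from dataclasses import dataclass

@dataclass
class Passage:
    source_url: str
    text: str

# Invented guidance snippets; real deployments index actual advice content.
GUIDANCE = [
    Passage("https://example.org/housing/section-21",
            "A Section 21 notice must give at least two months' notice and use the prescribed form."),
    Passage("https://example.org/housing/deposits",
            "A tenancy deposit must be protected in an approved scheme within 30 days."),
]

def retrieve(question: str, corpus: list[Passage], k: int = 2) -> list[Passage]:
    """Rank passages by crude keyword overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(corpus, key=lambda p: len(q_words & set(p.text.lower().split())), reverse=True)
    return ranked[:k]

def draft_answer(question: str) -> dict:
    """Build a referenced draft; a real system would use an LLM constrained to the passages."""
    passages = retrieve(question, GUIDANCE)
    return {
        "question": question,
        "draft_answer": " ".join(p.text for p in passages),
        "references": [p.source_url for p in passages],
        "status": "awaiting_supervisor_approval",  # the human check described above
    }

print(draft_answer("My member of the public has had a Section 21 notice. Is it legal?"))
```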

13:26 And we ran a lovely evaluation on that, you know, working very closely with Citizens Advice to design it.

13:32 And we had to do it twice.

13:34 The first evaluation made a little bit of a mistake.

13:36 We thought, clever.

13:37 We'll run a randomized control trial.

13:39 And the way we'll do that is, well, you'll spot the flaw in this.

13:44 The way we're going to do that is that when somebody asks for advice, we'll either give it to the AI or we'll send it secretly to the human.

13:52 And then we can evaluate, you know, which advice is better.

13:56 But of course, the people, the advisers, they could tell the difference, and they really liked the AI advice.

14:02 So they just kept asking the question over and over until they got the AI.

14:06 So we had to redesign that study a little bit, but you know, that's quite validating.

14:11 And the results came back and there was a 60% increase in the number of people whose problems were resolved and advisors felt 2 1/2 times more confident, you know, response times were shorter and so on and so forth.

14:25 So it's just a lovely, very simple piece of tech that you can build very easily in this sort of modern world to make your services feel better for people, without taking away that sort of human quality that we think people really, really appreciate.

14:41 And so we've been sort of doing that across the board in many ways.

14:43 There are pieces to pick up prescription errors that we're trialling at the moment.

14:49 Academics tell us that up to 22,000 people a year are killed by their prescription profiles, and it costs about a billion pounds a year as well.

14:58 We have a lovely piece of AI that I'm a very big fan of.

15:01 It's called Lex.

15:03 It actually generates legislation.

15:06 Legislation is very, very difficult.

15:07 It's really hard to read, you know, lots of bits stitched together from various adjustments.

15:12 And this will go and let you draft legislation.

15:15 And people in government are using it to draft legislation, you know, that is really robust and does exactly what you want it to do.

15:22 And it comes with a lovely hook for the user.

15:26 And it's great when you've got a hook for your user, which is that when you've drafted your legislation, it does an analysis of how MPs feel about it, and it will predict whether the vote will pass and whether it's looking really positive.

15:40 It will tell you why people don't like it.

15:42 Maybe you can adjust the legislation.

15:43 You've got a better chance of meeting people's needs.

15:45 So there's all these sort of really interesting pieces of work that you can do with AI.

15:49 And what you're probably thinking, if you've given it any thought at all, is: how do you handle all the data problems?

15:56 Data problems in government are massive.

15:57 And we've worked very hard on it.

15:59 The data science team in No. 10 went from really no automated data at all.

16:03 In fact, when we, when we started, there was one data feed which was a weekly screenshot of a table.

16:12 I won't tell you which statistical body sends that, but that's where we started: lots of Excel spreadsheets. And now there are about 8,000 live data feeds going in, and we have, again it's actually on the GitHub repo for the Incubator for AI, a lovely free open-source API, and there's a no-code interface and all sorts of things.

16:32 And so we've been implementing that throughout government.

16:34 But still there is a problem.

16:36 Lots and lots of data just isn't available.

16:38 Some of it isn't collected, or maybe it's not collected particularly well.

16:42 The standards are often not there.

16:45 And at the moment, actually, that doesn't really stop us.

16:48 And it will continue to not stop me in my new job where I'm trying to do this for many, many governments.

16:53 Because in some ways, governments are actually so retro that if you can't do this or this or this or this, there's still thousands of opportunities to automate something useful somewhere else.

17:05 You know, there's other pieces, there's ways to work around and maybe we can use open source data, but that won't last forever and it's not the fastest way to do things.

17:13 We're quite opportunistic.

17:15 Let's find an area where we can do this.

17:17 We do really need that transformation.

17:19 Quantexa, of course, works for the Cabinet Office on public sector fraud, and very successfully.

17:26 I know they're very, very happy with that relationship.

17:28 We really do need to get, across the piece, to where everything that's very important to government, that involves the way we serve the public, has this standardised, automated data that we can build these sorts of tools on top of.

17:40 So for anyone who's looking to engage in government, now's really the time.

17:45 There will be continuous investment in this.

17:49 More and more people in government are really starting to see it.

17:52 They've got their hands on it.

17:53 They can pick up a tool and use it and they start to understand from that hands on experience with AI what the benefits look like.

18:00 So it really is the case that now's the time, and those standards need to sort of push out.

18:06 Internationally, we have a lot of work around regulations.

18:10 Different nations are approaching regulation differently at the moment.

18:13 The UK's approach is to sort of keep it in the existing regulators.

18:17 If you're regulating healthcare, you should be able to regulate AI in healthcare, is the sort of idea.

18:23 But, you know, if we're not careful internationally, we'll set up regulations that are conflicting in different areas or that it's very difficult to work in.

18:31 And in some places, we're already seeing regulation arbitrage, where people are housing their companies in countries with sort of easier-to-work-with regulations.

18:40 So there's a lot for the industry to do here and to really explain to people why you need the data and why you need to worry about the context in which you're using the data as well to drive better public outcomes, which is the thing I care about.

18:53 We do, of course, have assurance.

18:55 There are some assurance statements that are fairly cautious and they're exactly how you'd expect them to be.

19:02 But what actually matters to me when we're building these tools is the ethics.

19:08 And generally, I think the community of technologists in AI is fairly aligned on what ethical AI looks like.

19:15 And we have a number of examples in governments where we've done this wrong.

19:19 And it's, again, very, very important that we get the message across about how to do this right.

19:25 Generally, I think these are the accepted standards. Transparency: what are we doing?

19:30 What are we doing with your personal data?

19:31 And we've got this wrong before.

19:33 You'll all remember the GCSE results algorithm where people's data was used to give them a grade that could impact their future.

19:42 And we weren't transparent about the way that was happening.

19:46 We don't think that should happen again.

19:47 We think we should tell people.

19:49 And that way, of course, if you're getting something wrong, someone will spot it and tell you.

19:53 We still think explainability is incredibly important.

19:58 Everybody will know that a lot of AI algorithms work better if they're not explainable.

20:02 Black box algorithms can be much more efficient.

20:05 Much faster and more effective.

20:07 But if you go to see your doctor and they say to you, well, we think you've got cancer, you have cancer of the lungs, and you say, gosh, what?

20:16 How can you tell?

20:17 I don't know.

20:18 The AI just says so. I mean, it could be wrong, of course, but that's not the human experience; it's not what we're looking for.

20:26 So we'll use a slightly less efficient algorithm where we're going to constrain it.

20:31 So I can explain.

20:31 We still think that's very important.

20:33 Fairness, this is very interesting.

20:35 I find this area fascinating.

20:37 Fairness, equality, equity.

20:40 It's really obvious we've made mistakes before.

20:42 There was a passport photo checking algorithm and it would tell you if your passport photo was going to pass automatically.

20:50 Love the idea.

20:51 It rejected twice as many black women's faces as white women's faces.

20:56 That's a statistics problem.

20:57 You know, you really have to think about how you're doing the statistics behind this.

21:02 We really can't do that.

21:04 We lose public trust.

21:05 So we need to put a lot of effort in as a community, working internally and externally with government, to make sure that doesn't happen again, to make sure that we're really testing our input data and our output data effectively.
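One very simple form that testing can take is comparing the automated rejection rate across groups in an offline evaluation set before anything ships. The sketch below does exactly that; the counts and the 1.25x threshold are invented purely for illustration.

```python
# Minimal disparity check: compare automated rejection rates across groups.
# The outcomes below are made up for illustration; a real check would run on
# a representative, properly labelled evaluation set.
from collections import Counter

outcomes = ([("group A", False)] * 95 + [("group A", True)] * 5 +
            [("group B", False)] * 90 + [("group B", True)] * 10)

totals, rejections = Counter(), Counter()
for group, rejected in outcomes:
    totals[group] += 1
    rejections[group] += rejected            # True counts as 1

rates = {g: rejections[g] / totals[g] for g in totals}
baseline = min(rates.values())
for group, rate in rates.items():
    ratio = rate / baseline if baseline else float("inf")
    flag = "INVESTIGATE" if ratio > 1.25 else "ok"   # the threshold is a policy choice
    print(f"{group}: rejection rate {rate:.1%} ({ratio:.1f}x baseline) -> {flag}")
```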

21:18 But I will tell you, probably about 15 years ago, I was building an early version of, you know, the GP patient video chat.

21:28 And with a lovely little bit of Python, you could tell somebody's heartbeat from the laptop, even then, just from a webcam, because when your heart beats, your face changes colour a tiny amount.

21:42 We can't detect it as humans, but on a webcam 15 years ago, you really could, so long as you're quite pale.

21:50 Doesn't work for dark skin at all.

21:53 And of course, there have been various, you know, pulse trackers on wristwatches that have done the same thing, don't work on darker skin.

21:59 But what do you do? Imagine if that technology had been incredibly valuable and it would have saved £10 billion a year for the NHS; what would you do then?

22:11 Because we need that money, we need that money to treat people.

22:14 But then it wouldn't be fair.

22:15 So sometimes you have to find other ways to give people an equally good service or an equally good outcome that aren't directly technological.

22:24 So there's something really in this about thinking how we approach that and making sure everyone gets a really good outcome.

22:29 And finally, auditability. A horrible case in the States, everybody knows about this, where there was an algorithm that was supposed to decide whether or not people were let out on parole.

22:40 And again, it was a professional failing really.

22:42 The data scientists decided that if they deleted the row that said ethnicity, then it wouldn't be racist.

22:50 That doesn't work at all.

22:52 If you are, particularly in some parts of America, you know, if you're black, you're not treated equally to a white person from before you're born.

23:01 Your nutrition is different.

23:04 Your access to healthcare might well be worse.

23:07 You're probably born in a lower income area, in a lower income family.

23:11 Your parents are more likely to be intercepted and stopped by the police.

23:16 You're more likely to be arrested if you are stopped.

23:18 You're more likely to have a longer sentence.

23:20 You're less likely to be released if you're black.

23:23 And that is baked into the data.

23:25 So just deleting that line didn't work and you could have predicted that.

23:28 But what was worse about that was actually sort of the contract terms, because people spotted it and they went back to the contractor that built the algorithm and the software.

23:38 And they said, well, look, you know, we think this is racist.

23:41 We want to have a look at the data and see, we want it proved, you know, is this really how it's working or not?

23:46 And the company says, well, our contract doesn't say that you have to be able to look at the data.

23:52 That's a pretty major failing.

23:54 So when we are building these things, and particularly my interest is in public sector, we have to be very, very careful about this.

23:59 And a lot of it comes down to industry.

24:02 You know, governments don't have that many technologists.

24:04 We need you as a community to continue to really embrace and push forward these standards and make sure that you're helping public sector to deliver a really good outcome for people.

24:16 And I think that's just incredibly important.

24:18 So, what we are definitely doing in governments going forward: efficiency, productivity, reduction of repetitive tasks, that's definitely happening; cost savings, a major driver, already started going. Intelligent teaching systems.

24:32 There's huge investment in this.

24:33 Can we get kids to have more equal access to education?

24:38 Can we support them individually, and can we build a welfare wraparound and stop kids getting hurt or left behind? Then affordable and targeted preventative healthcare.

24:48 Save money before we spend it; be able to continue to fund our health services by doing just a much better, more tailored job of keeping people healthy and safe earlier on. Safer roads and workplaces.

24:59 I mean, we've all seen this.

25:00 AI is better at detecting a crack in a bridge than a human; that's been going for a while.

25:04 Better decision making.

25:06 Get the information to the person that's making the decision.

25:09 Get it there faster.

25:11 Find out who needs someone from the Department for Work and Pensions to call them and help them, and then hopefully automated space travel, because why wouldn't you do that?

25:18 Fun, very exciting.

25:22 Also some bad things.

25:23 We all know this as well.

25:24 It is easier for fraud and security breaches.

25:26 We need to do a lot of work to prevent things getting worse.

25:29 Similarly, dis- and misinformation, and manipulation of people, particularly online.

25:35 People selling us things that are harmful for us in ways that we don't want.

25:39 People changing our social structure, people using AI to change the way we communicate, the way we think about other groups.

25:46 Very, very harmful.

25:47 If we're not careful, we need to work against it.

25:49 Obsolescence of the human workforce is really interesting.

25:52 I didn't ever think that the first people to lose their jobs would be photographers and software engineers.

26:01 That was a real surprise.

26:03 So we need to really think about how we as a society adapt to sort of changes in the workforce and the way we skill people.

26:10 There is potential for catastrophic outcomes.

26:12 We haven't seen much of it yet, which doesn't mean we won't.

26:16 The biggest one for me is the potential to widen inequalities.

26:20 We can use tech to make things more equal.

26:23 We can actively choose to use technology to give people better opportunities, to reduce wealth gaps, to give people better and fairer access to treatment, less racist access to treatment, you know, more balanced.

26:38 Find all the people that are currently missing out and close that gap.

26:43 Or we can let it get widened, and all the people that have the money have all the cool tech and everybody else is left behind.

26:48 And that really worries me.

26:49 And we don't really know what the societal impacts will be on people's psychology going forward.

26:54 We know, for example, that kids, teenagers on average, would rather speak to an AI psychiatrist that they know isn't judging them than a human one.

27:02 And that's very different to the way that my generation would view that.

27:06 So we don't quite know what will happen next.

27:09 This is my last slide and it comes back to the point at the beginning.

27:13 And I tell everybody this all the time because I think it's incredibly important.

27:16 So bear with me and I'm going to tell you this as well.

27:20 I went into government and I tried to give people data and not all of them were very keen.

27:24 And the reason is largely because they're experts.

27:30 Philip Tetlock did this lovely study.

27:31 If you don't know it already, have a quick read.

27:34 It was actually in 1984 that he started: he found 284 political experts, and he asked them 100 questions about the future.

27:44 And if you think about it, predicting the future is the basis by which we make decisions.

27:49 So you say, well, I think that if this, then that, and therefore I should do this.

27:53 And he got them to predict the future.

27:56 And he wrote a paper.

27:57 He waited 20 years to see if they were right.

27:58 Very simple.

27:59 You know, in this area of policy, it's currently like this.

28:01 Do you think it'll be more, or less, or the same again?

28:04 And after 20 years, he went and reviewed all of their answers, checked them against the real world.

28:08 And he wrote a lovely paper.

28:09 And Daniel Kahneman summarized it.

28:11 He said the policymakers did about as well as monkeys throwing darts at a dartboard.

28:17 A tiny bit better than random. Worse than what

28:20 is fancifully called in the paper

28:22 a minimally sophisticated statistical model, which means take the average, draw a line through it.

28:28 They did better outside their field than in it. If you were an educational policy specialist, you were more likely to correctly predict the future of defence than of education, and they were phenomenally confident that they would be proved right.

28:43 So this is actually very important.

28:46 People come up to me all the time and they say what's next for AI?

28:49 Like, I have absolutely no idea.

28:52 You'd be better off asking the binman.

28:53 Clearly I'll do worse, right?

28:56 I'm a technology expert.

28:57 You shouldn't ask me.

28:58 We know this is a really bad idea.

29:01 So what this tells us is that we really have to be prepared, rather than guessing at what the future holds and being confident about it, and build our businesses and our society to be resilient against that.

29:15 But you know, a study around the same time put expert technologists' predictions at 80% wrong.

29:20 And it's probably worse than that now because it's harder.

29:23 So we have to be prepared.

29:25 Don't assume you know what the future looks like.

29:27 Don't build that.

29:29 You need to build the canaries in the coal mine.

29:31 You need to watch the industry.

29:32 You need to be ready to adjust to changes in technology and data, in risk landscape, in geopolitics and be able to adjust accordingly.

29:43 So that is my slightly depressing warning there, but it's also a really good way to build for the future and we need you to do that really well.

29:53 Thank you.

About the speaker

Dr. Laura Gilbert

Head of AI for Government, Ellison Institute
