126: Professor Scott Donald – Should Trustees Use AI?

05/01/2026 40 min Season 8 Episode 126

Listen "126: Professor Scott Donald – Should Trustees Use AI?"

Episode Synopsis

In episode 126, Scott Donald, Professor at the Faculty of Law and Justice of the University of New South Wales, breaks down how artificial intelligence is reshaping the work of superannuation trustees. Efficiency is the big draw, but legal and ethical risks mean trustees are moving carefully. AI is already embedded in parts of the finance sector, from document summarisation to risk management, yet its tendencies to hallucinate and behave inconsistently remain serious hurdles. Scott explores where AI can genuinely add value and discusses its application to investment strategy, compliance and even private-market valuations, while stressing the need for strong human oversight. Enjoy the show!

Overview of Podcast with Scott Donald, Professor at UNSW

02:00 Using AI as a trustee is a little bit different because you are managing somebody else's money
04:00 AI can be applied where a trustee knows what information to look for, but just asking it to go and look for something can be quite dangerous
07:30 Trustees have an obligation under the SIS Act to formulate an investment strategy. I think it would be very dangerous to use AI here
10:00 Risk is where you don't think to look; AI can help with that
12:30 AI models don't really hallucinate. They don't see things that are not really there, because they don't care about the truth
14:30 In contrast to a fund manager, a trustee often has to answer to the Australian Financial Complaints Authority (AFCA), and it will ask you to justify your decision. 'The machine said it' is not an answer that is going to work
18:00 How human interaction with an AI model occurs is actually quite crucial, and we haven't really grappled that to the ground yet
24:00 Should trustees use AI at all? "I think they should consider it, because it can drive down costs"
30:00 Most of the AI systems out there are trained on datasets that are massive compared to the data in a super fund
37:00 As investment and legal professionals, we have to be aware that some of the skills that got us to where we are now are no longer worth the cost to us to acquire

Research paper: Donald S, 2025, 'Artificial Intelligence and Super Fund Trusteeship', Company and Securities Law Journal, 41, pp. 137-157

Full Transcription of Episode 126

Wouter Klijn 00:00 Welcome to the [i3] Podcast. I'm here today with a return guest, Scott Donald, who is a Professor at the Faculty of Law and Justice at the University of New South Wales. How did you come to research this topic?

Scott Donald 00:24 Look, it's very difficult to avoid the issue of AI. It comes up everywhere: in the news, in talking to trustees about what they're doing, the plans they have for next year, and so on. So for a lot of trustees, it's a really important issue. Trustees typically don't have enormous resources to spend on things, and they've got an enormous list of things they've got to get through. So it's a natural place for them to look for efficiencies and ways to get things done quicker, more rigorously, perhaps cheaper. I was hearing on the grapevine that they were really interested in this, but also a little bit nervous. You know, what were the risks? What, from a legal perspective, might be some of the issues?
And so that was really how I started to get engaged in this: to think, well, we know trusteeship is a little different. It's not just about managing your own money. You're managing money for someone else, and that does change things a bit. So that's how it came about.

Wouter Klijn 01:22 So did you find that they were already dabbling in AI, or were they more curious?

Scott Donald 01:27 I think most of the big financial institutions are well down the track of thinking about how they can employ AI in different areas. And so the trustees that are part of those big institutions were hearing things, or being told that they should consider different ways of organising their operations. But just generally, even at conferences, you'd see people talking in groups, or presentations from people spruiking the advantages of AI. So they were coming across it in lots of different ways, and there'd be very few boards, super fund boards or managed investment scheme boards, that haven't thought about or discussed: how might we use this? Could we do this, or could we do that? But it's hard to get independent advice on it, because the expertise in the area is so much in the hands of those who are selling the various products. You're sitting there as a trustee with lots of other concerns to do with the administration of the trust, the investments and so on, and now you've got: hang on, what do I do with AI? It's not an easy area to get into.

Wouter Klijn 02:32 So what are some of the obvious applications of AI for a trustee? I could imagine that, you know, they get large amounts of documents to read, and a core capability of AI, at least with large language models, is summarisation. Can they just chuck it into AI and have it tell them what they should know?

Scott Donald 02:52 Well, that's always very tempting, and sometimes it's a good idea and sometimes not so much. I mean, AI is very good at collating information. You can send it out to search for things, find things. It's helpful if you know what the lay of the land is beforehand, because it's not quite so good at disclosing that it hasn't actually found something, or doesn't know. And we'll maybe talk a little bit later about hallucination and the way it can create what appears to be information. So certain types of summaries it's very useful for. Some of the trustees we've been speaking to are using it in very controlled environments, circumstances where they know what type of information is likely to be relevant and what type of information they need to bring to bear on a decision, and then I think it's much safer. I think just asking it to go and look for something could be quite dangerous. I don't know if you've ever tried to plan a trip somewhere using it, or tried to find the date for something; it will quite happily come back and tell you the wrong answer, and you've got no way of second-guessing whether that's right or not. So I think that kind of summarisation is useful, but you need to be really sensitive and thoughtful about whether this is an area where we know what sort of information we're likely to find.
So, you know, the insurance companies have used AI very effectively to better understand the nature of the risks, and the sorts of information they need, to work out what the underwriting risks are around their book, but they're looking for specific things. They're interested to find patterns that they might not have seen otherwise. That, I think, works quite well. If you were trying to do it in an area where perhaps you didn't know what sorts of answers you wanted to find, it might be a bit riskier for you.

Wouter Klijn 04:49 Yeah. In your paper you list a few areas where it could potentially be applied, and I picked out a few that I thought were interesting for the investment environment, and one of them is investment strategy, mandate compliance and private market valuations. Now those sound like complex topics. Is that an obvious area to apply AI?

Scott Donald 05:16 Let's start with the investment area. If you're a superannuation trustee, you actually have a responsibility under the SIS Act to formulate an investment strategy. That is one area where I think you'd be very nervous about using AI, because the Act says that you as trustee, and therefore your directors, have a legislative responsibility to do something, which means that if it ever comes into question as to how that was done, you need to be able to demonstrate how you went about it. So at the highest level, the investment strategy, I think you need to be careful about how you might use it. But beneath that, funds, investment managers and investment firms have been using AI for some time to go looking for trading strategies, to go looking for risks, to go looking for other sorts of things that maybe wouldn't be apparent using normal kinds of analytical techniques. And I think it can be tremendously powerful there, because at that point it's really a flow down from the overall strategy. Hedge funds, investment managers and others have been using it there quite productively for some time.

Wouter Klijn 06:33 Yeah, and that makes perfect sense. I think in the area of private market valuations, there's a lot of focus on that by the regulator. They want to see more of it, potentially more look-through. How can AI be applied there?

Scott Donald 06:47 Look, one of the nice things about listed markets is you've got lots of people looking at the data from lots of different angles, asking lots of questions, and any finance theorist will tell you, under the efficient market hypothesis, that all that information is going to come to bear. In private markets, you don't have that quite so much. And if you're trying to work out what the value of a property is, drawing together all the leases and all the rest of it can be incredibly time-consuming. People make mistakes, not just machines, so it's error-prone as well. So I think some of that data collection, again, where you're looking for particular things, where you're looking maybe through lease terms and so on, trying to work out if there's anything unusual in any of those that might give you concern about risks, I think could work very well.
Ultimately, though, in something like a valuation of a private asset, you're probably going to want some kind of expert human looking at it and asking, okay, does this make sense? But maybe having that person look at what the AI has come up with and ask: why is this coming up with a different answer? What sorts of signals are coming through the analysis that an AI model can do that I wouldn't have seen otherwise?

Wouter Klijn 07:57 So it's sometimes that second or third opinion, to just have a look at things and maybe come up with areas that have been overlooked.

Scott Donald 08:06 I think so. I mean, one of the challenges in risk management generally, and it has been for as long as I've been part of the industry, which is now approaching 40 years, is that risk is where you don't think to look, right? You can measure all the data that's coming out, but that will tend to tell you things that you already sort of sense. The real risk, the one that really keeps you awake at night, is the one that you haven't seen. And AIs go about their tasks slightly differently. They can find things that don't jump out at you as an analyst, or perhaps don't get picked up in the econometric models or the linear types of analysis that we typically do. So I think there's a real opportunity there. The challenge, though, is that those risks have to be tied down to something. You have to have some way of understanding why that might be a possibility, because we can all think of risks that are wild and outlandish, but that's not really the point. It's actually trying to work out how it might play out. What might the cause be? How might it play out in my portfolio? Is this something that I should be really concerned about? Some of that modelling takes a bit of time to do, but an AI can do it quite well.

Wouter Klijn 09:23 Now, you mentioned hallucinations earlier, and I think that is the key issue to tackle in bringing AI forward. I recently had a conversation with Alistair Barker at AustralianSuper, and he's looked a lot into AI, and he made an interesting remark: you need to allow AI to hallucinate, because otherwise it will never come up with novel answers. You just have to manage that. What is your take on it?

Scott Donald 09:50 I'm going to take a peculiarly lawyerly approach to this, so I might come at it slightly differently, and that is that as lawyers, we care quite a lot about why things are decided the way they are. So for us, process is very important. Yes, it's good to get the right answer, but in most of the situations where a lawyer gets involved, the wrong answer has been arrived at, and we're trying to work out whether there's been a breach that is culpable in some way, whether somebody has acted carelessly or in some other way. And that's where the issue of whether something is a hallucination is actually quite important. As a lawyer, and if I was a philosopher, I'd say AIs don't actually hallucinate. When you hallucinate, you see something that nobody else sees, but you believe it to be true, and the AI models don't do that. They don't care whether it's true. That's not part of their register. All they're trying to do is identify patterns.
The semantic truth of something is not important to them. And that's really a big problem as a lawyer looking at a decision system, because I don't want to be on the defending side trying to say: yes, Your Honour, we've employed someone who doesn't care about the truth. We wouldn't employ a person who said they just look for a good answer and don't care about the truth. So I think that's important. If you think about the different types of errors that can get made: a hallucination is one where the person thinks it's true, but the objective truth tells you it isn't. A lie is where you know it's not true, and you're trying to convince somebody else that it is true. The law doesn't like those, but the AI model isn't doing either of them. Typically, it's doing something else, and I hope you'll excuse the French here, but it's BS'ing, right? Because it's saying something, and the truth or otherwise of it is not of relevance. It's been given a task to find something, and it's projecting that back to you, but the truth of it isn't the key. That really is a problem in certain circumstances, for a trustee particularly, but also for a judge, actually. This is why you're seeing the kind of frisson going through the court system and the legal profession at the moment: truth matters, and a desire to not tell lies matters. So the nature of this is actually quite important. I would be very nervous if I was defending somebody saying, well, it doesn't care about the truth, it just tells us what it thinks is the best answer, and we go with that. Worse still if I can't explain how the model got to that answer. So the opacity and the non-linearity also pose challenges. And what we've got to remember is that a trustee is often in a different position from an investment manager. As a trustee, you'll be up answering questions to AFCA (the Australian Financial Complaints Authority) or to a court, where they will actually expect you to justify the decision. The first thing that AFCA does when a complaint has been made against you is to say: could you please explain your decision, Mr or Ms Trustee? 'The machine said no', or 'the machine said yes', is not an answer that's going to work very well in that context. So to the extent that trustees of super funds are using these machines to assist their decision processes, they always need to have in the back of their mind that if it's a type of decision that could be challenged at some place like AFCA, so it could be an insurance decision, TPD insurance, or some other kind, it needs to be something that they can be really confident they can defend, because otherwise you're just going to be writing cheques to disgruntled members, and that doesn't seem sensible. So I think an awareness of where the risks lie in different models, and there will be different risks, is really crucial.

Wouter Klijn 13:55 And it comes back to the idea that maybe a machine helps with the decision-making process, but you're still, in the end, responsible for the decision. Trustees, in particular, carry that responsibility. They can't just hand it off to a system.

Scott Donald 14:11 That's true.
I mean, I still think there are things that trustees can use them for quite intensively. One of the issues my colleagues and I have talked to trustees about at some length, because it's a bit subtle, is: okay, so what is the kind of human input? Where do you have that occur, and how thick is it, if you like? Because the human-in-the-loop argument goes: that's right, we're going to run the machine, then somebody, say Scott, is going to sit down, he'll have a look at it, and he's an expert, so he'll work out whether it's right. Well, hang on a second, a process of review has to have different criteria than the original decision process. Otherwise, it'll just come up with the same answers, right? So the notion of a review is inherently that you're looking at something from a slightly different perspective. So what is that perspective? Why haven't you already built that into the system? Because it's a system that you can programme how you like, so you could add that in there. That's the first thing. The second thing is, if Scott's going to spend only 20 minutes, or 10 or 15 seconds, a really short time looking at this thing, how much is he going to catch of what the machine, which has probably spent less time but used a lot more rigorous analysis, has come up with? You don't want the review to be too narrow, because you're still going to let through lots of problems. On the other hand, if it's too thick, not only do you have cost implications, but you also have the possibility of going before a court or tribunal and saying: yes, we ran the machine, and then Scott spent an hour reviewing it on average, or we dedicated two people full time to review these decisions. If it's too thick, the court may say: well, okay, so the model doesn't matter then, I just want to understand what those people did. And we're right back where we started, except those people are probably spending less time doing what we now call a review than they did making the original decisions, and so it becomes, again, much more vulnerable to legal challenge. So I think thinking about how human interaction with this model occurs is actually quite crucial. It could be quite subtle as to exactly how you position it, how you explain it, how you track the circumstances where human intervention occurred or didn't occur. And I think we haven't really grappled that to the ground yet in terms of all the different ways in which we could use these models, because it could actually make quite a difference. You could end up undoing the good work that you've done by not being able to explain it, or by not really getting that human-machine dynamic established properly.

Wouter Klijn 16:47 And I think there are a number of issues with the answers that AI systems come up with, because there is definitely a tendency for the machine to want to please you, so you're not necessarily going to get a whole different perspective on a certain matter.
But what I also found is that it can be quite inconsistent, and I'll give you a little example, because I've been playing around with it. I use a lot of images for my newsletter, and sometimes I get very blurry, low-resolution images, so I thought I'd take one and see if it could sharpen it up a bit and give me a better quality picture. And surprisingly, it did it perfectly, and I suddenly had this really clear, high-res image. This is great. So I did it again the week after, and it told me: oh no, I can't manipulate images of real people. I'm like, you just did that last week. 'No, I didn't.' I had this argument with the AI system, but it just came up with a completely different answer. What do you think of that in the context of having to justify yourself in front of a court, when you can get wildly inconsistent responses or decisions from these systems?

Scott Donald 17:55 It does rather undermine the argument that you're using it because it brings rigour and consistency to your decision-making, which is often why people want to use it, and quite rightly. So that can be a problem. Another problem, which I think the technicians understand but people in our industry are perhaps less across, is that some of the value in what we're asking these models to do lies in getting them to actually learn organically from experience, so that they evolve as more information, either in the local environment or more generally in the world environment, can be brought to bear. That means that the decision the model makes today is going to be different from the decision it might make in two weeks' time. We're kind of used to that; we as human beings make decisions in that way. But it does make it difficult to go back, if you haven't tracked exactly why a decision was made by the machine: on day one, day 15, day 30 and day 200 it'll come up with different decisions, and you won't necessarily be able to track back. So tabulating how we got to particular decisions is going to be really important. I suppose we could ask the machine: what were you thinking two weeks ago, or when this particular event occurred or this particular decision was taken? But certainly as lawyers, we're much more used to interrogating people about those sorts of things, and accepting the uncertainties and ambiguity in witness evidence, than we would be trying to interrogate a machine, that particular machine that we've already said doesn't really care that much about the truth and just wants an answer that fits the rules it's been given as best as possible. So yeah, I think it's a real issue.

Wouter Klijn 19:51 In the paper you mention the aspects of risk and where you look for risks, and I had to giggle a little bit, because it describes that risk is mostly found in situations where a lack of accuracy has negative consequences. And I thought, that's pretty much everything.

Scott Donald 20:09 So if I employ an AI, an LLM, to write a newsletter to my members, and it chooses a word that's not quite the best word that could have been chosen, then it might not be as clear.
It might be ambiguous. I haven't done the best job of communicating with my members, but I can probably live with it. If I send a report in to APRA describing my portfolio position and my AI has decided to fill in a gap because the data wasn't there, that's what I mean by a problem; there are real consequences to that. I was recently in a situation where I had to submit some things by a particular date for my university, and I was really struggling to find what the date was for this particular year. So I just went into Google, planning actually to go right through to the university website. I won't mention the name, but a very well-known AI gave me a date. Luckily, I didn't believe it, because it was eight days later than the date I was actually supposed to submit. That would have had terrific consequences, as in terrifically bad consequences. So I think even in certain simple things, the consequence of it being wrong can be great, whereas in other circumstances it's probably not. If I get an AI to rewrite an email that I've already half drafted, so I know what I want to say, and maybe it doesn't do a terrific job, that's not a big risk. I've already thought about what I want to say, I can look at it and either correct it or go with what's there, and it may be better than what I originally came up with. But if I'm going to use it to try to value a complex derivative, and it ends up in some weird place, that could end up costing me a lot of money. So again, it's knowing what the risks are with the model that you're using, and thinking very carefully about the institutional context and setting that you're going to be using it in, and asking: is that risk a problem here? A model that generates the wrong outcome in a hedge fund may not be such a big issue; there's a lot of proprietary trading going on, some of it's going to work well and some of it's not. But if I've written something very carefully in my PDS that goes out with my managed investment scheme that says I will do X, Y and Z, and for some reason I've done X, Y, Z and Q, again, context matters in terms of those sorts of risks.

Wouter Klijn 22:44 So do you think that trustees should use AI at all for important tasks and decisions?

Scott Donald 22:51 I think they should consider it. I think the consideration of it is actually really important. There will be places where they really can drive down certain costs and also improve certain types of interactions. I think some of the member interactions can work very well. One of the challenges we always have in the superannuation industry is providing information to people in a timely way that's helpful to them, in a format that they can understand, and I think, carefully directed, AIs can help us to have those sorts of interactions, or at least a different set of interactions that maybe complement what we already do. Would I be having them answer questions from a regulator, where the accuracy is absolutely crucial? No, I certainly wouldn't be doing that, because that could have all sorts of adverse consequences.
Wouter Klijn 23:45 So when we talk about setting guidelines around how to interact with AI and where to use it, I spoke earlier about how it can sometimes be quite inconsistent. Is it possible to develop a general set of guidelines around this, keeping in mind that this space changes so rapidly? Some functions that were there a couple of weeks ago are no longer there; there are new functions, changing functions and applications. Are the guidelines always behind the ball, in that perspective?

Scott Donald 24:18 I think all regulation, to some extent, is likely to be chasing, trying hard to keep up, and to some extent that's to do with the design of the regulation, as well as the question of to what extent you want to try to discourage certain types of conduct. It's not all about punishment; it's often about trying to encourage certain things and discourage others. So regulating anything is hard, and regulating something that moves at the speed AI moves at is particularly hard. What I would be really keen to see people do, though, is look inside that label, because there are so many different things AI gets used for. I think you really need to understand: okay, what is this? How are we going to use this particular piece of technology? If it's just an LLM to improve our responses to questions at the annual members' meeting, for instance, then that implies a certain set of guidelines that you would set up around it. If it's an investment strategy thing, it's slightly different. If it's something that's going to be informing or even powering your determination of insurance claims from members, then I think it's different again. So I'll be happier when I see people stop talking about AI models so much and really focus on the specific tools and apps and models that they're using, and think about them a bit more carefully, because I think there are big differences across that diversity.

Wouter Klijn 25:49 Yeah, I was just about to ask you about different AI models, but more so on the side of security. There are, of course, the systems that we're all familiar with, but they tend to be open to the public, and you have to be careful with what type of proprietary information you put in them. You can also build a closed system, where it doesn't go outside the organisation and only deals with the information that you feed it. But there was a well-publicised study by MIT earlier this year that looked into how successful pilot programmes with AI were, and surprisingly, they found that the companies that used off-the-shelf systems tended to do better than those that were trying to build systems themselves, whether that was due to costs or complexity. In terms of that trustee environment, do you lean towards any particular model?

Scott Donald 26:48 Again, it depends on how you're using them. When you start to look at what AI is potentially doing for you, in very abstract terms, and particularly where it's embedded in some kind of decision process, we know it costs money to train AI to do something. You either pay that money up front and you're aware of it, or there's some other way in which the money is being paid.
It's like when you go to a poker game: if you don't know who the patsy is, it's you. Nothing is free. So if you can't work out how you're paying for something, then be careful, because it's coming from somewhere. You're paying for this technology somehow, and you're typically paying for it, more or less, as what a microeconomist would call a fixed cost. And you would do that if you thought that the incremental cost of every decision, the marginal cost, was very low. So it's the classic economies of scale, right? It works really well on a cost basis if it's the sort of problem or process that you're going to run often, and it's a relatively stable process, so you don't have to keep training it. You don't have to keep saying: oh, hang on a second, there's just been a new law passed, or a new regulation, or a new whatever. You don't have to keep modifying the model from outside. So there are certain types of decisions I think work better on that efficiency basis. If you only make the decision infrequently, and it's in a volatile environment where the conditions change all the time, then it's going to cost you a fortune. When you look at the sorts of decisions that superannuation trustees, for instance, make, which is where the paper is mostly looking, there are some decisions that they make very frequently on very similar bases, and that's the sort of place where you might say: actually, this could work quite well here. Now, we just need to be careful, because when we talk about training systems, most of the AI models are trained on data sets that are massive compared to the sorts of data sets that we have within a super fund, particularly if you're going to say that, for reasons of confidentiality and privacy, we can't really allow our member details to get beyond our own walls. Our biggest funds only have 2 million members, and 2 million data points is nothing for a big AI to churn through. And if we're talking about decisions made with respect to, say, insurance, it might only be a few thousand of those each year. We don't really have the data sets that are going to be very effective in generating really sophisticated, rich models. So how do we get around that? Can we find some other way to work with others who might have some data? We certainly know we can't go to public data, because that's going to breach the confidentiality that we have. And again, this is probably too detailed, but one of the things we all do when we sign up to invest in a super fund or a managed investment scheme is sign the application form, and on there it says you consent to your data being used in a particular way. Not all of those consents actually work in terms of allowing people to use the information that's there for AI. Some trustees have now woken up to this, and so they've amended what they write there to try to ensure that they can use data in certain ways. But if I haven't, there's a very real risk that I'll get sued by someone who says: I just found out that you put my data into this model, and that's now informing some other data set that I didn't give you authority for.
And that's the sort of stuff that regulators love, because it's really easy to prosecute. So even very practical things like what's written on the application form, something as mundane and as prosaic as that, are affected by these sorts of considerations.

Wouter Klijn 30:45 Yeah. So what's next for this research? Have you been talking to people about establishing guidelines, or using this as a basis for that?

Scott Donald 30:53 Look, UNSW has a publicly announced programme with one of the big fund providers, and that's giving both parties an opportunity to learn from each other. My objective in raising this was not so much purely to inform that process, but more generally to alert people who were hearing from management and from other sources that they had to be thinking about how they might use AI in trusteeship, and who really were starting to get a bit nervous about what that might mean and where the risks might lie, and to encourage them to think very carefully about what sorts of processes and what sorts of uses they might put this technology to. There are some very, very smart people working at the universities who really understand how these models work. I've seen some of the people at UNSW on television and thought, wow, that's really insightful. What they don't know, though, is how that engages with business, or with different types of decision context, institutionally. So I really encourage people to try to draw on that, and not just take the kind of headline marketing spiel that you get from some of the providers, but to really drill down and ask: okay, how might we use this to make a difference? What really are the risks? How might we address them? How can we talk about this to the stakeholders that we have to talk to? And once you start doing that, you start to realise there's something involved in it. It's actually costly to go through that process of thinking about it, and it takes some time as well. But I think you have to. Certainly, when I'm speaking to my students at law school or speaking to trustees, there's a sense that we can't stick our heads in the sand and pretend this is going away. It's actually here, it has some really important applications, and the fact that it wasn't here in quite the same way five years ago is not a reason not to engage. We just have to invest in our knowledge and understanding.

Wouter Klijn 33:06 Yeah. Do you use AI in your own job, every day?

Scott Donald Every day, absolutely.

Wouter Klijn What sort of tasks?

Scott Donald 33:14 Well, it's embedded within a lot of programmes where I don't even see it, really. A lot of the Google searches have that sort of stuff in there. And from time to time, I'll be interested in understanding something that I haven't come across before, and that's a first cut. Sometimes I'll chase things down. I don't very often use large language models to run over my writing; I do enough writing as it is that I'm not too worried about that. But yes, in appropriate ways. I've used image generation models as well. Occasionally, when I'm doing presentations and I can't find the image that I want, I'll use it for that.
Wouter Klijn 33:51 I've heard of an interesting application where students are using AI in novel ways, not so much the 'write an essay for you' that we hear about, but more in terms of learning and changing the way they learn about certain topics. One application was where, basically, they trained an AI to be a specialist in a particular topic they were studying, and then had a Q&A session with this artificial expert, and in doing so it reiterated certain content that they were trying to memorise. Have you seen some of that? Have you seen other ways of students using AI?

Scott Donald 34:29 Oh, yes. For quite a number of years, I've done things like give students an essay question and then show them what a simple AI answer would look like, and then get them to critique the AI answer. That's really helpful, because it forces them to think critically, as opposed to just descriptively, about a particular essay question. So yes, that sort of thing can work very well. And I still remember when I was young, about 30 years ago, researching for my master's degree, I used to go into the bowels of the London School of Economics, and if I found two or three cases that were relevant to my thesis, that was fantastic, it was a great day. They were dusty, and I probably ended up with all sorts of mites and other things from books that hadn't been opened for decades. These days, students can find not just three cases but 300 cases, if they exist, within seconds. So the skill that I had, and the perseverance of being able to go into those particularly dingy corridors, is no longer worth anything. I think we need to realise that some of the things that got us, as slightly older, more experienced investment professionals, legal professionals, governance professionals, to where we are now are no longer worth what they cost us to acquire. Being able to do certain things, like rephrase an email very quickly, has no value anymore, because a machine can do it far quicker and far better than we can. So why would we do it? Why would I still go down to the bowels of the LSE to find a case when I can sit at my desk, type a few keywords into one of the legal databases, and have everything there?

Wouter Klijn 36:14 Yeah, as long as it hopefully doesn't hallucinate any cases.

Scott Donald 36:18 Well, that's where you would then sit and check and go through. So we learn to do different things. We learn to think critically. We learn to check the veracity of the data. I never checked whether any of the little leather-bound books that I found in the basement of the LSE were actually the real copy or some spurious version. I just assumed that, because they were there, they were right. Maybe I should have, but I never tried to cross-check it or anything. Now, obviously, verifying sources is very important, because the potential for not just AI, but all sorts of alternative versions of facts, to come to light is much greater. So I think it changes things. It'll change what analysts need to do, security analysts, investment analysts. It'll change what we all need to be able to do, because we'll need to be able to do something different and better than what the machine can do.
Wouter Klijn 37:18 Yeah, for sure. Well, plenty of food for thought there, Scott. Thank you for coming to the offices and thank you for the discussion.

Scott Donald Thank you very much.
