Mike Keating on AI policy development at Art Fund

Mike Keating on introducing user-centric practices, sharing knowledge, and the collaborative development of policies.

Towards the end of 2024 I had a podcast conversation with Art Fund's Associate Director of Digital Experience, Mike Keating.

You can listen to the whole podcast conversation below, or wherever you usually get your podcasts.

But if you prefer to read, then I've summarised the key parts of our chat.

TL;DR

  • Senior digital roles at more digitally mature organisations are (necessarily) shifting and broadening. But these changes can only happen successfully as part of wider changes that address related roles and structures, process, and culture.
  • Embedding user-centric practices into your organisation will require persistence and you will encounter resistance. But when you can showcase the reality (and benefits) of working in this way, you might be surprised by how quickly people want to get involved.
  • Mike broadly followed these main steps and principles when developing the AI policy at Art Fund:
    • Understand how people are already using AI tools (Art Fund surveyed their staff). Share the results of this work, and recognise that these tools are probably already being used by lots of your colleagues.
    • Be clear and specific about what you are, and aren't, comfortable with AI tools being used for. Art Fund will never use AI generated assets in marketing campaigns, for example.
    • Be clear and specific about which AI tools are good for what, and what their limitations are. Some are useful for supporting research, others for idea generation, others for image work, etc. Ensure people understand issues such as hallucinations and (at a broad level) how these tools work.
    • Be clear and specific about the impact of your choices. For example, embedding AI tools into certain processes may affect people's jobs or responsibilities.
    • Encourage experimentation and familiarisation within a safe environment.
    • Be clear about what you don't know, and acknowledge that this work will need to change and develop over time - this is still a nascent area of technology.

The evolution of his role at Art Fund

Mike: "...certainly in the last year and a half my role has really broadened from being something that was overtly around like digital projects, building a user-centered design culture, and bringing in a lot of expertise around analytics, user research and really understanding what people were doing and why.

And now it's really broadened out to include content, social, brand and creative.

But all of those things I was just talking about that were applying quite overtly to digital can be applied to all of those things too.

Right, you want to understand how your content is performing and why, and then make decisions based off that."

When I worked at Substrakt I did some work with Art Fund around how they structured their digital responsibilities.

Part of that work was looking at broadening what was previously quite a clear but limited remit.

It's good to see that this 'broadening' has happened, but I think the shift was only possible because Mike had already done the hard work of starting the conversation around being more user-centred and building momentum around the organisation's digital work. That foundational work around culture, priorities and mindsets was essential.

The joy of being user-centred

Mike: "You want to know, basically, why people are doing what they're doing and how you can use that information to make stuff better.

And I think that's really empowering and quite disconcerting, but also really interesting, because users tell you loads of mad stuff all the time that you would just never expect.

My favorite thing about my job is being wrong. You might launch a new section of a site, you change the design of a page subtly, you run a split test or you try some content that you've never tried before, and that feedback is out there and waiting.

I love it when that feedback tells me something I don't expect."

It was really encouraging to hear Mike talking about his experience of moving Art Fund towards a much more user-centred approach.

He spoke directly to the nervousness that many organisations feel when first embarking on this sort of work, "there was definitely a lot of trepidation within the team around are they [users] going to rip apart everything that we've worked really hard to create?".

But he was unequivocal about the benefits, to the whole organisation, of working in this way, "once you get over that hill and you do the sessions, everyone was saying, oh, that was brilliant. And then they start to talk about it and people across the organization are like, oh, wow, that sounds pretty cool. How do I apply something like that to my job?"

Lessons learned on developing Art Fund's AI policy

Mike: "...remember that a lot of people are afraid of this technology and actually will really resist ever trying to get involved in it, and you need to do something for the people who are afraid but interested, and the best way to do that is by being specific.

You can use it to do this, you cannot use it to do this. You can use this tool for copywriting, you can use this tool for research.

Because that's what breaks down the barriers.

If you just tell people, oh yeah, there's this thing called AI and here are some websites, that's not going to reduce people feeling like they're going to do it wrong.

I'd say to take that as an initial focus - what can you do to make people feel like they aren't doing it wrong?"

This idea of being specific, and of de-risking the prospect of people at least familiarising themselves with these tools, has come up again and again when I've spoken to people about AI policies and guidelines.

The discourse about AI has swung wildly between generalised hyperbole and predictions of the end of the world. Encouraging people to experiment with these tools and to get a feel for what they, individually, might actually choose to use them for (and not) can be a really effective way to start to bring specificity to the conversation in your organisation.

On this, I also think a slight shift in language could be beneficial. The National Library of Scotland's Rob Cawston suggests: "I would consider ditching the catch-all term 'Artificial Intelligence' to think instead about 'automation tools' - to surface what is being automated, who is doing it and who may gain or lose as a result."

This idea of being clear about what you're doing, why you're doing it, and what the impact of those decisions actually might be is also something that Mike spoke about:

Mike: "The other thing that I think would be helpful to people starting out in this is to really understanding what your main area of focus is going to be.

For some organizations that's going to be 'we want to use this tool to save money', and that's fine. I don't necessarily agree with that, but I can understand why different organisations might want to do that or might need to do that, and if that's your primary goal, then that gives you a very clear set of parameters around what kind of tools, what kind of things you might want to be doing with those tools.

If it [your focus] is to save people time, that gives you another very clear steer. Or if it's to make people's jobs better quality, that's also another like clear area of focus.

I think you can start moving in all three of these directions, because you're trying to please a lot of different stakeholders. I think the thing that will make this easier for everyone in the long term is understanding which of those three areas is most important to your organisation and moving in that direction and trying to leave the other two behind.

If it's a Venn diagram of saving money, saving time and making people's jobs better, decide which one is most important to you and move that way."

Understanding how your teams are already using AI tools

The first step that the team at Art Fund took when exploring what their AI policy might need to do was to understand how people were already using AI tools.

Mike: "I think doing that survey and finding out how people were using it [AI tools] initially and then hearing about different use cases in internal teams, that really helped make it real, because a lot of what I was saying there about what your focus might be, all sounds kind of theoretical and, ultimately, if that doesn't align with how people are actually already using it then that's a problem [...].

That [the survey] is your user research. That's showing you what people are really doing and you want to create a policy that makes the most of the cool ways people are probably already using it."

The value of sharing knowledge

Mike: "this AI stuff sort of happened to us, but it has come at a really interesting time where, like you said, lots of organisations are looking at this stuff. We have done this slightly before quite a lot of other organisations, and I personally think if someone has done some of this work already then we should be sharing that as far and wide as possible.

So I'm really interested in and open to sharing this pretty broadly and people being able to pick up their version of it and then do what they will with it.

I think when I was in the more traditional charity sector, that was a really big part of the culture of that sector. Everyone would know who everyone was in digital roles everywhere. You could pretty reliably know someone who knew someone who did that job and it was very easy to reach out. And I think in our sector Covid maybe broke a lot of that and, for lots of reasons, lots of people either left the sector or changed jobs and those networks haven't been rebuilt super effectively yet.

I think if there's one thing I would like to do with this AI policy work, it's not necessarily to rebuild all of that, but to show that you can do this and then you can give it to other people, and that's not only just fine, it's good."
