Peter Campbell from Techcafeteria.com on managing AI risks at your nonprofit.
Peter Campbell is the principal consultant at Techcafeteria, a micro-consulting firm dedicated to helping nonprofits make more affordable and effective use of technology to support their missions. He recently published a free, downloadable PowerPoint on Managing AI Risk and took time to talk with Carolyn about his thoughts on developing AI policies with an eye to risk, where the greatest risks lie for nonprofits using AI, and how often to review your policies as the technology changes rapidly.
Listen to Podcast
Like podcasts? Find our full archive here or anywhere you listen to podcasts: search Community IT Innovators Nonprofit Technology Topics on Apple, Spotify, Google, Stitcher, Pandora, and more. Or ask your smart speaker.
The takeaways:
- AI tools are like GPS (which is itself an AI). You are the expert; they are not able to critically analyze their own output even though they can mimic authority.
- Using AI tools for subjects where you have subject expertise allows you to correct the output. Using AI tools for subjects where you have no knowledge adds risk.
- Common AI tasks at nonprofits range from low-risk activities, such as searching your own inbox for an important email, to higher-risk activities more prone to consequential errors, such as automation and analysis.
- Common AI risks include inaccuracy, lack of authenticity, reputational damage, and copyright and privacy violations.
- AI also has risk factors associated with audience: your personal use probably has pretty low risk that you will be fooled or divulge sensitive information to yourself, but when you use AI to communicate with the public, the risk increases for your nonprofit.
How to Manage AI Risks at Nonprofits?
- Start with an AI Policy. Review it often as the technology and tools are changing rapidly.
- Use your own judgment. A good rule of thumb is to use AI tools to create things that you are already knowledgeable about, so that you can easily assess the accuracy of the AI output.
- Transparency matters. Let people know AI was used and how it was used. Use an “Assisted by AI” disclaimer when appropriate.
- Require a human third-party review before sharing AI-created materials with the public. State this in your transparency policy/disclaimers. Be honest about the roles of AI and humans in your nonprofit work.
- Curate data sources, and always know what your AI is using to create materials or analysis. Guard against bias and harm to communities you care about.
“I’ve been helping clients develop Artificial Intelligence (AI) policies lately. AI has lots of innovative uses, and every last one of them has some risk associated with it, so I regularly urge my clients to get the policies and training in place before they let staff loose with the tools. Here is a generic version of a PowerPoint explaining AI risks and policies for nonprofits.”
Peter Campbell, Techcafeteria
Presenters

Peter Campbell has a background as a technologist that stretches back to the mid-'80s, when he started out managing technology for law firms in San Francisco. By 1999, he had established himself as a knowledgeable IT Director. At that point, Peter made a pointed move to the nonprofit sector, looking to practice what he had learned where it would do the most good.
With IT Director, VP IT, and CIO roles at Goodwill Industries of San Francisco, Earthjustice, and Legal Services Corporation, Peter deployed a broad range of technologies while developing a technology executive skill set that gives him ample insight into how organizations can successfully manage change and digital transformation. He brings that full skill set to his work at his consulting firm Techcafeteria where he is the founder and principal.
Throughout his 30+ year IT management career, Peter found that the best consultants did what they were good at, as opposed to learning on the job; listened to and partnered with their clients, focusing their efforts on achieving the client goals; and didn’t undervalue their services or overcharge for them, understanding that billing for every email and phone call inhibits the critical communication and camaraderie needed to sustain a healthy, collaborative relationship. Techcafeteria’s mission is to help nonprofits use technology to advance their work. Peter tailors his advice to fit the mission, strategy, culture, and available resources of his clients.

Carolyn Woodard is currently head of Marketing and Outreach at Community IT Innovators. She has served many roles at Community IT, from client to project manager to marketing. With over twenty years of experience in the nonprofit world, including as a nonprofit technology project manager and Director of IT at both large and small organizations, Carolyn knows the frustrations and delights of working with technology professionals, accidental techies, executives, and staff to deliver your organization’s mission and keep your IT infrastructure operating. She has a master’s degree in Nonprofit Management from Johns Hopkins University and received her undergraduate degree in English Literature from Williams College.
She was happy to have this podcast conversation with Peter Campbell about managing AI risks at nonprofits.
Ready to get strategic about your IT?
Community IT has been serving nonprofits exclusively for twenty years. We offer Managed IT support services for nonprofits that want to outsource all or part of their IT support and hosted services. For a fixed monthly fee, we provide unlimited remote and on-site help desk support, proactive network management, and ongoing IT planning from a dedicated team of experts in nonprofit-focused IT. And our clients benefit from our IT Business Managers team who will work with you to plan your IT investments and technology roadmap if you don’t have an in-house IT Director.
We constantly research and evaluate new technology to ensure that you get cutting-edge solutions that are tailored to your organization, using standard industry tech tools that don’t lock you into a single vendor or consultant. And we don’t treat any aspect of nonprofit IT as if it is too complicated for you to understand. When you are worried about productivity, change management, and implementation of new technology, you shouldn’t also have to worry about understanding your provider. You want a partner who understands nonprofits.
We think your IT vendor should be able to explain everything without jargon or lingo. If you can’t understand your IT management strategy to your own satisfaction, keep asking your questions until you find an outsourced IT provider who will partner with you for well-managed IT.
More on our Managed Services here. More resources on AI for nonprofits here.
If you’re ready to gain peace of mind about your IT support, let’s talk.
Transcript
Carolyn Woodard: Welcome, everyone, to the Community IT Innovators Technology Topics Podcast. My name is Carolyn Woodard. I’m your host. And today, I’m excited to be with Peter Campbell from Techcafeteria, who is going to be telling us a little bit more about assessing AI risks at your organization. So, Peter, would you like to introduce yourself?
Peter Campbell: Sure. My name is Peter Campbell. I’m a technology consultant. I work strictly with nonprofits. I have a background of over 40 years of technology management. I started my career in commercial law firms, but after a while, I said, wait a minute, I want to do better things in my life and move to nonprofits.
I’ve been working either for nonprofits or with nonprofits for the last 25 years. At Techcafeteria, I do assessments, CRM or ERP selections, or fractional CIO work. I help out however I can with my personal mission to help nonprofits use technology better to support their missions.
Carolyn Woodard: Which is one of the reasons I wanted to have you on the podcast, because Community IT also believes strongly that IT should be accessible, and it's hard to achieve your mission without any IT at all. So, helping that function, I think, is so valuable to nonprofits.
Peter Campbell: What I see is that technology can either propel your mission, or it can really get in the way.
Carolyn Woodard: Totally.
Peter Campbell: And in my work, I see examples of both, but I try to work it towards the former, not the latter.
Carolyn Woodard: Exactly, exactly. And so, you just recently posted a PowerPoint with, I don’t know, half a dozen slides on managing AI risks. I just hear from nonprofits, from the sector, from our clients, AI is such a buzzy, trendy word right now.
And a lot of nonprofits are trying to figure out how it fits into their technology and what they're doing. They have FOMO, a fear of missing out or being left behind if they don't use AI.
It’s coming out in new tools. You get the update and now you’ve got an AI helper. And you’re like, I didn’t even want that.
I was really intrigued by this set of slides because I thought you laid out pretty clearly and concisely how to think about AI risks as a nonprofit.
We can give people the address for the slides, so you can download them. That will be in the show notes and on our website. But I wondered if you could walk through the slides and how to think about AI risks.
Peter Campbell: Absolutely.
Carolyn Woodard: That website is techcafeteria.com, T-E-C-H, the word cafeteria.com. And at the moment, the PowerPoint slides are right on the homepage. So, just scroll down and you can find that for free download.
If you’re listening to this podcast later and it’s no longer on the homepage, the full link is techcafeteria.com/?p=3649.
Peter Campbell: In my work, I do a mix, I mentioned some of the things that I do, but I also do security consulting. So, security assessments, that type of thing. And I help nonprofits develop that standard set of policies everybody should have.
When AI suddenly took over the mindset and became the thing we’re all thinking about, I immediately had the concern that there aren’t policies in place for this. I’ve now worked with four or five clients to develop those policies. And in developing them, I just thought, okay, a policy is a very dry document to read, but getting the message out was important to me.
So that inspired me to put together this presentation, so that in addition to providing my clients with a policy that they can incorporate into their handbook, they also have a good tool to communicate to staff exactly why this is important and what the risks are.
I really do have some concerns that even if management has not approved AI in the workplace, AI is in the workplace, it’s being used everywhere I go. Microsoft just keeps bringing more functionality right into their applications. Google has theirs, it’s there, as you said.
Because it’s there, we need to really understand what is safe, what isn’t.
I would say that there is nothing you can do with AI that doesn't have a certain amount of risk to it. We're human beings, rational human beings, who can think critically about the information that's in front of us. AI can be taught to simulate that, but an AI does not think independently. An AI does not cast a critical eye on the output it's creating.
So right off the bat, there’s certainly a lot more risk in creating something with AI, an image, a report, something like that, than there is using it just to search your email or something like that.
But even if you’re searching your email, you have to understand that you’ve told the AI what to search for, it’s interpreted your query in the way that it’s going to interpret it, and the results that it gives you may not be exactly what you were looking for.
Carolyn Woodard: I had a friend who made the analogy that if you remember when we first had Alexa and Siri, and you would ask it something really basic and it couldn’t understand what you were saying, it would come back with something completely off the wall. It just hadn’t understood the language.
In some ways, I think a lot of AI prompts are similar to that. It's still learning, and Alexa and Siri have gotten so much better now. Sorry for anyone who's listening to this near their device, and I just said it twice. I think that it's learning, but it makes sense to have some guardrails in place, or at least be talking about it.
You said policy can be so dry. We have several of our clients who said, our policy is just that people can’t use AI at our organization, and that’s not what you want either. It’s such a productivity tool, but also, I guarantee you people at your organization are using it, even if your policy is no.
Peter Campbell: It's just not realistic. I remember around 2009 or '10, being with a group of nonprofit CIOs for much larger organizations than what I was working for at the time, and all of them saying how they ban Facebook and Twitter in their workplaces. I said, you know, these aren't business tools today, but they're going to be tomorrow. Which is absolutely what happened. I always kind of argue, we have to be aware of the risk in all of the technology we use, but there's a great risk in just banning it and thinking you've solved the problem. It's just not realistic.
What Are Nonprofits Using AI to Do?
Okay, so the type of task that I’m seeing nonprofits use AI for, I’m just going to go through my list of them.
One is searching, whether it’s searching your email or the web or your network.
Another is summarizing meetings. Absolutely. You know, all the video tools now have that built in, or you can buy a third-party one for that.
Doing drafting. There are things that AI is good at drafting, like policies and procedures or how-tos. These days, if you want to write a how-to for doing something in Word, the AI is really good at that. Drafting emails, generating graphics, videos and flyers, generating letters and memos.
Coding, you know, big use for AI right now.
Generating reports, articles, studies. Attorneys are using it to generate briefs, and we’ve seen some interesting headlines about that.
Automation and analysis.
And my list here was in order from the most benign to the most dangerous uses of AI, as we’ll see.
What Are the Risks in Using AI for These Tasks?
The risks associated with these types of different tasks include
Inaccuracy: anything output by an AI tool might have errors.
Lack of authenticity. I don't know if you're using Outlook like I do and have the new Outlook, but it is frequently popping up and saying, why don't you let the AI help you write this email? And my big concern is that the AI is not going to write in my tone of voice. And it is important that when somebody receives an email from you, it doesn't seem like it's not from you. That authenticity is really important and something to really consider with AI.
More seriously, when you're releasing an AI-generated product, such as a report or a graphic, then reputational damage could occur. If there are errors in what you're outputting, and it's going anywhere outside of the organization, that's dangerous.
And finally, copyright and privacy violations. One of the things that keeps me up at night is that some of these tools, once you purchase them, will go out and scan your network and use whatever they find as the sources for the AI models you'll use to generate output. I can tell you that 95% of my clients have, somewhere, an employee personnel review that the manager didn't think to put in a secure folder.
I do security assessments and constantly find credit card numbers and social security numbers that are not secured on the network. So really, unless you are 100% sure that you have all of that locked down on your network, which I don't see often, it's very dangerous to let the AI tool go wild without really curating those sources.
So those are the types of risks.
And then there's the severity of the risk, which correlates to the audience. If you're using it for personal use, if you're just using it to search your email, there's very low risk.
Maybe it’s internal to the organization, maybe it’s only going out to a couple of people outside of the organization, trusted people, and that’s fine.
But the more people outside of the organization who are seeing it, the more you're getting to a medium level of risk, and if it's going out to large audiences or to the public, then you're getting to very high risk.
I do have a graph in the presentation. If you go get it, you'll see it visualizes the types of activities and the levels of risk correlated with the audiences.
Carolyn Woodard: Yeah, it makes a grid of what you've talked about, from personal up to the public on one axis, and then the other set that you talked about, from search all the way up to analysis, and correlates the level of risk.
Peter Campbell: And it's important to put these three things together: the thing you're doing, the type of risk, and the severity of the risk.
Carolyn Woodard: Yeah.
How Can Nonprofits Protect Themselves from AI Risks?
Peter Campbell: With all of that in mind, then we want to think about how do we mitigate these risks? What are the things we can do to protect ourselves?
AI Policy
Right off the bat, every organization should have an AI policy, ideally in place before people start using the tools, though we're probably already a little too late for that if you don't have one yet. This is what I've been really working with all of my clients on, saying you do need to have this policy, you need people thinking about how they're going to use this in ways that protect the organization's reputation and protect the organization from legal issues, that type of thing.
Carolyn Woodard: I think AI is changing so quickly. Even if you wrote an AI policy in January, six months ago, you probably need to review it frequently, because the tools that you're using, the new releases, the new things that AI can do, may raise questions or have implications for your policy that you need to include.
Peter Campbell: Yeah. It's interesting because I know that Community IT has released a template AI policy. This is public. They did that about a year ago. There are a couple of other technology-focused organizations that I work with, and I would say the more recently written policies might be better. Maybe Community IT should be looking at that policy and thinking maybe we should update it.
Carolyn Woodard: Yeah. We went back over it and made some small tweaks to it about four months ago. But yeah, it’s definitely something that we’re looking at frequently, and you should be looking at it frequently if you have an AI policy because it’s just unlike other types of policies.
If you have a data policy, you can probably review that annually because it’s just not moving as quickly, but the AI is just moving so fast.
(Community IT also has done webinars on AI and policy, including an Ethics Framework and a 5-Question Framework to creating an AI Policy in How to Nonprofit AI.)
What to Have in Your AI Policy
Peter Campbell: The four things I think are critical to have in an AI policy are
Requirements for the review of both the data sources and the output of the AI tools. Both need to be looked at and considered before anything is created and shared.
The standards for transparency regarding the use of AI. We’ll talk a little bit more about transparency in a minute.
Identifying the acceptable AI tools to use. It does make sense for organizations to standardize on one tool, not just for cost reasons, but also so that if everybody is using the same tool, they can support each other better. And there have been concerns about some of the tools and whether the companies that made them are being ethical with them, and which ones are more biased than others. There are things to consider there.
And then finally, establish some kind of organizational oversight. A good policy might include that the organization have an AI committee that reviews things that are going out to the public before they go. And that ensures oversight; much as, if your organization is subject to HIPAA, you know that there's required training and a required HIPAA coordinator in the organization who is looking after these things. I would say using that same kind of standard for AI makes a lot of sense.
Carolyn Woodard: And can I just ask you to pause for a second, Peter, and talk about that? Do you think that is something specifically for the nonprofit sector, or is it something you think is going to become more widespread for for-profit companies and government entities as well, having that oversight committee or someone in charge of AI at your organization?
Peter Campbell: Yeah, I think that recommendation is to everybody, not just nonprofits. Yeah. Of course, I’m more concerned about the nonprofits because they’re-
Carolyn Woodard: Well, and I feel like with this whole question, nonprofits are really concerned with AI and with the ethics of the AI as well, because of their relationships with their constituents and their missions. They often have an extra layer of concerns that they want to make sure are addressed.
You Are the Expert, Not the AI
Peter Campbell: One of the commonsense things that I say about AI is to remember that you’re the expert, not the AI tool.
I always give the example of GPS, which is an AI and always has been. I know that when I use my GPS, it will every now and then tell me to make a left turn onto an incredibly busy street where there's no traffic light. And I know better. If I know the streets, I know that I can just ignore the GPS and go up the street that I know has a traffic light. I'll be able to make that turn in a couple of minutes instead of waiting 10 or 15.
Everything you do with AI, you need to apply your judgment and think of it the same way you might override your GPS, because you do know better.
And right along with that, a good rule of thumb is to only create things in AI that you are knowledgeable about, so that you can really assess how well the AI did. If you are creating things that you are not so knowledgeable about, then make sure somebody else who is knowledgeable is reviewing them.
Carolyn Woodard: That’s great advice.
Peter Campbell: Yeah. That's the most commonsense part of it.
Transparency and Authenticity
I mentioned transparency earlier, and I'll just reiterate that you have a tone and style of writing that people are used to. People you communicate with regularly know what you sound like, even in email.
They might be shocked to find something that doesn’t sound like you at all coming from your email address and possibly not trust you and have issues with it.
A good rule of thumb here: the same way it's typical to have a signature on a mobile phone that says "sent from my mobile" and apologizes for any typos, mention it if you used AI in creating an email, or just have something in your signature saying that AI sometimes helps with your emails, that type of thing.
Carolyn Woodard: Yeah. Oh, I love that too. I think I have noticed that this is also a little bit of a generational thing.
I think younger people recognize what AI is and sounds like really quickly. They are very quick to pick up on, that email sounds like AI, or that poster looks like it was made by AI. There are just little tells that they're a little bit more clued into.
So even if you think it looks fine, maybe run it by a young person at your organization because they might want to tweak it further.
Curate Your Data Sources
Peter Campbell: And then I think this is just incredibly important, and I specifically use this word: curate your data sources. For AI, I think curate is a great word. Treat it like a museum. Your data sources are going to determine the output of the AI tooling. You need to make sure that they're solid.
And solid means that, as I mentioned earlier, you're not pulling things off of your network that shouldn't be in there, such as personally identifiable information or copyrighted information.
And then you're really making sure that your data sources don't lead to any kind of bias. And we've seen the news reports about this. I mean, long before the recent one with Elon Musk's AI suddenly sounding like it was from 1939 Germany, Microsoft has had very similar issues with AIs they've created, going back years. And it's not like these things are necessarily intentional, but if you're not careful about the data you're telling the AI to work with, then you can create biased products with the AI.
And in the nonprofit sector in particular, we need to be very careful about that.
Carolyn Woodard: Yeah, we don’t want to do more harm to populations and communities that we’re working with that may already be excluded from some of these data sets, etc.
Final Thoughts on Managing AI Risks at Nonprofits
Those are all great, great tips. Do you have any final thoughts you want to leave us with on adopting AI at nonprofits and mitigating risks?
Peter Campbell: AI is already in use at your nonprofit, and it's only going to grow. I do look at it and go, I've lived through, as most of us have, some disruptive technology changes in the environment, and I think this one is going to be huge.
There are already a lot of things about AI that I think society as a whole really needs to consider, like the energy and resources required to support it, and the impact that it may or may not have on the workforce. It really could end up replacing a lot of jobs, and we as a society need to think, how do we change?
Carolyn Woodard: Education is changing. Will there be English departments in five years? If you don't want to learn how to write, AI will just write it for you. So, lots of changes.
Peter Campbell: My message boils down to, it’s out there today, and we want to use it responsibly and effectively and safely. So, think about the policy. If you haven’t yet, great time to adopt one.
Carolyn Woodard: Excellent. I think that’s a great note to leave it on. Peter, thank you so, so much for your time today. I know you’re super busy, and I just appreciated this PowerPoint so much when I saw it, and I’m really glad that you could come on and talk about it.
Peter Campbell: Okay. Let me just say this, that if you want to see these slides, if you go to techcafeteria.com, there’s a link to them right from the front page. So https://techcafeteria.com.
And Carolyn will include the direct link (https://techcafeteria.com/?p=3649).
Carolyn Woodard: Thank you again. Thanks a lot.
Photo by Clark Van Der Beken on Unsplash