New AI tools and uses are hitting the market at a rocket pace. Does your nonprofit have questions about cybersecurity, opting out, or best practices when using AI?

Microsoft released a new AI Governance Framework for Nonprofits, joining many institutions that are publishing guidelines and studying the impact of AI on nonprofits and the ways nonprofits are using it. Many nonprofits are approaching AI very cautiously, or may not yet have developed policies for using AI securely. Or your nonprofit may have ethical questions about using AI at all.

How do you even attempt to opt out of AI features that arrive packaged with updates to tools you already use?

How do you communicate to your staff the ethical standards your organization expects them to follow when using AI?

Nonprofit cybersecurity expert Matt Eshleman shares his thoughts in this podcast on the importance of artificial intelligence (AI) at nonprofits.

Listen to Podcast

Like podcasts? Find our full archive here or anywhere you listen to podcasts: search Community IT Innovators Nonprofit Technology Topics on Apple, Spotify, Google, Stitcher, Pandora, and more. Or ask your smart speaker.

How is your nonprofit approaching AI?

Presenters



As the Chief Technology Officer at Community IT, Matthew Eshleman leads the team responsible for strategic planning, research, and implementation of the technology platforms used by nonprofit organization clients to be secure and productive. With a deep background in network infrastructure, he fundamentally understands how nonprofit tech works and interoperates both in the office and in the cloud. With extensive experience serving nonprofits, Matt also understands nonprofit culture and constraints, and has a history of implementing cost-effective and secure solutions at the enterprise level.

Matt has over 23 years of expertise in cybersecurity, IT support, team leadership, software selection and research, and client support. He is a frequent speaker on cybersecurity topics for nonprofits and has presented at NTEN events, the Inside NGO conference, the Nonprofit Risk Management Summit, the Credit Builders Alliance Symposium, the LGBT MAP Finance Conference, and the Tech Forward Conference. He is also the session designer and trainer for TechSoup’s Digital Security course, and our resident cybersecurity expert.

Matt holds dual degrees in Computer Science and Computer Information Systems from Eastern Mennonite University, and an MBA from the Carey School of Business at Johns Hopkins University.

He is available as a speaker on cybersecurity topics affecting nonprofits, including cyber insurance compliance, staff training, and incident response and is a frequent podcast guest. You can view Matt’s free cybersecurity videos from past webinars here.




Carolyn Woodard is currently head of Marketing and Outreach at Community IT Innovators. She has served many roles at Community IT, from client to project manager to marketing. With over twenty years of experience in the nonprofit world, including as a nonprofit technology project manager and Director of IT at both large and small organizations, Carolyn knows the frustrations and delights of working with technology professionals, accidental techies, executives, and staff to deliver your organization’s mission and keep your IT infrastructure operating. She has a master’s degree in Nonprofit Management from Johns Hopkins University and received her undergraduate degree in English Literature from Williams College.

She was happy to have this podcast conversation with Matt Eshleman to talk about artificial intelligence (AI) at nonprofits.





Ready to get strategic about your IT?

Community IT has been serving nonprofits exclusively for twenty years. We offer Managed IT support services for nonprofits that want to outsource all or part of their IT support and hosted services. For a fixed monthly fee, we provide unlimited remote and on-site help desk support, proactive network management, and ongoing IT planning from a dedicated team of experts in nonprofit-focused IT. Our clients also benefit from our IT Business Managers team, who will work with you to plan your IT investments and technology roadmap if you don’t have an in-house IT Director.

We constantly research and evaluate new technology to ensure that you get cutting-edge solutions that are tailored to your organization, using standard industry tech tools that don’t lock you into a single vendor or consultant. We don’t treat any aspect of nonprofit IT as if it is too complicated for you to understand. When you are worried about recovering from a cybersecurity incident, you shouldn’t have to worry about understanding your provider.

If you have questions about artificial intelligence (AI) at nonprofits, cybersecurity, or change management for nonprofit IT, you can learn more about our approach and client services and contact us here.

We think your IT vendor should be able to explain everything without jargon or lingo. If you can’t understand your IT management strategy to your own satisfaction, keep asking questions until you find an outsourced IT provider who will partner with you for well-managed IT.

If you’re ready to gain peace of mind about your IT support, let’s talk.


Transcript

Carolyn Woodard: Welcome to the Community IT Innovators Podcast. My name is Carolyn Woodard, and I am the Outreach Director for Community IT. And I am so happy to have my friend, Matt Eshleman, here with us today to ask him some questions about AI adoption.

So Matt, would you like to introduce yourself?

Matt Eshleman: Thanks, Carolyn. It’s good to be here talking with you. And as you mentioned, my name is Matthew Eshleman.

I’m the Chief Technology Officer here at Community IT. I’ve been at Community IT and working with nonprofits in a range of technology solutions for quite a while. And it’s always great to see the new technology solutions and innovations that have really impacted the IT space over the last 20 plus years.

It’s a very different world than when I first joined Community IT.

Carolyn Woodard: Right? Our company used to help offices where people were in an office and needed hardwired outlets for their PCs. We’ve come quite a long way. There are always a lot of changes.

I wanted to have you on today because I know that at Community IT, you are one of the AI enthusiasts, excited about artificial intelligence as a new tool. And we’ve all heard that AI is really fantastic, and there are a lot of use cases that nonprofits are exploring.

AI Downsides? 

But I wanted to ask, are there downsides? Are you hearing from nonprofits that are not as excited about AI?

Matt Eshleman: Yeah. I think it’s a good question. And as you rightly identified, I am a fan of technology and what the new technology solutions can enable us to do.

I remember whenever I first started at Community IT, the big questions were, “Do we need to have email addresses for everybody? This seems like kind of a distraction.” 

I don’t know if AI is kind of going down that same road. It is relatively new, this new tool, and are we going to be able to use it productively?

I think the AI technology adoption, and I think technology adoption in general, just really has sped up. The pace of AI and new technology adoption is just kind of on a rocket path in terms of how quickly it gets out there. And I think at this point, it really feels inevitable. It’s impossible to avoid AI. 

One of the things that we do for our clients is provide lots of cybersecurity services, and we are protecting them against malicious traffic on the web. And as part of that, we’re monitoring and blocking malicious sites. And through that, we get some insight into where clients are going. And we see right now that probably about 80% of organization staff are already going and using AI tools for their work, whether that’s sanctioned or not. 

I think AI is here. It’s in our organizations. And I think even if there’s some reticence to adopt or engage with these tools, I think the reality is that the staff at your organization are already using it. Even the tools that you’re using already are incorporating AI elements into the service.

If we think of AI as like, oh, this thing that we need to go out and procure and build and provide insight into, you know, that’s not really the case. You know, those tools are already here. They’re already in our organization.

Carolyn Woodard: Yeah. In the updates, you know, they’ll come out and say, “oh, now you’ve got this AI part of a tool that you already used.” And there’ll be a little pop up that’ll say, here’s your tour of how to use this AI helper or something.

It’s funny that you mentioned youth, because I think there is a little bit of a generation gap. And I know I’ve told this story before, but I was at a conference where someone older was saying quite authoritatively, “We just don’t allow anyone at our organization to use any public AI. We’ve just locked it down.”

And my reaction was, are you never going to hire anyone in their 20s again? Because I guarantee young people are very familiar and they’re exploring and experimenting with what AI can do. And if you have a young person in your life, just ask them some of these really awesome and interesting ways that AI is being used for fun, for entertainment as well.

And it also reminds me of when social media started. Nonprofits had all these debates over, should you allow staff to be on Facebook during office time? Should your nonprofit have a Facebook page or presence? Should you be on LinkedIn? Should you be on Twitter or TikTok? And often it’s the younger people who say, “of course you should be, that’s where we’re finding our information.”

Opting Out of AI

I think there are nonprofits that, maybe as part of their mission, have a reluctance to contribute to AI. I’m thinking particularly of environmentally oriented nonprofits that are concerned about environmental change and the climate crisis, and might have concerns about AI itself using so many resources.

Can you talk a little bit about how you go about that, if you are a nonprofit that wants to be cautious, or wants to opt out for ideological reasons or value alignment?

Like we just said, the tools are prompting you with these AI updates. Are there steps that you can take to opt out?

Matt Eshleman: Yeah. Well, I think it’s an interesting view. Like is there anything you can do? I mean, I think it’s kind of like the water we swim in has AI infused in it, right?

The analogy that came to me is learning about racism, right? Racism is part of our environment. It’s in the water we swim in, and we need to figure out how to acknowledge it. Like with racism, we can’t say, well, it doesn’t exist, or I’m not going to be influenced by it. AI is just there.

It’s part of our environment, and it’s part of that water we swim in. Unless you’re going to be incredibly disciplined about your users and your data, you’re going to be intersecting with these solutions.

I think it’s helpful to understand, as I talked about in the beginning, users, in the absence of guidance, are already there, and they’re already interacting with these tools.

And many tools themselves are operating on a principle where you need to explicitly opt out as opposed to opting in.

Solutions like Slack have incorporated AI into their platform, and if you want to opt out, it’s not just a checkbox. You actually have to go through a process and email them. They’re making it difficult. We will share the link here to Slack’s documentation with those instructions: Manage Slack AI Settings for your Organization.

And again, the same thing with Google Gemini. If you’re in Google, that is a tool that is enabled by default in your Google Workspace environment. You can opt out: Gemini Apps Privacy Hub.

Frequently Asked Questions about Microsoft Copilot

Zoom is incorporating AI assistance into its platform, and you can opt out.

For the people managing technology and these solutions, it comes back to some of the fundamentals of good IT management: you need to know what platforms your organization is using and how your staff are interacting with them, and then really get out ahead by being intentional about the use and adoption of AI tools in your organization.

There are lots and lots of great AI frameworks out there for how to adopt that governance.

Organizations really need to exercise that discipline and be proactive, because in the absence of making choices, all of these technology solutions are adding AI into the mix and into the products that you are already consuming.

Carolyn Woodard: The links to some of these resources will be in the transcript. And I want to put in a plug as well that we have an AI Acceptable Use Policy template on our website that you can download and adapt for your own organization.

And as you said, there are some AI use frameworks out there for nonprofits specifically, to be intentional about it and to think through what you’re using, why you’re using it, and how you’re using it.

And then as you said, the cybersecurity implications of using those tools. 

Cybersecurity Aspects of AI Tools

From a cybersecurity standpoint particularly, do you have advice on being intentional about what AI you use? If it’s a tool that you’re already using and they’re offering an AI upgrade bot helper, does that make you less secure, or is it perfectly secure if you’ve already secured that tool?

Matt Eshleman: I think one of the benefits of AI, and maybe it’s not a benefit, is that because it is so expensive and requires so many resources, only really big companies are able to execute on it. And those companies, I think, really are incented to maintain at least the public perception, and I think the reality, that they are protecting your data.

I think Microsoft, Google, Salesforce, right? They have really big campaigns and pushes to say, “you can trust us, and here’s all the things that we’re doing to safeguard your data and protect it.”

I do think for most organizations, turning on the AI voice transcription service to summarize meetings and provide action items is a low-risk and high-benefit decision to make. Those productivity enhancements are, I think, what most organizations in the small and mid-size space are using AI tools for. They are recording meetings, summarizing things, providing action items, automating a lot of the tedious things that either don’t get done well or don’t get done at all. For a lot of organizations, those productivity enhancements are really beneficial and worth engaging with AI for.

And I’m probably not qualified to provide answers to some of the bigger-picture existential questions. Okay, so we’re beyond getting meeting summaries and having rewrites of our fundraising letter, and now we’re going to look for data analytics about the clients we are serving. That means providing information to an AI tool, and that data may be surfaced somewhere else.

I think the scale is such that most organizations are going to fly under the radar of any substantive AI data leakage risks. Big organizations that are doing massive research are probably at a much higher risk than a small to mid-sized nonprofit doing work with their case management staff.

Carolyn Woodard: That makes sense. And probably those bigger organizations that have more budget to spend on a bigger research data lake are also going to have stronger cybersecurity protections around that. 

“Private” vs Public AI Tools

I’m wondering if you would help us make the distinction between what feels to me like more internal tools versus external, public tools.

If it’s Microsoft helping you with grammar, or Zoom giving you a transcription, that feels more internal, versus more public tools. We always say any platform can be used insecurely. If you give someone your password over Zoom, then you have now used Zoom insecurely.

But would you say in general that you would advise against nonprofit staff going to more public sites? Or, like I was just saying about teens and 20-year-olds, would you discourage going to the more entertainment-oriented AI uses from a work computer?

Matt Eshleman: Yeah. I think making good choices about what AI tools you’re using and adopting, and how, is important. I would say organizations should probably be using their “corporate AI solutions” as opposed to going to the free or more open versions.

For those that are Microsoft 365 customers, you already have access, even without paying, to a private version of ChatGPT. The interactions that you have with the tool don’t contribute back to the model. Those queries are private.

It’s kind of a use this, not that. If you’re a Microsoft 365 customer, use Copilot under the umbrella of your organizational account as opposed to going straight to ChatGPT.

I think the same would apply for Google. Use it under your Google Workspace account, not the public Gemini.

I think some choices like that can help provide some additional integrity or protection around how organizations are interacting with those large AI solutions.

Intentional Adoption of AI Tools

Carolyn Woodard: I’m hearing a lot from people who are AI enthusiasts, and some of the advice, if you’re interested in it or just getting started, is to just go play around with it. Just try it and find how it can help you with your productivity. It sounds like what you’re saying is that someone at your organization might want to do a training on use this, not that, from your work laptop, and go over some of the issues involved.

You may need to explicitly help people who may not be as tech savvy to understand how they can use these tools within your own Microsoft environment, rather than going out to the web and finding them that way.

Matt Eshleman: Yeah, for sure. 

I think that concept of a working group or a collaboration team that’s really focused on using, adopting, and sharing what they’re learning is a great way to go about it.

We have a Microsoft Copilot Adoption Guide, and that’s one of the concepts we talk about there. I think Copilot is great, and a lot of organizations would get a lot of benefit from it, but at the same time, you don’t need to go and spend $360 per license to license everybody in your organization on day one.

If you are going to opt into AI, be intentional about it, make those decisions, and get those five people or ten people that are really excited. Get them licenses, and then have them work together to figure out what are our good use cases, how are we going to use this, what do we get the most value out of, what are things we need to be aware of?

You’re being intentional about opting into those AI solutions and getting value out of them, as opposed to just having it be the Wild West, with people using free tools that maybe don’t have as tight terms of service around the use of the data that’s being put into them.

Some of the more commercial corporate solutions really are incented to protect your data, because they want to continue to retain you and sell services back to you. If it turns out that the data you’re feeding into the system is being used in other ways, that’s really damaging to trust, and it’s going to be really hard for those organizations to reclaim it.

Carolyn Woodard: Yeah, those larger tools and companies have a lot of tech journalists probing and trying to find out what they’re using the data for. Misuse is more likely to come out with a larger organization and a larger tool than with a smaller tool that you might just be playing around with.

Is the AI Revolution Net Positive or Negative? 

Matt Eshleman: I have some thoughts on the overall question of, “is this AI revolution an overwhelming good or a negative? What are the things to be aware of?” I don’t know if it’s a totally perfect analogy, but when I think about AI adoption, I draw a bit of an analogy between that and phones for kids.

I think it went from being, “hey, phones for kids are great, they give you a way to stay in touch, they can get connected with their community, there are these great tools that allow connection and community building.” And now I think we’re starting to see the swing the other way, to say, “oh, well, maybe it isn’t great that they have these devices that are just sapping their attention all day.”

Carolyn Woodard: Phones are so addictive.

Matt Eshleman: Yeah. In the same way, I think AI is also going through a hype cycle. We haven’t talked about the Gartner hype cycle, but we’re seeing a little bit of the swing back into, “oh, well, maybe all this AI stuff isn’t as fantastic.” These big AI models are built on large language models that have basically scraped Reddit; maybe this isn’t the way to build the knowledge that we want to generate solutions from. What’s your reaction to that?

Carolyn Woodard: That it really is just such early days that we’re all helping them improve their product, right? I do get the sense that eventually you’ll be able to ask an AI bot something and get a reasonably good answer back, but we’re not there yet. And it does remind me of the early days of Alexa and Siri, where they wouldn’t understand you, and if you got an answer, it wasn’t at all what you had been asking for. And now it’s so much better.

Matt Eshleman: Yeah.

Carolyn Woodard: Oh, we did that, right? We trained it to be better.

Matt Eshleman: Well, I think another big takeaway is that technology improves and it’s going to get better. Use it, stick with it. I think in general, I’m still a fan of AI. I think AI tools are technology enablers. And, you know, like the mechanical harvester, using technology that makes our work easier is a good thing.

Overall, in the long run, those efficiency improvements are going to drive adoption. This is something that we’re going to be living with. Unless you’re ready to be Amish and completely reject all modern technology, I think we have to find a way to live with the technology as opposed to rejecting it outright.

Carolyn Woodard: But I think it’s worthwhile to acknowledge that change is hard, especially dramatic change. It’s funny you mentioned the Amish, because I was just watching a period drama set in the 1940s. One of the characters was walking through a barn and saying that just 10 years earlier, the barn had been full of Clydesdales, because all of the farming had to be done with horses.

And it was really not even a generation, it was within four or five years that everyone was using tractors. And then what happened to all of the horses? The character in this show was very wistful about the horses. You don’t need them anymore. It was a huge change in a very short time. 

And I think it’s not bad to feel nostalgic about the way things were, while still saying, this is a change. Clearly, there are economic, informational, and technological reasons to make the change to use AI. Adapting as we go along, I think, is useful.

Well, I think that’s all just great advice. We will put a bunch of links in the transcript to this podcast and have some more information on our site. Thank you so much, Matt, for joining me today.

I really appreciate your time and thanks for making us smarter about AI.

Matt Eshleman: Great. Thanks, Carolyn. Happy to be here.

Photo by Daniele Levis Pelusi on Unsplash