AI and Cybersecurity at Nonprofits, Hosted by Jitasa University
Join cybersecurity expert Matt Eshleman for a short presentation on AI: new AI tools for nonprofits, some use cases, things to watch out for, new cybersecurity fraud and scams enabled by AI, and ways to protect your organization.
In this video, you will learn:
- What AI is and what nonprofits need to know about it
- Use cases for AI at your nonprofit
- New cybersecurity fraud and scams enabled by AI, and how to avoid becoming a victim
This video is designed for financial professionals in nonprofits, from CFOs to accountants. If you want to know how AI is going to impact you and your nonprofit, from your work life to cybersecurity, please review this introduction to AI and cybersecurity at nonprofits, hosted by Jitasa University.
View Video
Subscribe to our YouTube Channel here
Listen to Podcast
Like podcasts? Find our full archive here or anywhere you listen to podcasts: search Community IT Innovators Nonprofit Technology Topics on Apple, Google, Stitcher, Pandora, and more. Or ask your smart speaker.
What is Jitasa?
Jitasa is a network providing accounting services, CFO services, and nonprofit tax advice across the US. Jitasa partners with nonprofit associations and federations to provide bookkeeping and accounting services to members of their network.
Jitasa also partners with vendors, consultants, and companies like Community IT to provide resources on financial topics and support to the nonprofit community, including a free resources area of their site.
Community IT is always looking for opportunities to provide nonprofits technical tips and insights from our 20+ years of serving this sector. Matt Eshleman was happy to sit down with Jitasa to introduce the ways nonprofits are using AI and answer questions. Learn more on AI and cybersecurity at nonprofits in the transcript below.
As with all our webinars, these presentations are appropriate for an audience of varied IT experience.
Community IT is proudly vendor-agnostic and our webinars cover a range of topics and discussions. Webinars are never a sales pitch, always a way to share our knowledge with our community.
Presenter:
As the Chief Technology Officer at Community IT, Matthew Eshleman leads the team responsible for strategic planning, research, and implementation of the technology platforms used by nonprofit organization clients to be secure and productive. With a deep background in network infrastructure, he fundamentally understands how nonprofit tech works and interoperates both in the office and in the cloud. With extensive experience serving nonprofits, Matt also understands nonprofit culture and constraints, and has a history of implementing cost-effective and secure solutions at the enterprise level.
Matt has over 22 years of expertise in cybersecurity, IT support, team leadership, software selection and research, and client support. Matt is a frequent speaker on cybersecurity topics for nonprofits and has presented at NTEN events, the Inside NGO conference, Nonprofit Risk Management Summit and Credit Builders Alliance Symposium, LGBT MAP Finance Conference, and Tech Forward Conference. He is also the session designer and trainer for TechSoup’s Digital Security course, and our resident cybersecurity expert.
Matt holds dual degrees in Computer Science and Computer Information Systems from Eastern Mennonite University, and an MBA from the Carey School of Business at Johns Hopkins University.
He is available as a speaker on cybersecurity topics affecting nonprofits, including cyber insurance compliance, staff training, and incident response. You can view Matt’s free cybersecurity videos from past webinars here.
Matt always enjoys talking about ways cybersecurity fundamentals can keep your nonprofit safer. He was happy to be asked to give this short presentation on AI and cybersecurity at nonprofits for Jitasa.
Transcript:
Introduction: AI and Cybersecurity for Nonprofits
Jon Osterburg: Welcome to Jitasa University, where our goal is to empower nonprofits through free educational content. These short videos are all provided by trusted partners within our network as well, so each topic will be relevant to every organization, regardless of size. Let’s dive right in.
This month, we’re featuring Matthew Eshleman, the Chief Technology Officer at Community IT Innovators. As a 100% employee-owned and managed outsourced IT services provider, they exclusively assist nonprofit organizations in utilizing technology to accomplish their missions and have been doing so for over 20 years. Let’s pass things over to Matt and get started with the content.
Matthew Eshleman: Great. Thanks for that introduction, and I’m looking forward to talking a little bit more about AI and some of the intersections of AI and cybersecurity.
My name is Matthew Eshleman, I’m the Chief Technology Officer at Community IT. I’ve been with Community IT actually for a little over 22 years. It’s been a long time working with a really wide range of nonprofit organizations in a variety of different sectors, mostly in that small to midsize space and it’s great to get that perspective and variety of client experiences.
We get to work with great clients and also have really great fellow employee-owners. And, we are recognized as one of the top 501 MSPs in the United States, an honor that we’ve received for the last several years and again in 2023.
Introduction to AI
As we think about AI and innovation and all the changes that it’s brought to the sector it’s helpful to have a little bit of background information of just how far we’ve come. On the one hand, the pursuit of artificial intelligence has been going on for decades. I mentioned that I’ve been at Community IT for 22 years. Before that, I was a computer science major. I had classes in AI. And so, this topic has been around for quite a while.
There have been a couple of things in recent history that show us just how far we’ve come. One is that the famous artificial intelligence benchmark named in honor of Alan Turing, the Turing test (a test of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human), was passed in 2022. That means conversational AI has arrived: you can’t tell if you’re talking to a computer or a person. That’s a hallmark the sector achieved in 2022.
I think it’s not an overstatement to say that ChatGPT, OpenAI’s new technology, is seen as revolutionary. It’s caught people’s imagination in a way that no other new technology platform has. That’s evidenced by the dramatic growth in user adoption the ChatGPT platform has had: ChatGPT gained 100 million users in its first two months after launch. It’s really tremendous end-user adoption that has made it the topic of conversation everywhere.
At the end of 2022, we weren’t really thinking about AI at all. Then all of a sudden, by January of 2023, it was everywhere, driven by that dramatic increase in AI adoption. How easy it is to use, and the fact that it’s free, obviously didn’t hurt either. It’s really been a dramatic ramp-up in AI adoption.
AI Tools for Nonprofits
Every organization has users that are getting experience with the platform and tools. And I think it’s going to be a big driver in how we shape our organizational work in the future in the nonprofit space.
There’s a whole range of public AI tools and resources, the big name one being ChatGPT. It’s free, it’s available. You can just go to the website and type in some prompts and get responses right away. That’s under the umbrella of OpenAI.
Google has its own AI platform, which has recently been rebranded as Gemini from what used to be called Bard. There’s lots of name-changing in the space. But there are lots of standalone tools publicly available for organizations to get started with and experiment on.
The other thing we’re seeing is a lot of AI, or artificial intelligence, being added into the tools and products you are already using. For example, there’s the Zoom AI Companion. You didn’t have to sign up or do anything extra; the AI assistant is built in and available. AI is something companies are using to provide additional features and engagement and to make their products easier to use. We’re seeing AI show up in the tools we already have installed, without our necessarily setting out to add new ones.
There’s certainly a whole set of AI tools that are geared towards enterprise or specific product features that you can go out and pursue. There’s Copilot in the Microsoft world. You can probably think of a number of other ones where you can buy and sign up and add additional licensing to give you new generative AI capability within your existing platform. Whether that’s to help you generate and write content easier or to improve the code development process, there’s lots of specific tool sets being developed and marketed for the enterprise to maintain and improve efficiency.
We’re seeing a lot of excitement and interest in the nonprofit sector in how to use AI intentionally to achieve the mission and be a lot more efficient. That’s a big driver as well, particularly in the corporate sector: how can we be more efficient? How are companies using AI tools to gain efficiency in their operations?
That matters for the nonprofit sector because there’s just so much work to do, and nonprofits have historically been under-resourced. There’s never enough time, never enough people, never enough capacity to really do all the work that needs to be done.
My hope and my expectation is that nonprofit organizations are going to be able to engage and leverage these AI tools to close the gap between the mountain of work that needs to be done and what they have the capacity to do as an organization.
Maybe they don’t have all the human resources available; hopefully this can be a way to give organizations more capacity to address the needs that are out there.
Should Your Nonprofit Invest in AI Tools?
For organizations considering AI tools and thinking about AI adoption, here are some questions to answer:
- What does it mean for you?
- How do I know that the content I’m reading is generated by a person? Or generated by AI? Does that matter?
- If I’m making judgments on grant applications or funding, do I need some qualifier if I’m using AI tools to evaluate them? How that information is being processed raises a lot of questions, and I think it’s important for organizations to engage intentionally and be deliberate in their adoption and use of these really powerful technology tools.
In terms of what’s next in adopting AI, there’s all these questions:
- What are those AI tools that nonprofits should consider this year and for the future?
- How are nonprofits utilizing AI?
- What do decision-makers need to know about AI adoption?
- There’s a significant security aspect to this as well. How are hackers or threat actors using AI?
- What policies and staff trainings are needed as these AI tools become ubiquitous?
This is moving very, very fast, and a lot of organizations are really in a position to play catch-up when it comes to figuring out how they are going to use these tools. Some organizations are saying, “Well, hey, we really need to be slow and deliberate.”
As a managed service provider that supports over 7,000 endpoints, we do a lot of security work. We get reporting and analytics on where devices are going, and at many organizations, well over half the computers are making visits to ChatGPT.
That’s an ungoverned public resource that staff at nonprofits are visiting in the absence of any blocks or policy guidance. Organizations need to appreciate that staff are innovative and creative. They want to use the latest and greatest things out there. So it’s likely that staff at the organization are already taking advantage of some of these AI tools even in the absence of an organizational AI policy. We’ll talk a little bit more about some resources to help guide that coming up, but I do want to take a little more time to talk about some of those different scenarios and some of the early lessons learned in adopting these tools.
Among the AI tools right now, Microsoft Copilot in particular is fantastic. It’s a really neat product, but it’s not necessarily a silver bullet. It’s not a digital assistant where you wave a magic wand and everything just works. It’s designed to work alongside somebody; it’s not a completely autonomous system.
AI tools are good, they’re inventive, but they’re not perfect. They still need to be used under advisement, and you need to review everything that comes out of them. There have been lots of situations where AI has invented research citations or gone off the rails and invented content or resources. You can’t just set it free, expect it to do the work, and come back to find everything fine. AI needs to be monitored in how it’s being used.
Using AI Tools
We’ve found the most benefit from these tools in automated meeting note-taking, generation of task lists, and generation of action items. I find that super helpful. I always want to take meeting notes, but I always get distracted by the conversation. Being able to have the AI assistant transcribe the meeting and then go beyond transcribing to ask: What’s the meeting summary? What are my action items? How do I follow up? That’s been invaluable because it seems to be pretty accurate, and it replaces work that otherwise takes a lot of manual time. That’s one good use case we’ve seen.
It’s also been helpful in the work we do with code generation, speeding up that process and providing some consistency in the code and scripts we’ve developed. So that’s been a great use case, along with some general content generation.
For organizations starting on proposals, grant applications, or general policy work, it can take time and energy just to get those processes started. I’ve found some Copilot tools helpful in just getting the ball rolling. I haven’t relied on it to produce something whole cloth, from start to finish. But it is great to help you get started, make some edits, or review a policy document and see what may be missing.
That’s some of the good use cases that we’re seeing organizations take advantage of now. But, again, there’s lots of promise for future capabilities. For organizations that are just getting started, it’s great to go into that process with a blue skies approach, being open to the possibilities, having a small group involved in AI adoption and providing them resources and tools to get started.
AI Policies
Specifically with that, getting started, there are a couple resources I want to highlight. The first is an AI policy template. It’s important for organizations to have that policy foundation in place to help guide future decisions. In the absence of policy, everything is just free range.
https://communityit.com/template-acceptable-use-of-ai-tools-in-the-nonprofit-workplace/
You can go through a process of developing a framework for your organization.
- How are we going to adopt it?
- What are the guardrails we have in place?
- What areas are acceptable for us to use these tools?
- What are some prohibited areas for using these tools?
Starting with the policy gives an organization a way to come together and talk through the different scenarios, the different possibilities that exist, and get something written down so that you can make decisions. I think it’s helpful if you can make decisions in a more systematic way moving forward.
We have a policy template that’s available through our website as a free download.
I would also highlight that the Technology Association of Grantmakers, or TAG, has done a lot of work on an AI adoption framework. Theirs is a much more sophisticated model and goes into things like:
- How would we potentially use this tool to evaluate grant applications?
- How do we identify and address potential areas of bias within the AI models that exist?
It’s a much more extensive approach, so I think that’s a great resource to reference as well. They’ve engaged some very thoughtful folks around the framework, and I think it’s well done. It’s a great place to get started.
Microsoft, as well, has their Microsoft AI Compass. That’s a resource for nonprofits to go through and fill out some basic information about your organization, the sector that you’re in, how mature you are in different areas, different process information. Then it’ll give you a report that highlights different areas where your organization may be able to take advantage of some of the AI tools.
Like I said, a deliberate approach to these AI adoption initiatives is really important, so that you don’t get into a situation where you unintentionally disclose private information into the models, or deploy tools a little too early and expose information internally that isn’t supposed to be available.
These AI tools are really powerful. At least in the Microsoft world, they have access to everything that you have access to as a user. So if you’ve been relying on security by obscurity, just hiding things, and all of a sudden you put an AI bot on the system that’s able to scan and discover information, that could be a problem.
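To make that concrete, here is a hypothetical sketch of the kind of pre-rollout audit an IT team might run: querying Microsoft’s Graph search endpoint with a single user’s sign-in token. Anything that comes back is content that user (and therefore an AI assistant acting on the user’s behalf) can already read. The search term and token handling are illustrative placeholders, not a prescribed procedure:

```typescript
// Hypothetical pre-rollout audit: what can this one user's token already see?
// The Microsoft Search API (POST /v1.0/search/query) returns only content the
// signed-in user has permission to access, the same scope an AI assistant gets.
const accessToken = "<user-scoped OAuth token>"; // placeholder, not a real token

const response = await fetch("https://graph.microsoft.com/v1.0/search/query", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${accessToken}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    requests: [
      {
        entityTypes: ["driveItem"],
        // Any document mentioning "salary" that this user can open will match,
        // whether or not anyone ever meant for them to find it.
        query: { queryString: "salary" },
      },
    ],
  }),
});

console.log(JSON.stringify(await response.json(), null, 2));
```

If a search like this surfaces files that were only ever hidden rather than actually permissioned, that is the security-by-obscurity gap to close before turning an AI assistant loose.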
AI policies are designed to help organizations establish that framework to make sure you keep the integrity of your organization’s data secure and give you a roadmap to move forward.
Risks from AI in Cyberhacking
We’ve been talking about how great AI is and how fantastic all these tools are. But I think it’s also important to consider the risk side of AI as well, specifically for the finance department.
We’re seeing continued rising levels of cybercrime. It’s important for organizations to understand that cybercrime, in the vast majority of cases, is financially motivated. It’s a business, and there are threat actors who are in this to make money. Finance departments and finance staff are heavily targeted by cybercriminals because they have access to the organization’s finance and banking information, and that’s where these actors exploit weaknesses in the system.
The numbers are staggering whenever we look at it. Consumers lost $10 billion to scams in 2023. That’s FTC data, so that is actual money out the door that folks were scammed out of last year.
And just in the last month, there was an example of a $25 million loss. An organization in Hong Kong was scammed out of that money through the use of a deepfake video. A threat actor was able to build a model of the organization’s CFO, joined a meeting as that CFO, and authorized the transfer of $25 million into an account the hacker controlled. That example really highlights the need for good systems and processes to authorize changes to financial payment and banking information. With deepfake technology, just being on camera with somebody may not be enough.
Organizations really need to have clear systems and processes in place to verify changes to payment information. Make sure those things are validated, and make sure you have high trust in the folks that you’re working with.
We’re talking about cybercrime and the financial motivation behind it, and the fact that while AI is a fantastic productivity tool for us, the threat actors and the bad guys are using those tools as well. They can use it to write better phishing emails. They can use it to deepfake a video call with your finance team. And it’s a pretty good payoff: for a few hours’ work, they’re able to generate a $25 million return.
Having an understanding of this financial dimension of cybercrime is important because it helps put things in perspective. You’re not being spared because you’re a nonprofit that does good work. You’re being targeted because you’re an organization that has money, and maybe you haven’t been able to invest in some of the cyber controls that more sophisticated or larger organizations have.
Cybersecurity for Nonprofits
There are some things you can do as an organization to keep yourself more secure. I want to highlight a couple of threats and some additional controls.
Adversary in the Middle Attacks
Industry data shows there are sophisticated attacks out there that are able to steal money. That usually starts with an attack on an individual’s identity. Attackers typically try to hack somebody’s account, appear as that person, and then manipulate conversations within the organization. We’ve seen this occur in the nonprofit space using some pretty sophisticated Adversary in the Middle (AitM) attacks. That means attackers can now steal MFA tokens even if you’re using the authenticator app on your phone or an SMS message. This occurred last year and is something we’re seeing among the clients we support. There are some ways to protect against it.
FIDO Keys
And so for organizations, particularly in the finance department, or if you have targeted staff, moving to physical security keys is an important step. You’ll also see these called FIDO keys. Physical security keys are a way to provide an essentially phishing-proof MFA method: a little physical token that you plug into your computer and tap whenever you need to authenticate. That’s an important transition step organizations can take to keep raising their level of protection, particularly in the finance department, where we see the most risk.
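For the technically curious, the open standard behind these keys is FIDO2/WebAuthn, which browsers expose through navigator.credentials. Below is a minimal registration sketch; the organization name, user details, and client-side challenge are illustrative placeholders (in a real deployment the challenge is issued and verified by your identity provider, and most organizations simply enable security keys in their IdP rather than writing this code):

```typescript
// Minimal WebAuthn (FIDO2) registration sketch: the browser API behind
// physical security keys. Runs in a browser over HTTPS.
const createOptions: CredentialCreationOptions = {
  publicKey: {
    // Placeholder: in production this random challenge comes from, and is
    // later verified by, the identity provider's server.
    challenge: crypto.getRandomValues(new Uint8Array(32)),
    rp: { name: "Example Nonprofit" }, // hypothetical relying party
    user: {
      id: new TextEncoder().encode("user-1234"), // hypothetical user handle
      name: "finance@example.org",
      displayName: "Finance Staff",
    },
    pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
    authenticatorSelection: {
      authenticatorAttachment: "cross-platform", // a roaming key, e.g. USB
      userVerification: "preferred",
    },
  },
};

// Prompts the user to insert and tap their security key.
const credential = (await navigator.credentials.create(
  createOptions,
)) as PublicKeyCredential;
console.log("Registered credential ID:", credential.id);
```

What makes this phishing-resistant is that the browser binds the key’s signature to the real website’s origin, so a look-alike site can’t relay the credential the way an AitM proxy relays a one-time code.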
Cyber Insurance Requirements
Organizations also need to keep reviewing their cyber liability insurance requirements. We’ve seen year-on-year renewal rates flatten out a little bit, but we’re also seeing a dramatic increase in the required security controls. Just having MFA may not be enough. Maybe you need to have MFA and also backups of your data. You need to have sophisticated endpoint protection. Maybe you also need monitoring of your digital identities.
The number of controls that organizations are being asked to include as part of their insurance policy continues to increase because the amount of cybercrime, particularly financially motivated cybercrime, continues to increase year-over-year. There’s an insurance dimension here that is driving a lot of the cybersecurity compliance requirements we see.
Training
I would say the last point is that underlying all of this is training, training, and training. Get your staff engaged so they understand the risks they face as individuals, as an organization, and in their unique roles.
Investing in an engaging online training platform is really important because you can have a lot of technical controls, but if staff can be tricked into updating payment information or making wire transfers to organizations you’re not 100% confident in, it’s easy to subvert those technical controls.
Investing in a training program builds on that policy foundation we want to see: educating staff, making them aware of these different attacks, and then showing them what they look like. What does it look like to get a spoofed message? What does it look like to review a link and make sure it’s actually legitimate and coming from a trusted sender?
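As a toy illustration of one habit that training builds, the sketch below (with made-up example domains) flags a link whose visible text looks like a domain but doesn’t match where the link actually points, a classic spoofing pattern:

```typescript
// Toy check: does a link's visible text match its real destination?
function linkLooksSuspicious(displayText: string, href: string): boolean {
  const hrefHost = new URL(href).hostname.replace(/^www\./, "");
  // If the visible text looks like a domain, compare it to the real target.
  const shown = displayText.match(/([a-z0-9-]+\.)+[a-z]{2,}/i);
  if (!shown) return false; // text like "click here" needs different checks
  const shownHost = shown[0].toLowerCase().replace(/^www\./, "");
  return !hrefHost.endsWith(shownHost);
}

// Displays "paypal.com" but actually points at an attacker-controlled domain:
console.log(
  linkLooksSuspicious("paypal.com", "https://paypal.com.evil.example/login"),
); // true
// Text and destination agree:
console.log(linkLooksSuspicious("paypal.com", "https://www.paypal.com/signin")); // false
```

Real training platforms teach this by simulated phishing campaigns rather than code, but the reflex is the same: check the actual destination, not the label.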
Talking a little bit about cybersecurity best practices:
- Organizations should make sure that they have a good cyber liability insurance plan in place.
- Organizations should make sure that they have a good training program in place, particularly for finance and operations staff.
- If you have not implemented MFA widely, that needs to be something that’s on your list.
- If you’re going to do it, it’s worth going all the way to the most secure method, physical security keys, to do all you can to protect the digital identities of your staff.
That’s a lot, and we covered a lot on the topic of both AI and how AI is intersecting and causing some additional risk in the cyber world. I want to thank you for your time, and I think we may have time for a few questions.
Jon Osterburg: Awesome. Thanks so much, Matthew. That was super informative. I definitely agree it’s not about if you’re going to get a cybersecurity attack or a phishing scam, it’s about when at this point.
Q and A
Jon Osterburg: The first question is whether you feel there’s any risk associated with not pursuing AI in your business today. Are people falling behind because of this, or are they okay to wait a little bit longer and see how everything plays out?
Matthew Eshleman: The risk certainly would be in the area of falling behind. An interesting stat I’ve seen around AI adoption is that independent contractors are adopting AI at a much faster rate than everybody else, because they see the direct benefit it has for their own productivity and efficiency.
So yes, organizations that are adopting AI are going to be leaders in their space. You don’t need to rush out and turn it on for everybody. I think having a deliberate approach is important. But organizations that really are hands off and saying, hey, we’re not going to do this at all, are going to fall behind just because of the dramatic efficiency improvements that other people are going to see.
Jon Osterburg: Yeah, it makes total sense. There’s a good business case for it in a lot of different places. The next question I had is,
What do you think of this: instead of focusing on broader AI like ChatGPT or Bard, or even Copilot in some cases, we’re seeing a lot of software applications integrate AI directly within their products; Zoom is a good example, and you use its summaries. Instead of saying we’re going to use ChatGPT to try to find efficiencies, what about seeking out the AI that’s integrated in the products we already use?
Matthew Eshleman: I think that’s probably going to be an unavoidable productivity enhancer that organizations are going to face. From a policy standpoint, and from a disclosure standpoint, I do think it’s important for organizations to make that clear: “Hey, we’re using AI Companion. We are going to record this meeting; we’re going to share it. This is something we’re going to do as an organization.” So having that policy framework for how and where we’re going to use AI, and how we’re going to disclose it, I think is important.
On the efficiency side, organizations should look at their workflow. What are your processes? Where do you spend the most time? How can they be automated or improved, and what can AI do for us in these areas? That’s probably a good way to look at it.
Obviously, if your organization doesn’t write a lot of content from scratch, then maybe Copilot isn’t helpful for you. But maybe there are some built-in tools or add-ons within the existing products you’re using that are really going to offer more efficiency. That’s where you want to invest: in what’s going to give you the biggest return on investment from a productivity standpoint.
Jon Osterburg: Yeah. We’ve definitely gotten a lot of use out of the Zoom summaries and the to-dos and takeaways, because we would draft a full email and send it to our clients afterwards anyhow. Now that’s essentially partially drafted for us, which is great.
My last question is whether you’ve got any cautionary tales about assumptions AI has made, or mistakes it has made, in the course of generating content for you?
Matthew Eshleman: I don’t have any direct examples of things where I’ve been led astray by AI. I think right now, the tooling is still pretty immature. I use Copilot, we have licensing for that. I’ve definitely been underwhelmed on a regular basis when I go to use it and submit prompts and it’s like, “we’re not available right now,” or “I can’t give you that information.” Like I said, it’s still a very new product, so proceed with caution. Don’t rely on it to generate that term paper at the end of the semester, because it just might not be available.
There are certainly examples where it has invented references or citations. Again, if you’re using it as a tool to make your case and relying on it to build a data-driven decision model, you definitely need to take time to validate the output, because it’s not 100% perfect yet.
Jon Osterburg: I have one story that someone shared with me. They were asking an AI about old NFL football games, and it made reference to a player and said the player scored a touchdown in the game. Then the person went back and fact-checked, and the player was on injured reserve during that game. It was weird that it had created that somehow from its database. So, use it as a way to generate a first draft, but then read through it critically, as you would anything else you were going to put your name on.
Well, thank you so much for this overview and your insights. We appreciate having you back here, as always.
We hope you took away some valuable information on what’s next for nonprofits and AI tools. If you had a question for Matt that didn’t get answered, feel free to email him at cybersecurity at communityit.com.