What will we be talking about next year?
Matt Eshleman, Community IT CTO and cybersecurity expert, sat down with Carolyn recently to discuss what he’s hearing about and expecting from AI in the upcoming year. In his role as CTO he interacts with all of our clients and also plays a foundational role in adopting new technologies internally at Community IT.
Carolyn wanted to know what questions clients have, particularly about the ways AI impacts their cybersecurity risks. Not surprisingly, Matt recommends creating policies that address the way your staff uses AI – if you haven’t updated your Acceptable Use policies recently, AI concerns are a good reason to do that. He also recommends taking an inventory of your file sharing permissions before AI surfaces something that wasn’t secured to the correct staff level.
Community IT has created an Acceptable Use of AI Tools policy template; you can download it for free here. The Technology Association of Grantmakers has published a free framework for nonprofits using AI tools available here.
Matt points out that in January 2023 he wasn’t yet talking about ChatGPT. The exponential growth in AI tools this year should keep us paying attention; no one knows what we’ll be talking about in January 2025.
“My hope is that we can really use these AI tools to help build bridges between towers of knowledge that we couldn’t figure out on our own. I think there’s lots of really smart and thoughtful people that are doing a lot of good work in this area. It’s important to read and understand and process and be open to the conversation about AI. I really hope that AI can be an enabler of technology to help us, to shape our world into what we want it to be and not just a way for corporations to reduce the value of our individual creativity. I’m optimistic about the technology and the benefits that we’re going to receive from it.” – Matt Eshleman.
Listen to Matt’s thoughts on the potential for AI to impact the nonprofit world, and the work we have to do to keep our organizations ready for the challenges.
Listen to Podcast
Like podcasts? Find our full archive here or anywhere you listen to podcasts: search Community IT Innovators Nonprofit Technology Topics on Apple, Spotify, Google, Stitcher, Pandora, and more. Or ask your smart speaker.
Presenter
As the Chief Technology Officer at Community IT, Matthew Eshleman leads the team responsible for strategic planning, research, and implementation of the technology platforms used by nonprofit organization clients to be secure and productive. With a deep background in network infrastructure, he fundamentally understands how nonprofit tech works and interoperates both in the office and in the cloud. With extensive experience serving nonprofits, Matt also understands nonprofit culture and constraints, and has a history of implementing cost-effective and secure solutions at the enterprise level.
Matt has over 22 years of expertise in cybersecurity, IT support, team leadership, software selection and research, and client support. He is a frequent speaker on cybersecurity topics for nonprofits and has presented at NTEN events, the Inside NGO conference, the Nonprofit Risk Management Summit, the Credit Builders Alliance Symposium, the LGBT MAP Finance Conference, and the Tech Forward Conference. He is also the session designer and trainer for TechSoup’s Digital Security course, and he is our resident cybersecurity expert.
Matt holds dual degrees in Computer Science and Computer Information Systems from Eastern Mennonite University, and an MBA from the Carey School of Business at Johns Hopkins University. He was happy to sit down for this discussion of nonprofits, AI, and cybersecurity.
Transcript: Nonprofits, AI, and Cybersecurity
Carolyn Woodard: Welcome everyone to the Community IT podcast. My name is Carolyn Woodard. I’m the Outreach Director for Community IT. I’m here today with Matthew Eshleman, who is our Chief Technology Officer. He wants to tell us a little bit more about AI and what’s happening with artificial intelligence in the nonprofit space. So, Matt, take it away.
Matthew Eshleman: Thanks Carolyn. It’s good to have another conversation with you, and especially on this topic, which seems to be unavoidable here in 2023. It’s the topic that everybody is discussing.
What’s on my mind recently is the sheer pace of adoption of these AI tools. What really struck me was the realization that ChatGPT reached 100 million active users in a period of two months. That’s why we’re all talking about it. AI has been around for a long time; I took an AI class in college 20 years ago.
This year generative AI went from basically zero adoption to a hundred million users in a period of two months, which is just staggering by comparison: TikTok took nine months and Instagram took two and a half years to reach that same level of user adoption. It’s really stunning just how fast that’s occurred, and I think that’s part of the reason why it’s everywhere and everybody’s using it.
Carolyn Woodard: My son had to put a summary of his resume together, and we put it into ChatGPT the day before yesterday. Then he worked on the summary from its draft, and it’s a good skill for him to learn to use as well. Young people today will be using it.
Microsoft and AI Products
I know you said that you recently went to a Microsoft training on AI and what’s coming at Microsoft, what were your impressions or realizations at that training?
Matthew Eshleman: I think the big thing to realize is that AI is phenomenally expensive to develop and to operate. One of the big investments that Microsoft has made in AI is simply making the Azure platform available for OpenAI to run and train the models behind ChatGPT. It’s incredibly compute-intensive to generate all of this. As users we’re really disconnected from that; if you’re paying $20 a month you don’t see it, but it’s really expensive to develop.
I think Microsoft and the other big players have an incentive to encourage adoption and get corporations and enterprises and governments to use their platform. They have a real incentive to make people feel comfortable using the tools and putting information into the systems.
It’s important for nonprofits to have good adoption guidelines. I think companies are trying to be transparent about the ethics and how these systems are developed, and maybe providing some legal cover for the output generated by the tools. There’s a real incentive for corporations to make people feel comfortable using AI.
I think the other thing to understand is that while yes, ChatGPT reached a hundred million users in two months, the real promise of AI being available to everybody in business to help you generate your PowerPoint or analyze your spreadsheet is still a little way off.
While it is true that Copilot, which is Microsoft’s enterprise add-in, became available on November 1st, it was only available to enterprises; you had to have a minimum order of 300 licenses. Copilot is not available to small to medium-sized nonprofits right now. There are some other AI tools that we can use, but the big Microsoft tools are not yet available to small and mid-sized organizations.
Microsoft’s Copilot
Carolyn Woodard: Can you talk a little bit more about what Copilot is? A lot of Microsoft tools already have AI within them, summarizing this or giving you a transcript of that. How is Copilot different?
Matthew Eshleman: I’m not sure I even have all of the branding and acronyms and services down correctly, but Copilot in general is Microsoft’s enterprise AI, designed to protect the data an organization has.
While Copilot has been informed by the large language models and has built up its expertise through these other tools, it is not exposing your information or your requests back into those platforms. It’s a self-contained environment.
Microsoft talks about how in Copilot, if you’re making a request, that request is first grounded in your organization’s identity and access management roles. It looks at the permissions that you have and the files and data you have access to, then it normalizes that, sends it out, gets feedback, recontextualizes it, and then provides a response. So it’s AI for the corporate world that includes permission controls and additional context enhancements, understanding that you’re doing this work for an organization as opposed to just making these requests on your own.
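To make that grounding flow concrete, here is a minimal, self-contained sketch of the general pattern Matt describes: resolve the user’s roles first, then build the model’s context only from data those roles permit. Every name in it is hypothetical; this illustrates the idea, not Microsoft’s actual Copilot implementation.

```python
# A minimal, self-contained sketch of permission-grounded retrieval.
# Everything here is hypothetical; it illustrates the pattern described
# above, not Microsoft's actual Copilot implementation.
from dataclasses import dataclass

@dataclass
class Document:
    name: str
    text: str
    allowed_roles: set  # roles permitted to read this document

def grounded_context(user_roles: set, query: str, corpus: list) -> list:
    """Return text only from documents the requesting user may read."""
    return [
        doc.text
        for doc in corpus
        if doc.allowed_roles & user_roles and query.lower() in doc.text.lower()
    ]

corpus = [
    Document("salary-plan.xlsx", "Updated salary plan draft", {"hr", "executive"}),
    Document("annual-report.docx", "Public annual report text", {"staff", "hr", "executive"}),
]

# A staff member searching "plan" sees nothing from the salary file;
# the AI prompt would be assembled only from this filtered context.
print(grounded_context({"staff"}, "plan", corpus))  # []
print(grounded_context({"hr"}, "plan", corpus))     # ['Updated salary plan draft']
```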
What About Your Data Permissions?
Carolyn Woodard: It’s funny, because when we do webinars and talk about migrating your files, we talk a lot about knowing what you have and maybe getting rid of files that are really old. Copilot can plumb the depths of the records and files you have going back years and get more insights from them, potentially, within that data pond that is self-contained to your own data.
Matthew Eshleman: Correct. I think for organizations, one of the best things that you can do now, if you haven’t already, is start with that data inventory: understand what data you have, what systems it is in, and where you might have sensitive information, that personally identifiable information.
Then make sure you’ve got the appropriate permission models in place to ensure that only people who need to have access to that data actually do have access to that data. I think a number of years ago Microsoft had an early version of AI, a platform called Delve, which I really liked. It was great because it would surface information about colleagues that you work with, what files they were working on. Unless you had good permissions in place, it would surface information like, “Your boss is working on an updated salary plan.”
Before, that was hidden by obscurity. Some of these tools surface that information, and unless the permission model is in place to say, “these people have access to this information,” the AI tool is going to give access to whatever you give it access to.
We need to be intentional about putting up guardrails. We need to have good permission structures in place so that we don’t inadvertently end up surfacing information that we would rather keep private. Right now, it’s hidden by obscurity, but these AI tools are going to be able to really cut through a lot of that, and they will adhere to the permission models that exist. Many organizations have rather lax permission models and they’re going to find out things that they weren’t quite ready for.
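One practical way to start that permission review is to script it. The rough sketch below uses the Microsoft Graph API to list who has access to each file at the root of a drive. It assumes you already hold an OAuth access token with the Files.Read.All permission and know the drive ID, and it omits paging and error handling, so treat it as a starting point rather than a finished audit tool.

```python
# Rough sketch of a file-permission inventory using the Microsoft Graph API.
# Assumptions: you already have an OAuth access token with Files.Read.All
# and a drive ID; pagination and error handling are omitted.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def list_root_permissions(token: str, drive_id: str) -> None:
    headers = {"Authorization": f"Bearer {token}"}

    # Items at the root of the drive (real code should follow
    # @odata.nextLink to page through large folders).
    items = requests.get(
        f"{GRAPH}/drives/{drive_id}/root/children", headers=headers
    ).json().get("value", [])

    for item in items:
        # Who has been granted access to this item, and with what roles.
        perms = requests.get(
            f"{GRAPH}/drives/{drive_id}/items/{item['id']}/permissions",
            headers=headers,
        ).json().get("value", [])
        for perm in perms:
            granted = perm.get("grantedToV2", {})
            identity = granted.get("user") or granted.get("group") or {}
            who = identity.get("displayName", "sharing link / other")
            print(f"{item['name']}: {perm.get('roles')} -> {who}")
```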
AI and Automations
Carolyn Woodard: I know we were just recently at a conference and a lot of people were talking about the potential of automation with records or with reporting. For example, a foundation might want to do automated reporting on grants.
Can you talk a little bit about the applications of AI that could be automated that would be really interesting to nonprofits, even nonprofits of the size that we work with? So under that 300-staff level (that can use Copilot)?
Matthew Eshleman: I think there’s certainly lots of potential for what AI can do and can automate.
Taking an initial step back: when I started at Community IT over 20 years ago, we did a lot of things manually. In our case, for managed services, we were automating the things that we used to do by hand. Running disk cleanup, installing software, running system updates, performing system maintenance: that used to be part of my scheduled tasks. I would go around to all the computers and run updates and do cleanup and clear out internet history and all of that. Then we got a tool that allowed us to automate it and save a lot of time. In the same vein, and maybe this is completely evolutionary, the capacity that AI is going to give us is helping us automate and improve the work that we do and be more efficient at it.
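As an aside, the kind of scheduled maintenance Matt describes is exactly what a short script can take over. Here is a small illustrative example in Python that clears out files older than thirty days; the directory name is a placeholder, so test carefully on scratch data before pointing this at anything real or scheduling it.

```python
# A small example of the kind of maintenance task that used to be done
# machine by machine and is easy to automate: deleting files older than
# 30 days. "scratch_dir" is a placeholder; test before scheduling.
import time
from pathlib import Path

def clean_old_files(directory: str, max_age_days: int = 30) -> int:
    """Delete files under `directory` older than `max_age_days`; return count."""
    cutoff = time.time() - max_age_days * 86400  # 86400 seconds per day
    removed = 0
    for path in Path(directory).rglob("*"):
        if path.is_file() and path.stat().st_mtime < cutoff:
            path.unlink()
            removed += 1
    return removed

if __name__ == "__main__":
    # Schedule with cron or Windows Task Scheduler instead of walking
    # from computer to computer.
    print(clean_old_files("scratch_dir"))
```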
So I think right now organizations are going to benefit from some of the AI stuff that seems a little mundane. A lot of the use cases that I heard (at the conference) were around things like using the AI tools to create meeting summaries with action notes and emailing that out after a meeting.
As somebody who’s in a lot of meetings but not really good at taking notes and being reminded of what I need to do, that kind of tool is really helpful because it gives me capacity that I didn’t have before. We need to be clear that if you’re in a meeting with somebody and you’re recording it, that’s visible, and you may want to choose not to record some conversations. But taking steps to automate some mundane tasks is a great place to start.
I think in our Community IT world of scripting and automation and programming, there are real, clear, demonstrable benefits for programming and development. These AI copilots that can help write code and give us a foundation to stand on are really great. If you’re doing some API integrations, or you need to write some code to help manipulate data, that’s a great place to start. As you identified, getting a leg up by dumping in some ideas and getting content templates is another good way to get going.
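For a sense of scale, here is the kind of small data-manipulation script an AI coding assistant can scaffold in seconds: de-duplicating a contact export. The file names and the “email” column are invented for illustration.

```python
# The kind of small data-manipulation script an AI coding assistant can
# scaffold quickly: de-duplicating a contact export. The file names and
# the "email" column are invented for illustration.
import csv

def dedupe_contacts(in_path: str, out_path: str) -> None:
    seen = set()
    with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            # A trimmed, lowercased email address serves as the unique key.
            key = row["email"].strip().lower()
            if key not in seen:
                seen.add(key)
                writer.writerow(row)

dedupe_contacts("contacts_export.csv", "contacts_deduped.csv")
```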
I think organizations really do benefit from identifying a couple of specific use cases that they want to try out and just explore, figure things out, work with it, experiment, see how it works, what are the benefits or what are any drawbacks, how do you want to change it? This is a real time for exploration right now to identify how these tools can really help us be more efficient and give us back more time to focus on the things that are most important to us or that only we can do in our work.
Carolyn Woodard: I want to pick up on two things that you made me think of.
One is, you alluded to the importance of having policies, both to understand what AI tools you are using and also, like you said, to cover the permissions and the implications. If you’re going to use an AI tool that can surface these things, make sure that who has access to what is all in your governance documents and that you have a policy over those permissions.
And I think also having a little bit of skepticism about all of these tools. For years now I’ve been seeing advertising for AI tools to help you fundraise better, and you have to think about the data, right? If you don’t have good data on your donors in your database already, then having an AI tool tell you when to email people doesn’t help; if that’s not the right email address to begin with, you still have a problem. A lot of this is data management and policy issues, more than something you can just hand over to an AI tool to help you fundraise better.
AI and Hackers: What to Watch Out For
I wanted to ask you a little bit about the darker side of automation. I’m definitely seeing more tailored phishing emails and attacks on Facebook Messenger and they seem to be very personalized. You still can kind of feel that maybe that’s an AI tool that the hacker is using to try and get you to think that it’s legitimate. So can you talk a little bit more about that?
Matthew Eshleman: Yeah, I think your point is exactly right. The AI tools that we use to help us get a jumpstart on some marketing material, or help write our resumes, or even help develop our policies, are the same tools that threat actors can use to start those engagements, with the ultimate goal of wire fraud or a fraudulent financial transaction.
These are tools, and they’re going to be used by people raising money and advocating for positive causes. They’re also going to be used by threat actors to craft more convincing emails. Some of the old things we used to rely on to identify scams no longer hold: a poorly worded email was a great giveaway, and if a message was well written, we used to think we could have confidence in it.
We need to evolve along with AI and recognize that someone can put in a prompt that says, “write me a good email that’s going to get somebody to click on this link,” and that’s a prompt the AI tools will respond to. There’s a bit of an escalation in how these tools can be used, both for positive things and for not-so-positive purposes.
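A toy example makes the point. The sketch below implements the old “poorly worded email” heuristic; the phrase list and both sample messages are invented, and the takeaway is that a well-written, AI-polished lure passes the check untouched.

```python
# A toy version of the old "poorly worded email" heuristic, showing why
# it fails now. The phrase list and sample messages are invented; a
# well-written, AI-polished message passes the check untouched.
TELLTALE_PHRASES = ["kindly revert", "dear costumer", "verify you account"]

def looks_suspicious(body: str) -> bool:
    text = body.lower()
    return any(phrase in text for phrase in TELLTALE_PHRASES)

clumsy = "Dear costumer, kindly revert with your password."
polished = ("Hi Jordan, following up on this morning's finance call. "
            "Could you approve the attached wire transfer before 3pm?")

print(looks_suspicious(clumsy))    # True: the old giveaways are present
print(looks_suspicious(polished))  # False: the polished lure sails through
```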
Carolyn Woodard: I’m sure there will be new AI tools that can scan your emails to look for other AI phishing attempts out there that will get better and better at spotting them.
What’s Next in Nonprofit AI Tools?
Could you talk a little bit more to wrap up about what we’re seeing coming for nonprofits? What can we expect in the next year or two around AI tools for nonprofits?
Matthew Eshleman: Microsoft is really the leading player in the AI space. I think Google is coming along as well, but Microsoft really is the player here because of its tremendous resources and investment at the enterprise level. While Copilot became generally available on November 1st, that was only for enterprise customers at the 300-plus staff size; I do think it will make it to small to mid-sized organizations sometime in 2024. That would be the Copilot integrated into Office. Nonprofit organizations that have Office 365 accounts already have access to what I would call an enterprise version of ChatGPT: if you go to chat.bing.com, that is Bing Chat Enterprise, which is AI-powered Copilot for the web.
That is a protected version of ChatGPT; the requests that you make and the content that’s returned to you stay private. If you’re starting to look at AI tools and what AI can do, that’s a good place to begin. I think it will certainly become more widely available to organizations throughout the year.
Generative AI to create content is starting to grow now and is really easy to adopt. And then I do expect those Copilot add-ins for Excel and PowerPoint and kind of all of that stuff will become more widely available to organizations later on in 2024.
Carolyn Woodard: I’m also expecting that, just as cybersecurity prompted insurance companies to push organizations to up their game, to know how they were keeping themselves safe, to do some of those assessments and audits, and to make sure they had security on all their accounts, AI is going to prompt those necessary changes and policies too. Things are going to happen as it evolves that will push organizations to put those policies in place and develop those governance documents as well. So I expect to see that over the next year too.
Matthew Eshleman: I think setting those expectations, the data inventory, the permissions, is really boring work. But that governance piece and the planning piece and the intentionality are just so important to set organizations up for much greater success later on. If you can have a good foundation that defines some of those expectations for the organization early on, it gives you a much safer, much stronger place to operate from as you adopt the new tools that are going to come out.
Were we talking about ChatGPT in January of 2023? I don’t know, I wasn’t. So who knows what’s going to happen in January of 2024? And so I think just being open and having a good foundation in place will really help the adoption of these tools, when they become available.
Carolyn Woodard: Thank you so much for chatting about this with me today, Matt. I really appreciate it.
Matthew Eshleman: My hope is that we can really use these AI tools to help build bridges between towers of knowledge that we couldn’t figure out on our own. I think there’s lots of really smart and thoughtful people that are doing a lot of good work in this area. It’s important to read and understand and process and be open to the conversation about AI. I really hope that AI can be an enabler of technology to help us, to shape our world into what we want it to be and not just a way for corporations to reduce the value of our individual creativity. I’m optimistic about the technology and the benefits that we’re going to receive from it.
Ready to get strategic about your IT?
Community IT has been serving nonprofits exclusively for twenty years. We offer Managed IT support services for nonprofits that want to outsource all or part of their IT support and hosted services. For a fixed monthly fee, we provide unlimited remote and on-site help desk support, proactive network management, and ongoing IT planning from a dedicated team of experts in nonprofit-focused IT. Our clients also benefit from our IT Business Managers team, who will work with you to plan your IT investments and technology roadmap if you don’t have an in-house IT Director. And our CTO Matthew Eshleman is available for cybersecurity consulting and assessments, especially as new AI hacking practices come into play. Nonprofits, AI, and cybersecurity will continue to be pressing issues.
We constantly research and evaluate new technology to ensure that you get cutting-edge solutions that are tailored to your organization, using standard industry tech tools that don’t lock you into a single vendor or consultant. And we don’t treat any aspect of nonprofit IT as if it is too complicated for you to understand.
We think your IT vendor should be able to explain everything without jargon or lingo. If you can’t understand your IT management strategy to your own satisfaction, keep asking your questions until you find an outsourced IT provider who will partner with you for well-managed IT.
If you’re ready to gain peace of mind about your IT support, let’s talk.