Artificial Intelligence (AI) has moved from a futuristic concept to an everyday reality in the nonprofit sector. Nonprofit staff members are increasingly finding AI helpers integrated into familiar tools from Microsoft and Google, and the pressure to adopt new AI technologies can create a real fear of missing out (FOMO) among leadership.
However, as IT partners for nonprofits, Community IT knows that successful technology adoption is about more than just trends—it’s about strategy, mission alignment, and risk management.
We recently hosted a conversation with Peter Campbell, Principal Consultant at Techcafeteria, who specializes in helping nonprofits effectively use technology. Peter shared essential insights on how nonprofit leaders, staff, and boards can manage the inherent risks of AI adoption by putting clear policies and training in place before widespread use.
AI is already at work in your organization, whether management has formally approved it or not. It exists in applications you use daily, from simple email search functions to more complex data analysis and content generation tools. And staff are no doubt using AI informally as free and fun tools multiply.
While AI offers innovative uses, every application carries some risk. AI is trained to simulate critical thinking, but it does not think independently. Your nonprofit staff need to keep an eye on the output they use from AI and whether that output represents your organization well.
Common AI tasks at nonprofits move along a risk spectrum. Low-level risks might include searching your own email inbox for a specific piece of information. Higher-risk activities, which are more prone to consequential errors, involve automation, analysis of sensitive data, or public-facing content generation.
The risks associated with these activities fall into several key categories, including inaccurate output, bias, loss of trust, and potential harm to the communities you serve.
For executives who may fear they don’t have the technical background to govern AI, the advice is straightforward: apply your own judgment and remember that you are the subject matter expert, not the AI tool.
GPS navigation is an everyday form of AI assistance. When your GPS instructs you to make a dangerous left turn onto a busy street, you ignore it because you know better. AI output must be treated the same way: bring your professional experience and critical eye to assess the information the AI provides.
A commonsense rule of thumb is to use AI tools only for subjects where you are already knowledgeable. This allows you to easily and accurately assess the output for errors, bias, and authenticity.
Given that AI is already present in your workplace and is only set to grow, the most responsible action a nonprofit can take is to establish a formal AI Policy, sooner rather than later.
A clear, written policy communicates how your organization expects AI tools to be used. Just as you have policies governing cybersecurity and acceptable technology use, AI requires its own specific guidance on what is safe, what is appropriate, and what is unacceptable.
Trust is a cornerstone of the nonprofit mission. When communicating with the public or constituents, maintaining your authentic voice is crucial. If your constituents are used to a specific tone or style of writing, an impersonal, AI-generated communication could shock them and potentially lead to a loss of trust.
Transparency is therefore an essential part of your AI policy. You should inform people when AI has been used and how it was used. Consider adopting a simple disclaimer in external communications, like the one at the end of this article.
For nonprofits, trust and transparency with our constituents, our donors, and our communities are assets that can be lost quite easily and rapidly. Ensure that your community knows when you are using AI and how it may impact them.
For nonprofits in particular, there is an extra layer of ethical concern surrounding AI. Nonprofits are dedicated to mission-driven work and serving their communities. If the AI models they use are trained on biased data sets, they can perpetuate harm or exclusion against the very populations the organization serves.
You must be careful about the data you use and how your AI is trained. Leaders should guard against unintended bias, which requires a deeper understanding of the tool’s underlying data sources and a commitment to curating the data that goes into your analysis. Guidance on bias in AI data is widely available online, as are the terms and conditions for your specific AI tools.
In short, your IT governance and mission strategy must align. When adopting any new technology, but especially AI, nonprofit leadership must ask: Does this tool help us achieve our mission responsibly, or does it introduce a risk of harm to our constituents?
AI adoption is an inevitable reality, but leadership and a sound IT strategy are essential to avoiding AI mistakes that could damage your reputation or your organization itself. AI is all around us today, and we want to use it responsibly, effectively, and safely. If you haven’t yet, now is a great time to adopt an AI policy.
Community IT is committed to helping nonprofits navigate disruptive technologies like AI. We believe your IT partner should be able to explain everything without jargon or lingo, empowering you to make well-informed decisions for your mission. If you are worried about AI risks and need help developing a thoughtful IT roadmap and governance strategy, we can help.
Community IT has been serving nonprofits exclusively for almost twenty-five years. We offer Managed IT support services for nonprofits that want to outsource all or part of their IT support and hosted services. For a fixed monthly fee, we provide unlimited remote and on-site help desk support, proactive network management, and ongoing IT planning from a dedicated team of experts in nonprofit-focused IT. Our clients also benefit from our IT Business Managers team, who will work with you to plan your IT investments and technology roadmap if you don’t have an in-house IT Director.
We constantly research and evaluate new technology to ensure that you get cutting-edge solutions that are tailored to your organization, using standard industry tech tools that don’t lock you into a single vendor or consultant. And we don’t treat any aspect of nonprofit IT as if it is too complicated for you to understand. We know that free AI tools at nonprofits aren’t always a good thing.
If you can’t understand your IT management strategy to your own satisfaction, keep asking questions until you find an outsourced IT provider who will partner with you for well-managed IT.
If you’re ready to gain peace of mind about your IT support, let’s talk.
As advocates for using technology to work smarter, we’re practicing what we recommend. This article was drafted with the assistance of an AI, but the content was reviewed, edited, and finalized by a human editor to ensure accuracy and relevance.
On Wednesday, November 19th at 3pm Eastern, learn how to implement strategies to make sure your nonprofit technology aligns with your values.