View Video
Subscribe to our YouTube channel here
Download the slides as a PDF (3.8 MB)
Listen to Podcast
In part one, Matt explains the AI continuum from assistants to workflow help to autonomous AI agents. He covers the difference between “freemium” AI and enterprise AI and reviews pricing tiers for nonprofits. In part two, Matt and Carolyn discuss how to tell whether you are logged in to your official account, the importance of continuous and iterative staff education, and how (and why) to get started creating AI policies to share with staff.
Like podcasts? Find our full archive here or anywhere you listen to podcasts: search Community IT Innovators Nonprofit Technology Topics on Apple, Google, Stitcher, Pandora, and more. Or ask your smart speaker.
How to Use AI Tools Safely for Nonprofits
with Matt Eshleman, Chief Technology Officer
Confused about how to keep your nonprofit data safe and still use AI tools?
Matt explains how to use AI tools safely and securely at your nonprofit.
If you don’t know the difference between using a “freemium” tool like ChatGPT and logging on to a more private enterprise tool at your organization, like Copilot, or Gemini if you use Google Workspace, then this webinar is going to help clarify that for you.
We hear a lot about “make sure not to share your sensitive data with AI learning models,” but how do you know you are using these tools safely? How do you check the terms and conditions, and where do you begin?
Matt demystifies how AI enterprise tools work and gives you some questions to ask at your own nonprofit to get the conversation around AI implementation and policy going.
As with all our webinars, this presentation is appropriate for an audience of varied IT experience.
Community IT is proudly vendor-agnostic, and our webinars cover a range of topics and discussions. Webinars are never a sales pitch, always a way to share our knowledge with our community.
Presenters:

As the Chief Technology Officer at Community IT, Matthew Eshleman leads the team responsible for strategic planning, research, and implementation of the technology platforms used by nonprofit organization clients to be secure and productive. With a deep background in network infrastructure, he fundamentally understands how nonprofit tech works and interoperates both in the office and in the cloud. With extensive experience serving nonprofits, Matt also understands nonprofit culture and constraints, and has a history of implementing cost-effective and secure solutions at the enterprise level.
Matt has over 22 years of expertise in cybersecurity, IT support, team leadership, software selection and research, and client support. Matt is a frequent speaker on cybersecurity topics for nonprofits and has presented at NTEN events, the Inside NGO conference, the Nonprofit Risk Management Summit, the Credit Builders Alliance Symposium, the LGBT MAP Finance Conference, the Tech Forward Conference, and ITC Conferences. He is also the session designer and trainer for TechSoup’s Digital Security course, a member of NGO-ISAC, and our resident cybersecurity expert.
Matt holds dual degrees in Computer Science and Computer Information Systems from Eastern Mennonite University, and an MBA from the Carey School of Business at Johns Hopkins University.
He is available as a speaker on cybersecurity topics affecting nonprofits, including cyber insurance compliance, staff training, and incident response. You can view Matt’s free cybersecurity videos from past webinars here. Matt is always happy to help nonprofits get smarter about how to use AI tools safely.
Contact Matt: https://meetings.hubspot.com/meshleman
Transcript below
Some resources shared in this webinar:
If you aren’t familiar with it, here is the website for Change Agent AI, built by nonprofits for nonprofits: https://thechange.ai
Google NotebookLM is a very useful research tool that grounds its answers only in the sources you put in.
Here is a new tool that helps evaluate AI tools specifically for nonprofits: https://nonprofit-ai-tools-trust-directory.mtmapps.now It is run by the folks who teach the AI classes for TechSoup. They show the criteria they use, and their analysis and vetting are specific to nonprofits using these AI tools.
Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI by Karen Hao (my favorite among the computer books)
The scariest “doomer” book: Nexus: A Brief History of Information Networks from the Stone Age to AI by Yuval Noah Harari (2024), which contextualizes AI within the broader trajectory of human information networks.
I found this LinkedIn post very useful in breaking down how you can turn off model sharing: https://www.linkedin.com/pulse/60-second-ai-privacy-fix-kim-snyder-ehude/
My favorite “notetaker” is Fieldy.ai
I like Fathom and Zoom’s notetakers better than Otter
Acceptable AI Use Policy Template: https://communityit.com/template-acceptable-use-of-ai-tools-in-the-nonprofit-workplace/
Ethical AI framework for thinking about AI adoption/implementation at your organization: https://communityit.com/webinar-nonprofit-ai-framework/
Department of Labor documentation on AI Literacy (pdf)
Reddit Q&A with Matt covering registration and webinar questions we couldn’t answer during the hour-long webinar: https://www.reddit.com/r/NonprofitITManagement/comments/1rekaqk/qa_how_to_use_ai_tools_safely_at_nonprofits/
Full Transcript
Community IT Roundtable: Using AI Safely at Your Nonprofit
Carolyn Woodard:
If you do not know the difference between using a freemium tool like ChatGPT or logging on to a more private enterprise tool at your organization, like Copilot if you are using Microsoft or Gemini if you are using Google Workspace, this webinar is really going to clarify that for you. We hear a lot about making sure not to share your sensitive data with the AI learning models, but how do you know that you are using them safely?
How do you check the terms and conditions? Where do you begin? Today, our cybersecurity expert Matt Eshleman is going to demystify enterprise AI and share some tips on how to create an organizational AI policy, how to share knowledge, and how to conduct staff training. This ensures that you and all of your colleagues are using AI in the most secure way possible.
My name is Carolyn Woodard, and I am the outreach director for Community IT. I am going to be the moderator today. First, I am going to go over our learning objectives. By the end of the session today, we hope that you will learn the difference between enterprise or subscription AI and the more freemium tools like the free ChatGPT. We are going to go over accessing Microsoft Copilot and Google Workspace Gemini tools at the organizational level. We are going to review IT policy guidelines and provide some training and knowledge-sharing tips. If you have tips or something you are doing at your organization that is working, I am going to provide a chance for you to share that with us later.
If you are looking for more information on AI topics, we just started a midweek nonprofit AI podcast where we give you 10 to 15 minutes of news and resources weekly. If you subscribe to our regular podcast, the Technology Topics Podcast, you will get that in your feed on Tuesdays and our regular Friday podcast on Fridays. On Fridays, we cover many different nonprofit IT topics and chat with guests. We also have other recorded webinars which you can access on our site, communityit.com, covering things like creating an ethical AI framework for your organization and AI governance. We did an amazing webinar last summer with Brenda Foster on how to use AI in general. We also have a downloadable AI acceptable use policy template that you can use if you are working on your policy for your organization.
We have addressed many of those bigger ethical questions in other resources. Today we are really going to focus on this specific question that we get a lot: how do you know if you are using your AI tools safely? With that, Matt, would you like to introduce yourself?
Matthew Eshleman:
Great. Well, thanks for that introduction, Carolyn. It is great to be here with you. I am looking forward to talking about adopting AI in your organization, sharing those tips and tricks, and getting under the hood a little bit. As Carolyn mentioned, I am the chief technology officer here at Community IT. I am pleased to have just officially celebrated 24 years full-time with Community IT. I get to play a lot of different roles, and I am really excited about this topic in particular.
Carolyn Woodard:
You have so much experience. Before we get started with Matt, if you are not familiar with Community IT, I want to tell you just a little bit about us. We are a 100% employee-owned managed services provider. We provide outsourced IT support exclusively to nonprofit organizations. Our mission is to help nonprofits accomplish their missions through the effective use of technology. We are big fans of what well-managed IT can do for your nonprofit. We serve nonprofits across the United States. This year is our 25th anniversary. We are technology experts and have consistently received MSP 501 recognition as a top managed services provider, an honor we received again in 2025. We believe that we are the only MSP on the list that serves nonprofits exclusively.
I want to remind everyone that for these presentations, Community IT is vendor-agnostic. We only make recommendations to our clients based on their specific business needs. We never try to place a client into a product because we get an incentive or a benefit from it. We do consider ourselves a best-of-breed IT provider. It is our job to know the landscape and what tools are available, reputable, and widely used. We make recommendations on that basis for our clients based on their business needs, priorities, and budget.
Today, we are going to talk about two big IT stacks that everyone uses, Microsoft and Google, because many nonprofits are using them.
We received many good questions at registration, so we are going to try to answer as many of those as we can. For anything we cannot get to, please join Matt and me in our community on Reddit at r/nonprofitITmanagement after the webinar for about 30 minutes. Matt also pops in once a week or so to answer other questions that come in.
Our mission is to create value for the nonprofit sector through well-managed IT. We also identify four key values as employee-owners that define our company: trust, knowledge, service, and balance. We seek always to treat people with respect and fairness. We seek to empower our staff, clients, and sector to understand and use technology effectively. We recognize that the health of our communities is vital to our well-being and that work is only a part of our lives.
As we usually do, I am going to start with a poll. We want to get a feel for your comfort level with AI tools.
- Completely uncomfortable or unfamiliar with most tools.
- Somewhat uncomfortable; we use a few popular tools occasionally.
- Neutral; neither uncomfortable nor comfortable (average use).
- Somewhat comfortable; I use a few AI tools daily.
- Completely comfortable; I use many of these tools frequently and colleagues ask me for help.
- Not applicable or other.
Matt, can you see the results?
Matthew Eshleman:
Yes, I can. I always like the big reveal. It is like a drum roll because I cannot see it as the results are coming in.
Carolyn Woodard:
All right, can you go ahead and share them with us?
Matthew Eshleman:
Yes. In terms of the folks responding today, about 12% are completely uncomfortable or unfamiliar. On the flip side, about 9% of the respondents are completely comfortable and use it a lot; they are the resource people in their organization go to. In the middle, about 29% of folks are somewhat uncomfortable and use some tools occasionally. 16% are right in the middle. We have a bigger bump with folks who are somewhat comfortable using AI tools daily. It is an interesting distribution. We have folks on both ends of the spectrum related to their AI usage.
Carolyn Woodard:
That is interesting, because we often get a bell curve, but here we have fewer neutral people. Most respondents are putting themselves on one side or the other: comfortable and using it regularly, or uncomfortable and using it only occasionally.
I mentioned a term a little while ago: “freemium AI.” Matt, you were going to talk about what that is.
Matthew Eshleman:
I used that term intentionally as we were developing this presentation to describe how many of us have come to use AI tools, with the most common being ChatGPT. Freemium is a business model for getting users into a platform. AI is very expensive to deploy. OpenAI, the entity that owns ChatGPT, is scheduled to invest over $1 trillion in building up their capacity. Because this is enormously expensive, they are giving away access to their tool in the hopes of converting everyone into paid customers later.
I would distinguish that from a public AI model, which would be more of a utility service, perhaps something owned by the government and available as a public good, as opposed to a privately held AI solution that is fundamentally there to make a profit.
Many AI models were trained using publicly available data, not always with consent. Some of those issues are currently winding their way through the court system. With that freemium model, if you are not paying for something, then you are the product. The content you put into it and the questions you ask typically go back to feed the model to provide increasing information about the usage of the tool.
ChatGPT was officially released in November 2022 and reached a million active users faster than any previous technology, within weeks. In the last two and a half years, it has grown to over 800 million active users. The adoption is driven by the fact that people are finding it helpful. It really is the disruptive technology change of our time.
Carolyn Woodard:
That covers the free versions. Can you talk a little bit more about enterprise AI?
Matthew Eshleman:
Many AI tools or large language models developed by companies have a free way to access them, in which case you may be giving up some privacy. Distinguish that from enterprise AI, where the same back-end model has an intermediary layer that protects the information. This is incredibly important for enterprise customers to ensure the data they put into the system is private.
In the Microsoft world, that is called Copilot. Copilot is Microsoft’s business integration, which protects the information you submit while it is processed by the underlying GPT language model. The same applies to Gemini. If you go to Gemini or Copilot right now in your web browser without signing in to a business account, you will likely access a consumer version where you do not have those protections. The information you put in is recorded and used to train the models.
Google and Microsoft want to provide a paid version where you get those protections. There are also many other dedicated solutions built specifically for nonprofits, such as Change Agent AI. There are innumerable other tools available for enterprise subscription.
As we have shifted from on-prem server infrastructure to cloud services, it has become much more important for organizations to understand the terms and conditions. We do not control the software at all; it is in some far-flung data center. It is essential to investigate whether the terms and conditions align with how you expect a system to use your information. Change Agent AI, for example, does a good job of clearly stating how they use your data and what rights you have to it. It is often clearer than the terms of the larger enterprise players, which can be more obtuse.
Carolyn Woodard:
I have a friend who uses AI to help understand and summarize what terms and conditions actually mean.
You were also telling me about the gradation between different AI tools and how autonomous they can be. Can you talk about that?
Matthew Eshleman:
This is an exciting evolution of the AI toolset. In late 2022, you could ask ChatGPT questions to augment your internet search or review a document. That is generative AI, being able to analyze and create new things.
Now we see that evolving into three categories. The first is the assistive technology model, such as Microsoft Copilot. It acts as an assistant to you, the human, who remains fully in control. You ask a question, and it provides suggestions, drafts, or explanations. Once you receive a response, the interaction is done. It might keep a history for a few hours, but it is a “human-in-the-loop” model. It is like having a smart intern sitting next to you. It is low risk and easy to adopt because the end user is the final arbiter of decisions.
In the middle is a scenario-based or workflow model. These are being embedded into the tools you already use and are triggered by specific events or parameters. Rather than an intern sitting next to you, this is more like a smart automation with some judgment. We see this in IT ticketing systems where the AI looks at a ticket and points to potential solutions based on past examples. It is designed to speed up routine decisions in regularly occurring processes, such as customer support, triage, or compliance reviews.
The final example is the agentic model. You cannot have a webinar without talking about “agentic AI.” This is an autonomous or semi-autonomous agent that is given a goal and figures out how to solve it. If you interact with customer support chats, those are often agentic AI agents. They draw on a body of knowledge to interpret information and respond. This represents real automation and efficiency. It operates on its own, drawing from a library of information and boundaries you have set to provide answers and make decisions. This is a good solution for repetitive operational work once you have invested in good processes and documentation to feed into it.
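To make that agentic pattern concrete, here is a minimal, hypothetical sketch in Python of the loop Matt describes: the agent receives a goal, answers only from a pre-approved library of knowledge, and escalates to a human when a request falls outside the boundaries the organization has set. Every name in it is illustrative, not a real product’s API.

```python
# Hypothetical sketch of an "agentic" support bot: a goal comes in,
# and the agent makes bounded decisions on its own. The knowledge
# library and guardrails are set up front by the organization.

APPROVED_KNOWLEDGE = {
    "password": "Direct the user to the self-service reset portal.",
    "billing": "Escalate billing questions to a human staff member.",
}

GUARDRAILS = {
    "never_disclose": ["donor records", "client pii"],
    "escalate_if_unsure": True,
}

def handle_request(goal: str) -> str:
    """Resolve a support request using only pre-approved knowledge."""
    text = goal.lower()

    # Refuse anything that touches protected data categories.
    if any(term in text for term in GUARDRAILS["never_disclose"]):
        return "I can't help with that. Routing you to a staff member."

    # Answer from the approved library when a known topic matches.
    for topic, answer in APPROVED_KNOWLEDGE.items():
        if topic in text:
            return answer

    # Outside its boundaries, the agent hands off rather than guessing.
    if GUARDRAILS["escalate_if_unsure"]:
        return "I'm not sure. A human teammate will follow up."
    return "Request logged for review."

print(handle_request("I need to reset my password"))
```

The design point is the final branch: a well-bounded agent hands off to a human rather than improvising once it runs past its documented knowledge.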
Carolyn Woodard:
What can you expect based on what you pay?
Matthew Eshleman:
Everyone should be using the free, enterprise-protected tier. Instead of using the general free Copilot or ChatGPT, where conversations are incorporated back into the model, you should go to copilot.microsoft.com or gemini.google.com and sign in with your organizational ID. This gives you enterprise terms of service, meaning your data is protected while you perform searches, build policies, or review emails.
Stepping up from there is the Copilot or Gemini model, which costs about $20 to $30 a month per user. Microsoft discounts many of its SKUs for nonprofits by 75%, but that discounting does not extend to Copilot to the same degree yet. At this level, instead of just interactive web search, the AI can actually analyze documents and information within your cloud environment. It has protected access to everything you can access as a user. You can analyze spreadsheets, review policies, or ask it to summarize your emails and documents from the past year to help with a performance review.
The next level up is in the several hundred dollar a month category, such as Copilot Studio or Gemini AI Elite. This is for building custom agents or extensive code development. We also see video generation appearing in these more premium tiers. Video can be an effective way to communicate information, and these services lower the barrier to entry for developing compelling online content. From a licensing perspective, you do not have to license everyone at once; you can start with a small working group.
Carolyn Woodard:
We have a quick question in the chat about that middle tier ($20–$30). Is that secure for sensitive data? Does it ensure that data is not used to train the model, even if you input client data?
Matthew Eshleman:
Yes, that data is protected from the enterprise perspective. The system has access to what you do as an end-user, but the enterprise agreement states the data is yours and remains private. An organization might still decide as a policy guardrail that they do not want personally identifiable information put into the system, but the legal protections are there.
Carolyn Woodard:
I have noticed in both Copilot and Gemini that there is often a bar at the bottom stating you are using an enterprise version and the data is not being used for training.
Someone asked in the registration: how do I convince my organization to pay for licenses instead of just asking staff to use the free ChatGPT?
Matthew Eshleman:
The equation these companies are making is that you will find enough value in efficiency to justify the cost. The key difference is the assurance that the data provided to the system will not be disclosed or made available to other users who might use sophisticated attacks to elicit information.
The most important thing is having good policies and ongoing training. Asking “what is the most secure AI tool” might not be the right question, as any tool can be used insecurely. You might still want to prevent staff from uploading personally identifiable information even with an enterprise license. Being intentional means understanding the data you have, where it lives, and who has access to it.
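As a concrete illustration of that last point, here is a hypothetical sketch of a lightweight pre-flight check that flags obvious personally identifiable information before a prompt is pasted into an AI tool. It is a toy under simple assumptions, not a substitute for a real data-loss-prevention product or for the policy and training Matt describes.

```python
import re

# Hypothetical pre-flight screen: flag obvious PII patterns in a draft
# prompt before it leaves the organization. Real DLP tools are far more
# thorough; this only illustrates the "policy guardrail" idea.

PII_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone number": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return the PII categories detected in a draft prompt."""
    return [label for label, pattern in PII_PATTERNS.items()
            if pattern.search(text)]

draft = "Summarize this case note from jane@example.org, SSN 123-45-6789."
hits = screen_prompt(draft)
if hits:
    print("Hold on, possible PII detected:", ", ".join(hits))
else:
    print("No obvious PII found. Still follow your AI use policy.")
```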
Carolyn Woodard:
You have to constantly reiterate the policy to staff. For example, ChatGPT’s terms for the free version prohibit creating misleading information, yet many people use it for that. Your staff might do something that goes against organizational values, even if it is possible within the tool. Because these tools change so quickly, you have to keep the conversation going. It is very hard to put automatic restrictions in place that stop every unwanted behavior, so education is a better approach.
Matthew Eshleman:
I appreciate the questions coming in because the fact is that your staff are already using generative AI tools, and they are also already interacting with others’ generative AI tools. The rush to adopt and incorporate these tools is rampant. Another dimension of this new world is determining how we expect to operate and how we want to interact with others.
The word cloud that Carolyn just put up represents the AI tools used at a 50-person nonprofit organization. There are 121 of them listed. Some of these are tools that individuals at the organization are reaching out to and using directly. Others are part of services that the end users are interacting with. As I mentioned, if you are using a chat agent, it is likely driven by AI.
I recently went to schedule service for my boiler, which had some heating problems, and I talked to a very helpful AI agent that scheduled the whole appointment. It was quite good. These tools are everywhere, from nonprofits to plumbing supply companies. Seeing this variety here can be shocking. There are so many different names—do we know the AI use policies of these organizations? Do we trust them? Big players like Google Gemini and ChatGPT pop up prominently, but then there are others like “Common Ninja” or “Beehive AI.” There are many tools out there trying to succeed in this marketplace.
Carolyn Woodard:
We are going to talk a little bit about policies. Looking at the time, Matt, we are going to have to move quickly through a few of these slides. Several people asked in the chat if we are going to include all the questions and answers from the Q&A. Yes, Matt will help answer those, and we will put them in the transcript on our website. As an attendee, you will receive an email with the link, but you can always look back at our website; there is no paywall. You can access all our old webinars to see the transcripts and the questions people asked.
We are going to quickly cover Copilot and Gemini, the two biggest players. Matt, you are going to talk more about using your company account and how to know you are using the correct version.
Matthew Eshleman:
Yes. Thank you for putting up this graphic, as it is helpful to see what we are discussing.
On the right-hand side, you can see my Edge browser with my profile picture in the upper right. Whenever I go to copilot.microsoft.com, I am presented with a login option. On the left side, there is an option to go to “Work,” which is described as a secure and compliant Copilot integrated with your enterprise account. If you have a Microsoft 365 account right now, you can click that to enter a protected version of Copilot for interacting with the web or uploading content.
If you do not see that—for example, in a private window—it is not protected. Content put in there will not have those same enterprise terms and conditions and will feed back into the model. Once you are on the enterprise side and signed in, you will see a tab at the top that says “Web” or “Work.” If you have no license, you would just see the “Web” tab. If you have a Copilot license, you will see the “Work” tab, which means you can query information that you have access to within your SharePoint and OneDrive environment. You will see a notice that “Enterprise data protection applies to this chat.” It also shows documents you have accessed previously and provides prompts to give you a sense of what is possible. This is what you see if you have a Copilot license assigned to your account.
Carolyn Woodard:
I will add that Microsoft offers many ways to access this. If you are logged into your office account and open a Word document, you will see the Copilot icon in the top right. You can click that to open a chat window for help drafting an email or performing other tasks. The same applies to Excel and PowerPoint. As long as you are in your work environment and have the license, the icon will appear.
The other big player for many nonprofits is Google Workspace.
Matthew Eshleman:
Similarly, if you are a Google Workspace customer and you are signed in, you will get the enterprise-protected version of Gemini by default. You can perform searches, and if you have the license, you get access to the docs, sheets, and presentations you own.
Additionally, Google has “NotebookLM,” which is designed to handle research projects. If there is a specific topic you want to find more information about, you can use this to gather information about a sector or topic area. It allows you to build queries around subjects that might normally take hours to research, providing results in 10 to 15 minutes.
Carolyn Woodard:
It is amazing. You can provide documents from your organization, and it will only look within those files to give you answers you have already vetted, such as an FAQ document. I want to remind everyone, as someone with multiple Google profiles, to make sure you are logged into your work profile before using Gemini. Otherwise, you will just be using the standard version.
We have another poll for you. Several people have asked how to ensure that staff are using AI tools in the way the organization intends. Do you have AI policies at your organization?
- I do not think so / I don’t know.
- We are in the process of creating policies.
- Yes, we have an AI acceptable use policy and our staff use it.
- Not applicable to me.
I am sharing the link to the AI acceptable use policy template on our website. It is just a template, as there is no “one size fits all” solution. You have to determine your organization’s values and what you are comfortable with. You cannot just adopt a policy without individualizing it.
It looks like we have a good response rate, so I am going to end the poll. Matt, can you see the results?
Matthew Eshleman:
Yes. The majority of folks—57%—are in the process of creating those policies now. 13% said they have a policy that everyone uses, and a quarter of participants said they do not think they have one. Most people are in the process of developing a policy. Typically, policy comes last, and it is hard to do, but it is vital for guiding organizational technology adoption.
Carolyn Woodard:
It is important but difficult because it requires human thought and leadership engagement. We will quickly go through why to have a policy and how to get started. Even having a one-page outline of principles or values is better than nothing. To the person in the chat who wanted to be able to say “no” to having a policy: stay on top of that. Keep asking the question. You might have a policy in an employee handbook that you do not know about, but that is not very useful if it isn’t being actively discussed.
AI impact is a leadership and board-level conversation. All staff should be involved in weighing options, learning the dangers, and strategizing tasks for AI. Nonprofits have a lot of knowledge about their missions, so it is relatively easy to get staff input on what makes them feel “queasy” or what they want to use AI for.
Nonprofits also need to take upskilling seriously. AI literacy will require ongoing, collaborative training. This includes understanding how tools can impact your mission, reaching different communities, and improving productivity. You have to revisit this often because AI is evolving so quickly. You cannot just have an annual training video. You likely have “champions” or power users on your staff already who can help train others.
I am sharing a document in the chat from the Department of Labor that outlines AI literacy categories for current and future staff. This will help you get started. Please keep sharing in the chat how you are approaching AI policies and training.
Matt, I want to turn it back to you to summarize how to use AI tools safely.
Matthew Eshleman:
The policy background is essential for getting people in the organization talking about what it means to use these tools. In my 24 years in technology, I remember when the questions were about whether everyone needed an email address or internet access. Those tools proved effective, and AI is a tool we cannot ignore. In ten years, we will likely say everyone needs these tools to do their jobs.
However, it is important for organizations to identify the problem they are trying to solve. If you give everyone a Copilot license but have not articulated the issue you are addressing, you may not get the results you want. Meeting notes and action items are clear use cases. But if the problem is related to grant applications, an AI tool alone might not be the solution.
Define the process and the goal before applying the technology. We advocate for having a small working group or team to pilot these issues. The “fail fast” mantra is helpful here; explore different things, figure out what works, and then share best practices with the broader organization. Finally, identify what is currently in use and define guardrails. People are already using tools you have never heard of. Have a conversation about “use this, not that.” If a tool lacks the necessary privacy controls, find another option that addresses the need while meeting your security requirements. You have to keep talking and iterating. This is evolving faster than many of us can process, but you have to jump in and stay on it.
Carolyn Woodard:
I am putting up a slide for those who want to book time with Matt to ask more questions about AI and cybersecurity. He also answers questions on our Reddit community about once a week. I apologize that we barely scratched the surface of the questions today.
To review our learning objectives: we discussed the difference between enterprise AI and freemium tools. You get what you pay for. If you do not pay for a license, your information is likely being used to train the model. We also touched on accessing Microsoft and Google tools at the organizational level. Talk to your IT team, as this may be unique to your organization.
We also reviewed policy guidelines and the importance of training. While these tools are marketed as “easy,” nonprofits are finding that significant upfront energy is required to think through challenges, ethics, and policies. I also have a learning objective regarding the challenges of oversharing sensitive information. It is heartening to see how aware everyone is of that risk.
Matt will be back in April for a webinar on cybersecurity, which will include much more on AI. Next month, we will change gears to talk about going from being an “accidental techie” to an intentional nonprofit tech leader. We will have two guests: Hugo Castro and Gozi Egbuono. They will talk about moving from being the person who helps with IT for free to being a strategic leader. That is on Wednesday, March 25th, at 3:00 PM Eastern.
Please take our short survey as you leave for a chance to win a $25 gift certificate. Join us on Reddit at r/nonprofitITmanagement for more Q&A with Matt. I will be loading questions from the chat there over the next few days. Thank you everyone for joining us today. Matt, thank you for your time.
Matthew Eshleman:
Thanks. I am responding to questions on Reddit now. There are many good questions there. I do not have answers for all of them, but it is a vital conversation to have.
Carolyn Woodard:
In the world of AI, none of us are absolute experts. If a consultant tells you they know everything about AI, they are lying, because what they knew yesterday is different today. Thank you again, Matt, and thank you to everyone who joined the webinar. Your time is a gift. We will see you on Reddit or at our next webinar.
Matthew Eshleman:
Great. Thanks, Carolyn.
As advocates for using technology transparently to work smarter, we’re practicing what we recommend. This transcript was edited lightly with the assistance of AI for clarity, and is not a verbatim transcript. The content was reviewed, edited, and finalized by a human editor to ensure accuracy and relevance.
Photo by Greg Rosenke on Unsplash