Artificial Intelligence: Impacts on Nonprofits
What is Well-Managed IT?

Join experts for two sessions on the role of IT in nonprofit management.

View Video

Subscribe to our YouTube Channel here

Listen to Podcast

Like podcasts? Find our full archive here or anywhere you listen to podcasts: search Community IT Innovators Nonprofit Technology Topics on Apple, Google, Stitcher, Pandora, and more. Or ask your smart speaker.

What is Good Tech Fest?

Good Tech Fest
Washington DC and virtual
May 1-4, 2023

Good Tech Fest is a series of virtual and in-person gatherings happening every year around the world. GTF brings together a community of data nerds, technologists, product leaders, philanthropists and more to learn from one another, build our capacities for using technology for impact, and wrestle with the challenges and limitations of technology in the social sector.

Originally launched as the Do Good Data conference in 2011, it was rebranded as Good Tech Fest in 2018 to create a more inclusive environment for the technologists and product leaders who are critical to creating applications and uses of impact data.

We were proud to support this community with our second annual set of Community IT presentations at Good Tech Fest. This year we discussed two important topics for IT leadership at nonprofits to consider.

Artificial Intelligence: Impacts on Nonprofits

Vendors have been trying to sell fundraising AI (Artificial Intelligence) for years now, promising to effortlessly analyze your database and help move members to major donors by automating the right ask at the right time. AI completes our emails and chides us when we don’t take a break between meetings. AI can be a tool for equity and can also just be an expensive investment. AI is transforming jobs right now.

Nura Aboki is a senior consultant at Community IT and is watching the rapid evolution of machine learning and how it will impact the work of our clients. Johan Hammerstrom is CEO of Community IT and also keeps an eye on AI developments and the nonprofit sector. Our room at this virtual conference had a great expert Q&A on the impacts of Artificial Intelligence on nonprofits.

More resources on the growing role of AI at nonprofits:

Biggest Nonprofit Tech Stories of 2022

AI and Machine-Based Learning: Nonprofit Tech of the Future

How your organization can harness AI for more equitable outcomes 

A really good article from one of my favorite writers and thinkers on fundraising, nonprofits, and equity.

What is Well-Managed IT?

Community IT is an MSP (Managed Service Provider) that has served nonprofits exclusively with outsourced IT for 20+ years. The technology has changed over time, but nonprofits' need for IT that supports their missions is constant. When should you outsource, and when should you build IT staff in-house? Do different-sized nonprofits need different IT support? How can you integrate your IT strategy with your overall strategy, and build executive buy-in? What tech projects should you be planning to implement in the next year?

Outreach Director Carolyn Woodard answered questions about budgeting and strategic planning for IT support to keep your organization and your databases running, and how to see IT as a strategic opportunity.

Community IT is entirely employee-owned, which is unusual in the IT and MSP space. For anyone contemplating ethical IT business models and practices, CEO Johan Hammerstrom is happy to answer those questions as well, and has advice on how to undertake the process of becoming an ESOP company.

More free download resources on Outsourced IT:

How Do I Know if an MSP is Right for My Organization?

The Nonprofit Guide to Vetting a Managed IT Services Provider

Building the Foundation for IT Innovation Guide

The Nonprofit Guide to Remote Work

Cybersecurity Readiness for Nonprofits Playbook

As with all our webinars, these presentations are appropriate for an audience of varied IT experience.

Community IT is proudly vendor-agnostic and our webinars cover a range of topics and discussions. Webinars are never a sales pitch, always a way to share our knowledge with our community.


Presenters:



Johan Hammerstrom’s focus and expertise are in nonprofit IT leadership, governance practices, and nonprofit IT strategy. In addition to deep experience supporting hundreds of nonprofit clients for over 20 years, Johan has a technical background as a computer engineer and a strong servant-leadership style as the head of an employee-owned small service business. After advising and strategizing with nonprofit clients over the years, he has gained a wealth of insight into the budget and decision-making culture at nonprofits – a culture that enables creative IT management but can place constraints on strategies and implementation.

As CEO, Johan provides high-level direction and leadership in client partnerships. He also guides Community IT’s relationship to its Board and ESOP employee-owners. Johan is also instrumental in building a Community IT value of giving back to the sector by sharing resources and knowledge through free website materials, monthly webinars, and external speaking engagements.

Johan graduated with Honors and a BS in Chemistry from Stanford University and received a master’s degree in Biophysics from Johns Hopkins University.

Johan enjoys talking with webinar attendees about all aspects of nonprofit technology.



Nura Aboki is a Senior Engineer and IT Business Manager at Community IT Innovators. In that role, he proactively oversees technology infrastructure for clients. Nura started his career at Community IT as a Network Administrator in 2009. In 2012, he was promoted to Network Engineer and assumed a supervisory role in IT service operations. 

As an IT Business Manager (ITBM), Nura guides some of our largest clients through complex implementation of effective technology investments and utilizing efficient IT services in direct support of their missions. 

The ITBM makes recommendations on IT investments, training programs, maintenance, and licenses. They help the client be forward-looking, and act as a vendor-agnostic, trusted advisor with deep knowledge of the nonprofit IT software and platforms available. Because Community IT works in partnership with clients to manage long-term IT needs, the ITBM relationship with the client makes them a true asset. 

Prior to joining Community IT Innovators, Nura served as a member of the technical support team at George Washington University where he provided incident management to over 20,000 end users on computer hardware, software, and networking issues. Nura also held a Network Specialist role at the Economic Community of West African States (ECOWAS) Parliament in Abuja, Nigeria. 

Nura holds a Bachelor of Science in Computer Engineering and a Master of Science in Electrical Engineering, both from George Washington University. He continues to develop his professional competence through ongoing studies in Technology Management.



Carolyn Woodard is currently head of Marketing and Outreach at Community IT Innovators. She has served many roles at Community IT, from client to project manager to marketing. With over twenty years of experience in the nonprofit world, including as a nonprofit technology project manager and Director of IT at both large and small organizations, Carolyn knows the frustrations and delights of working with technology professionals, accidental techies, executives, and staff to deliver your organization’s mission and keep your IT infrastructure operating. She has a master’s degree in Nonprofit Management from Johns Hopkins University and received her undergraduate degree in English Literature from Williams College. She was happy to moderate and present at Good Tech Fest, which is always an opportunity to interact with thoughtful nonprofit tech leaders.



Transcript: AI Impacts on Nonprofits

Carolyn Woodard: I want to welcome all to Artificial Intelligence Impacts on Nonprofits presented by Community IT Innovators. 

We have a panel of our senior staff here today. They’re going to talk about some fun stuff like artificial intelligence, AI, as it relates to tech companies and products, and how any of this might impact your nonprofit and the tech you use now and years in the future. 

My name is Carolyn Woodard and I’m the outreach director for Community IT. I’ll be the moderator today, and I’m very happy to hear from our experts. Nura, would you like to introduce yourself?

Nuradeen Aboki: Thank you, Carolyn. My name is Nura Aboki. I’m a senior consultant at Community IT. I’ve been with Community IT for over 14 years now, and I advise clients on their IT solutions, provide solution design and architecture, and do strategic planning for some of our clients that have sophisticated business requirements. I’m so excited to be here to talk about AI and its potential in the nonprofit space. Johan?

Johan Hammerstrom: Yeah. Good afternoon or good morning, depending on where you’re joining us from. We’re really happy that you decided to attend this session today. And we’re really excited to be presenting on this topic. My name’s Johan Hammerstrom, I’m the CEO at Community IT, and I’ve been with the company for over 20 years and throughout that time I’ve had the opportunity to work with hundreds of different nonprofit organizations and help them identify technology solutions that will help them accomplish their mission more effectively. And we’re really excited to talk about how this new world of AI that we’re moving into might be able to contribute and enhance the ability of nonprofits to do their great work.

Carolyn Woodard: Thanks. So, I see people are putting in the chat where they’re coming from to this conference. A shout out to our friends and friends we haven’t met yet, as I say. And while you’re doing that, I’m going to tell you a little bit about Community IT.

We’re a 100% employee-owned managed services provider. So, we provide outsourced IT support and we work exclusively with nonprofit organizations. Our mission is to help nonprofits accomplish their missions through the effective use of technology. So, we are big, big fans of what well-managed IT can do for your nonprofit. And we serve nonprofits across the United States. We’ve been doing this for over 20 years. We are technology experts and we are consistently recognized on the MSP 501 list of top MSPs, an honor we received again in 2022.

Learning Objectives

I want to give us a few learning objectives for this session today. 

We’re going to dive into discussing AI. We’re going to talk about the ways we expect nonprofit work and missions to evolve over the next three years as AI becomes very present in our work and also in our personal lives. 

We’re going to try to leave as much time as we can for Q and A and get a conversation going after the presentation. So, please submit questions. If you want to use the live chat, I think that’s a better bet. I’ll break in and ask them or I’ll save them for the end. 

I want to remind everyone that Community IT for these presentations is vendor agnostic. We make recommendations to our clients, but only based on their specific business needs. We never try to get a client into a product because we get an incentive or anything like that. 

We do consider ourselves to be best-of-breed providers. It’s our job to know the landscape, what tools are available and reputable and widely used, and we make recommendations on that basis for our clients based on their business needs, priorities, and budget. So, a little disclaimer out of the way. And the next thing that we’re going to do is launch a poll.

Poll: How cutting edge are you?

So, hang on a second. So, this question is just a little icebreaker not specifically about AI. 

How cutting edge are you or how cutting edge do you consider your nonprofit to be? 

This will give us a little idea of, in this conversation, how much we need to explain and how much we can just jump into a conversation about. 

  1. We use AI tools or we are starting to use them this year. And I think this is not able to be a multiple choice, if I remember correctly. You can only choose one. 
  2. Second option is, we accept crypto donations.
  3. Third option, we start using cybersecurity tools as soon as they’re on the market. 
  4. Fourth option, we love open source platforms. We love custom apps. We’re always customizing, tinkering and messing around. 
  5. Number five is, we are on new social media platforms. A lot of social media platforms you might not have heard of, but your kids use. So, very forward in your social media. 
  6. And number six is not applicable, or we are not tech savvy. 

And that’s right there. If that’s you, there’s no shame. So, you can go ahead and click on that. Nura, do you feel like you could read the results for us?

Nuradeen Aboki: Certainly, yes. Thank you to those of you that have participated in this poll. It’s looking like we have about 50% that are using AI tools/starting to use AI tools. We have another 25% that use cybersecurity tools and then another 25% say they love open source and custom solutions. 

But we don’t have anyone actually accepting cryptocurrency donations. And none of the respondents use new social media, and there’s no one here that said they are not tech savvy. So, quite interesting. About 50% of the respondents are using AI. Excellent.

Carolyn Woodard: Excellent. Thank you Nura. Thank you everyone for filling that little poll out for us. I just wanted to throw that out and see how tech savvy we were feeling. 

We wanted to talk about artificial intelligence being one part of a sea change and how the nonprofit sector will be impacted by it. Technology and work are always changing, but right now, I think we can all admit we are going through a sea change and no one knows what nonprofit work and technology will be like in the future.

Sometimes within nonprofit tech, it can seem like nonprofits are the last to get the cutting edge tech, either because of budgets or internal attitudes, et cetera. But in our 20 years serving nonprofits we also see a lot of innovation, nonprofits that are tech savvy, that see opportunities to be more efficient, to be entrepreneurial, to develop something that fits a certain need or fits a certain niche.

And many nonprofits, as we know, act like start-ups. They see a need, they try to address that need, and they have a vision of how they want to make that impact. And a lot of times technology is an integral part of that vision. 

And on the other hand, we’ve seen many times when there’s a new technology that kind of stumbles onto the nonprofit market and tries to dive in and market to nonprofits; trying to market products and tools that maybe aren’t really set up for nonprofit clients, aren’t needed, maybe aren’t appropriate. They kind of try to jimmy them in, hack them to make them fit a nonprofit need. So, in my time at nonprofits and in technology, I kind of have developed an attitude that a healthy dose of buyer beware is pretty much always appropriate.

There are a lot of companies out there that see the nonprofit market and are just like, Oh, we can easily sell here. And I think that the sophistication of the nonprofit IT managers and directors, et cetera, is a lot more mature than it used to be. You know, 20 years ago there was a lot of trying to sell to this market, hoping that the market didn’t really know what you were selling them. I think that’s changed a lot, a lot. But I still think that when you have a new product like this, there’s going to be this kind of hard sell that happens. 

So, I want to quickly go through this because I’m assuming that people on this panel and in the room know a lot about artificial intelligence, but for centuries people have wondered about machines; could machines think and act as humans?

I think as long as people have been thinking about machines being able to be intelligent, there’s been this thought of, would it be for good or would it come as a threat? 

Artificial intelligence is a term often used very loosely in popular culture. For academics, it encompasses machine learning, generative AI, and approaches using simulated neural networks, large-scale computing, et cetera.

So, like most technology tools, AI itself can be positive or negative depending on the ethics of the users, which makes this a really interesting area for nonprofits to get involved because we think about ethics and equity all the time in our work. We have the ability to draw attention to the impacts of new technologies on the communities and populations that we serve and sometimes the unintended impacts and outcomes that those new technologies have on these populations. 

So, for this presentation, we’re going to focus on the tools and areas where AI is either being specifically marketed to nonprofits or where we expect AI to have a big impact that nonprofits will not escape. 

All right, so I want to turn the conversation over to our experts. 

How are nonprofits using AI in your experience?

Nuradeen Aboki: Nonprofits are increasingly using AI to improve their operations and impact. And there are some ways that we see nonprofit organizations using AI, specifically in fundraising. AI can be used to identify potential donors, predict who is most likely to give, and personalize fundraising appeals. There are certain nonprofit charities that are actually using AI to identify people who are likely to donate to their cause and then send personalized emails with information about the organization’s work.

Another area where nonprofits are using AI is in program delivery, for instance to improve the delivery of nonprofit programs and services. A specific nonprofit organization is using AI to predict which food banks are most likely to run out of food and then send them additional supplies. You can see how AI can intelligently do that predictive analysis.

And the third area that I wanted to mention is in operations where AI is being used to automate administrative tasks such as data entry and report generation. This would free up staff time to focus on more strategic activities. One specific nonprofit organization is using AI to automate its data entry processes which has saved the organization hundreds of hours of work each year.

Johan Hammerstrom: Karen asks, 

Are you seeing this happen mostly via donor management systems or are nonprofits using independent tools for that donor analysis that you were talking about?

So, in terms of the different areas where you’re seeing nonprofits using AI, are they doing it through independent AI tools or are they doing it through AI tools that are being built into the traditional systems that they’re already using?

Nuradeen Aboki: In the traditional systems that they’re already using now, the vendors have seen the opportunity to step up and take advantage of AI. So, they’re introducing those AI capabilities into traditional tools. Those that lag behind or lack AI capabilities have ventured into using independent tools.

Carolyn Woodard: Yeah, this is definitely an area where for maybe four or five years, a lot of marketing to nonprofits says you can magically use AI to draw a lot of major donors out of the list that you already have and do some quick analysis that will dramatically improve your fundraising. And I think, again, buyer beware, it’s usually a larger question for nonprofits about the status of their database and their fundraising. 

Johan, did you have something you wanted to add?

Johan Hammerstrom: Well, I think we’re in the very early stages of this current AI revolution that we’re experiencing. And one of the things that I’ve read, I forget who said it, is that AI is always the new technology, the technology that’s coming in two or three years; it’s always that. And so, things like driver-assist technology. If you’ve purchased a car in the last five years, it will now detect when you’re merging lanes on the highway and someone’s in your blind spot, and it’ll give you a little bit of a warning. Before that technology was built into cars, it was considered to be AI. When Siri was first released and had the voice recognition software, that was considered to be artificial intelligence, because we were building intelligence into these devices so that they could do things that previously computers couldn’t do.

It’s possible that the current, new generation or iteration of artificial intelligence is something that over time just gets built into the technology tools that people are using and no longer has the mysterious quality that I think it does right now for us. We’re seeing some of the amazing things that this generative AI can do, which is really what this new iteration of artificial intelligence is all about: generating novel text and novel images based on these large learning models that have collected vast amounts of image and text data from the internet. And then through the massive computing power at the big companies like Microsoft and Google, they have been able to train these algorithms that can generate these new things, like generating text that appears like it was created by a human being or generating images that appear to have been created by a human being, but were actually generated by these large learning models instead.

Carolyn Woodard: We have a question in the Q and A for either one of you.

One of the main concerns we have is privacy. And we noticed that some of these tools, like ChatGPT for example, are focused on the end users, but do not at this time have an admin portal where an organization can manage privacy, assign licensing, or put in place restrictions. How should organizations deal with this aspect of AI?

Johan Hammerstrom: I don’t know if I have a great direct answer to that question, but there are legal issues that we’re in the very early stages of trying to figure out as far as all of this goes.

All of these models were trained on vast amounts of data that are available on the internet. And so, you can go in and do a prompt, like make me a picture of someone riding a bike in the style of this particular artist. And the reason that the model can create an image that’s convincing is because the model has learned from art that was already on the internet from this artist that you’ve referenced in your prompt. Is that legal?

Because that art is copyrighted material, is it fair use for the person generating it through the prompt to rely on the existing images from that artist? Is it fair use for the AI model to do that? 

That’s a fascinating question. That’s a legal question that has not been answered in a court of law yet. We’re just getting to that point, and these are novel questions because these large systems and what they’re capable of doing are something new that we’ve never had before. It is important, I think, also to keep in mind that these large learning models are all basically built by the largest tech companies.

OpenAI is a new company that Microsoft has an equity stake in. Part of the reason that Microsoft has an equity stake is because OpenAI, in order to create this model, needed to have vast amounts of computing power. They couldn’t afford to do that on their own.

They couldn’t afford the computing power required to train these models. Microsoft was able to provide them with that computing power, and in exchange for that, Microsoft has an equity stake in OpenAI. So, somebody conceivably could sue OpenAI and then it becomes a question for the courts to resolve: does the training of these models on the copyrighted material that’s available on the internet constitute fair use or not?

Those are questions that need to be figured out. Once those questions get figured out, then ultimately Microsoft and OpenAI become liable for either shutting down those models that have been trained on material that the courts might find to be used improperly in violating copyrights, or if the courts find that it’s fair use, then Microsoft can continue what it’s doing. 

And their goal ultimately is to make money in all of this. They’re generating a product that enhances their business objectives as a company. So, to try to bring this all around, back to the question. We don’t know yet what constitutes violations of privacy in terms of these AI models. In some ways it’s sort of similar to what happened with social media where five to 10 years went by before GDPR was passed and before the courts of law started getting involved in how Facebook, for example, could use or not use all of the information that they’re collecting. So, right now, from a legal standpoint, it’s unresolved. There’s no legal liability because the legal issues haven’t been worked out yet. From an ethical standpoint there’s still a lot of open questions as well. I hope that’s not too much of a dodge.

Carolyn Woodard: I feel like that’s a great segue to our next slide. And I loved this image because as you can see, all of the photos of the planes are mislabeled as cars. So, if the AI is learning from these images, it’s not learning what a plane looks like.

I feel like this was a second question that we wanted to try and address.

How do you evaluate the AI hype versus business needs of your nonprofit? And do you have some thoughts on that? 

We talked earlier about fundraising software that tries to improve your ability to ask the right donor at the right time to get the right gift. But in other ways, AI is kind of floating on top of some of our business needs. You can write better emails, or you can research your grants better, or you can get better prompts for your grant writing.

So, do you see business needs that AI is going to address? And would you talk a little bit about how to evaluate those?

Nuradeen Aboki: Yeah, certainly. AI is evolving rapidly and becoming more accessible to nonprofit organizations. And there are a growing number of ways that AI can be used to address nonprofit business needs.

Some specific examples: I mentioned a few earlier, but we could touch on automating tasks. This is where writing that email or sending emails, data entry, and generating reports can all be automated. Several business processes involve repetitive tasks, and AI is able to provide that capability so that staff time is freed up to focus on more strategic activities. Some decision making these days is done through AI, especially if you have large amounts of data in which you need to identify trends and patterns.

AI is able to provide that kind of information so that nonprofit organizations can make better decisions about how to allocate resources and deliver services. So, sometimes in distributing limited resources, one has to figure out: where is the best place to provide that resource, or who is the best person to actually give that resource to? AI can provide additional insights that help with that informed decision.

We talked about personalized services, tailoring certain services to individual clients or prospective donors. AI certainly has that capability of personalizing information, so that the potential donor receives information that is tailored to their interests and needs.

Going back to fundraising, I mentioned that you can personalize fundraising appeals, identify potential donors, and track donations; AI can be used across all of raising funds. If organizations are looking to recruit volunteers, AI has capabilities there too: personalizing recruitment appeals, identifying potential volunteers, and tracking their hours even after recruiting them, managing those volunteers.

AI can provide that capability of assigning tasks to volunteers, tracking the hours of the volunteers and providing feedback to them as they help you achieve your mission or achieve the mission of the nonprofit organization.

Lastly, AI can be an advocate for change on behalf of the nonprofit organization where it analyzes data on social issues, identifies policy solutions, and helps develop advocacy campaigns. AI is so powerful these days that it can actually help a nonprofit organization in their operations and provide great impact. As AI continues to evolve, it’s likely that there are even more ways that AI can be used to address nonprofit business needs.

Johan Hammerstrom: Yeah, and I think the trend in technology is generally towards centralized managed systems. So, if you think back to when websites were brand new, the first generation of websites were hard coded in HTML because it was this new technology, very exciting. Everybody built their own website by hand. And then over time, that got replaced with, I think, Microsoft FrontPage. Those of you who go back far enough may recall where you could build the website in an interface and then have it turned into the HTML. And then, there were content management systems, but they had to be custom built. And now, we’re in an era where the majority of organizations have their website in WordPress or Squarespace, or there’s a small number of platforms that solved all of the problems that made doing it difficult and then solved those problems in a scalable way and created a solution that a lot of people could use.

You see that pattern repeated over and over again when it comes to new technology. I suspect that that’ll probably be what happens and what is already happening with AI. And so nonprofits don’t need to worry that if they’re not using these generative AI tools in a very technical way, they’re somehow falling behind. Chances are that those tools are eventually going to get incorporated into the software packages that you use to accomplish your mission in various ways. One example of this is Copilot; one of the earliest uses of generative AI came from Microsoft in GitHub. GitHub is the coding repository that lots and lots of developers use to develop software code. At Microsoft, an early use case of OpenAI was this utility called Copilot.

A developer could have Copilot running alongside the screen where they were doing the coding and Copilot would start to make suggestions. Have you thought about coding it in this way? Have you thought about coding it in that way? And what they found over time is that developers who used Copilot were something like 40% more efficient. They were moving more quickly, they were generally developing better code. The name is perfect because Copilot was just providing suggestions and it still required a developer with experience and skill to make the final decisions about how the software was going to be developed. And now Microsoft has taken Copilot, that term, that branding and they’re applying it to Office. So, Excel’s going to have Copilot, Microsoft Word’s going to have Copilot. The joke is like, finally we get a Clippy that works.

Even right now when you are designing a PowerPoint presentation, there’s a design ideas button at the top of the application. You can click on that and it starts giving you some design ideas. It’s going to get a lot better with Copilot, but this is all going to be through Microsoft Office. This is all going to be through technology that a lot of organizations are already using.

Carolyn Woodard: Which I think brings us to our next slide of issues we wanted to make sure to discuss. Going back to the last slide, I think you could see how there are specific areas of nonprofit spheres where AI is going to make a big impact very quickly. 

My question was about equity issues in AI and how that impacts the nonprofit space, particularly in certain sectors. I don’t know if anybody else uses Duolingo. I’m a big fan. It’s a lot of fun to use, but from my understanding, it started out as an artificial intelligence education tool, and they just decided languages would be the place they would start.

But their aim is to expand Duolingo and be able to teach all subjects free to students anywhere in the world, basically.

Do you want to talk a little bit more about equity issues that nonprofits particularly can address or may be facing when they’re using AI tools, if you’ve seen that come up in your work?

Nuradeen Aboki: Okay. Yeah. Certainly there’s a growing concern around how AI could exacerbate existing inequities, specifically racial and gender disparities. Generally, one of the big concerns is that the data AI models are trained on is biased. And that could lead to AI systems that make biased decisions; an example could be recommending lower-paying jobs to women. The field of AI is disproportionately male, and it’s mostly white people that are working in the field of AI. So, that could actually be an issue there. Another concern is that AI systems could be used to automate tasks that are currently done by people, which could lead to job losses. And this could marginalize certain groups of people that are already unemployed. So, there’s a great concern there.

Some of the other concerns that I have come across are that this could create a new form of surveillance and control, where AI could be used to track movement or to monitor our online activity. And that could definitely have a chilling effect on free speech and dissent. These are some of the general areas where we have seen some equity issues. There are specific examples that I’ve come across. Facial recognition is one of them. Facial recognition systems are often biased against people of color. This is because the systems, like I mentioned, are trained on data sets that are predominantly white, so the systems are more likely to misidentify people of color.

There are ways to address some of these equity issues that we are coming across. One of them is to essentially just take a look at the equity issues that we’ve mentioned. Isn’t it better for the industry to look at inclusiveness, getting people of greater diversity involved in developing AI, rather than having a lack of diversity in the AI workforce? So, increasing diversity in the AI workforce, in both race and gender, would be one way to ensure that these equity issues are resolved. And making AI more accessible could be one way to address the issue of inaccessibility: if AI systems are more affordable and easier to use, then it would be possible for people of all backgrounds to access the benefits of AI. By addressing these equity issues we can help ensure that AI is used for good and that it benefits everyone.

Johan Hammerstrom: Yeah, I think those are really great points, Nura, and I think it just goes back to the importance of understanding this term. AI is kind of a blanket term, and there’s a sort of magical quality to what’s happening right now.

You see the images and the text that’s being created and it seems really remarkable, but it’s important to remember that computers are not creating these things from scratch. These systems are basically taking vast amounts of data and then using that data to generate images, text, sound, video files, increasingly in response to specific prompts that are being made. 

Search is an area where this technology is probably going to become more and more common and there’s going to be a huge battle between Google and Microsoft and Bing in particular. Microsoft is hoping there’s a new era of search and is hoping that Bing can defeat Google in that new era of search.

There’s these huge corporate battles that are taking place around this technology. But all that is important to remember because what’s going to become more and more important is gaining skill and experience with prompts. The ability to prompt these systems, it’s going to be like the next level of coding. Being able to code effectively, right? 

Writing computer code is a skill that emerged over the last 30 or 40 years. And being able to prompt these generative systems is another new skill that’s going to evolve over time. And then there’s the output that the system provides you; being able to evaluate that output is another really important skill.

Copilot is another great illustration of this. I’m not a developer, so I couldn’t really even use Copilot because there’d be nothing for it to see me doing, but if I said, Write me a program that does this and this and it gave me the code, I have no way to evaluate whether or not that is good code or bad code or efficient code or inefficient code. 

And in the same way, I’m not really an artist either, so I could give a prompt to Midjourney or Stable Diffusion and get an image back, but I’m not going to be able to necessarily evaluate that image. So, there’s the prompting and the evaluation: the prompting that creates the generative output, and the evaluation that determines whether or not that generative output is biased, whether or not that generative output is harmful, whether it violated privacy, and so forth. There are still human beings that are going to be needed to evaluate the output that’s being created. But we’re still, again, in very early stages of understanding the expertise that’s required.
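To make the idea of prompting a bit more concrete, here is a minimal, hypothetical sketch in Python, using the OpenAI library’s v1.x chat interface, of sending a prompt to a generative text model from a script. The model name and the grant-writing prompt are illustrative placeholders rather than recommendations from the presenters, and as Johan notes above, a person still has to evaluate whatever the model returns.

# Minimal, hypothetical sketch: prompting a generative text model with the
# OpenAI Python library (v1.x chat interface). Model and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name for this example
    messages=[
        {"role": "system",
         "content": "You are a grant-writing assistant for a small nonprofit."},
        {"role": "user",
         "content": "Draft a two-sentence summary of our after-school "
                    "tutoring program for a foundation letter of inquiry."},
    ],
)

draft = response.choices[0].message.content
print(draft)  # a person still needs to review and evaluate the generated draft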

Carolyn Woodard: We’re coming right up on the end; as I told you, it is my wont that we go right up to the end of our session. I want to thank everybody for joining us today. This was such a great conversation. Thank you, Nura and Johan, for sharing your expertise, your thoughts, and your deep thinking on this, on how it’s impacting us now and what we can expect going forward.

Thank you all for joining us and again, thank you Johan and Nura, this was really great.