By Cristal Mojica
The Connecting California: Solving the Digital Divide series has focused on some of the longest-standing and historically intractable challenges in achieving digital equity, including disinvestment in municipal broadband infrastructure and systemic digital redlining.
With recent leaps in generative artificial intelligence, however, we have seen the potential emergence of a new frontier of the digital divide: the AI divide.
On October 24th, the Michelson 20MM Foundation convened practitioners, advocates, educators, legislative staff, and Los Angeles City and County officials to join in conversation during the latest installment of Connecting California. The discussion featured a panel of experts in workforce development, social work, and advocacy as they answered the big questions: What is the impact of artificial intelligence technology on digital equity? If there is an AI divide, how do we collectively bridge it?
Our panelists, Natalie Gonzalez, Digital Equity Initiative, California Community Foundation; Alex Swartsel, Insights, JFFLabs (Jobs for the Future); and Dr. Eric Rice, University of Southern California Center for AI, whose individual work involves driving equity and examining the potential for AI bias in their respective fields, shared a common concern regarding the rise of AI. Specifically, they fear that those already on the wrong side of the digital divide will be left behind at an increasing rate: in classrooms, the workforce, and beyond. They also recognized that excluding these vulnerable populations from participation in AI ecosystems means their needs, priorities, and voices could be largely absent from the subsequent training and evolution of AI models, perpetuating a cycle of discrimination and exclusion.
“Risk scores were being given to folks, and what was happening was these algorithms were designed, probably by well-intentioned people, but who weren’t really thinking carefully through issues about equity, and issues of discrimination and the disparities that exist in our society. What turned out was that these algorithms were highly biased,” shared Dr. Eric Rice, offering one example of how AI bias has negatively affected the Black community: AI-generated high-risk scores used to determine bail decisions.
The conversation also offered hope, highlighting opportunities to empower communities through training on the applications of AI technology and through participation in the responsible development and use of AI. When used equitably and responsibly, AI has the potential both to alleviate social inequities and to expand human capabilities.
The speakers agreed that as AI technology’s presence continues to expand, it is critical for communities, policymakers, equity advocates, and technology companies to be thoughtful and deliberate in steering these transformative technologies toward closing the divide.
Digital equity practitioners in the audience expressed the need for trusted educational resources to bring back to their organizations so they can successfully incorporate AI into their digital skills training and ongoing digital equity conversations. They also raised an important question: what ethical and policy guardrails will be needed in the coming years to ensure equity in the use and development of AI while avoiding discrimination via AI? The panelists concluded that it will take broad, ongoing education and collaboration to develop these guardrails hand-in-hand with our communities.
The lively discussion also revealed:
- The equity issue in AI lies not only in who has access to the technology, but also in who is represented, and how they are represented, in the foundational data sets on which AI is built.
- Guardrails for the responsible use of AI will be a major topic of policy conversation at the federal and state levels, including how AI fits into the broader digital discrimination conversation.
- A top priority for educational institutions, workplaces, and nonprofits providing digital navigator services is determining how best to incorporate AI into digital skills curricula so that communities are AI literate and ready to leverage this technology for economic opportunity, education, and more.
We thank our long-standing partners in this series: the California Community Foundation, Silicon Valley Community Foundation, and SoCal Grantmakers. We also thank our organizing partner for this session, Microsoft Philanthropies.
Michelson 20MM is a private, nonprofit foundation working toward equity for underserved and historically underrepresented communities by expanding access to educational and employment opportunities, increasing affordability of educational programs, and ensuring the necessary supports are in place for individuals to thrive. To do so, we work in the following verticals: Digital Equity, Intellectual Property, Smart Justice, Student Basic Needs, and Open Educational Resources (OER). Co-chaired and funded by Alya and Gary Michelson, Michelson 20MM is part of the Michelson Philanthropies network of foundations.