Katy Knight is Executive Director and President of Siegel Family Endowment, a foundation focused on the impact of technology on society.
Over the past few months, generative AI has taken the world by storm. Bill Gates hailed it as “the most important advance in technology since the graphical user interface.” ChatGPT has seen over 1 billion total visits since its debut in November 2022, and its 100 million active monthly users have made it “the fastest-growing consumer application in history.” In everyday life, AI is increasingly being used to complete both simple and complex tasks—from composing recipes to generating code.
As promising as generative AI is, it comes with significant peril. An open letter from leading researchers, advocates and tech industry leaders has called for a “pause” on AI experiments, citing “profound risks to society and humanity” if the emergent technology were allowed to continue without firm guardrails. The letter itself prompted its own backlash, with renowned researchers criticizing its focus on an overly hyped philosophical future and its failure to mention many other well-documented harms of AI, such as worker exploitation, data theft, the reproduction of systems of oppression, the endangerment of information ecosystems and the concentration of power.
As the leader of a tech-focused foundation, I make investments on both sides of this conversation—in piloting and scaling technologies that we believe can have a large-scale positive impact on society, in efforts to remediate the harms that tech has left in its wake, and in research on future pitfalls we might encounter.
To me, what is largely missing from this conversation is thorough consideration of the public interest—and of the voices of ordinary citizens who will use, benefit from and be harmed by this technology. If we want AI to be anchored in the public interest—designed to serve all citizens—we’re doing it wrong. Instead of focusing on what AI can do, we should be asking what humans actually need it to do, and then develop accordingly. While the notion of creating technology in the public interest is not new, the tactics and tools at society’s disposal are often secondary to making a profit. To truly capitalize on all that generative AI is capable of across communities, I believe we must take the following steps.
1. Infuse public interest principles throughout the AI product chain.
AI is not a singular product; it has a supply chain consisting of raw materials (data), design choices and business decisions that are all created and influenced by humans. To orient AI in the public interest, public interest values—like equity, transparency and accountability—must be infused throughout the building process, not simply as a consultation after the fact. This means that those responsible for these tools should be proactively collaborating with the public to construct technologies that respond to their needs. Each link in the chain presents an opportunity to combat bias, exploitation and information disorder—from ensuring large language models are transparent and auditable to elevating voices that have been historically excluded and forefronting diverse people to steer design decisions. The Partnership on AI and European Center for Not-for-Profit Law have produced tools for this type of meaningful engagement.
2. Spur new business models based on public interest values.
Generative AI is certainly not free. It has a clear business model, which I’ve been heartened to see many folks calling attention to in recent months. With big tech companies like Microsoft and Google already competing in the “AI arms race” (likely powered by the unimaginative tactics of ad-based revenue or subscriptions), we have a pivotal question to consider: Are these the right incentives—or the right stewards—for this wave of transformative technology?
For too long, we’ve given private companies a monopoly on thinking about and designing how technology and data can be used to generate profit, and in the last decade, they haven’t done much but tweak the way they monetize data to sell ads. The rise of generative AI, however, presents a unique moment to explore new and potentially more effective business models that push beyond our short list of flawed options while also incorporating public interest values, such as compensation for creators. Philanthropy, with its risk-tolerant capital, and government, with its unmatched spending potential and ability to operate independently of profit motives, are well positioned to catalyze this change—for example, by investing in out-of-the-box ventures or revamping contracting and vendor policies to keep workers from being exploited.
3. Bolster existing public institutions.
We believe that philanthropy should serve as society’s risk capital to systematically and strategically partner on and support generative AI initiatives, models and projects that offer new, more collaborative ways of serving the public interest.
This includes lifting up the voices of researchers and civil society organizations that have spent years developing a body of knowledge on ethical AI, from compiling best practices and oversight frameworks to advocating for developments that act in service of democracy and the environment. It also includes investing in programs working to connect more diverse tech talent with government offices to help lawmakers better understand, govern and regulate the use of AI tools to serve and protect all communities, like the work currently being done at TechCongress and the Tech Talent Project.
Funders also have a responsibility to elevate the diverse experts and community members who should be afforded a seat at the table when it comes to AI design and development, as well as support growing public education and literacy efforts around new technologies so people are informed and empowered to advocate for themselves and the communities they represent.
The future of generative AI technologies will have profound implications for the lives of ordinary people. And it is we, the broad and diverse public, who need to be involved in the process of deciding what limits and possibilities should be established for this nascent and incredibly powerful technology.