As I wait for Fei-Fei Li to meet me for lunch, groups of students begin queueing for seminars around me. Through glass walls I can hear the squeak of marker pen on whiteboard and see the furrowed brows of a dozen eager scientists in the making. I feel a nostalgic dread as I await the arrival of a professor and wonder if I am adequately prepared.
The feeling quickly lifts once Li joins me at the nondescript café, here on Stanford University’s campus, that she has chosen for our meeting. “I had an impression that I would have made it if I had lunch with the Financial Times,” says Li, as she arrives for what will surely rank among the cheapest lunches the FT has hosted.
Li is one of a small number of academics and technologists responsible for laying the foundation for today’s revolution in artificial intelligence. She is now pushing to ensure that revolution is carried out responsibly from a new institute at Stanford, her base since 2013. In one form or another, campuses have been Li’s home for more than 25 years.
During that time, universities and research labs have driven a string of breakthroughs in machine learning, computer vision and natural language processing. Li herself led the development of ImageNet, a vast repository of categorised images that demonstrated the importance of big data in powering AI and paved the way for significant advances in computer vision over the past decade.
But the AI tools being rolled out today, which demonstrate near-human level abilities to communicate, are coming instead from start-ups backed by the world’s biggest technology companies. Can any university keep up?
“I get why you’re asking this,” says Li. “But it really, really bothers me if we collectively are assuming there’s only one centre of gravity [in AI].” She insists that the public sector, with universities at its axis, still has a hugely significant role. “We’re pushing research on neuroscience, we’re pushing research on climate . . . We still have very unique interdisciplinary thinking. We have unique interdisciplinary data. And we have the youngest and most daring minds.”
In 2019, Li set up a new institute for Human-Centered Artificial Intelligence at Stanford with professor of philosophy John Etchemendy. Their aim is to ensure that powerful new AI tools and policies are designed explicitly to improve the human condition, rather than simply to boost productivity or play. Li describes herself as “walking between being a scientist and a humanist”.
In the pursuit of AI, she says, “civilisation is like a big boat and we’re sailing forward in the dark”. She sees HAI and other public bodies as lighthouses illuminating a safe passage.
Plotting that passage has become increasingly fraught since the launch of OpenAI’s powerful ChatGPT chatbot in late 2022. That brought consumers face to face with the enormous power of modern AI and kicked off a race for technological supremacy between start-ups and Big Tech players such as Microsoft and Google.
The leap forward in capability shown by ChatGPT also aggravated fears about the dangers of AI: workforce disruption, disinformation and even existential risk — the subject of a major summit hosted by the UK earlier this year.
More than just a race to build the best chatbot, the past 12 months have been about increasingly fierce competition to determine how AI is developed, deployed and governed. Li does not dismiss the idea of AI as a threat to humanity, but her work has focused on curbing its more immediate dangers and ensuring that powerful new tools are used for good.
Universities remain vital places from which to pursue public benefits such as finding cures for rare diseases or mapping the Earth’s biodiversity, she says, and can provide a useful counterweight to purely profit-driven companies.
But Li is also aware of how the odds are stacked, having punctuated her tenure at Stanford with a stint as chief scientist of AI and machine learning at Google Cloud from 2017 to 2018. Arriving there, she found the abundance of snacks “staggering”, to say nothing of the technology and the depth of talent.
It is an observation I recall as we scan the more limited menu at the counter of Coupa Café, a family-run eatery that sources all its produce from the San Francisco Bay Area. We order two portions of pollo arepas, Venezuelan cornbread stuffed with chicken, cheese and caramelised onions.
Menu
Coupa Café
473 Via Ortega, Stanford, CA 94305
Pollo arepa x 2 $22.50
Vietnamese coffee $4.10
Pumpkin spice latte (decaf) $5.85
Total (inc tax and service) $41.25
“Right now in AI, what worries me is we don’t have the resources to make sure that academic AI continues to be a centre of gravity. Because if we lose that centre of gravity, then the other centre of gravity is driven by capitalism,” says Li, when we get back to our table. “Public-sector investment in AI is so abysmal. Not a single university today can train a ChatGPT model . . . academia cannot fully develop its own versions so that it can be used for more open scientific research. That is a problem.”
Li met an executive at OpenAI shortly after the company launched as a not-for-profit in 2015. Raising a glass in toast, the executive said: “Everyone doing research in AI should seriously question their role in academia going forward.” Today the comment looks prescient. OpenAI has transitioned to a for-profit model and carries a theoretical valuation of nearly $90bn. It and rival start-ups have become magnets for the best researchers.
Li has “such respect” for OpenAI. But a boardroom coup at the San Francisco start-up in November suggested that private enterprise might be a more precarious place from which to develop AI than it had appeared. Co-founder and chief executive Sam Altman was abruptly fired for not being “consistently candid” with his board, only to be returned to post days later after employees and investors rushed to his side.
“It’s such an important company and I’m going to trust that even with glitches like we’ve seen — a tsunami glitch — we’re going to get to a better place,” says Li. “They have energised AI.”
An insistent mechanical buzz interrupts our conversation; our lunch is waiting at the counter. I have paired mine with an iced Vietnamese coffee; Li opts for a pumpkin spice latte. We return to our table and set about the arepas, served with a heap of shredded lettuce and a thick, tangy dressing.
Li has just published her autobiography The Worlds I See, recounting her journey from China to the US as a teenager and her path to Stanford. She insists she does not do “public emotion”, and yet her memoir is deeply personal. It captures a life in which “there has been unfairness, there has been pain and humiliation” but is fundamentally “a love letter to the science I love”.
Li writes most intimately about her parents. She depicts her father as curious, playful and prone to mishaps. Her mother is fiercely bright and intellectual, hemmed in by the circumstances of her time and ill health.
Li’s parents gave up a middle-class life in China after 1989’s Tiananmen Square massacre and came to the New Jersey suburbs in the hope of a better future for their daughter in the US. Until the coronavirus pandemic, they lived with Li, her husband, the academic Silvio Savarese, and their two children. Now they are a short cycle ride away.
But in other regards, Li’s parents remain far removed from her world. When Stanford threw a party to celebrate the publication of The Worlds I See last month, her parents made a rare excursion to attend. Li took the opportunity to say how much she appreciated them, and that she loved them.
“After that my parents called me the next day and said, ‘We saw so many people clapping and we even saw people crying, we had no idea what you said.’ It was probably the first time in my life I’d said those words to them and in such a public setting and it was very, very heartfelt, and they still didn’t understand it; they didn’t understand the English,” says Li.
“It is the life of an immigrant . . . that barrier is not just expression, it’s the world. Immigrants like them, they still have a barrier with this world . . . For me, I broke that barrier,” she adds, before hesitating. “Or I didn’t, I don’t know. You know, maybe I broke it because, look at me: I’m a Stanford professor. But I didn’t because I’m not part of the Silicon Valley bro world.”
The most prominent figures and the loudest voices in AI are still overwhelmingly male. Li pays tribute to a number of male collaborators, peers and students, but is frustrated by a lack of diversity in the field. That is an issue, says Li, because more varied backgrounds mean “you see the world in different ways: that’s why your science can be unique.”
In 2017, she and her former student Olga Russakovsky set up AI4ALL, a non-profit aiming to redress a persistent gender imbalance in the sector. Russakovsky is one of a number of people I spoke to about Li who lauded the acuity of her mind and her humanity in the same breath.
The combination is essential to Li’s work, says Russakovsky. “The way that researchers approach problems, the questions they ask, all of that is informed by their perspectives. All of it is guided by their background.”
Li says she struggled to write a book about herself. “I’m too shy to tell my story . . . Who am I in my forties to write a memoir? I’m not Einstein.” Ultimately, she was persuaded to reveal so much of herself partly to demonstrate that there was space for more diverse voices in her field.
A few days before Li and I met, the New York Times published a “Who’s Who Behind the Dawn of the Modern Artificial Intelligence Movement”. All of the dozen entries were men, among them Elon Musk, Dario Amodei and OpenAI co-founder Sam Altman, a fact Li has criticised publicly.
Does AI still have a gender problem? “If you tell me today there is an undercurrent of sexism, I totally believe you,” says Li. “Are women’s voices heard, are women in classrooms and boardrooms, are women in the news? That is a greater question and I’m very concerned by that.”
When Altman returned to OpenAI last month, the terms of his reinstatement stipulated that two remaining board members, who happened to be the only female directors, step down. The episode reinforced the impression that the fate of the most powerful start-up in AI today hinged to a large degree on Altman, while much of the coverage gave short shrift to the women ejected from the board.
Li is wary of reducing a story that has been decades in the making to a vignette about one man, or even a handful of them. “It would bother me if the history of AI is only written for one or a few people and forgets about the others. That would bother me. That is not a Sam Altman problem, it’s a history-writing problem.”
Her book is a form of redress, contextualising today’s AI boom by referencing the decades of technological innovation, refinement and increases in raw computing power underpinning it. It also highlights that moments that feel like immense breakthroughs can, in retrospect, look like false dawns.
When Li left Stanford for Google Cloud in 2017, her pursuit of science met cold corporate reality. She had far more resources, a much bigger team of staff and access to everyone from “Japanese cucumber farmers all the way to Fortune 500 companies”. But, for the first time, she also had a company line. “I was usually happy to follow the script,” she writes in her memoir.
That became harder in 2018, when Google was at the centre of a controversy regarding the use of its AI by the US Department of Defense. Li was not directly responsible for the partnership, but was nonetheless caught up in an internal crisis which saw a number of staffers quit the company.
Li’s time at Google “really made me feel ‘my generation has ushered in this technology and we need to be responsible’ . . . It was humbling to realise: maths is easy, equations can be long but it’s pretty clean. Humans and societies are messy,” says Li. “I spent months thinking, ‘What do I do?’ Do I just ride along this wave? Do I go back and close the door and pretend the world doesn’t exist and continue to write my papers? What should I do?”
Ultimately, she decided to return to Stanford. “I was very mission-driven. It’s easy to forget about GPUs [high-powered computer chips] and paychecks when you’re mission-driven.”
By now, our arepas have been pushed to one side. A colleague of Li’s arrives with a boba tea, to her delight. “I have a soft spot for boba tea,” she says, using her straw to harpoon the lid of her drink. Then she mistakes my look of curiosity for one of covetousness. “Do you want it?” she offers. I do not.
A peer who has known Li for more than a decade says Li has “really been focused on the right side of AI. She deeply cares about making sure AI has the right guardrails. Very few researchers in AI have that beautiful resonance between what they say and how they act.”
She has pursued unfashionable projects before, often against the counsel of peers and mentors. It took years to compile and label the more than 14mn images that make up ImageNet, a seemingly Sisyphean task that paid off only after a long period of doubt and loneliness. As Li puts it, “If you’re chasing the fashionable algorithm you’re not doing the best science.”
Her subsequent work — including bringing AI into healthcare and uniting ethicists, economists, technologists and policymakers to pursue human-centred AI — has been similarly trailblazing. Russell Wald, HAI’s deputy director, says she has the prescience and vision of a fortune teller.
“You have to be lonely to be a good scientist, because science as a profession is braving the unknown. You have to be lonely. You have to be scared. You have to see no one around you,” says Li. “And you could be wrong, but at least you have a fighting chance of discovering something great.”
George Hammond is the FT’s venture capital correspondent