Hi, I'm delighted to have with us here today my old friend Professor Fei-Fei Li. Fei-Fei is a professor of computer science at Stanford University and also co-director of HAI, the Human-Centered AI Institute, and previously she was also responsible for AI at Google Cloud as chief scientist for the division. It's great to have you here, Fei-Fei. Thank you, Andrew. Very happy to be here. So I guess, actually, how long have we known each other? I've lost track. Definitely more than a decade. I mean, I've known your work, right, before we even met. And I came to Stanford in 2009, but we started talking in 2007, so 15 years. And I actually still have very clear memories of how stressful it was when collectively, you know, a bunch of us, me, Chris Manning, a bunch of us were trying to figure out how to recruit you to come to Stanford. It wasn't hard. I just needed to sort out my students and life, but it's hard to resist Stanford. It's really great having you as a friend and colleague here. Me too. It's been a long time, and we're very lucky to be the generation seeing AI's great progress. So there was something about your background that I always found inspiring, which is, you know, today people are entering AI from all walks of life, and sometimes people still wonder, oh, I majored in something or other. Is AI the right path for me? So I thought one of the most interesting parts of your background was that you actually started out not studying computer science or AI, but physics, and then had this path to becoming one of the most globally recognizable AI scientists. So how did you make that switch from physics to AI? Right. Well, that's a great question, Andrew, especially since both of us are passionate about young people's futures and their coming into the world of AI. The truth is, if I could enter AI back then, more than 20 years ago, today anybody can enter AI, because AI has become such a prevalent and globally impactful technology.
But myself, maybe I was an accident. So I have always been a physics kid, a STEM kid. I'm sure you were too. But physics was my passion all the way through middle school, high school, college. I went to Princeton and majored in physics. And one thing physics has taught me, to this day, is really the passion for asking big questions, the passion for seeking north stars. And I was really having fun as a physics student at Princeton. One thing I did was read the stories and writings of great physicists of the 20th century, just to hear what they thought about the world, especially people like Albert Einstein, Roger Penrose, Erwin Schrödinger. And it was really funny to notice that many of the writings toward the latter half of the careers of these great physicists were not just about the atomic world or the physical world, but ponderings about equally audacious questions like life, like intelligence, like the human condition. You know, Schrödinger wrote the book What Is Life? And Roger Penrose wrote The Emperor's New Mind, right? And that really got me very curious about the topic of intelligence. So one thing led to another during college. I interned at a couple of neuroscience labs, especially vision-related ones. And I was like, wow, this is just as audacious a question to ask as the beginning of the universe or what matter is made of. And that got me to switch from an undergraduate degree in physics to a graduate degree in AI. Even though, I don't know about you, during our time AI was a dirty word. It was the AI winter. So it was more machine learning and computer vision and computational neuroscience. Yeah, I know. Honestly, I think when I was in undergrad, I was too busy writing code. I just, you know, managed to blindly ignore the AI winter and kept on coding. Yeah, well, I was too busy solving PDEs. And so actually, do you have an audacious question now? Yes, my audacious question is still intelligence.
I think since Alan Turing, humanity has not fully understood what the fundamental computing principles behind intelligence are. You know, today we use the word AI, we use the word AGI. But at the end of the day, I still dream of a set of simple equations or simple principles that can define the process of intelligence, whether it's animal intelligence or machine intelligence. And this is similar to physics. For example, many people have used the analogy of flying, right? Are we replicating birds flying, or are we building an airplane? And a lot of people ask the question of the relationship between AI and the brain. And to me, whether we're replicating a bird or building an airplane, at the end of the day, it's aerodynamics and physics that govern the process of flying. And I do believe one day we'll discover that. I sometimes think about this, you know, one-learning-algorithm hypothesis. Could a lot of intelligence, maybe not all, but a lot of it, be explained by one or a few very simple machine learning principles? And it feels like we're still so far from cracking that nut. But on the weekends when I have spare time, when I think about learning algorithms and where they could go, this is one of the things I'm still, you know, excited about. I totally agree. I still feel like we are pre-Newtonian, if we're doing the physics analogy. Before Newton, there had been great physicists, a lot of phenomenology, a lot of studies of how the astral bodies move and all that. But it was Newton who started to write the very simple laws. And I think we are still going through that very exciting coming of age of AI as a basic science. And we're pre-Newton, in my opinion. It's really nice to hear you talk about how, despite machine learning and AI having come so far, it still feels like there are a lot more unanswered questions, a lot more work to be done by maybe some of the people joining the field today than work that's already been done. Absolutely.
I mean, let's calculate. It's only, what, about 60 years? It's a very nascent field. Modern physics and chemistry and biology are all hundreds of years old. So I think it is very exciting to be entering the field of the science of intelligence and studying AI today. I remember chatting with the late Professor John McCarthy, who coined the term artificial intelligence. And boy, the field has changed since he conceived of it at the workshop and came up with the term AI. But maybe another 10 years from now, maybe someone watching this will come up with a new set of ideas. And then we'll be saying, boy, AI sure is different than what you and I thought it would be. That's an exciting future to build toward. Yeah, I'm sure Newton would not have dreamed of Einstein. So, you know, our evolution of science sometimes takes strides, sometimes takes a while. And I think we're absolutely in an exciting phase of AI right now. You know, it's interesting hearing you paint this grand vision for AI. Going back a little bit, there was one other piece of your background that I found inspiring, which is when you were just getting started. I've heard you speak about how you were a physics student, but not only that, you were also running a laundromat to pay for school. So just tell us more about that. So I came to this country, to America, to New Jersey, actually, when I was 15. And one great thing about being in New Jersey is it was close to Princeton. So I would often just take a weekend trip with my parents to admire the place where Einstein spent most of the latter half of his career. But, you know, it was a typical immigrant life, and it was tough. And by the time I entered Princeton, my parents didn't speak English. And one thing led to another, and it turned out running a dry cleaner might be the best option for my family, especially for me to lead that business, because it's a weekend business. If it were a weekday business, it would be hard for me to be a student.
And actually, believe it or not, running a dry cleaning shop is very machine-heavy, which is good for a STEM student like me. So we decided to open a dry cleaning shop in a small town in New Jersey called Parsippany, New Jersey. It turned out we were physically not too far from Bell Labs, where lots of the early convolutional neural network research was happening. But I had no idea. I was actually a summer intern at AT&T Bell Labs way back. That's right. With Rob Schapire? Michael Kearns was my mentor. And Rob Schapire invented the boosting algorithm. So you were coding AI. I was trying to clean. No, no, no. Not very far. Only much later in my life did I start interning. Yeah. And then it was seven years. I did that for the entire undergrad and most of my grad school. And I hired my parents. Yeah, that's really inspiring. I know you've been brilliant at doing exciting work all your life. And I think the story of going from running a laundromat to being a globally prominent computer scientist, I hope that inspires some people watching this: no matter where you are, there's plenty of inspiration for everyone. You know, my high school job was as an office admin. And to this day, I remember doing a lot of photocopying. And the exciting part was using the shredder. That was the glamorous part. But I was doing so much photocopying in high school, I thought, boy, if only I could build a robot to do this photocopying, maybe I could do something else. Did you succeed? I'm still working on it. We'll see. And then, you know, when people think about you and the work you've done, one of the huge successes everyone thinks about is ImageNet, which established an early benchmark for computer vision and was really completely instrumental to the modern rise of deep learning in computer vision. One thing I bet not many people know about is how you actually got started on ImageNet. So tell us the origin story of ImageNet.
Yeah, well, Andrew, that's a good question, because a lot of people see ImageNet as just labeling a ton of images. But where we began was really going after a North Star, bringing back my physics background. So when I entered grad school. When did you enter grad school, which year? 97. Okay, I was three years later than you, 2000. And that was a very exciting period, because I was in the computer vision and computational neuroscience lab of Pietro Perona and Christof Koch at Caltech. And leading up to that, there had been, first of all, two things that were very exciting. One is that the world of AI, which at that point wasn't even called AI, along with natural language processing, had found its lingua franca: machine learning. Statistical modeling had emerged as a new tool, right? I mean, it had been around. And I remember when the idea of applying machine learning to computer vision was like a controversial thing. Right, and I was in the first generation of graduate students who were embracing Bayes nets, all the inference algorithms and all that. And that was one exciting happening. A second exciting happening that most people don't know and don't appreciate is that a couple of decades, probably more than two or three decades, of incredible cognitive science and cognitive neuroscience work in the field of vision, human vision, had really established a couple of critical North Star problems, just in understanding human visual processing and human intelligence. And one of them is the recognition and understanding of natural objects and natural things. Because a lot of the psychology and cognitive science work was pointing us to the fact that this is an innately optimized, whatever that word is, functionality and ability of human intelligence. It's more robust, faster, and more nuanced than we had thought. We even find neural correlates, brain areas devoted to faces or places or body parts.
So these two things led to my PhD study of using machine learning methods to work on real-world object recognition. But it became very painful very quickly that we were banging against what continues to be one of the most important challenges in AI and machine learning: the lack of generalizability. You can design a beautiful model all you want, but if you're overfitting that model, it won't generalize. I remember when it used to be possible to publish a computer vision paper showing it works on one image. Yeah, it's just overfitting. The models were not very expressive, and we lacked the data. And we as a field were also betting on making the variables very rich by hand-engineering features. Remember, every variable carried a ton of semantic meaning, but with hand-engineered features. And then toward the end of my PhD, my advisor Pietro and I started to look at each other and say, well, boy, we need more data. If we believe in this North Star problem of object recognition and we look at the tools we have, mathematically speaking, we're overfitting every model we're encountering. We need to take a fresh look at this. So one thing led to another, and he and I decided we would do what at that point we thought was a large-scale data project, called Caltech 101. I remember that data set. I wrote papers using your Caltech 101 data set way back. You did, you and your early graduate students. It helped a lot of researchers, the Caltech 101 data set. That was me and my mom labeling images. Oh, and a couple of undergrads. But those were the early days of the internet, so suddenly the availability of data was a new thing. I remember Pietro had this super expensive digital camera, I think a Canon, something like $6,000, and he would walk around Caltech taking pictures. But we were the internet generation. I went to Google image search, and I started to see these thousands and tens of thousands of images. And I told Pietro, let's just download them.
Of course, it's not that easy to download. So one thing led to another, and we built this Caltech 101 data set of 101 object categories and about, I would say, 30,000 pictures. I think it's really interesting that, you know, even though everyone's heard of ImageNet today, even you took a couple of iterations, where you did Caltech 101 first. And that was a success. Lots of people used it. But even the early learnings from building Caltech 101 gave you the basis to build what turned out to be an even bigger success. Right. Except that by the time we started, I had become an assistant professor. We started to look at the problem and realized it was way bigger than we thought. Just mathematically speaking, Caltech 101 was not sufficient to power the algorithms. We decided to do ImageNet. And that was the time people started to think we were doing too much. Right. It's just too crazy. The idea of downloading the entire internet of images and mapping out all the English nouns was a little bit... I started to get a lot of pushback. I remember at one CVPR conference, when I presented the early idea of ImageNet, a couple of researchers publicly questioned it and said, if you cannot recognize one category of object, let's say the chair you're sitting in, what's the use of a data set of 22,000 classes and 15 million images? But in the end, that giant data set unlocked a lot of value for countless researchers around the world. I think it was the combination of betting on the right North Star problem and the data that drives it. So it was a fun process. To me, when I think about that story, it seems like one of those examples where sometimes people feel like they should only work on projects that are huge at the outset, but I feel like for people working in machine learning, if your first project is a bit smaller, it's totally fine.
Have a good win, use the learnings to build up to even bigger things, and then sometimes you get an ImageNet-sized win out of it. But in the meantime, I think it's also important to be driven by an audacious goal. You can size your problem or your project as local milestones along this journey. But I also look at some of our current students: they're so peer-pressured by the current climate of publishing nonstop that it becomes incremental papers, just getting a publication for the sake of it. And I personally always push my students to ask the question, what is the North Star that's driving you? Yeah, that's true. For myself, when I do research, over the years I've always pretty much done what I'm excited about, where I want to try to push the field forward. It doesn't mean you don't listen to people. You have to listen to people, let them shape your opinion, but in the end, I think the best researchers let the world shape their opinions while still driving things forward with their own convictions. Totally agree, yeah. It's your own inner fire, right? Yeah, I think so, yeah. So as your research program developed, you've wound up taking your foundations in computer vision and neuroscience and applying them to all sorts of topics, including, very visibly, healthcare applications. We'd love to hear a bit more about that. Yeah, happy to. I think the evolution of my research in computer vision also kind of follows the evolution of visual intelligence in animals, and there are two topics that truly excite me. One is, what is a truly impactful application area that would help human lives? And that's my healthcare work. The other one is, what is vision, at the end of the day, about? And that brings me to trying to close the loop between perception and robotic learning.
So on the healthcare side, you know, one thing, Andrew: there was a number that shocked me about 10 years ago when I met my long-term collaborator, Dr. Arnold Milstein, at Stanford Medical School, and that number is that about a quarter of a million Americans die of medical errors every year. I had never imagined a number that high due to medical errors. There are many, many reasons, but we can rest assured most of these are not intentional; these are unintended mistakes and so on. That's a mind-boggling number. Compare that to the roughly 40,000 deaths a year from automotive accidents, which is completely tragic, and this is even vastly greater. I was going to say that. I'm glad you brought it up. Just one example, one number within that mind-boggling number, is that fatalities resulting from hospital-acquired infections number more than 95,000. That's almost 2.5 times the deaths from car accidents. And in this particular case, hospital-acquired infection is a result of many things, but in large part a lack of good hand hygiene practice. So if you look at the WHO, there have been a lot of protocols about clinicians' hand hygiene practice, but in real healthcare delivery, when things get busy and when the process is tedious and when there is a lack of a feedback system, people still make a lot of mistakes. Another tragic medical fact is that more than $70 billion every year is spent on fall-related injuries and fatalities. Most of this happens to the elderly at home, but also in hospital rooms. And these are huge issues. And when Arnie and I got together back in 2012, it was the height of the self-driving car, let's say not hype, but what's the right word? Excitement in Silicon Valley. And then we looked at the technology of smart sensing: cameras, lidars, radars, whatever, smart sensors, machine learning algorithms, and holistic understanding of a complex environment with high stakes for human lives.
I was looking at all that for self-driving cars and realized that in healthcare delivery, we have the same situation. Much of the process, the human behavior process of healthcare, is in the dark. And if we could have smart sensors, be it in patient rooms or senior homes, to help our clinicians and patients stay safer, that would be amazing. So Arnie and I embarked on what we call the ambient intelligence research agenda. But one thing I learned, which probably will lead to our other topics, is that as soon as you apply AI to real human conditions, there are a lot of human issues in addition to machine learning issues. For example, privacy. I remember reading some of your papers with Arnie and found it really interesting how you could build and deploy systems that were relatively privacy-preserving. Thank you. The first iteration of that technology is that we use cameras that do not capture RGB information: depth cameras, for example, which have been used a lot in self-driving cars. And there you preserve a lot of private information just by not seeing the faces and identities of people. But what's really interesting over the past decade is that the change in technology is actually giving us a bigger tool set for privacy-preserving computing in this setting. For example, on-device inference: as chips get more and more powerful, if you don't have to transmit any data through the network to a central server, you protect people's privacy better. Federated learning: we know it's still early stage, but that's another potential tool for privacy-preserving computing. And then differential privacy, and also encryption technologies. So we're starting to see that human demands, privacy and other issues, are actually driving a new wave of machine learning technology for ambient intelligence in healthcare. I've been encouraged to see practical applications of differential privacy that are actually real.
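To make the differential privacy idea mentioned here concrete: a minimal sketch of the classic Laplace mechanism, which answers an aggregate query (here, a count) with calibrated noise so no single person's record can be inferred. This is a generic illustration, not the system discussed in the conversation; the function names and the epsilon value are illustrative assumptions.

```python
import math
import random

def laplace_noise(scale):
    # Sample from a Laplace(0, scale) distribution via inverse CDF.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, epsilon=1.0):
    # A counting query has sensitivity 1: adding or removing one
    # record changes the true count by at most 1, so Laplace noise
    # with scale 1/epsilon gives epsilon-differential privacy.
    true_count = len(records)
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical usage: report how many patient-room events were
# detected, without exposing whether any one patient contributed.
events = ["hand_hygiene_ok"] * 100
print(private_count(events, epsilon=1.0))
```

Smaller epsilon means more noise and stronger privacy; the tradeoff between epsilon and accuracy is the core design choice in any deployment of this technique.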
Federated learning, as you said, probably the PR is a little bit ahead of the reality, but I think we'll get there. But it's interesting how consumers in the last several years have fortunately gotten much more knowledgeable about privacy and are increasingly... So important. I think the public is also pushing us to be better scientists. And I think ultimately people understand AI, and hold everyone, including us, accountable for really doing the right thing. Yeah. And on that note, one of the really interesting pieces of work you've been doing has been leading several efforts to help educate legislators and help governments, especially the U.S. government, work toward better laws and better regulation, especially as it relates to AI. That sounds very important, and I suspect some days of the week it can be somewhat frustrating work, but we'd love to hear more about that. Yeah. So I think, first of all, I have to credit many, many people. About four years ago, I was finishing my sabbatical at Google, where I was very privileged to work with so many businesses, enterprise developers, and a large variety of vertical industries realizing AI's human impact. And that was when many faculty leaders at Stanford, and also our president and provost and former president and former provost, all got together and realized there is a historical role that Stanford needs to play in the advancement of AI. We were part of the birthplace of AI. A lot of the work previous generations have done, and a lot of the work you've done, and some of the work I've done, led to today's AI. What is our historical opportunity and responsibility? With that, we believe that the next generation of AI education and research and policy needs to be human-centered.
And having established the Human-Centered AI Institute, what we call HAI, one of the efforts that really took me outside of my comfort zone, or existing expertise, is a deeper engagement with policy thinkers and makers. Because we're here in Silicon Valley, and there is a culture in Silicon Valley that we just keep making things and the law will catch up by itself. But AI is impacting human lives, and sometimes negatively, so rapidly that it is not good for any of us if we, the experts, are not at the table with the policy thinkers and makers to really try to make this technology better for the people. I mean, we're talking about fairness, we're talking about privacy. We're also talking about the brain drain of AI to industry and the concentration of data and compute in a small number of technology companies. All these are really part of the changes of our time. Some are really exciting changes; some have profound impacts that we cannot necessarily predict yet. So one piece of policy work that Stanford HAI has very proudly engaged in: we were one of the leading universities that lobbied for a bill called the National AI Research Cloud Task Force Bill. The name later changed from Research Cloud to Research Resource, so now the acronym is NAIRR, the National AI Research Resource. This bill calls for a task force to put together a roadmap for America's public sector, especially the higher education and research sectors, to increase their access to resources for AI compute and AI data. It really is aimed at rejuvenating America's ecosystem in AI innovation and research. And I'm on the 12-person task force under the Biden administration for this bill. We hope that's a piece of policy that is not regulatory; it's more an incentive policy to build and rejuvenate ecosystems. I'm glad that you're doing this to help shape U.S. policy and make sure enough resources are allocated to ensure the healthy development of AI.
I feel like this is something that every country needs at this point. So, you know, just from the things that you are doing by yourself, not to speak of the things that the global AI community is doing, there's just so much going on in AI right now. So many opportunities, so much excitement. I've found that for someone getting started in machine learning for the first time, sometimes there's so much going on that it can almost feel a little bit overwhelming. Totally. What advice do you have for someone getting started in machine learning? Good question, Andrew. I'm sure you have great advice. You're one of the world's best-known advocates for AI and machine learning education. So I do get this question a lot as well. And one thing you're totally right about is that AI today really feels different from our time. During our time... For the record, now is still our time. That's true. When we were starting in AI. I love that, exactly. We're still part of this. When we got started, the entrance to AI and machine learning was relatively narrow. You almost had to start from computer science, right? As a physics major, I still had to wedge myself into the computer science track or electrical engineering track to get to AI. But today I actually think there are many aspects of AI that create entry points for people from all walks of life. On the technical side, I think it's obvious that there's just an incredible plethora of resources out there on the internet, from Coursera to YouTube to TikTok to GitHub. There's just so much that students worldwide can learn about AI and machine learning compared to the time we began learning machine learning. And on many campuses, and we're not talking about just college campuses but high school campuses, sometimes earlier, we're starting to see more available classes and resources. So I do encourage those young people with a technical interest and the resources and opportunity to embrace these resources, because it's a lot of fun.
But having said that, for those of you who are not coming from a technical angle, who are still passionate about AI, whether it's the downstream applications or the creativity it engenders or the policy and social angle or important social problems, whether it's digital economics or governance or history, ethics, political science, I do invite you to join us, because there is a lot of work to be done. There are a lot of open questions. For example, my colleagues at HAI are trying to find answers on how you define our economy in the digital age. What does it mean when robots and software participate in the workflow more and more? How do you measure our economy? That's not an AI coding question. That is an AI impact question. We're looking at the incredible advances of generative AI, and there will be more. What does that mean for creativity, and for the creators, from music to art to writing? I think there are a lot of concerns, and I think rightfully so. But in the meantime, it takes people coming together to figure this out and also to use this new tool. So in short, I just think it's a very exciting time, and anybody from any walk of life, as long as you're passionate about this, there's a role to play. I think that's really exciting. How about economics? I think about my conversations with Professor Erik Brynjolfsson, studying the impact of AI on the economy. But from what you're saying, and I agree, it seems like no matter what your current interests are, AI is such a general-purpose technology that the combination of your current interests and AI is often promising. And I find that even for learners that may not yet have a specific interest, if you find your way into AI and start learning things, often the interests will evolve, and then you can start to craft your own path.
And given where AI is today, there's still so much room and so much need for a lot more people to craft their own paths, to do this exciting work that I think the world still needs a lot more of. Totally agree. So one piece of work that you did that I thought was very cool was starting a program, initially called SAILORS and later AI4ALL, which was really reaching out to high school and even younger students to try to give them more opportunities in AI, including people of all walks of life. We'd love to hear more about that. Yeah, well, this is in the spirit of this conversation. That was back in 2015. There was starting to be a lot of excitement about AI, but there was also starting to be this talk about killer robots coming next door, Terminators coming. And at that time, Andrew, I was the director of the Stanford AI Lab, and I was thinking, you know, we know how far we are from Terminators coming, and that seemed to be a little bit of a far-fetched concern. But I was living my work life with a real concern I felt no one was talking about, which was the lack of representation in AI. At that time, I guess after Daphne had left, I was the only woman faculty member at the Stanford AI Lab, and we had a very small share, around 15%, of women among graduate students, and we really didn't see anybody from underrepresented minority groups in Stanford's AI program. And this is a national or even worldwide issue, so it wasn't just Stanford. Frankly, it still needs a lot of work today. Exactly. So how do we fix this? Well, I got together with my former student, Olga Russakovsky, and also a long-term educator of STEM topics, Dr. Rick Sommer, from the Stanford Pre-Collegiate Studies Program, and thought about inviting high schoolers, at that time young women, to participate in a summer program to inspire them to learn AI. And that was how it started in 2015.
In 2017, we got a lot of encouragement and support from people like Jensen and Lori Huang and Melinda Gates, and we formed the national nonprofit called AI4ALL, which is really committed to training and helping shape tomorrow's leaders for AI from students of all walks of life, especially traditionally underserved and underrepresented communities. To this day, we've had many, many summer camps and summer programs across the country. More than 15 universities are involved, and we have an online curriculum to encourage students, as well as college pathway programs to continue to support these students' careers by matching them with internships and mentors. So it's a continual effort of encouraging students of all walks of life. And I remember back then, I think your group was printing these really cool T-shirts that asked the question: AI will change the world. Who will change AI? And I thought the answer, making sure everyone can come in and participate, was a great answer. Yeah, it's still an important question today. So that's a great thought, and I think that takes us toward the end of the interview. Any final thoughts for the people watching this? Just that this is still a very nascent field. As you said, Andrew, we are still in the middle of this. I still feel there are just so many questions that, you know, I wake up excited to work on with my students in the lab, and I think there are a lot more opportunities for the young people out there who want to learn and contribute and shape tomorrow's AI. Well said, Fei-Fei. That's very inspiring. Really great to chat with you, and thank you. Thank you. It's fun to have these conversations.