One of the challenges of becoming good at recognizing what AI can and cannot do is that it takes seeing a few concrete examples of AI successes and failures. If you work on an average of, say, one new AI project a year, then seeing three examples would take you three years of work experience, and that's just a long time. What I hope to do, both in the previous video and in this one, is to quickly show you a few examples of AI successes and failures, of what it can and cannot do, so that in a much shorter time you can see multiple concrete examples that help hone your intuition and select valuable projects. So let's take a look at a few more examples.

Let's say you're building a self-driving car. Here's something that AI can do pretty well: take a picture of what's in front of your car, maybe using just a camera, maybe using other sensors as well such as radar or LiDAR, and figure out where the other cars are. This would be an AI where the input A is a picture of what's in front of your car, or perhaps a picture together with radar and other sensor readings, and the output B is the positions of the other cars. Today, the self-driving car industry has figured out how to collect enough data, and has pretty good algorithms, for doing this reasonably well. So that's something AI today can do.

Here's an example of something that today's AI cannot do, or at least would find very difficult: input a picture and output the intention, whatever the human is trying to gesture at your car. Here's a construction worker holding out a hand to ask your car to stop. Here's a hitchhiker trying to wave a car over. Here's a bicyclist raising their left hand to indicate that they want to turn left.
And so if you were to try to build a system that learns an A-to-B mapping, where the input A is a short video of a human gesturing at your car and the output B is that person's intention, that is very difficult to do today. Part of the problem is that the number of ways people can gesture at you is very, very large. Imagine all the hand gestures someone could conceivably use to ask you to slow down, go, or stop. It's difficult to collect enough data, from thousands or tens of thousands of different people gesturing at you in all of these different ways, to capture the richness of human gestures. So learning a mapping from a video to what the person wants is actually a somewhat complicated concept; in fact, even people sometimes have a hard time figuring out what someone waving at your car wants. Second, because this is a safety-critical application, you would want an AI that is extremely accurate at figuring out whether the construction worker wants you to stop or to go, and that makes it harder for an AI system as well.

So today, if you collect, say, 10,000 pictures of other cars, many teams will be able to build an AI system with at least a basic capability for detecting other cars. In contrast, it's quite hard to track down 10,000 people and collect videos of them gesturing at your car, and even with that dataset, I think it's quite hard today to build an AI system that recognizes human intention from gestures with the very high level of accuracy needed to drive safely around these people. That's why many self-driving car teams have components for detecting other cars, and they do rely on that technology to drive safely.
But very few self-driving car teams are counting on an AI system to recognize a huge diversity of human gestures, and relying on that alone to drive safely around people.

Let's look at one more example. Say you want to build an AI system that looks at x-ray images and diagnoses pneumonia. All of these are chest x-rays. The input A could be the x-ray image, and the output B the diagnosis: does this patient have pneumonia or not? That's something AI can do. Something AI cannot do is diagnose pneumonia from just 10 images and a few paragraphs of a medical textbook chapter explaining the disease. A human can look at a small set of images, maybe just a few dozen, read a few paragraphs from a medical textbook, and start to get a sense of it. But given only a medical textbook, I actually don't know what is A and what is B, or how to pose this as an AI problem, so I wouldn't know how to write a piece of software to solve it if all I had were 10 images and a few paragraphs of text explaining what pneumonia on a chest x-ray looks like. Whereas a young medical doctor might learn quite well by reading a medical textbook and looking at maybe dozens of images, an AI system isn't really able to do that today.

To summarize, here are some of the strengths and weaknesses of machine learning. Machine learning tends to work well when you're trying to learn a simple concept, such as something you could do with less than a second of mental thought, and when there's lots of data available. It tends to work poorly when you're trying to learn a complex concept from small amounts of data. A second, underappreciated weakness of AI is that it tends to do poorly when asked to perform on new types of data that are different from the data it has seen in your dataset. Let me explain with an example. Say you built a supervised learning system that learns an A-to-B mapping to diagnose pneumonia from images like these.
These are pretty high-quality chest x-ray images. But now, let's say you take this AI system and apply it at a different hospital or medical center, where maybe the x-ray technician somehow always had the patients lie at an angle, or where there are defects in the images, little scratches or other objects lying on top of the patients. If the AI system has learned from data like that on your left, maybe taken from a high-quality medical center, and you apply it to a different medical center that generates images like those on the right, then its performance will be quite poor. A good AI team would be able to ameliorate, or reduce, some of these problems, but doing so is not easy, and this is one of the areas where AI is actually much weaker than humans. A human who has learned from the images on the left is much more likely to adapt to images like those on the right, figuring out that the patient is just lying at an angle. An AI system can be much less robust than human doctors at generalizing, at figuring out what to do with new types of data like this.

I hope these examples are helping you hone your intuitions about what AI can and cannot do. And if the boundary between what it can and cannot do still seems fuzzy to you, don't worry; that's completely normal, completely okay. In fact, even today, I still can't always look at a project and immediately tell whether it's feasible, and I often still need a small number of weeks of technical diligence before forming a strong conviction about whether something is feasible or not. But I hope these examples can at least help you start imagining some things in your company that might be feasible and might be worth exploring more. The next two videos after this are optional and give a non-technical description of what neural networks and deep learning are.
Please feel free to watch those. Then next week, we'll go much more deeply into what building an AI project looks like. I look forward to seeing you next week.
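To make the A-to-B mapping idea above a little more concrete, here is a tiny illustrative sketch in Python. Everything in it is a made-up stand-in chosen for simplicity: the four-number "images", the labels, and the nearest-centroid learning rule are not how a real diagnostic system is built; they only show the shape of supervised learning, where a program learns a mapping from inputs A to outputs B out of labeled examples.

```python
# Illustrative sketch of an A-to-B mapping learned from labeled examples.
# Input A: a toy "image" given as a short list of pixel intensities.
# Output B: a label, e.g. "pneumonia" or "normal".
# The learning rule (nearest centroid) and all data here are hypothetical.

def train(examples):
    """Learn one average image (centroid) per label from (A, B) pairs."""
    sums, counts = {}, {}
    for pixels, label in examples:
        if label not in sums:
            sums[label] = [0.0] * len(pixels)
            counts[label] = 0
        sums[label] = [s + p for s, p in zip(sums[label], pixels)]
        counts[label] += 1
    return {label: [s / counts[label] for s in sums[label]] for label in sums}

def predict(centroids, pixels):
    """Map a new input A to the label B whose centroid is closest."""
    def dist(centroid):
        return sum((p - c) ** 2 for p, c in zip(pixels, centroid))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Toy training data: brighter regions stand in for opacities on an x-ray.
training_data = [
    ([0.9, 0.8, 0.9, 0.7], "pneumonia"),
    ([0.8, 0.9, 0.8, 0.9], "pneumonia"),
    ([0.1, 0.2, 0.1, 0.2], "normal"),
    ([0.2, 0.1, 0.2, 0.1], "normal"),
]
model = train(training_data)
print(predict(model, [0.85, 0.9, 0.8, 0.8]))   # → pneumonia
print(predict(model, [0.15, 0.1, 0.2, 0.15]))  # → normal
```

Notice that this toy model also illustrates the weaknesses discussed in the video: it would need many more labeled examples to learn anything complex, and if new inputs looked systematically different from the training data (the "different hospital" problem), its predictions would degrade.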