Term Overview: Lazy vs Eager Learning
In this lesson, we're going to walk through two machine learning terms: lazy and eager learning.

Now, the easiest way of understanding lazy learning is this: the algorithm builds the model only when it is asked to perform a prediction.


A great example of this is K nearest neighbors. K nearest neighbors does not build and store a model ahead of time; instead, it keeps access to the historical dataset and runs through the entire algorithm whenever it's asked to perform a prediction. So when it gets a new input, it builds the neighbor set, that set of classification items, and then tells you where it thinks the new input should be classified. This works well in certain circumstances, and it is a very powerful tool when you have dynamic data that changes quite a bit.
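To make the lazy pattern concrete, here is a minimal sketch of K nearest neighbors in plain Python, using a tiny made-up dataset. Note that all of the work happens inside the prediction call itself; nothing is precomputed.

```python
from collections import Counter

def knn_predict(data, labels, query, k=3):
    """Lazy prediction: all the work happens here, at query time.
    No model is built in advance; we just scan the stored data."""
    # Rank every stored point by squared distance to the new input
    nearest = sorted(
        range(len(data)),
        key=lambda i: sum((a - b) ** 2 for a, b in zip(data[i], query)),
    )
    # Vote among the k nearest neighbors
    votes = Counter(labels[i] for i in nearest[:k])
    return votes.most_common(1)[0][0]

# Hypothetical dataset: two small clusters with labels "a" and "b"
points = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
classes = ["a", "a", "a", "b", "b", "b"]
print(knn_predict(points, classes, (2, 2)))   # near the first cluster -> "a"
print(knn_predict(points, classes, (9, 9)))   # near the second cluster -> "b"
```

Every call to `knn_predict` re-scans the full dataset, which is exactly why this approach stays current with dynamic data but slows down as the dataset grows.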

Now, one issue with any type of lazy learning is that it doesn't work well when you have billions upon billions of records and elements in your dataset, because it would take too long to process each new prediction you want to run. That is where eager learning comes in.


Eager learning builds the model first and then stores it. Some examples of eager learning algorithms are neural networks, decision trees, and support vector machines.

Let's take decision trees, for example. If you build out a full decision tree implementation, the tree is not regenerated every single time you pass in a new input. Instead, you build out the decision tree once and it gets stored somewhere, such as on a server. Then, whenever you have a new data input, you don't have to retrain on all of the historical data again, so you don't have to iterate over each one of those items.
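The train-once-then-store workflow above can be sketched with a deliberately tiny "decision stump" (a one-split tree) and Python's standard `pickle` module standing in for the server-side storage. The data and threshold search here are hypothetical placeholders, not a full decision tree implementation.

```python
import pickle

def train_stump(xs, ys):
    """Eager step: scan the whole training set once to pick the
    best split threshold, then return the finished model."""
    best = None
    for t in xs:
        # Predict 1 for values >= t; count how many labels that gets right
        correct = sum((x >= t) == bool(y) for x, y in zip(xs, ys))
        if best is None or correct > best[0]:
            best = (correct, t)
    return {"threshold": best[1]}

def predict(model, x):
    """Prediction is a cheap comparison against the stored model;
    it never touches the training data."""
    return int(x >= model["threshold"])

xs = [1, 2, 3, 10, 11, 12]        # made-up feature values
ys = [0, 0, 0, 1, 1, 1]           # made-up labels
model = train_stump(xs, ys)       # train once...
blob = pickle.dumps(model)        # ...serialize it for storage
restored = pickle.loads(blob)     # later: load and reuse
print(predict(restored, 2))       # 0
print(predict(restored, 11))      # 1
```

All of the expensive iteration lives in `train_stump`; once the model is stored, each prediction is a constant-time lookup no matter how large the original dataset was.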

Instead, the system stores the decision tree, it stores the model, and all of the new inputs that come in simply run through that model. Neural networks are another great example: they have so many processes to go through that if you tried to rebuild the model every single time, the algorithm would perform very poorly. But in addition to that, a true deep learning environment is typically what you're trying to achieve when you implement a neural network.

What you're attempting to do is build a true learner: not something that simply looks at a dataset and gives you a basic, one-off prediction, but something that is trying to learn about its sector of the world. The best way to do that is to generate the entire model once and then let it make adjustments as it learns about new items.

For example, say you're building some type of image classifier with a neural network, and you want the system to decide whether a newly uploaded picture is a tiger or not. As it receives new images, it will continually evolve what it believes a tiger looks like.

If it sees a thousand pictures of tigers, all of them orange with black stripes, then that's what it assumes a tiger looks like. But if it is eventually shown an albino tiger, it can change its view of the world, adding that example to its stored learning model.
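The idea of folding one new example into a stored model, rather than retraining from scratch, can be illustrated with a running-average update. This is a simplified stand-in for how real incremental learners behave, with made-up feature numbers, not how a neural network actually represents images.

```python
def update(model, new_example):
    """Incrementally fold one new example into the stored model
    without revisiting the full historical dataset."""
    n = model["count"]
    # Shift each stored feature mean toward the new example
    model["mean"] = [
        (m * n + x) / (n + 1) for m, x in zip(model["mean"], new_example)
    ]
    model["count"] = n + 1
    return model

# Hypothetical 2-feature "tiger" prototype built from 1,000 earlier images
model = {"mean": [0.9, 0.1], "count": 1000}

# A surprising new example (say, an albino tiger) nudges the prototype
update(model, [0.1, 0.9])
print(model["count"])   # 1001
```

The update touches only the stored summary and the single new example, so its cost is independent of how many historical images the model has already seen.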

And it's not going to have to run through that entire dataset and rebuild the model every time you upload a new picture; that would not be a good way to implement the algorithm. Instead, it stores the model, so that any new input, any new prediction you want to run, simply passes through the model, which then generates that prediction for you.