Eager Learning vs Lazy Learning
When a machine learning algorithm builds its model as soon as it receives the training data set, it is called an eager learner. It is called eager because the first thing it does on seeing the data is build the model; after that it can discard the training data. Later, when an input arrives, it uses this model to evaluate it. Most machine learning algorithms are eager learners.
By contrast, when a machine learning algorithm does not build a model immediately after receiving the training data, but instead waits until it is given an input to evaluate, it is called a lazy learner. It is called lazy because it delays building a model, if it builds one at all, until absolutely necessary. When it receives the training data, it simply stores it; only when an input arrives does it use the stored data to compute a result. In other words, a lazy learner does not learn a discriminative function from the training data but "memorises" the training set instead, whereas an eager learner fits its model weights (parameters) at training time.
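The contrast can be made concrete with a minimal sketch of a lazy learner, a 1-nearest-neighbour classifier: "training" is nothing more than storing the data, and all the real work happens at prediction time (the class name and toy data here are illustrative, not from any particular library).

```python
import math

class OneNearestNeighbour:
    """A minimal lazy learner: training only memorises the data."""

    def fit(self, X, y):
        # No model is built here -- the training set is just stored.
        self.X, self.y = list(X), list(y)
        return self

    def predict(self, x):
        # All the work is deferred to prediction time:
        # scan the entire stored training set for the closest point.
        distances = [math.dist(x, xi) for xi in self.X]
        return self.y[distances.index(min(distances))]

# Toy 1-D data: two well-separated clusters.
clf = OneNearestNeighbour().fit([[0.0], [1.0], [10.0], [11.0]],
                                ["a", "a", "b", "b"])
print(clf.predict([0.5]))   # -> a
print(clf.predict([10.4]))  # -> b
```

Note that `fit` costs almost nothing, while `predict` must touch every stored point, which is exactly the trade-off discussed next.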
Each approach has its own pros and cons. Lazy learning obviously takes less time during training but more time during prediction: every time you want to make a prediction, you search for the nearest neighbours in the entire training set (although there are tricks such as ball trees and KD-trees to speed this up).
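The idea behind those tree-based speed-ups can be sketched in one dimension: if the stored points are pre-sorted at "training" time, each query becomes a binary search, O(log n) instead of a full O(n) scan. This is only a 1-D illustration of the principle; real KD-trees and ball trees generalise it to many dimensions.

```python
import bisect

# Stored at "training" time: sorting once is the index-building step.
points = sorted([4.0, 1.0, 7.0, 2.0, 9.0])

def nearest(q):
    # Binary search finds where q would slot into the sorted list;
    # the nearest neighbour must be one of the two points around that slot.
    i = bisect.bisect_left(points, q)
    candidates = points[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda p: abs(p - q))

print(nearest(3.2))  # -> 4.0
print(nearest(8.0))  # -> 7.0
```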
Eager learning builds one model for the whole data set up front, meaning it generalises from the data at the beginning. It may therefore lose some accuracy compared with lazy learning, which keeps the whole data set available at prediction time along with the mechanisms to make use of it.
Some examples:
Lazy: k-Nearest Neighbour, Case-Based Reasoning
Eager: Decision Tree, SVM, Logistic Regression
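For contrast with the lazy learner above, here is a minimal sketch of an eager learner: a perceptron (the simplest relative of logistic regression and linear SVMs). It fits its weights during `fit`, after which the training data could be discarded entirely; prediction uses only the learned parameters. The class and hyperparameters are illustrative choices, not a library API.

```python
class EagerPerceptron:
    """A minimal eager learner: all learning happens at training time."""

    def fit(self, X, y, epochs=20, lr=0.1):
        # Learn weights now; the training data is not needed afterwards.
        self.w = [0.0] * len(X[0])
        self.b = 0.0
        for _ in range(epochs):
            for xi, yi in zip(X, y):  # labels yi are -1 or +1
                if self.predict(xi) != yi:
                    # Classic perceptron update on each mistake.
                    self.w = [w + lr * yi * x for w, x in zip(self.w, xi)]
                    self.b += lr * yi
        return self

    def predict(self, x):
        # Prediction uses only the stored weights -- no scan of training data.
        return 1 if sum(w * xi for w, xi in zip(self.w, x)) + self.b > 0 else -1

clf = EagerPerceptron().fit([[0.0], [1.0], [10.0], [11.0]], [-1, -1, 1, 1])
print(clf.predict([0.5]))   # -> -1
print(clf.predict([10.4]))  # -> 1
```

Here the cost profile is reversed: `fit` does the work (iterating until the weights separate the classes), while `predict` is a cheap dot product.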