Challenges In Machine Learning - Part Two

Introduction

 
In the previous article, Challenges In Machine Learning - Part One, we learned about the challenges caused by bad data. In this article, we will go through a couple of challenges caused by bad algorithms.
 

Overfitting the Training Data 

 
Say you are visiting a foreign country and the taxi driver rips you off. You might be tempted to say that all taxi drivers in that country are thieves. That is by all means overgeneralizing, something we humans do all too often, and unfortunately machines can fall into the same trap if we are not careful. In Machine Learning this is called overfitting: the model performs well on the training data, but it does not generalize well.
 
Complex models such as deep neural networks can detect subtle patterns in the data, but if the training set is noisy, or if it is too small (which introduces sampling noise), the model is likely to detect patterns in the noise itself. Obviously, these patterns will not generalize to new instances.
Overfitting usually happens when the model is too complex relative to the amount and noisiness of the training data. The possible solutions are:
  • Simplifying the model by picking one with fewer parameters (for instance, choosing a linear model rather than a high-degree polynomial model), by reducing the number of attributes in the training data, or by constraining the model (regularization), as illustrated in the sketch after this list.
  • Gathering more training data.
  • Reducing the noise in the training data, for instance, fixing data errors and removing outliers.
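
To make the first and third fixes concrete, here is a minimal sketch using scikit-learn on a small synthetic dataset. The data, the polynomial degrees, and the alpha value are all illustrative assumptions of mine, not something prescribed by this article: a high-degree polynomial model memorizes the noisy training set, while a simpler model or a Ridge-regularized one generalizes better.

# A minimal sketch of overfitting and two of the fixes above:
# simplifying the model (lower polynomial degree) and constraining
# it with regularization (Ridge). Dataset is synthetic and illustrative.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.RandomState(42)
X = rng.uniform(-3, 3, size=(30, 1))             # small training set
y = 0.5 * X.ravel() ** 2 + rng.normal(0, 1, 30)  # noisy quadratic target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

models = {
    "degree-15 (overfits)": make_pipeline(
        PolynomialFeatures(15), LinearRegression()),
    "degree-2 (simpler model)": make_pipeline(
        PolynomialFeatures(2), LinearRegression()),
    "degree-15 + Ridge (constrained)": make_pipeline(
        PolynomialFeatures(15), Ridge(alpha=1.0)),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    train_mse = mean_squared_error(y_train, model.predict(X_train))
    test_mse = mean_squared_error(y_test, model.predict(X_test))
    # An overfit model shows a low training error but a much higher test error.
    print(f"{name}: train MSE={train_mse:.2f}, test MSE={test_mse:.2f}")

Running this, you should see the gap between training and test error shrink for the simpler and the regularized models, which is exactly what "generalizing well" means in practice.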

 Underfitting the Training Data

 
Obviously, as you might have guessed, underfitting is the opposite of overfitting: it happens when the model is too simple to learn the underlying structure of the data. For instance, a linear model of life satisfaction is prone to underfit; reality is just more complex than the model, so its predictions are bound to be inaccurate, even on the training examples.
 
The possible options to fix this problem are:
  • Choosing a more powerful model, with more parameters.
  • Feeding better features to the learning algorithm (a.k.a. feature engineering).
  • Reducing the constraints on the model, for instance, reducing the regularization hyperparameter. A sketch of all three fixes follows this list.
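
Here is a companion sketch of underfitting, again with scikit-learn on synthetic data of my own invention (the alpha values and degrees are assumptions for illustration): a linear model cannot capture a quadratic target no matter what, while adding polynomial features (a more powerful model, or equivalently a bit of feature engineering) and relaxing the regularization both bring the training error down.

# A minimal sketch of underfitting and the fixes above. Even the
# training error stays high until the model is made powerful enough
# and its constraints (the Ridge alpha) are relaxed.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

rng = np.random.RandomState(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = 0.5 * X.ravel() ** 2 + rng.normal(0, 0.3, 200)  # quadratic target

models = {
    "linear (too simple, underfits)": make_pipeline(
        PolynomialFeatures(1), Ridge(alpha=1.0)),
    "quadratic features, heavy regularization": make_pipeline(
        PolynomialFeatures(2), Ridge(alpha=10000.0)),
    "quadratic features, light regularization": make_pipeline(
        PolynomialFeatures(2), Ridge(alpha=0.1)),
}

for name, model in models.items():
    model.fit(X, y)
    # An underfit model has a high error even on the data it was trained on.
    print(f"{name}: train MSE={mean_squared_error(y, model.predict(X)):.2f}")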

Summary

 
There is just one last important topic to cover: once the model is trained, we would like to evaluate it and fine-tune it if necessary. We will learn about that in the next article.

