# KNN - K-Nearest Neighbors using Python

## K Nearest Neighbor using Python

In the previous article, we studied Naive Bayes. One thing I believe is that if we can relate a concept to ourselves or our lives, there is a greater chance of understanding it. So I will try to explain everything by relating it to humans.

## What is K nearest neighbor used for?

KNN can be used for both classification and regression predictive problems. However, it is more widely used for classification problems in industry. k-NN is often used in search applications where you are looking for “similar” items, that is, when your task is some form of “find items similar to this one”. We typically use k-NN when we have fewer than 20 features (attributes) per instance, typically normalized, and when we have a lot of training data.

## What is KNN?

KNN is a non-parametric, lazy learning algorithm: it does not make any assumptions about the underlying data, and there is no explicit training phase (or it is very minimal).

k-NN is all about finding the data point(s) at the minimum distance from the current (query) data point and grouping them into one class.

k is the maximum number of data points (neighbors) that are considered when assigning a class.

To find the distance, we employ various measures such as Euclidean distance, Hamming distance, Manhattan distance, Minkowski distance, etc.

For example, 1-NN means that we generate a model whose prediction is based on the single data point at the least distance. Similarly, 2-NN means that we generate a model whose prediction is based on the 2 data points with the least distances.

The algorithm of k-NN or K-Nearest Neighbors is:
1. Compute the distance between the new data point and every training example.
2. For computing the distance, measures such as Euclidean distance, Hamming distance or Manhattan distance are used.
3. The model picks the K entries in the database which are closest to the new data point.
4. Then it does a majority vote, i.e. the most common class/label among those K entries becomes the class of the new data point.
Steps involved in processing the data and generating a model:
1. Decide on your similarity or distance metric.
2. Split the original labeled dataset into training and test data.
3. Pick an evaluation metric.
4. Decide upon the value of k. Here k refers to the number of closest neighbors we will consider while doing the majority voting of target labels.
5. Run k-NN a few times, changing k and checking the evaluation measure.
6. In each iteration, the k neighbors vote; the majority vote wins and becomes the prediction.
7. Optimize k by picking the value with the best evaluation measure.
8. Once you’ve chosen k, use the same training set to predict the observations (e.g. people’s wages or incomes) for which you have no labels.
Steps involved in selecting the value of K:
1. The value is determined experimentally.
2. Start with k=1 and use a test set to validate the error rate of the classifier.
3. Repeat with k = k + 2.
4. Choose the value of k for which the error rate is minimum.
Note: k should be an odd number to avoid ties. A small sketch of this selection loop is shown below.
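
To make this procedure concrete, here is a minimal sketch of the selection loop. It assumes scikit-learn and uses the Iris dataset purely as a placeholder; any labeled dataset would work.

```python
# A minimal sketch of choosing k experimentally: try odd values of k and keep
# the one with the lowest error rate on a held-out test set.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)            # placeholder dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

best_k, best_error = None, float("inf")
for k in range(1, 20, 2):                    # odd values of k to avoid ties
    model = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    error = 1 - model.score(X_test, y_test)  # misclassification rate on the test set
    if error < best_error:
        best_k, best_error = k, error

print("Best k:", best_k, "error rate:", best_error)
```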

## KNN vs K-Means Clustering

| | k-NN | k-Means |
| --- | --- | --- |
| Type | Supervised | Unsupervised |
| What is K? | Number of closest neighbors to look at | Number of centroids |
| Calculation of prediction error possible | Yes | No |
| Optimization is done using what? | Cross-validation and confusion matrix | Elbow method and the silhouette method |
| Algorithm converges | When all observations are classified at the desired accuracy | When cluster membership doesn't change anymore |
| Complexity | Training: O(dimensions/features); Test: O(number of observations) | O(number of points × number of clusters × number of iterations × number of attributes) |

## Curse of Dimensionality

Imagine instances described by 20 features (attributes), of which only 3 are relevant to the target function. The curse of dimensionality means that the nearest-neighbor approach can easily be misled when the instance space is high-dimensional and dominated by a large number of irrelevant features.

A solution to this can be one of the following (a small sketch combining feature scaling and PCA follows this list):
1. Stretch the j-th axis by a weight z_j, where z_1, …, z_n are chosen to minimize prediction error (i.e. weight different features differently).
2. Use cross-validation to automatically choose the weights z_1, …, z_n.
3. Note that setting z_j to zero eliminates that dimension altogether (feature subset selection).
4. PCA (Principal Component Analysis).
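
As a rough sketch of point 4 (and the general idea of re-weighting features), the pipeline below standardizes the features and reduces them with PCA before running k-NN, so that irrelevant or unscaled dimensions carry less weight. It assumes scikit-learn; the Iris data and the choice of 2 components are only placeholders.

```python
# Sketch: standardize the features and reduce them with PCA before k-NN, so
# that unscaled or irrelevant dimensions have less influence on the distances.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)            # placeholder dataset
knn_pca = make_pipeline(StandardScaler(), PCA(n_components=2), KNeighborsClassifier(n_neighbors=3))
print(cross_val_score(knn_pca, X, y, cv=5).mean())   # mean cross-validated accuracy
```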

## Advantages of KNN

1. K-NN is pretty intuitive and simple
The K-NN algorithm is easy to implement and very simple to understand. It reads through the whole dataset to classify a new data point and find its K nearest neighbors.
2. K-NN has fewer assumptions
K-NN is a non-parametric algorithm, which means there are fewer assumptions to be met before it can be applied. Parametric models like linear regression require the data to satisfy many assumptions, which is not the case with K-NN.
3. No Training Step
K-NN does not explicitly build any model; it simply tags a new data entry based on the historical data, hence no training step is required.
4. It constantly evolves
The classifier immediately adapts as we collect new training data and responds quickly to changes in the input during real-time use.
5. Very easy to implement for multi-class problems
K-NN adjusts to multi-class problems without any extra effort compared to other algorithms.
6. Can be used both for Classification and Regression
One of the biggest advantages of K-NN is that it can be used for both classification and regression problems.
7. One Hyperparameter
K-NN has a single hyperparameter, the value of k; it may take some time to select it, but after that there is little else to tune.
8. KNN stores the entire training dataset, which it uses as its representation.
9. KNN does not learn an explicit model.
10. KNN makes predictions just-in-time by calculating the similarity between an input sample and each training instance.
11. A variety of distance criteria to choose from: the K-NN algorithm gives the user the flexibility to choose a distance metric while building the model, for example:
• Euclidean Distance
• Hamming Distance
• Manhattan Distance
• Minkowski Distance

## Disadvantages of KNN

1. K-NN is a slow algorithm
K-NN is very easy to implement, but as the dataset grows, the efficiency/speed of the algorithm declines very fast.
2. Curse of Dimensionality
KNN's prediction quality decreases as the number of variables grows.
3. K-NN needs homogeneous features
If you build k-NN using a common distance, like Euclidean or Manhattan distance, it is necessary that the features have the same scale, since absolute differences in the features are weighted equally; i.e., a given distance in feature 1 must mean the same as in feature 2.
4. An optimal number of neighbors
One of the biggest and most crucial issues with K-NN is choosing the optimal number of neighbors to consider while classifying a new data entry.
5. Imbalanced data causes problems
k-NN doesn’t perform well on imbalanced data.
6. Outlier sensitivity
The K-NN algorithm is very sensitive to outliers, as it simply chooses the neighbors based on distance criteria.
7. Missing value treatment
K-NN has no inherent capability of dealing with missing values.

## Assumptions of KNN

1. k-NN performs much better if all of the data have the same scale.
2. k-NN works well with a small number of input variables (p) but struggles when the number of inputs is very large.
3. k-NN makes no assumptions about the functional form of the problem being solved.

## Distance Measures used in KNN

Some of the distance measures commonly used with k-NN are listed below; a small sketch computing a few of them follows the list.

• Euclidean Distance
• Manhattan Distance
• Chi-Square Distance
• Correlation Distance
• Hamming Distance
• Minkowski Distance
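
To make these measures concrete, below is a small plain-Python sketch computing a few of them for two arbitrary example points (textbook definitions, not an optimized implementation).

```python
# Plain-Python sketch of some common distance measures for two example points.
def euclidean(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def manhattan(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

def minkowski(p, q, r):
    return sum(abs(a - b) ** r for a, b in zip(p, q)) ** (1 / r)

def hamming(p, q):
    return sum(a != b for a, b in zip(p, q))  # count of positions that differ

p, q = [1, 2, 3], [4, 6, 3]
print(euclidean(p, q))      # 5.0
print(manhattan(p, q))      # 7
print(minkowski(p, q, 3))   # ~4.50
print(hamming(p, q))        # 2
```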

## KNN using an example

Let us understand how k-NN really works. For that, let's take a dummy dataset.

```python
data = [
    [65.75, 112.99],
    [71.52, 136.49],
    [69.40, 153.03],
    [68.22, 142.34],
    [67.79, 144.30],
    [68.70, 123.30],
    [69.80, 141.49],
    [70.01, 136.46],
    [67.90, 112.37],
    [66.49, 127.45],
]
```

In the above example, the data is of the form [feature, target].

Assuming the value of k to be 3 i.e. 3-NN, let's try to find the prediction for feature value "60".

So the 3 nearest neighbors would be
a. 65.75 with a distance value 5.75
b. 66.49 with a distance value 6.49
c. 67.79 with a distance value 7.79

And hence the predicted value will be 128.25.
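
A quick sketch to verify this hand calculation, finding the three heights closest to 60 and averaging their weights:

```python
# Verify the 3-NN regression prediction by hand: find the 3 heights closest
# to the query (60) and average their weights.
data = [[65.75, 112.99], [71.52, 136.49], [69.40, 153.03], [68.22, 142.34],
        [67.79, 144.30], [68.70, 123.30], [69.80, 141.49], [70.01, 136.46],
        [67.90, 112.37], [66.49, 127.45]]
query = 60
neighbors = sorted(data, key=lambda row: abs(row[0] - query))[:3]
print(neighbors)                             # [[65.75, 112.99], [66.49, 127.45], [67.79, 144.3]]
print(sum(row[1] for row in neighbors) / 3)  # 128.2466... ~= 128.25
```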

Similarly, let's take another dataset

```python
data = [
    [22, 1],
    [23, 1],
    [21, 1],
    [18, 1],
    [19, 1],
    [25, 0],
    [27, 0],
    [29, 0],
    [31, 0],
    [45, 0],
]
```

In the above example, the data is of the form [feature, target].

Assuming the value of k to be 3 i.e. 3-NN, let's try to find the prediction for feature value "33".

So the 3 nearest neighbors would be
a. 31 with a distance value 2.0
b. 29 with a distance value 4.0
c. 27 with a distance value 6.0

And hence the predicted value will be 0.
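
The same kind of check for the classification case, this time taking a majority vote over the labels of the three nearest ages:

```python
# Verify the 3-NN classification prediction: find the 3 ages closest to the
# query (33) and take a majority vote over their labels.
from collections import Counter

data = [[22, 1], [23, 1], [21, 1], [18, 1], [19, 1],
        [25, 0], [27, 0], [29, 0], [31, 0], [45, 0]]
query = 33
neighbors = sorted(data, key=lambda row: abs(row[0] - query))[:3]
print(neighbors)                                                      # [[31, 0], [29, 0], [27, 0]]
print(Counter(label for _, label in neighbors).most_common(1)[0][0])  # majority vote -> 0
```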

## Python Implementation of KNN

Let's take the example of the MNIST dataset; you can directly import it from the sklearn dataset repository or download it separately. Feel free to use any dataset; there are some very good datasets available on Kaggle and with Google Colab.

### 1. From Scratch

Here, I will be using some dummy data. You can use and practice on any kind of data.

```python
from collections import Counter
import math
```

In the above code, we are importing the required libraries.
```python
def knn(data, query, k, distance_fn, choice_fn):
    neighbor_distances_and_indices = []

    # 3. For each example in the data
    for index, example in enumerate(data):
        # 3.1 Calculate the distance between the query example and the current
        # example from the data.
        distance = distance_fn(example[:-1], query)

        # 3.2 Add the distance and the index of the example to an ordered collection
        neighbor_distances_and_indices.append((distance, index))

    # 4. Sort the ordered collection of distances and indices from
    # smallest to largest (in ascending order) by the distances
    sorted_neighbor_distances_and_indices = sorted(neighbor_distances_and_indices)

    # 5. Pick the first K entries from the sorted collection
    k_nearest_distances_and_indices = sorted_neighbor_distances_and_indices[:k]

    # 6. Get the labels of the selected K entries
    k_nearest_labels = [data[i][1] for distance, i in k_nearest_distances_and_indices]

    # 7. If regression (choice_fn = mean), return the average of the K labels
    # 8. If classification (choice_fn = mode), return the mode of the K labels
    return k_nearest_distances_and_indices, choice_fn(k_nearest_labels)
```

In the above code, we have created a function that performs all the steps required to produce a k-NN based prediction.
```python
# Function to calculate the mean, used in case of regression
def mean(labels):
    return sum(labels) / len(labels)

# Function to calculate the mode, used in case of classification
def mode(labels):
    return Counter(labels).most_common(1)[0][0]

# Function to calculate the distance between two data points
def euclidean_distance(point1, point2):
    sum_squared_distance = 0
    for i in range(len(point1)):
        sum_squared_distance += math.pow(point1[i] - point2[i], 2)
    return math.sqrt(sum_squared_distance)
```

The above code lists all the auxiliary functions used in the program.
```python
# Regression Data
#
# Column 0: height (inches)
# Column 1: weight (pounds)
reg_data = [
    [65.75, 112.99],
    [71.52, 136.49],
    [69.40, 153.03],
    [68.22, 142.34],
    [67.79, 144.30],
    [68.70, 123.30],
    [69.80, 141.49],
    [70.01, 136.46],
    [67.90, 112.37],
    [66.49, 127.45],
]
```

The above code declares the dummy regression data; in this data, column 0 is height and column 1 is weight.
```python
reg_query = [60]
reg_k_nearest_neighbors, reg_prediction = knn(
    reg_data, reg_query, k=3, distance_fn=euclidean_distance, choice_fn=mean
)
print(reg_k_nearest_neighbors, reg_prediction)
```

In the above code, we run the model and, based on it, predict the weight for a person with a height of 60 inches.
```python
# Classification Data
#
# Column 0: age
# Column 1: likes pineapple
clf_data = [
    [22, 1],
    [23, 1],
    [21, 1],
    [18, 1],
    [19, 1],
    [25, 0],
    [27, 0],
    [29, 0],
    [31, 0],
    [45, 0],
]
```

The above code declares the dummy data for the classification problem, where column 0 is age and column 1 indicates whether a person likes pineapple or not.
```python
clf_query = [33]
clf_k_nearest_neighbors, clf_prediction = knn(
    clf_data, clf_query, k=3, distance_fn=euclidean_distance, choice_fn=mode
)
print(clf_k_nearest_neighbors, clf_prediction)
```

In the above code, we first create a model and, based on that model, predict whether a 33-year-old person likes pineapple or not.

### scratch_knn.py

```python
from collections import Counter
import math

def knn(data, query, k, distance_fn, choice_fn):
    neighbor_distances_and_indices = []

    # 3. For each example in the data
    for index, example in enumerate(data):
        # 3.1 Calculate the distance between the query example and the current
        # example from the data.
        distance = distance_fn(example[:-1], query)

        # 3.2 Add the distance and the index of the example to an ordered collection
        neighbor_distances_and_indices.append((distance, index))

    # 4. Sort the ordered collection of distances and indices from
    # smallest to largest (in ascending order) by the distances
    sorted_neighbor_distances_and_indices = sorted(neighbor_distances_and_indices)

    # 5. Pick the first K entries from the sorted collection
    k_nearest_distances_and_indices = sorted_neighbor_distances_and_indices[:k]

    # 6. Get the labels of the selected K entries
    k_nearest_labels = [data[i][1] for distance, i in k_nearest_distances_and_indices]

    # 7. If regression (choice_fn = mean), return the average of the K labels
    # 8. If classification (choice_fn = mode), return the mode of the K labels
    return k_nearest_distances_and_indices, choice_fn(k_nearest_labels)

def mean(labels):
    return sum(labels) / len(labels)

def mode(labels):
    return Counter(labels).most_common(1)[0][0]

def euclidean_distance(point1, point2):
    sum_squared_distance = 0
    for i in range(len(point1)):
        sum_squared_distance += math.pow(point1[i] - point2[i], 2)
    return math.sqrt(sum_squared_distance)

def main():
    '''
    # Regression Data
    #
    # Column 0: height (inches)
    # Column 1: weight (pounds)
    '''
    reg_data = [
        [65.75, 112.99],
        [71.52, 136.49],
        [69.40, 153.03],
        [68.22, 142.34],
        [67.79, 144.30],
        [68.70, 123.30],
        [69.80, 141.49],
        [70.01, 136.46],
        [67.90, 112.37],
        [66.49, 127.45],
    ]

    # Question:
    # Given the data we have, what's the best-guess at someone's weight if they are 60 inches tall?
    reg_query = [60]
    reg_k_nearest_neighbors, reg_prediction = knn(
        reg_data, reg_query, k=3, distance_fn=euclidean_distance, choice_fn=mean
    )
    print(reg_k_nearest_neighbors, reg_prediction)

    '''
    # Classification Data
    #
    # Column 0: age
    # Column 1: likes pineapple
    '''
    clf_data = [
        [22, 1],
        [23, 1],
        [21, 1],
        [18, 1],
        [19, 1],
        [25, 0],
        [27, 0],
        [29, 0],
        [31, 0],
        [45, 0],
    ]
    # Question:
    # Given the data we have, does a 33 year old like pineapples on their pizza?
    clf_query = [33]
    clf_k_nearest_neighbors, clf_prediction = knn(
        clf_data, clf_query, k=3, distance_fn=euclidean_distance, choice_fn=mode
    )
    print(clf_k_nearest_neighbors, clf_prediction)

if __name__ == '__main__':
    main()
```
The output that I got is:

Regression case:
Neighbors: [(5.75, 0), (6.489999999999995, 9), (7.790000000000006, 4)]
Prediction: 128.24666666666667

Classification case:
Neighbors: [(2.0, 8), (4.0, 7), (6.0, 6)]
Prediction: 0

### 2. Using SkLearn

In sklearn, we have the KNeighborsClassifier class, which is used for implementing k-NN.

### knn_sklearn.py

```python
from sklearn.neighbors import KNeighborsClassifier

# Dummy single-feature data; the exact values here are only illustrative
X = [[1], [2], [3], [4]]
y = [0, 0, 1, 1]

neigh = KNeighborsClassifier(n_neighbors=2)
neigh.fit(X, y)
print(neigh.predict([[4.567]]))
print(neigh.predict_proba([[4.567]]))
```

In the above code, we create some dummy single-feature data (any small labeled dataset will do), which is then passed to the classifier to get the predicted value and the prediction probability for the query 4.567.

The output that I got is
Predicted value: 
Predicting Probability: [[0. 1.]]
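
Since k-NN also supports regression, here is a similar sketch using KNeighborsRegressor on the height/weight dummy data from the earlier example; the query of 60 inches matches the from-scratch version.

```python
# Sketch: k-NN regression with scikit-learn on the height/weight dummy data.
from sklearn.neighbors import KNeighborsRegressor

X = [[65.75], [71.52], [69.40], [68.22], [67.79],
     [68.70], [69.80], [70.01], [67.90], [66.49]]   # heights (inches)
y = [112.99, 136.49, 153.03, 142.34, 144.30,
     123.30, 141.49, 136.46, 112.37, 127.45]        # weights (pounds)

reg = KNeighborsRegressor(n_neighbors=3)
reg.fit(X, y)
print(reg.predict([[60]]))   # ~128.25, matching the earlier hand calculation
```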

### 3. Using TensorFlow

Here I will be using the MNIST dataset.

```python
from __future__ import print_function

import numpy as np
import tensorflow as tf
```

In the above code, we are importing all the required libraries.
In the code below, I have commented out the two print statements inside the test loop; you can uncomment them if you want to see the prediction for every test sample.

### knn_tensorflow.py

```python
from __future__ import print_function

import numpy as np
import tensorflow as tf

# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)  # path to cache the data; adjust as needed

# In this example, we limit mnist data
Xtr, Ytr = mnist.train.next_batch(5000)  # 5000 for training (nn candidates)
Xte, Yte = mnist.test.next_batch(200)    # 200 for testing

# tf Graph Input
xtr = tf.placeholder("float", [None, 784])
xte = tf.placeholder("float", [784])

# Nearest Neighbor calculation using L1 Distance
# Calculate L1 Distance
distance = tf.reduce_sum(tf.abs(tf.add(xtr, tf.negative(xte))), reduction_indices=1)
# Prediction: Get min distance index (Nearest neighbor)
pred = tf.arg_min(distance, 0)

accuracy = 0.

# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()

# Start training
with tf.Session() as sess:

    # Run the initializer
    sess.run(init)

    # loop over test data
    for i in range(len(Xte)):
        # Get nearest neighbor
        nn_index = sess.run(pred, feed_dict={xtr: Xtr, xte: Xte[i, :]})
        # Get nearest neighbor class label and compare it to its true label

        # print("Test", i, "Prediction:", np.argmax(Ytr[nn_index]),
        #       "True Class:", np.argmax(Yte[i]))
        # Calculate accuracy
        if np.argmax(Ytr[nn_index]) == np.argmax(Yte[i]):
            accuracy += 1. / len(Xte)
    print("Done!")
    print("Accuracy:", accuracy)
```
The output that I got is

Accuracy: 0.9450000000000007

## Conclusion

In this article, we studied what k-NN is used for, what k-NN is, 1-NN, 3-NN, 7-NN, k-NN vs k-means clustering, the curse of dimensionality, the advantages and disadvantages of k-NN, the assumptions of k-NN, Euclidean distance, Manhattan distance, Chi-square distance, Minkowski distance, correlation distance, Hamming distance, k-NN using an example, and the Python implementation of the k-NN algorithm using plain functions, sklearn, and TensorFlow. I hope you were able to understand everything. If you have any doubts, please leave your query in the comments.

In the next article, we will learn about K-Means.

Congratulations!!! You have climbed your next step in becoming a successful ML Engineer.

Next Article In this Series >> K-Means