
KNN Classifier


I'm heading into AI for grad school. Having just finished the fundamentals, I wanted a project to practice on; cs231n gets good reviews online, so I decided to work through its assignments.

I recommend reading the official notes first: https://cs231n.github.io/classification/

They are very thorough; after reading them you should be able to finish the KNN part of assignment 1.

Overall it was fairly easy, probably because I had already taken a data mining course at school.

Distance Computation

Three metrics come up: Manhattan distance, Euclidean distance, and Chebyshev distance. The KNN part mainly uses the Euclidean (L2) distance.

In mathematics, the Lp norm (L-p norm) measures the "length" of a vector:

\|x\|_p = \left( \sum_{i=1}^n |x_i|^p \right)^{1/p}

When p = 1, this gives the L1 norm (Manhattan distance):

\|x - y\|_1 = \sum_{i=1}^n |x_i - y_i|

When p = 2, the L2 norm (Euclidean distance):

\|x - y\|_2 = \sqrt{\sum_{i=1}^n (x_i - y_i)^2}

As p → ∞, the L∞ norm (Chebyshev distance):

\|x - y\|_{\infty} = \max_i |x_i - y_i|
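
As a quick numerical check, here is a minimal NumPy sketch of all three distances (the vectors x and y are made-up examples):

import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 0.0, 3.0])
diff = np.abs(x - y)                 # [3., 2., 0.]

print(diff.sum())                    # L1 (Manhattan):  5.0
print(np.sqrt(np.sum(diff ** 2)))    # L2 (Euclidean):  sqrt(13) ≈ 3.606
print(diff.max())                    # L∞ (Chebyshev):  3.0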

The L2 distance can be implemented in three ways, as follows.

Two loops

def compute_distances_two_loops(self, X):
    """
    Compute the distance between each test point in X and each training point
    in self.X_train using a nested loop over both the training data and the
    test data.

    Inputs:
    - X: A numpy array of shape (num_test, D) containing test data.

    Returns:
    - dists: A numpy array of shape (num_test, num_train) where dists[i, j]
      is the Euclidean distance between the ith test point and the jth training
      point.
    """
    num_test = X.shape[0]
    num_train = self.X_train.shape[0]
    dists = np.zeros((num_test, num_train))

    for i in range(num_test):
        for j in range(num_train):
            #####################################################################
            # TODO:                                                             #
            # Compute the l2 distance between the ith test point and the jth    #
            # training point, and store the result in dists[i, j]. You should   #
            # not use a loop over dimension, nor use np.linalg.norm().          #
            #####################################################################
            dists[i, j] = np.sqrt(np.sum(np.square(X[i, :] - self.X_train[j, :])))
    return dists

One loop

def compute_distances_one_loop(self, X):
    """
    Compute the distance between each test point in X and each training point
    in self.X_train using a single loop over the test data.

    Input / Output: Same as compute_distances_two_loops
    """
    num_test = X.shape[0]
    num_train = self.X_train.shape[0]
    dists = np.zeros((num_test, num_train))

    for i in range(num_test):
        #######################################################################
        # TODO:                                                               #
        # Compute the l2 distance between the ith test point and all training #
        # points, and store the result in dists[i, :].                        #
        # Do not use np.linalg.norm().                                        #
        #######################################################################
        dists[i, :] = np.sqrt(np.sum(np.square(self.X_train - X[i, :]), axis=1))
    return dists

No loops

def compute_distances_no_loops(self, X):
    """
    Compute the distance between each test point in X and each training point
    in self.X_train using no explicit loops.
    Input / Output: Same as compute_distances_two_loops
    """
    num_test = X.shape[0]
    num_train = self.X_train.shape[0]
    dists = np.zeros((num_test, num_train))

    #########################################################################
    # TODO:                                                                 #
    # Compute the l2 distance between all test points and all training      #
    # points without using any explicit loops, and store the result in      #
    # dists.                                                                #
    #                                                                       #
    # You should implement this function using only basic array operations; #
    # in particular you should not use functions from scipy,                #
    # nor use np.linalg.norm().                                             #
    #                                                                       #
    # HINT: Try to formulate the l2 distance using matrix multiplication    #
    #       and two broadcast sums.                                         #
    #########################################################################

    test_sq = np.sum(X**2, axis=1, keepdims=True)    # shape (num_test, 1)
    train_sq = np.sum(self.X_train**2, axis=1)       # shape (num_train,)
    # Broadcasting (num_test, 1) + (num_train,) gives (num_test, num_train).
    dists = np.sqrt(test_sq + train_sq - 2 * np.dot(X, self.X_train.T))

    return dists
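
One caveat with this formula, not required by the assignment: floating-point cancellation can make an entry of test_sq + train_sq - 2·X·X_trainᵀ slightly negative when a test point nearly coincides with a training point, and np.sqrt then produces NaN. A defensive variant of the last line clamps at zero:

    # Clamp tiny negative values caused by floating-point round-off.
    dists = np.sqrt(np.maximum(test_sq + train_sq - 2 * np.dot(X, self.X_train.T), 0))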

The no-loop version comes from expanding the square, as the hint also suggests:

\mathrm{dist}(x, y) = \sqrt{\sum_{k=1}^D (x_k - y_k)^2}

\|x - y\|^2 = \|x\|^2 + \|y\|^2 - 2\,x \cdot y
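
In matrix form, with the test matrix X of shape (num_test, D) and the training matrix X_train of shape (num_train, D), applying this identity to every pair at once gives

\mathrm{dists}_{ij}^2 = \|X_i\|^2 + \|X^{\mathrm{train}}_j\|^2 - 2\, X_i \cdot X^{\mathrm{train}}_j

which is exactly what the code computes: test_sq has shape (num_test, 1), train_sq has shape (num_train,), and broadcasting their sum against the (num_test, num_train) matrix product fills in the whole distance grid.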

With the Python loops gone, all the work happens inside optimized NumPy routines, which is a large speedup.
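
A quick sanity check, assuming a classifier that has already been trained and a test matrix X_test as in the assignment notebook, is to verify that the three implementations agree and compare their speed:

import time

import numpy as np

def timed(f, *args):
    # Run f once and return (result, elapsed seconds).
    start = time.perf_counter()
    out = f(*args)
    return out, time.perf_counter() - start

d2, t2 = timed(classifier.compute_distances_two_loops, X_test)
d1, t1 = timed(classifier.compute_distances_one_loop, X_test)
d0, t0 = timed(classifier.compute_distances_no_loops, X_test)

# All three should produce (numerically) identical distance matrices.
assert np.allclose(d2, d1) and np.allclose(d2, d0)
print('two loops: %.2fs, one loop: %.2fs, no loops: %.2fs' % (t2, t1, t0))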

KNN Prediction

Once we have the distances between every test and training vector, we can classify with the KNN algorithm.

For each test point, sort its distances to the training points, take the K nearest, and the label with the most votes is the prediction.

def predict_labels(self, dists, k=1):
    """
    Given a matrix of distances between test points and training points,
    predict a label for each test point.

    Inputs:
    - dists: A numpy array of shape (num_test, num_train) where dists[i, j]
      gives the distance between the ith test point and the jth training point.

    Returns:
    - y: A numpy array of shape (num_test,) containing predicted labels for the
      test data, where y[i] is the predicted label for the test point X[i].
    """
    num_test = dists.shape[0]
    y_pred = np.zeros(num_test)
    for i in range(num_test):
        # A list of length k storing the labels of the k nearest neighbors to
        # the ith test point.
        closest_y = []
        #########################################################################
        # TODO:                                                                 #
        # Use the distance matrix to find the k nearest neighbors of the ith    #
        # testing point, and use self.y_train to find the labels of these       #
        # neighbors. Store these labels in closest_y.                           #
        # Hint: Look up the function numpy.argsort (it returns indices).       #
        #########################################################################

        #########################################################################
        # TODO:                                                                 #
        # Now that you have found the labels of the k nearest neighbors, you    #
        # need to find the most common label in the list closest_y of labels.   #
        # Store this label in y_pred[i]. Break ties by choosing the smaller     #
        # label.                                                                #
        #########################################################################
        # Requires `from collections import Counter` at the top of the file.
        closest_y = [self.y_train[j] for j in np.argsort(dists[i, :])[:k]]
        # most_common(1) returns a list with a single (label, count) tuple;
        # sorting the labels first means Counter encounters smaller labels
        # earlier, so count ties are broken toward the smaller label as
        # the TODO requires.
        y_pred[i] = Counter(sorted(closest_y)).most_common(1)[0][0]
    return y_pred
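
Putting it together, a minimal usage sketch (assuming the assignment's KNearestNeighbor class and already-flattened CIFAR-10 arrays X_train, y_train, X_test, y_test):

classifier = KNearestNeighbor()
classifier.train(X_train, y_train)      # KNN "training" just memorizes the data
dists = classifier.compute_distances_no_loops(X_test)
y_test_pred = classifier.predict_labels(dists, k=5)
print('accuracy: %f' % np.mean(y_test_pred == y_test))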

That completes everything required in k_nearest_neighbor.py.

Cross-Validation

The last part of the assignment asks us to implement cross-validation.

For an introduction, see the cross-validation section of the notes, which explains it clearly:

Cross-validation goes a step further and iterates over the choice of which fold is the validation fold, separately from 1-5. This would be referred to as 5-fold cross-validation.

The code is as follows:

num_folds = 5
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]

X_train_folds = []
y_train_folds = []
################################################################################
# TODO:                                                                        #
# Split up the training data into folds. After splitting, X_train_folds and    #
# y_train_folds should each be lists of length num_folds, where                #
# y_train_folds[i] is the label vector for the points in X_train_folds[i].     #
# Hint: Look up the numpy array_split function.                                #
################################################################################
X_train_folds = np.array_split(X_train, num_folds)
y_train_folds = np.array_split(y_train, num_folds)


# A dictionary holding the accuracies for different values of k that we find
# when running cross-validation. After running cross-validation,
# k_to_accuracies[k] should be a list of length num_folds giving the different
# accuracy values that we found when using that value of k.
k_to_accuracies = {k: [] for k in k_choices}


for i in range(num_folds):
    # Use fold i as the validation set and the remaining folds for training.
    X_train_cv = np.vstack(X_train_folds[:i] + X_train_folds[i + 1:])
    y_train_cv = np.hstack(y_train_folds[:i] + y_train_folds[i + 1:])
    X_val_cv = X_train_folds[i]
    y_val_cv = y_train_folds[i]
    classifier.train(X_train_cv, y_train_cv)
    # The distance matrix depends only on the data, so compute it once per
    # fold and reuse it for every k.
    dists = classifier.compute_distances_no_loops(X_val_cv)
    num_val = y_val_cv.shape[0]
    for j in k_choices:
        y_val_pred = classifier.predict_labels(dists, k=j)
        num_correct = np.sum(y_val_pred == y_val_cv)
        accuracy = float(num_correct) / num_val
        k_to_accuracies[j].append(accuracy)


################################################################################
# TODO:                                                                        #
# Perform k-fold cross validation to find the best value of k. For each        #
# possible value of k, run the k-nearest-neighbor algorithm num_folds times,   #
# where in each case you use all but one of the folds as training data and the #
# last fold as a validation set. Store the accuracies for all fold and all     #
# values of k in the k_to_accuracies dictionary.                               #
################################################################################


# Print out the computed accuracies
for k in sorted(k_to_accuracies):
    for accuracy in k_to_accuracies[k]:
        print('k = %d, accuracy = %f' % (k, accuracy))
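To pick the final k from this table, one simple option (a sketch; the name best_k is my own) is to compare the mean accuracy across folds:

best_k = max(k_choices, key=lambda k: np.mean(k_to_accuracies[k]))
print('best k = %d, mean accuracy = %f' % (best_k, np.mean(k_to_accuracies[best_k])))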

Finally, plot the results.
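
A sketch of that plot with matplotlib: scatter the per-fold accuracies for each k, then overlay an error-bar curve of the mean ± standard deviation:

import matplotlib.pyplot as plt

# One scatter column of per-fold accuracies for each value of k.
for k in k_choices:
    plt.scatter([k] * len(k_to_accuracies[k]), k_to_accuracies[k])

# Mean accuracy per k, with standard deviation across folds as error bars.
means = [np.mean(k_to_accuracies[k]) for k in k_choices]
stds = [np.std(k_to_accuracies[k]) for k in k_choices]
plt.errorbar(k_choices, means, yerr=stds)
plt.title('Cross-validation on k')
plt.xlabel('k')
plt.ylabel('Cross-validation accuracy')
plt.show()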

However, my plot peaked at k = 10, while the actual test showed k = 8 was best, and the notes say around k = 7 should work best.

So I changed the dataset sizes, increasing train:test from 5000:500 to 10000:1000, with the following result.

Evidently the peak at k = 10 was an artifact of too little data; enlarging the dataset fixes it.

Summary

Working through this felt great: every question I had along the way was addressed somewhere in the notes, and the hints were well placed. I hope the later assignments keep up this quality.