Neural Network Compression with k-DPP

Gejia Zhang, Qingyuan Dong, Yaojun Qin, Linxuan Shi, Chenfei Hou

May 2019

Abstract

To improve the efficiency of a neural network, we delete redundant neurons in a given hidden layer and keep only the useful ones. We carry out this pruning with a k-DPP sampler over the neurons of the layer. To evaluate the method, we compare k-DPP sampling under five different kernels against randomly selected neurons, and we compare the results with and without reweighting. We find that the Laplacian kernel with reweighting works best.

1 Neural Network with MNIST

To reduce the memory footprint of neural networks, we test different kernels and fine-tune hyper-parameters in our implementation of DivNet. By placing a determinantal point process (DPP) over the neurons of a given layer, we select a subset of diverse neurons to represent the dynamics of that layer. A reweighting step then fuses the redundant neurons into the selected ones, which implicitly enforces regularization and helps prevent overfitting. This allows substantial compression of the network without sacrificing model accuracy. Experimental results on the MNIST dataset illustrate how different kernels and parameters affect final performance. Our main contributions are an empirical comparison of various kernel types for the k-DPP L-ensemble matrix, and a discussion of different k-DPP sampling techniques in practice.

1.1 Building Layers

The network has four layers: an input layer with 784 pixels, two hidden layers with 500 nodes each, and an output layer with 10 categories.

1.2 Training Data

To train the network, we use gradient descent and the backpropagation algorithm. We split the data into a training set of 60000 examples and a test set of 10000 examples.
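The four-layer architecture of Section 1.1 can be sketched as follows. This is a minimal NumPy forward pass only; the layer sizes come from the text, but the initialization scheme and the sigmoid/softmax activation choices are our assumptions and are not specified in the paper.

```python
import numpy as np

# Layer sizes from Section 1.1: 784 input pixels,
# two hidden layers of 500 nodes, 10 output categories.
sizes = [784, 500, 500, 10]

rng = np.random.default_rng(0)
# Small random initialization (assumed; the paper does not specify one).
weights = [rng.normal(0.0, 0.01, size=(m, n))
           for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    """Forward pass: sigmoid hidden layers, softmax output (assumed)."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = 1.0 / (1.0 + np.exp(-(a @ W + b)))   # sigmoid hidden activation
    logits = a @ weights[-1] + biases[-1]
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)     # softmax over the 10 classes

probs = forward(rng.random((5, 784)))            # a batch of 5 dummy images
```

Each row of `probs` is a distribution over the 10 digit classes; training would then proceed by backpropagating a cross-entropy loss through these layers.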
We trained for 95 epochs on the training set and achieved 99% accuracy.
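The pruning step described in Section 1 (a k-DPP over the neurons of a hidden layer with a Laplacian kernel, followed by reweighting) can be sketched as below. This is only an illustration under our own assumptions: we build the kernel from the layer's activation columns, use a greedy log-determinant (MAP) selection in place of an exact k-DPP sampler, and implement reweighting as a least-squares fit of the pruned neurons onto the kept ones; the paper's actual choices may differ.

```python
import numpy as np

def prune_layer(A, k, beta=0.1):
    """Select k diverse neurons from a hidden layer and reweight the rest.

    A: (T, n) activations of the layer over T inputs, one column per neuron.
    A Laplacian kernel L[i, j] = exp(-beta * ||a_i - a_j||_1) serves as the
    k-DPP L-ensemble. For simplicity this sketch uses greedy log-det (MAP)
    selection rather than exact k-DPP sampling.
    """
    T, n = A.shape
    # Pairwise L1 distances between neuron activation columns -> (n, n) kernel.
    d = np.abs(A[:, :, None] - A[:, None, :]).sum(axis=0)
    L = np.exp(-beta * d)

    # Greedily add the neuron that most increases log det(L_S).
    selected = []
    for _ in range(k):
        best, best_val = None, -np.inf
        for j in range(n):
            if j in selected:
                continue
            S = selected + [j]
            sign, logdet = np.linalg.slogdet(L[np.ix_(S, S)])
            if sign > 0 and logdet > best_val:
                best, best_val = j, logdet
        selected.append(best)

    # Reweighting: express each neuron's activations as a least-squares
    # combination of the kept neurons, so downstream weights can be fused.
    kept = A[:, selected]                               # (T, k)
    coeffs, *_ = np.linalg.lstsq(kept, A, rcond=None)   # (k, n)
    return selected, coeffs
```

Given the layer's outgoing weight matrix `W` of shape `(n, m)`, the compressed layer would use `coeffs @ W` of shape `(k, m)`, so the kept neurons absorb the contribution of the deleted ones.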