Journal of Information Technology and Computer Science
Volume 5, Number 1, April 2020, pp. 23-31
Journal Homepage: www.jitecs.ub.ac.id

K-Value Effect Based on Combination GLCM Angle and KNN for Detecting Smart Wheelchair

Ahmad Wali Satria Bahari Johan 1, Fitri Utaminingrum 2, Agung Setia Budi 3
1,2 Computer Vision Research Group, Faculty of Computer Science, Brawijaya University, Malang, Indonesia
3 Embedded System Group, Faculty of Computer Science, Brawijaya University, Malang, Indonesia
{ 1 ahmadsatria27@gmail.com, 2 f3_ningrum@ub.ac.id, 3 agungsetiabudi@ub.ac.id}

Received 11 September 2019; accepted 10 October 2020

Abstract. This study analyzes the effect of the k-value on K nearest neighbor classification. The k-value determines how many of the nearest training samples are consulted when assigning a class label to a testing sample. Different k-values can produce different class labels for the same testing data. We compare k=3, k=5, and k=7 to find the best k-value. Two classes are used in this research: stairs descent and floor. The GLCM method is used to extract features. The data come from video frames captured by the camera on the smart wheelchair. Based on our test results, the best result is obtained with k=7 and an angle of 0°, with an accuracy of 92.5%. The stairs descent detection system will be implemented in a smart wheelchair.

Keywords. K nearest neighbor, GLCM, smart wheelchair.

1 Introduction

Classification is a categorization process in which objects are recognized, distinguished, and understood based on a training data set. It is a supervised learning technique, in which a set of correctly labeled training observations is available. Algorithms that implement classification are often known as classifiers, and the observations are often known as instances. There are two types of learners in classification: eager learners and lazy learners.
The eager learner builds a model from the training tuples before receiving a test tuple. The lazy learner, when given a training tuple, simply stores it (or performs only minor processing) and waits until it is given a test tuple[1]. K nearest neighbor classifiers are examples of lazy learners. In the K nearest neighbor algorithm, the class of a tuple is predicted based on the classes of its nearest neighbors[2]. Figure 1 shows the K nearest neighbors of a record x, i.e., the k data points at the closest distance to x. Fig. 1(a) shows that when using k=1, a negative data point is the closest to x, so x is labeled with the negative class. In Fig. 1(b), when using k=3, there are 1 negative and 2 positive data points closest to x, so the majority class, positive, is taken to label x. Referring to Fig. 1, it can be seen that the use of the k-value is
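The nearest-neighbor voting illustrated in Fig. 1 can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the training points and the query point x below are toy values chosen to mirror the figure, where the single closest point is negative but two of the three closest are positive.

```python
from collections import Counter
import math

def knn_predict(train, x, k):
    """Label x by majority vote among its k nearest training points.

    train: list of (feature_vector, label) pairs; x: a feature vector.
    """
    # Sort training points by Euclidean distance to x.
    by_dist = sorted(train, key=lambda p: math.dist(p[0], x))
    # Count the labels of the k closest points and take the majority.
    votes = Counter(label for _, label in by_dist[:k])
    return votes.most_common(1)[0][0]

# Toy data mirroring Fig. 1: one negative point lies very close to x,
# while two positive points are slightly farther away.
train = [((1.0, 1.0), "negative"),
         ((2.0, 2.0), "positive"),
         ((2.5, 1.5), "positive"),
         ((5.0, 5.0), "negative")]
x = (1.2, 1.1)

print(knn_predict(train, x, k=1))  # the single nearest point decides: negative
print(knn_predict(train, x, k=3))  # majority of the 3 nearest: positive
```

As the two calls show, the same query point receives different labels under k=1 and k=3, which is exactly the k-value effect the paper investigates.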