This is an instance-based learning method used for classification and regression tasks. The K-nearest neighbor algorithm makes predictions by comparing an input sample with the samples in the training dataset and taking the labels of its most similar neighbors. Evelyn Fix and Joseph Hodges originally proposed the algorithm for nonparametric classification. The K-nearest neighbor algorithm is simple and intuitive, but it is computationally expensive, especially on large datasets, and it is sensitive to noisy data and irrelevant features. Fix and Hodges described the principle of the method in their 1951 paper "Discriminatory Analysis - Nonparametric Discrimination: Consistency Properties."
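To make the idea concrete, here is a minimal sketch of the nearest-neighbor rule in Python (an illustrative example with made-up toy data, not code from the original paper): a query point is classified by a majority vote among its k closest training samples under Euclidean distance.

```python
# Minimal K-nearest-neighbor classifier sketch (illustrative, assumed toy data).
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_query, k=3):
    # Euclidean distance from the query point to every training sample
    distances = np.linalg.norm(X_train - x_query, axis=1)
    # Indices of the k nearest training samples
    nearest = np.argsort(distances)[:k]
    # Majority vote among their labels
    return Counter(y_train[nearest]).most_common(1)[0][0]

# Toy usage: two clusters labeled 0 and 1
X_train = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y_train = np.array([0, 0, 1, 1])
print(knn_predict(X_train, y_train, np.array([0.95, 0.9]), k=3))  # -> 1
```

Because every prediction scans the entire training set, the cost grows with the number of stored samples, which is the computational drawback mentioned above.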
Early neural network research
Backpropagation algorithm 1969
When studying the perceptron model, Marvin Minsky and Seymour Papert showed that it could only handle linearly separable problems. This limitation motivated the move to multi-layer perceptron models and to the back-propagation algorithm used to train them, enabling neural networks to solve more complex nonlinear problems such as image recognition and speech recognition. However, a major limitation of the back-propagation algorithm is that it can fall into local optima rather than the global optimum. In addition, the algorithm is sensitive to weight initialization, training can be slow, and the vanishing gradient problem may occur. In 1986, David Rumelhart, Geoffrey Hinton and Ronald Williams published a paper in Nature titled "Learning representations by back-propagating errors," which described the back-propagation algorithm and its implementation in detail. Despite these limitations, back-propagation laid the foundation for the development of deep learning.
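To illustrate the mechanism, the following sketch trains a tiny two-layer network on the XOR problem with hand-written back-propagation (an illustrative NumPy example under assumed settings such as the learning rate and network size, not the implementation from the 1986 paper). The backward pass propagates the output error through the sigmoid layers to obtain weight gradients; with an unlucky random initialization the loss can stall in a poor local minimum, which is exactly the limitation noted above.

```python
# Back-propagation sketch on XOR (illustrative; hyperparameters are assumptions).
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# 2 inputs -> 4 hidden sigmoid units -> 1 sigmoid output
W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for epoch in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)        # hidden activations
    out = sigmoid(h @ W2 + b2)      # network output

    # Backward pass: propagate the output error toward the input layer
    d_out = (out - y) * out * (1 - out)    # gradient at the output pre-activation
    d_h = (d_out @ W2.T) * h * (1 - h)     # gradient at the hidden pre-activation

    # Gradient-descent updates
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0, keepdims=True)

# Typically approaches [0, 1, 1, 0]; some initializations get stuck in local minima.
print(np.round(out.ravel(), 2))
```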