This document provides a summary of common machine learning algorithms, including Naive Bayes, decision trees, k-nearest neighbors, perceptrons, support vector machines, neural networks, regression trees, and nearest neighbor regression. For each algorithm, it lists the typical input and output, how the algorithm learns from examples, considerations for complexity and control, and some brief notes. The table compares key aspects of these common machine learning approaches.
Original description: the basics of artificial intelligence worth knowing.
Copyright:
Attribution Non-Commercial (BY-NC)
| Algorithm | Input | Output | Learns by | Complexity Control | Notes |
|---|---|---|---|---|---|
| Naive Bayes | boolean feature vector | discrete | estimating conditional probabilities (counting) | assumes independent features | Laplace correction, XOR |
| Basic Decision Tree | boolean feature vector | discrete | minimizing average entropy at the branches | parametric: leaf size, minimum entropy | |
| Continuous-Valued Decision Tree | real feature vector | discrete | minimizing average entropy at the branches | | |
| K-Nearest Neighbor | real feature vector | discrete | memorizing all points | parametric: K | scaling |
| Perceptron | real feature vector | discrete | maximizing margin (weight space search) | limited to linear separator | guarantees separator if it exists |
| SVM | real feature vector | discrete | maximizing margin (quadratic programming) | maximizes margin in error function | |
| Neural Net | real feature vector | discrete | gradient descent (weight space search) | architecture | architecture, scaling |
| Neural Net Regression | real feature vector | real | gradient descent (weight space search) | architecture | architecture, scaling |
| Regression Trees | real feature vector | real | minimizing variance at the branches | | kernel functions (not SVM kernels) |
| Nearest Neighbor Regression | real feature vector | real | memorizing all points | | kernel functions (not SVM kernels) |
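To make the "estimating conditional probabilities (counting)" and "Laplace correction" entries for Naive Bayes concrete, here is a minimal Python sketch; the function names and the add-one smoothing constants are illustrative, not from the table itself:

```python
def train_naive_bayes(X, y):
    """Estimate P(class) and P(feature=1 | class) by counting, with a
    Laplace (add-one) correction so no probability is ever exactly zero."""
    classes = sorted(set(y))
    n_features = len(X[0])
    priors = {c: sum(1 for label in y if label == c) / len(y) for c in classes}
    cond = {}
    for c in classes:
        rows = [x for x, label in zip(X, y) if label == c]
        cond[c] = [
            # Laplace correction: +1 in the numerator, +2 in the denominator
            (sum(x[j] for x in rows) + 1) / (len(rows) + 2)
            for j in range(n_features)
        ]
    return priors, cond

def predict_naive_bayes(priors, cond, x):
    """Pick the class maximizing P(c) * prod_j P(x_j | c), i.e. treating
    the boolean features as independent given the class."""
    best, best_score = None, -1.0
    for c, prior in priors.items():
        score = prior
        for j, xj in enumerate(x):
            p = cond[c][j]
            score *= p if xj else (1 - p)
        if score > best_score:
            best, best_score = c, score
    return best
```

The independence assumption in the "Complexity Control" column is exactly why the "XOR" note appears: no per-feature conditional probabilities can represent a label that depends on the combination of two features.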
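The perceptron row's "weight space search" and "guarantees separator if it exists" can be sketched as the classic mistake-driven update rule; this is a minimal illustration assuming ±1 labels, not a production implementation:

```python
def train_perceptron(X, y, epochs=100):
    """Perceptron rule: walk through weight space, nudging the weights on
    every mistake. If the data are linearly separable, this converges to a
    separating hyperplane; labels are assumed to be +1 / -1."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        mistakes = 0
        for x, target in zip(X, y):
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            pred = 1 if activation > 0 else -1
            if pred != target:
                mistakes += 1
                w = [wi + target * xi for wi, xi in zip(w, x)]
                b += target
        if mistakes == 0:  # a full pass with no mistakes: separator found
            break
    return w, b
```

The "limited to linear separator" entry is the flip side of the guarantee: on non-separable data (XOR again) the loop simply never reaches a mistake-free pass.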
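The last row's "memorizing all points" plus "kernel functions (not SVM kernels)" can be illustrated with nearest-neighbor regression that weights neighbors by a distance kernel; the Gaussian kernel and the `k`/`bandwidth` parameters here are one plausible choice, not prescribed by the table:

```python
import math

def knn_regress(train_X, train_y, query, k=3, bandwidth=1.0):
    """Nearest-neighbor regression: memorize all training points, then
    predict a kernel-weighted average of the k closest targets. The
    Gaussian kernel is a distance-weighting function, not an SVM kernel."""
    dists = [(math.dist(x, query), t) for x, t in zip(train_X, train_y)]
    dists.sort(key=lambda pair: pair[0])
    neighbors = dists[:k]
    weights = [math.exp(-(d / bandwidth) ** 2) for d, _ in neighbors]
    return sum(w * t for w, (_, t) in zip(weights, neighbors)) / sum(weights)
```

Because prediction is driven entirely by raw Euclidean distances, feature scaling matters here just as the K-Nearest Neighbor row's "scaling" note warns for the classification case.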