CSC 4810 - Artificial Intelligence, Assignment #4: Support Vector Machines. SVM is an implementation of the Support Vector Machine (SVM) algorithm. The Support Vector Machine was developed by Vapnik.

The main features of the program are the following: solving the problem of pattern recognition, the problem of regression, and the problem of learning a ranking function. Underlying the success of SVMs are the mathematical foundations of statistical learning theory. Rather than minimizing the training error, SVMs minimize structural risk, which expresses an upper bound on the generalization error. SVMs are popular because they usually achieve good error rates and can handle unusual types of data like text, graphs, and images. The SVM's leading idea is to classify the input data by separating them with a decision threshold lying far from the two classes while scoring a low number of errors. SVMs are used for pattern recognition.
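The "decision threshold lying far from the two classes" idea can be made concrete with the geometric margin: the distance of a point x from the hyperplane w·x + b = 0 is |w·x + b| / ||w||, and SVM training chooses w and b to maximize the smallest such distance over the training data. A minimal sketch, using an illustrative hyperplane (the values of w and b below are hypothetical, not learned):

```python
import math

def margin(w, b, x):
    """Geometric distance of point x from the hyperplane w.x + b = 0."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    norm = math.sqrt(sum(wi * wi for wi in w))
    return abs(score) / norm

w, b = [3.0, 4.0], -5.0           # hypothetical separating hyperplane
print(margin(w, b, [3.0, 4.0]))   # distance 4.0: far from the boundary
print(margin(w, b, [1.0, 0.5]))   # distance 0.0: exactly on the boundary
```

A larger minimum margin over the training set is what the structural-risk argument above rewards: it tightens the bound on generalization error.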

Basically, a data set is used to "train" a particular machine. This machine can learn more by retraining it with the old data plus the new data. The trained machine is as unique as the data that was used to train it and the algorithm that was used to process the data. Once a machine is trained, it can be used to predict how closely a new data set matches the trained machine.

In other words, Support Vector Machines are used for pattern recognition. SVM uses the following equation to train the machine:

H(x) = sign(w · x + b)

where w is the weight vector and b is the threshold. The generalization abilities of SVMs and other classifiers differ significantly, especially when the number of training data is small. This means that if some mechanism to maximize the margins of decision boundaries is introduced to non-SVM-type classifiers, their performance degradation will be prevented when class overlap is scarce or non-existent. In the original SVM, the n-class classification problem is converted into n two-class problems, and in the i-th two-class problem we determine the optimal decision function that separates class i from the remaining classes.
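The decision function H(x) = sign(w · x + b) above can be sketched directly in code. The weight vector and threshold here are illustrative values, not the result of actual training:

```python
def svm_decision(w, b, x):
    """Classify x as +1 or -1 using the linear decision function sign(w.x + b)."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score >= 0 else -1

w = [2.0, -1.0]   # hypothetical weight vector
b = -0.5          # hypothetical threshold

print(svm_decision(w, b, [1.0, 0.5]))    # score = 1.0  -> +1
print(svm_decision(w, b, [-1.0, 2.0]))   # score = -4.5 -> -1
```

Training determines w and b; once they are fixed, classifying a new point is just this inner product and a sign test.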

In classification, if one of the n decision functions classifies an unknown datum into a definite class, it is classified into that class. In this formulation, if more than one decision function classifies a datum into definite classes, or no decision function classifies the datum into a definite class, the datum is unclassifiable. To resolve unclassifiable regions for SVMs, we discuss four types of SVMs and their variants: one-against-all SVMs, pairwise SVMs, ECOC (Error-Correcting Output Code) SVMs, and all-at-once SVMs. Another problem of SVMs is slow training. Since SVMs are trained by solving a quadratic programming problem whose number of variables equals the number of training data, training is slow for a large number of training data.
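The unclassifiable-region problem described above is easy to see in a sketch of one-against-all classification. Given the n binary decision values f_i(x), a datum gets class i only when exactly one f_i is positive; the decision values below are hypothetical stand-ins for trained SVM outputs:

```python
def one_against_all(decision_values):
    """decision_values[i] is f_i(x) for class i; return class index, or None
    when the datum falls in an unclassifiable region."""
    positive = [i for i, v in enumerate(decision_values) if v > 0]
    if len(positive) == 1:
        return positive[0]
    return None  # zero or multiple positive scores: unclassifiable

print(one_against_all([-0.3, 1.2, -0.8]))   # exactly one positive -> class 1
print(one_against_all([0.5, 1.2, -0.8]))    # two positive -> None
print(one_against_all([-0.5, -1.2, -0.8]))  # none positive -> None
```

The pairwise, ECOC, and all-at-once formulations mentioned above are alternative ways of combining binary decisions so that fewer data fall into the `None` cases.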

We discuss training of SVMs by decomposition techniques combined with a steepest ascent method. The Support Vector Machine algorithm also plays a big role in the Internet industry. For example, the Internet is huge, made up of billions of documents that are growing exponentially every year. However, a problem exists in trying to find a piece of information amongst the billions of growing documents.

Current search engines scan for the key words provided by the user in a search query. Some search engines, such as Google, even go as far as to offer page rankings by users who have previously visited the page. This relies on other people ranking the page according to their needs. Even though these techniques help millions of users a day retrieve their information, the process is not even close to being an exact science.

The problem lies in finding web pages, based on your search query, that actually contain the information you are looking for.

[Figure: the SVM algorithm]

It is important to understand the mechanism behind the SVM. The SVM implements the Bayes rule in an interesting way: instead of estimating the probability P(y = 1 | x), it estimates sign(P(y = 1 | x) − 1/2). This is an advantage when our goal is binary classification with minimal expected misclassification rate.
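The point above is that for binary classification only the sign of P(y = 1 | x) − 1/2 matters, not the probability itself. A minimal sketch with hypothetical posterior values:

```python
def bayes_label(posterior):
    """Bayes rule for binary classification: +1 iff P(y=1|x) exceeds 1/2."""
    return 1 if posterior - 0.5 > 0 else -1

# Posteriors that differ a lot as probabilities give the same label,
# since only the sign of P(y=1|x) - 1/2 is used:
print(bayes_label(0.51))  # -> 1
print(bayes_label(0.99))  # -> 1
print(bayes_label(0.20))  # -> -1
```

Because the label is unchanged by anything that preserves this sign, the SVM can target the decision boundary directly rather than estimating probabilities, which is exactly why it must be modified for tasks that genuinely need probability estimates.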

However, this also means that in some other situations the SVM needs to be modified and should not be used as is. In conclusion, Support Vector Machines support lots of real-world applications such as text categorization, hand-written character recognition, image classification, bioinformatics, etc. Their first introduction in the early 1990s led to a recent explosion of applications and a deepening theoretical analysis that has now established Support Vector Machines, along with neural networks, as one of the standard tools for machine learning and data mining. There is also a big use of Support Vector Machines in the medical field.

Reference:
Boser, B., Guyon, I. and Vapnik, V. N. (1992). A training algorithm for optimal margin classifiers.
http://www.csie.ntu.edu.tw/~cjlin/papers/tanh.pdf
