Sequential Minimal Optimization: A Fast Algorithm for Training Support Vector Machines

John C. Platt
Microsoft Research
jplatt@microsoft.com
Technical Report MSR-TR-98-14
April 21, 1998
© 1998 John Platt

ABSTRACT

This paper proposes a new algorithm for training support vector machines: Sequential Minimal Optimization, or SMO. Training a support vector machine requires the solution of a very large quadratic programming (QP) optimization problem. SMO breaks this large QP problem into a series of smallest possible QP problems. These small QP problems are solved analytically, which avoids using a time-consuming numerical QP optimization as an inner loop. The amount of memory required for SMO is linear in the training set size, which allows SMO to handle very large training sets. Because matrix computation is avoided, SMO scales somewhere between linear and quadratic in the training set size for various test problems, while the standard chunking SVM algorithm scales somewhere between linear and cubic in the training set size. SMO's computation time is dominated by SVM evaluation, hence SMO is fastest for linear SVMs and sparse data sets. On real-world sparse data sets, SMO can be more than 1000 times faster than the chunking algorithm.

1. INTRODUCTION

In the last few years, there has been a surge of interest in Support Vector Machines (SVMs) [19] [20] [4]. SVMs have empirically been shown to give good generalization performance on a wide variety of problems such as handwritten character recognition [12], face detection [15], pedestrian detection [14], and text categorization [9].

However, the use of SVMs is still limited to a small group of researchers. One possible reason is that training algorithms for SVMs are slow, especially for large problems. Another explanation is that SVM training algorithms are complex, subtle, and difficult for an average engineer to implement.

This paper describes a new SVM learning algorithm that is conceptually simple, easy to implement, and generally faster, and that has better scaling properties for difficult SVM problems than the standard SVM training algorithm. The new SVM learning algorithm is called Sequential Minimal Optimization (or SMO). Unlike previous SVM learning algorithms, which use numerical quadratic programming (QP) as an inner loop, SMO uses an analytic QP step.

This paper first provides an overview of SVMs and a review of current SVM training algorithms. The SMO algorithm is then presented in detail, including the solution to the analytic QP step, heuristics for choosing which variables to optimize in the inner loop, a description of how to set the threshold of the SVM, some optimizations for special cases, the pseudo-code of the algorithm, and the relationship of SMO to other algorithms.

SMO has been tested on two real-world data sets and two artificial data sets. This paper presents the results for timing SMO versus the standard "chunking" algorithm for these data sets and presents conclusions based on these timings. Finally, there is an appendix that describes the derivation of the analytic optimization.

1.1 Overview of Support Vector Machines

Vladimir Vapnik invented Support Vector Machines in 1979 [19]. In its simplest, linear form, an SVM is a hyperplane that separates a set of positive examples from a set of negative examples with maximum margin (see figure 1).
In the linear case, the margin is defined by the distance of the hyperplane to the nearest of the positive and negative examples. The formula for the output of a linear SVM is

    u = w \cdot x - b,    (1)

where w is the normal vector to the hyperplane and x is the input vector. The separating hyperplane is the plane u = 0. The nearest points lie on the planes u = \pm 1. The margin m is thus

    m = \frac{1}{\|w\|_2}.    (2)

Maximizing margin can be expressed via the following optimization problem [4]:

    \min_{w,b} \tfrac{1}{2} \|w\|^2 \quad \text{subject to} \quad y_i (w \cdot x_i - b) \ge 1, \; \forall i,    (3)

where x_i is the ith training example, and y_i is the correct output of the SVM for the ith training example. The value y_i is +1 for the positive examples in a class and -1 for the negative examples.

Figure 1. A linear Support Vector Machine. (Diagram labels: positive examples, negative examples, space of possible inputs, maximize distances to nearest points.)

Using a Lagrangian, this optimization problem can be converted into a dual form which is a QP problem where the objective function Ψ is solely dependent on a set of Lagrange multipliers α_i,

    \min_{\alpha} \Psi(\alpha) = \min_{\alpha} \tfrac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} y_i y_j (x_i \cdot x_j) \alpha_i \alpha_j - \sum_{i=1}^{N} \alpha_i    (4)

(where N is the number of training examples), subject to the inequality constraints,

    \alpha_i \ge 0, \; \forall i,    (5)

and one linear equality constraint,

    \sum_{i=1}^{N} y_i \alpha_i = 0.    (6)

There is a one-to-one relationship between each Lagrange multiplier and each training example. Once the Lagrange multipliers are determined, the normal vector w and the threshold b can be derived from the Lagrange multipliers:

    w = \sum_{i=1}^{N} y_i \alpha_i x_i, \qquad b = w \cdot x_k - y_k \;\text{ for some } \alpha_k > 0.    (7)

Because w can be computed via equation (7) from the training data before use, the amount of computation required to evaluate a linear SVM is constant in the number of non-zero support vectors.

Of course, not all data sets are linearly separable. There may be no hyperplane that splits the positive examples from the negative examples. In the formulation above, the non-separable case would correspond to an infinite solution. However, in 1995, Cortes & Vapnik [7] suggested a modification to the original optimization statement (3) which allows, but penalizes, the failure of an example to reach the correct margin. That modification is:

    \min_{w,b,\xi} \tfrac{1}{2} \|w\|^2 + C \sum_{i=1}^{N} \xi_i \quad \text{subject to} \quad y_i (w \cdot x_i - b) \ge 1 - \xi_i, \; \forall i,    (8)

where ξ_i are slack variables that permit margin failure and C is a parameter which trades off wide margin with a small number of margin failures. When this new optimization problem is transformed into the dual form, it simply changes the constraint (5) into a box constraint:

    0 \le \alpha_i \le C, \; \forall i.    (9)

The variables ξ_i do not appear in the dual formulation at all.

SVMs can be even further generalized to non-linear classifiers [2]. The output of a non-linear SVM is explicitly computed from the Lagrange multipliers:

    u = \sum_{j=1}^{N} y_j \alpha_j K(x_j, x) - b,    (10)

where K is a kernel function that measures the similarity or distance between the input vector x and the stored training vector x_j. Examples of K include Gaussians, polynomials, and neural network non-linearities [4]. If K is linear, then the equation for the linear SVM (1) is recovered.
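To make equation (10) concrete, the following minimal sketch (in Python, not from the paper) evaluates the output of a trained SVM given its Lagrange multipliers, threshold, and a kernel; the names svm_output and linear_kernel are illustrative choices.

```python
import numpy as np

def linear_kernel(x1, x2):
    """Dot-product kernel; with this choice, (10) reduces to the linear SVM output (1)."""
    return float(np.dot(x1, x2))

def svm_output(x, X_train, y_train, alpha, b, kernel=linear_kernel):
    """Evaluate u = sum_j y_j * alpha_j * K(x_j, x) - b, as in equation (10)."""
    u = 0.0
    for x_j, y_j, a_j in zip(X_train, y_train, alpha):
        if a_j > 0.0:  # only the support vectors (non-zero multipliers) contribute
            u += y_j * a_j * kernel(x_j, x)
    return u - b
```

For a linear kernel, w can instead be precomputed once via equation (7), so that each evaluation costs a single dot product regardless of the number of support vectors.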
The Lagrange multipliers α_i are still computed via a quadratic program. The non-linearities alter the quadratic form, but the dual objective function Ψ is still quadratic in α:

    \min_{\alpha} \Psi(\alpha) = \min_{\alpha} \tfrac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} y_i y_j K(x_i, x_j) \alpha_i \alpha_j - \sum_{i=1}^{N} \alpha_i,
    0 \le \alpha_i \le C, \; \forall i,
    \sum_{i=1}^{N} y_i \alpha_i = 0.    (11)

The QP problem in equation (11), above, is the QP problem that the SMO algorithm will solve. In order for the QP problem above to be positive definite, the kernel function K must obey Mercer's conditions [4].

The Karush-Kuhn-Tucker (KKT) conditions are necessary and sufficient conditions for an optimal point of a positive definite QP problem. The KKT conditions for the QP problem (11) are particularly simple. The QP problem is solved when, for all i:

    \alpha_i = 0 \Leftrightarrow y_i u_i \ge 1,
    0 < \alpha_i < C \Leftrightarrow y_i u_i = 1,
    \alpha_i = C \Leftrightarrow y_i u_i \le 1,    (12)

where u_i is the output of the SVM for the ith training example. Notice that the KKT conditions can be evaluated on one example at a time, which will be useful in the construction of the SMO algorithm.

1.2 Previous Methods for Training Support Vector Machines

Due to its immense size, the QP problem (11) that arises from SVMs cannot be easily solved via standard QP techniques. The quadratic form in (11) involves a matrix that has a number of elements equal to the square of the number of training examples. This matrix cannot fit into 128 megabytes if there are more than 4000 training examples (at 8 bytes per element, 4000^2 entries already occupy 128 megabytes).

Vapnik [19] describes a method to solve the SVM QP, which has since been known as "chunking." The chunking algorithm uses the fact that the value of the quadratic form is the same if you remove the rows and columns of the matrix that correspond to zero Lagrange multipliers. Therefore, the large QP problem can be broken down into a series of smaller QP problems, whose ultimate goal is to identify all of the non-zero Lagrange multipliers and discard all of the zero Lagrange multipliers. At every step, chunking solves a QP problem that consists of the following examples: every non-zero Lagrange multiplier from the last step, and the M worst examples that violate the KKT conditions (12) [4], for some value of M (see figure 2). If there are fewer than M examples that violate the KKT conditions at a step, all of the violating examples are added in. Each QP sub-problem is initialized with the results of the previous sub-problem. At the last step, the entire set of non-zero Lagrange multipliers has been identified, hence the last step solves the large QP problem.

Chunking seriously reduces the size of the matrix from the number of training examples squared to approximately the number of non-zero Lagrange multipliers squared. However, chunking still cannot handle large-scale training problems, since even this reduced matrix cannot fit into memory.

In 1997, Osuna, et al. [16] proved a theorem which suggests a whole new set of QP algorithms for SVMs. The theorem proves that the large QP problem can be broken down into a series of smaller QP sub-problems. As long as at least one example that violates the KKT conditions is added to the examples for the previous sub-problem, each step will reduce the overall objective function and maintain a feasible point that obeys all of the constraints. Therefore, a sequence of QP sub-problems that always add at least one violator will be guaranteed to converge.
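Chunking, Osuna's method, and the SMO algorithm described below all rely on repeatedly testing whether an individual example violates the KKT conditions (12). A minimal sketch of such a per-example test, assuming the example's SVM output u_i has already been computed (for instance with the svm_output sketch above); the tolerance tol is an illustrative implementation detail, not something specified at this point in the paper.

```python
def violates_kkt(alpha_i, y_i, u_i, C, tol=1e-3):
    """Check the KKT conditions (12) for a single example, up to a tolerance.

    alpha_i == 0      should imply y_i * u_i >= 1
    0 < alpha_i < C   should imply y_i * u_i == 1
    alpha_i == C      should imply y_i * u_i <= 1
    """
    r = y_i * u_i - 1.0        # how far the example is from the margin condition
    if alpha_i < tol:          # effectively alpha_i == 0
        return r < -tol
    if alpha_i > C - tol:      # effectively alpha_i == C
        return r > tol
    return abs(r) > tol        # non-bound multiplier: margin should be exactly 1
```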
Notice that the chunking algorithm obeys the conditions of the theorem, and hence will converge. Osuna et al. suggest keeping a constant size matrix for every QP sub-problem, which implies adding and deleting the same number of examples at every step [16] (see figure 2). Using a constant-size matrix will allow training on arbitrarily sized data sets. The algorithm given in Osuna's paper [16] suggests adding one example and subtracting one example at every step. Clearly this would be inefficient, because it would use an entire numerical QP optimization step to cause one training example to obey the KKT conditions. In practice, researchers add and subtract multiple examples according to unpublished heuristics [17]. In any event, a numerical QP solver is required for all of these methods. Numerical QP is notoriously tricky to get right; there are many numerical precision issues that need to be addressed.

Figure 2. Three alternative methods for training SVMs: chunking, Osuna's algorithm, and SMO. For each method, three steps are illustrated. The horizontal thin line at every step represents the training set, while the thick boxes represent the Lagrange multipliers being optimized at that step. For chunking, a fixed number of examples are added every step, while the zero Lagrange multipliers are discarded at every step. Thus, the number of examples trained per step tends to grow. For Osuna's algorithm, a fixed number of examples are optimized every step: the same number of examples is added to and discarded from the problem at every step. For SMO, only two examples are analytically optimized at every step, so that each step is very fast.

2. SEQUENTIAL MINIMAL OPTIMIZATION

Sequential Minimal Optimization (SMO) is a simple algorithm that can quickly solve the SVM QP problem without any extra matrix storage and without using numerical QP optimization steps at all. SMO decomposes the overall QP problem into QP sub-problems, using Osuna's theorem to ensure convergence.

Unlike the previous methods, SMO chooses to solve the smallest possible optimization problem at every step. For the standard SVM QP problem, the smallest possible optimization problem involves two Lagrange multipliers, because the Lagrange multipliers must obey a linear equality constraint. At every step, SMO chooses two Lagrange multipliers to jointly optimize, finds the optimal values for these multipliers, and updates the SVM to reflect the new optimal values (see figure 2).

The advantage of SMO lies in the fact that solving for two Lagrange multipliers can be done analytically. Thus, numerical QP optimization is avoided entirely. The inner loop of the algorithm can be expressed in a short amount of C code, rather than invoking an entire QP library routine. Even though more optimization sub-problems are solved in the course of the algorithm, each sub-problem is so fast that the overall QP problem is solved quickly.

In addition, SMO requires no extra matrix storage at all. Thus, very large SVM training problems can fit inside the memory of an ordinary personal computer or workstation. Because no matrix algorithms are used in SMO, it is less susceptible to numerical precision problems.

There are two components to SMO: an analytic method for solving for the two Lagrange multipliers, and a heuristic for choosing which multipliers to optimize.
Figure 3. The two Lagrange multipliers must fulfill all of the constraints of the full problem. The inequality constraints cause the Lagrange multipliers to lie in the box. The linear equality constraint causes them to lie on a diagonal line. Therefore, one step of SMO must find an optimum of the objective function on a diagonal line segment. (The diagram shows the box 0 \le \alpha_1, \alpha_2 \le C, with the constraint line \alpha_1 - \alpha_2 = k when y_1 \ne y_2 and \alpha_1 + \alpha_2 = k when y_1 = y_2.)

2.1 Solving for Two Lagrange Multipliers

In order to solve for the two Lagrange multipliers, SMO first computes the constraints on these multipliers and then solves for the constrained minimum. For convenience, all quantities that refer to the first multiplier will have a subscript 1, while all quantities that refer to the second multiplier will have a subscript 2. Because there are only two multipliers, the constraints can easily be displayed in two dimensions (see figure 3). The bound constraints (9) cause the Lagrange multipliers to lie within a box, while the linear equality constraint (6) causes the Lagrange multipliers to lie on a diagonal line. Thus, the constrained minimum of the objective function must lie on a diagonal line segment (as shown in figure 3). This constraint explains why two is the minimum number of Lagrange multipliers that can be optimized: if SMO optimized only one multiplier, it could not fulfill the linear equality constraint at every step.

The ends of the diagonal line segment can be expressed quite simply. Without loss of generality, the algorithm first computes the second Lagrange multiplier α_2 and expresses the ends of the diagonal line segment in terms of α_2. If the target y_1 does not equal the target y_2, then the following bounds apply to α_2:

    L = \max(0, \alpha_2 - \alpha_1), \qquad H = \min(C, C + \alpha_2 - \alpha_1).    (13)

If the target y_1 equals the target y_2, then the following bounds apply to α_2:

    L = \max(0, \alpha_2 + \alpha_1 - C), \qquad H = \min(C, \alpha_2 + \alpha_1).    (14)

The second derivative of the objective function along the diagonal line can be expressed as:

    \eta = K(x_1, x_1) + K(x_2, x_2) - 2 K(x_1, x_2).    (15)

Under normal circumstances, the objective function will be positive definite, there will be a minimum along the direction of the linear equality constraint, and η will be greater than zero. In this case, SMO computes the minimum along the direction of the constraint:

    \alpha_2^{\text{new}} = \alpha_2 + \frac{y_2 (E_1 - E_2)}{\eta},    (16)

where E_i = u_i - y_i is the error on the ith training example. As a next step, the constrained minimum is found by clipping the unconstrained minimum to the ends of the line segment:

    \alpha_2^{\text{new,clipped}} =
        H                      \;\text{if } \alpha_2^{\text{new}} \ge H;
        \alpha_2^{\text{new}}  \;\text{if } L < \alpha_2^{\text{new}} < H;
        L                      \;\text{if } \alpha_2^{\text{new}} \le L.    (17)

Now, let s = y_1 y_2. The value of α_1 is computed from the new, clipped, α_2:

    \alpha_1^{\text{new}} = \alpha_1 + s (\alpha_2 - \alpha_2^{\text{new,clipped}}).    (18)

Under unusual circumstances, η will not be positive. A negative η will occur if the kernel K does not obey Mercer's condition, which can cause the objective function to become indefinite. A zero η can occur even with a correct kernel, if more than one training example has the same input vector x.
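Before turning to that degenerate case, here is a minimal sketch of the update in equations (13) through (18) for the usual situation η > 0. The function name and argument layout are illustrative choices: k11, k12, k22 stand for the three kernel values, and the η ≤ 0 branch is deliberately deferred to the endpoint evaluation described next.

```python
def optimize_pair(alpha1, alpha2, y1, y2, E1, E2, C, k11, k12, k22):
    """Analytically optimize two Lagrange multipliers (equations 13-18).

    Returns (alpha1_new, alpha2_new_clipped), or None when no progress can be made.
    """
    # Ends of the diagonal line segment (equations 13 and 14).
    if y1 != y2:
        L, H = max(0.0, alpha2 - alpha1), min(C, C + alpha2 - alpha1)
    else:
        L, H = max(0.0, alpha2 + alpha1 - C), min(C, alpha2 + alpha1)
    if L == H:
        return None  # the feasible segment is a single point

    # Second derivative along the constraint direction (equation 15).
    eta = k11 + k22 - 2.0 * k12
    if eta <= 0.0:
        return None  # degenerate case: evaluate the objective at both endpoints instead

    # Unconstrained minimum along the line (16), clipped to [L, H] (17).
    a2_new = alpha2 + y2 * (E1 - E2) / eta
    a2_new = min(max(a2_new, L), H)

    # Recover the first multiplier from the equality constraint (18), with s = y1 * y2.
    s = y1 * y2
    a1_new = alpha1 + s * (alpha2 - a2_new)
    return a1_new, a2_new
```

In a full implementation the caller would then also update the threshold b and any cached errors E_i, which the paper covers later.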
In any event, SMO will work even when η is not positive, in which case the objective function Ψ should be evaluated at each end of the line segment:

    f_1 = y_1 (E_1 + b) - \alpha_1 K(x_1, x_1) - s \alpha_2 K(x_1, x_2),
    f_2 = y_2 (E_2 + b) - s \alpha_1 K(x_1, x_2) - \alpha_2 K(x_2, x_2),
    L_1 = \alpha_1 + s (\alpha_2 - L),
    H_1 = \alpha_1 + s (\alpha_2 - H),
    \Psi_L = L_1 f_1 + L f_2 + \tfrac{1}{2} L_1^2 K(x_1, x_1) + \tfrac{1}{2} L^2 K(x_2, x_2) + s L L_1 K(x_1, x_2),
    \Psi_H = H_1 f_1 + H f_2 + \tfrac{1}{2} H_1^2 K(x_1, x_1) + \tfrac{1}{2} H^2 K(x_2, x_2) + s H H_1 K(x_1, x_2).    (19)

SMO will move the Lagrange multipliers to the end point that has the lowest value of the objective function. If the objective function is the same at both ends (within a small ε for round-off error) and the kernel obeys Mercer's conditions, then the joint minimization cannot make progress. That scenario is described below.

2.2 Heuristics for Choosing Which Multipliers To Optimize

As long as SMO always optimizes and alters two Lagrange multipliers at every step, and at least one of the Lagrange multipliers violated the KKT conditions before the step, then each step will decrease the objective function, according to Osuna's theorem [16]. Convergence is thus guaranteed. In order to speed convergence, SMO uses heuristics to choose which two Lagrange multipliers to jointly optimize.

There are two separate choice heuristics: one for the first Lagrange multiplier and one for the second. The choice of the first heuristic provides the outer loop of the SMO algorithm. The outer loop first iterates over the entire training set, determining whether each example violates the KKT conditions (12). If an example violates the KKT conditions, it is then eligible for optimization. After one pass through the entire training set, the outer loop iterates over all examples whose Lagrange multipliers are neither 0 nor C (the non-bound examples). Again, each example is checked against the KKT conditions and violating examples are eligible for optimization. The outer loop makes repeated passes over the non-bound examples until all of the non-bound examples obey the KKT conditions within ε.
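A minimal sketch of this outer loop, simplified from the pseudo-code given later in the paper; examine_example is an assumed helper that applies the KKT test (12) to example i, picks a second multiplier, attempts the joint update, and returns 1 if it changed the multipliers.

```python
def smo_outer_loop(alpha, C, examine_example, tol=1e-3):
    """First-choice heuristic: alternate one full pass over the training set with
    repeated passes over the non-bound examples (0 < alpha_i < C), stopping when
    a full pass produces no changes."""
    examine_all = True
    num_changed = 0
    while num_changed > 0 or examine_all:
        num_changed = 0
        for i in range(len(alpha)):
            # On non-bound passes, only examples with 0 < alpha_i < C are examined.
            if examine_all or tol < alpha[i] < C - tol:
                num_changed += examine_example(i)
        if examine_all:
            examine_all = False      # next, sweep only the non-bound examples
        elif num_changed == 0:
            examine_all = True       # non-bound set is consistent: recheck everything
    return alpha
```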