Maximize Margin

(figure: linearly separable points, "denotes +1" / "denotes -1", separating plane $\mathbf{w}\cdot\mathbf{x} + b = 0$, with the margin marked)

Classification rule:
$$\mathbf{w}\cdot\mathbf{x}_i + b \ge 0 \text{ iff } y_i = 1, \qquad \mathbf{w}\cdot\mathbf{x}_i + b \le 0 \text{ iff } y_i = -1, \qquad \text{hence } y_i(\mathbf{w}\cdot\mathbf{x}_i + b) \ge 0$$

Maximize the distance from the plane to the closest training point:
$$\{\mathbf{w}^*, b^*\} = \operatorname*{argmax}_{\mathbf{w},b}\ \min_{\mathbf{x}_i \in D} \frac{|b + \mathbf{x}_i \cdot \mathbf{w}|}{\sqrt{\sum_{k=1}^{d} w_k^2}} \quad \text{subject to } \forall \mathbf{x}_i \in D:\ y_i(\mathbf{x}_i \cdot \mathbf{w} + b) \ge 0$$

• Min-max problem → game problem

Copyright © 2001, 2003, Andrew W. Moore — Support Vector Machines: Slide 16
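The quantity being maximized, the geometric margin, can be computed directly. A minimal Python sketch; the toy dataset, the two candidate planes, and the function names are hypothetical illustrations, not from the slides:

```python
import math

# Hypothetical 2D toy set: (point, label); values are illustrative only.
data = [((3.0, 3.0), +1), ((4.0, 1.0), +1), ((0.0, 1.0), -1), ((1.0, 0.0), -1)]

def activation(w, b, x):
    """w.x + b for one point."""
    return sum(wk * xk for wk, xk in zip(w, x)) + b

def geometric_margin(w, b, data):
    """min_i |w.x_i + b| / ||w||, meaningful only when every point is on the
    correct side, i.e. y_i (w.x_i + b) >= 0 for all i."""
    assert all(y * activation(w, b, x) >= 0 for x, y in data)
    norm = math.sqrt(sum(wk * wk for wk in w))
    return min(abs(activation(w, b, x)) for x, _ in data) / norm

# Two candidate separating planes; the first leaves the larger margin.
print(geometric_margin((1.0, 0.0), -2.0, data))  # plane x1 = 2
print(geometric_margin((1.0, 1.0), -4.0, data))  # plane x1 + x2 = 4
```

Both planes separate the data, but they leave different margins, which is exactly why the argmax over $(\mathbf{w}, b)$ is needed.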
Maximize Margin

• Strategy: $\alpha(\mathbf{w}\cdot\mathbf{x} + b) = 0$ with $\alpha \ne 0$ describes the same plane as $\mathbf{w}\cdot\mathbf{x} + b = 0$, so we may rescale $(\mathbf{w}, b)$ until the closest point satisfies
$$\forall \mathbf{x}_i \in D:\ |b + \mathbf{x}_i \cdot \mathbf{w}| \ge 1$$

The maximization problem then becomes
$$\operatorname*{argmin}_{\mathbf{w},b} \sum_{i=1}^{d} w_i^2 \quad \text{subject to } \forall \mathbf{x}_i \in D:\ y_i(\mathbf{x}_i \cdot \mathbf{w} + b) \ge 1$$

Copyright © 2001, 2003, Andrew W. Moore — Support Vector Machines: Slide 17
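The rescaling step can be seen numerically. A sketch in Python; the plane, the probe point, and the assumed closest point below are hypothetical:

```python
# Sketch of the rescaling step; plane and points are hypothetical.
w, b = (2.0, 0.0), -4.0            # plane 2*x1 - 4 = 0, i.e. x1 = 2

def activation(w, b, x):
    return sum(wk * xk for wk, xk in zip(w, x)) + b

# alpha*(w.x + b) = 0 (alpha != 0) is the same plane: for alpha > 0 the
# sign of the activation, and hence the classification, is unchanged.
point = (3.5, 1.0)
print([activation(tuple(a * wk for wk in w), a * b, point) > 0
       for a in (0.5, 1.0, 10.0)])            # same answer for every alpha

# So rescale until the closest point x' satisfies |w.x' + b| = 1:
closest = (3.0, 0.0)                          # assumed closest point
K = abs(activation(w, b, closest))            # K = |b + x'.w|
w2, b2 = tuple(wk / K for wk in w), b / K
print(abs(activation(w2, b2, closest)))       # 1.0, so the margin is 1/||w2||
```

After the rescaling, "maximize the margin" and "minimize $\sum_i w_i^2$" pick out the same plane.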
Maximize Margin

• How does it come? Start from
$$\operatorname*{argmax}_{\mathbf{w},b}\ \min_{\mathbf{x}_i \in D} \frac{|b + \mathbf{x}_i \cdot \mathbf{w}|}{\sqrt{\sum_{i=1}^{d} w_i^2}} \quad \text{subject to } \forall \mathbf{x}_i \in D:\ y_i(\mathbf{x}_i \cdot \mathbf{w} + b) \ge 0$$

Let $\mathbf{x}'$ be the closest point and $K = |b + \mathbf{x}' \cdot \mathbf{w}|$. With $\mathbf{w}' = \mathbf{w}/K$ and $b' = b/K$ we have $|b' + \mathbf{x}' \cdot \mathbf{w}'| = 1$, and
$$\frac{|b + \mathbf{x}' \cdot \mathbf{w}|}{\sqrt{\sum_{i=1}^{d} w_i^2}} = \frac{1}{\sqrt{\sum_{i=1}^{d} {w'_i}^2}}$$

Thus
$$\operatorname*{argmax}_{\mathbf{w}',b'} \frac{1}{\sqrt{\sum_{i=1}^{d} {w'_i}^2}} = \operatorname*{argmin}_{\mathbf{w}',b'} \sqrt{\sum_{i=1}^{d} {w'_i}^2} = \operatorname*{argmin}_{\mathbf{w}',b'} \sum_{i=1}^{d} {w'_i}^2 \quad \text{subject to } \forall \mathbf{x}_i \in D:\ y_i(\mathbf{x}_i \cdot \mathbf{w}' + b') \ge 1$$

Copyright © 2001, 2003, Andrew W. Moore — Support Vector Machines: Slide 18
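The identity in the derivation can be checked numerically. A small Python sketch; the two-point dataset and the plane are hypothetical values chosen for illustration:

```python
import math

# Numerical check: margin of (w, b) equals 1/||w'|| after dividing by K.
data = [((3.0, 3.0), +1), ((0.0, 1.0), -1)]   # hypothetical two-point set
w, b = (1.0, 1.0), -3.5                       # a separating plane

def activation(w, b, x):
    return sum(wk * xk for wk, xk in zip(w, x)) + b

K = min(abs(activation(w, b, x)) for x, _ in data)   # |b + x'.w| at closest point
margin = K / math.sqrt(sum(wk * wk for wk in w))

w2 = tuple(wk / K for wk in w)                # w' = w / K
norm2 = math.sqrt(sum(wk * wk for wk in w2))
print(abs(margin - 1.0 / norm2) < 1e-9)       # the two expressions agree
```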
Maximum Margin Linear Classifier

$$\{\mathbf{w}^*, b^*\} = \operatorname*{argmin}_{\mathbf{w},b} \sum_{k=1}^{d} w_k^2$$
$$\text{subject to}\quad y_1(\mathbf{w} \cdot \mathbf{x}_1 + b) \ge 1,\quad y_2(\mathbf{w} \cdot \mathbf{x}_2 + b) \ge 1,\quad \ldots,\quad y_N(\mathbf{w} \cdot \mathbf{x}_N + b) \ge 1$$

• How to solve it?

Copyright © 2001, 2003, Andrew W. Moore — Support Vector Machines: Slide 19
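To make the optimization problem concrete, here is a crude brute-force solution on a tiny 1D example. The data, the grid range, and the step size are hypothetical choices for illustration; real solvers use quadratic programming, not grid search:

```python
# Brute-force argmin of w^2 subject to y_i(w*x_i + b) >= 1, in 1D.
data = [((0.0,), -1), ((2.0,), +1)]           # hypothetical two-point set

def feasible(w, b):
    return all(y * (w[0] * x[0] + b) >= 1 for x, y in data)

best = None                                   # (w^2, w, b)
grid = [i / 100 for i in range(-300, 301)]
for wi in grid:
    for bi in grid:
        if feasible((wi,), bi) and (best is None or wi * wi < best[0]):
            best = (wi * wi, wi, bi)

print(best[1], best[2])   # 1.0 -1.0: the plane x = 1, halfway between the points
```

The minimizer puts the boundary midway between the two classes, as the maximum-margin criterion demands.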
Learning via Quadratic Programming

• QP is a well-studied class of optimization algorithms that maximize a quadratic function of some real-valued variables subject to linear constraints.
• For a detailed treatment of quadratic programming, see Convex Optimization by Stephen P. Boyd (online edition, free for download).

Copyright © 2001, 2003, Andrew W. Moore — Support Vector Machines: Slide 20
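A one-variable illustration of the kind of problem a QP solver handles, solved here by projected gradient descent. This is a made-up toy (objective, constraint, and step size are all hypothetical), not how production SVM solvers work:

```python
# Minimize the quadratic f(x) = x^2 - 4x subject to the linear
# constraint x <= 1, via projected gradient descent.
x = 0.0
for _ in range(1000):
    grad = 2 * x - 4                 # f'(x)
    x = x - 0.1 * grad               # gradient step toward the minimum
    x = min(x, 1.0)                  # project back onto the feasible set
print(x)   # 1.0 -- the unconstrained minimum x = 2 is infeasible
```

The solution sits on the constraint boundary, just as the maximum-margin solution sits on the constraints of the support vectors.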