This notebook contains the companion code for the "Support Vector Machines" chapter of the Machine Learning course on the Chinese University MOOC platform (icourse163).
Course page:
https://www.icourse163.org/course/WZU-1464096179
Complete course code:
https://github.com/fengdu78/WZU-machine-learning-course
Code adapted and annotated by 黃海廣 (Huang Haiguang), [email protected]
In this exercise we will use support vector machines (SVMs) to build a spam classifier. We start by applying SVMs to some simple 2D datasets to see how they work, then do some pre-processing on a set of raw emails and build a classifier on the processed emails to decide whether they are spam.
The first thing we will do is look at a simple 2D dataset and see how a linear SVM behaves on it for different values of C (analogous to the regularization term in linear/logistic regression).
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sb
import warnings
warnings.simplefilter('ignore')
We will visualize the data as a scatter plot, where the class label is indicated by the marker (x for the positive class, o for the negative class).
data1 = pd.read_csv('data/svmdata1.csv')
data1.head()
|   | X1 | X2 | y |
|---|---|---|---|
| 0 | 1.9643 | 4.5957 | 1 |
| 1 | 2.2753 | 3.8589 | 1 |
| 2 | 2.9781 | 4.5651 | 1 |
| 3 | 2.9320 | 3.5519 | 1 |
| 4 | 3.5772 | 2.8560 | 1 |
positive = data1[data1['y'].isin([1])]
negative = data1[data1['y'].isin([0])]
fig, ax = plt.subplots(figsize=(12, 8))
ax.scatter(positive['X1'], positive['X2'], s=50, marker='x', label='Positive')
ax.scatter(negative['X1'], negative['X2'], s=50, marker='o', label='Negative')
ax.legend()
plt.show()
Note that there is one outlier positive example that sits apart from the others. The classes are still linearly separable, but it is a very tight fit. We are going to train a linear support vector machine to learn the class boundary. In this exercise we are not tasked with implementing an SVM from scratch, so we will use the one built into scikit-learn.
from sklearn import svm
svc = svm.LinearSVC(C=1, loss='hinge', max_iter=1000)
svc
LinearSVC(C=1, loss='hinge')
First, let's see how it does with C=1.
svc.fit(data1[['X1', 'X2']], data1['y'])
svc.score(data1[['X1', 'X2']], data1['y'])
0.9803921568627451
Next, let's see what happens with a much larger value of C.
svc2 = svm.LinearSVC(C=100, loss='hinge', max_iter=1000)
svc2.fit(data1[['X1', 'X2']], data1['y'])
svc2.score(data1[['X1', 'X2']], data1['y'])
0.9411764705882353
In the original exercise, a large C forces the classifier to fit the outlier as well, which yields perfect training accuracy; here the training score actually drops slightly, most likely because LinearSVC with hinge loss does not fully converge at C=100 within max_iter=1000 (any ConvergenceWarning is suppressed above). Either way, increasing C pushes the model toward a decision boundary that no longer fits the bulk of the data naturally. We can see this by looking at the confidence level of each class prediction, which is a function of the point's distance from the separating hyperplane.
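As a quick illustration of that relationship, here is a minimal sketch (not part of the original exercise): for a linear model, the signed distance of each point to the hyperplane is its decision value divided by the norm of the weight vector.
# Sketch: turn LinearSVC decision values into signed distances to the hyperplane.
distances = svc.decision_function(data1[['X1', 'X2']]) / np.linalg.norm(svc.coef_)
distances[:5]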
data1['SVM 1 Confidence'] = svc.decision_function(data1[['X1', 'X2']])
fig, ax = plt.subplots(figsize=(12, 8))
ax.scatter(data1['X1'],
data1['X2'],
s=50,
c=data1['SVM 1 Confidence'],
cmap='seismic')
ax.set_title('SVM (C=1) Decision Confidence')
plt.show()
data1['SVM 2 Confidence'] = svc2.decision_function(data1[['X1', 'X2']])
fig, ax = plt.subplots(figsize=(12,8))
ax.scatter(data1['X1'], data1['X2'], s=50, c=data1['SVM 2 Confidence'], cmap='seismic')
ax.set_title('SVM (C=100) Decision Confidence')
plt.show()
The difference is a bit subtle, but look at the colors of the points near the boundary. In the exercise text there is a figure where the decision boundary is drawn as a line on the plot, which makes the difference clearer.
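Since that figure is not reproduced here, below is a minimal sketch (reusing the fitted svc and svc2 models from above) of how such a boundary line can be drawn from a linear model's coef_ and intercept_ attributes.
# Sketch: the decision boundary of a linear SVM is the line w0*x1 + w1*x2 + b = 0,
# i.e. x2 = -(w0*x1 + b) / w1; draw it for both the C=1 and C=100 models.
def plot_linear_boundary(model, ax, label):
    w = model.coef_[0]
    b = model.intercept_[0]
    xs = np.linspace(data1['X1'].min(), data1['X1'].max(), 100)
    ax.plot(xs, -(w[0] * xs + b) / w[1], label=label)

fig, ax = plt.subplots(figsize=(12, 8))
ax.scatter(positive['X1'], positive['X2'], s=50, marker='x', label='Positive')
ax.scatter(negative['X1'], negative['X2'], s=50, marker='o', label='Negative')
plot_linear_boundary(svc, ax, 'Boundary (C=1)')
plot_linear_boundary(svc2, ax, 'Boundary (C=100)')
ax.legend()
plt.show()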
Now we will move from a linear SVM to one capable of non-linear classification using kernels. Our first task is to implement a Gaussian kernel function. Although scikit-learn has a Gaussian (RBF) kernel built in, we will implement one from scratch for clarity.
def gaussian_kernel(x1, x2, sigma):
return np.exp(-(np.sum((x1 - x2)**2) / (2 * (sigma**2))))
x1 = np.array([1.0, 2.0, 1.0])
x2 = np.array([0.0, 4.0, -1.0])
sigma = 2
gaussian_kernel(x1, x2, sigma)
0.32465246735834974
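As a quick cross-check (a sketch, not part of the original exercise): scikit-learn's built-in RBF kernel computes exp(-gamma * ||x1 - x2||^2), so setting gamma = 1 / (2 * sigma^2) should reproduce the same number.
from sklearn.metrics.pairwise import rbf_kernel
# rbf_kernel expects 2-D arrays, hence the reshapes; gamma = 1 / (2 * sigma^2).
rbf_kernel(x1.reshape(1, -1), x2.reshape(1, -1), gamma=1 / (2 * sigma**2))[0, 0]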
This result matches the expected value given in the exercise. Next we will examine another dataset, this time with a non-linear decision boundary.
data2 = pd.read_csv('data/svmdata2.csv')
data2.head()
|   | X1 | X2 | y |
|---|---|---|---|
| 0 | 0.107143 | 0.603070 | 1 |
| 1 | 0.093318 | 0.649854 | 1 |
| 2 | 0.097926 | 0.705409 | 1 |
| 3 | 0.155530 | 0.784357 | 1 |
| 4 | 0.210829 | 0.866228 | 1 |
positive = data2[data2['y'].isin([1])]
negative = data2[data2['y'].isin([0])]
fig, ax = plt.subplots(figsize=(12, 8))
ax.scatter(positive['X1'], positive['X2'], s=30, marker='x', label='Positive')
ax.scatter(negative['X1'], negative['X2'], s=30, marker='o', label='Negative')
ax.legend()
plt.show()
For this dataset we will build a support vector machine classifier using the built-in RBF kernel and examine its accuracy on the training data. To visualize the decision boundary, this time we will shade the points based on the predicted probability that the instance has a negative class label. As the result shows, most of the points are classified correctly.
svc = svm.SVC(C=100, gamma=10, probability=True)
svc
SVC(C=100, gamma=10, probability=True)
svc.fit(data2[['X1', 'X2']], data2['y'])
svc.score(data2[['X1', 'X2']], data2['y'])
0.9698725376593279
data2['Probability'] = svc.predict_proba(data2[['X1', 'X2']])[:, 0]
fig, ax = plt.subplots(figsize=(12, 8))
ax.scatter(data2['X1'], data2['X2'], s=30, c=data2['Probability'], cmap='Reds')
plt.show()
For the third dataset we are given both a training set and a validation set, and we need to find the optimal hyperparameters for an SVM model based on validation-set performance. Although we could use scikit-learn's built-in grid search for this, in the spirit of following the exercise we will implement a simple grid search from scratch.
data3=pd.read_csv('data/svmdata3.csv')
data3val=pd.read_csv('data/svmdata3val.csv')
X = data3[['X1','X2']]
Xval = data3val[['X1','X2']]
y = data3['y'].ravel()
yval = data3val['yval'].ravel()
C_values = [0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100]
gamma_values = [0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100]
best_score = 0
best_params = {'C': None, 'gamma': None}
for C in C_values:
for gamma in gamma_values:
svc = svm.SVC(C=C, gamma=gamma)
svc.fit(X, y)
score = svc.score(Xval, yval)
if score > best_score:
best_score = score
best_params['C'] = C
best_params['gamma'] = gamma
best_score, best_params
(0.965, {'C': 0.3, 'gamma': 100})
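For comparison, here is a minimal sketch of the same search using scikit-learn's built-in GridSearchCV together with PredefinedSplit, so that models are fitted on the training set and scored on the given validation set (an illustration, not part of the original exercise; tie-breaking may pick a different but equally scoring parameter pair).
from sklearn.model_selection import GridSearchCV, PredefinedSplit
# Rows marked -1 are always kept in the training fold; rows marked 0 form the validation fold.
X_all = np.concatenate([X, Xval])
y_all = np.concatenate([y, yval])
test_fold = np.concatenate([-np.ones(len(X)), np.zeros(len(Xval))])
search = GridSearchCV(svm.SVC(),
                      {'C': C_values, 'gamma': gamma_values},
                      cv=PredefinedSplit(test_fold))
search.fit(X_all, y_all)
search.best_score_, search.best_params_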
Large margin classification
from sklearn.svm import SVC
from sklearn import datasets
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
iris = datasets.load_iris()
X = iris['data'][:, (2, 3)] # petal length, petal width
y = iris['target']
setosa_or_versicolor = (y == 0) | (y == 1)
X = X[setosa_or_versicolor]
y = y[setosa_or_versicolor]
# SVM Classifier model
svm_clf = SVC(kernel='linear', C=float('inf'))
svm_clf.fit(X, y)
SVC(C=inf, kernel='linear')
# Bad models
x0 = np.linspace(0, 5.5, 200)
pred_1 = 5 * x0 - 20
pred_2 = x0 - 1.8
pred_3 = 0.1 * x0 + 0.5
def plot_svc_decision_boundary(svm_clf, xmin, xmax):
w = svm_clf.coef_[0]
b = svm_clf.intercept_[0]
# At the decision boundary, w0*x0 + w1*x1 + b = 0
# => x1 = -w0/w1 * x0 - b/w1
x0 = np.linspace(xmin, xmax, 200)
decision_boundary = -w[0]/w[1] * x0 - b/w[1]
margin = 1/w[1]
gutter_up = decision_boundary + margin
gutter_down = decision_boundary - margin
svs = svm_clf.support_vectors_
plt.scatter(svs[:, 0], svs[:, 1], s=180, facecolors='#FFAAAA')
plt.plot(x0, decision_boundary, 'k-', linewidth=2)
plt.plot(x0, gutter_up, 'k--', linewidth=2)
plt.plot(x0, gutter_down, 'k--', linewidth=2)
plt.figure(figsize=(12, 2.7))
plt.subplot(121)
plt.plot(x0, pred_1, 'g--', linewidth=2)
plt.plot(x0, pred_2, 'm-', linewidth=2)
plt.plot(x0, pred_3, 'r-', linewidth=2)
plt.plot(X[:, 0][y == 1], X[:, 1][y == 1], 'bs', label='Iris-Versicolor')
plt.plot(X[:, 0][y == 0], X[:, 1][y == 0], 'yo', label='Iris-Setosa')
plt.xlabel('Petal length', fontsize=14)
plt.ylabel('Petal width', fontsize=14)
plt.legend(loc='upper left', fontsize=14)
plt.axis([0, 5.5, 0, 2])
plt.subplot(122)
plot_svc_decision_boundary(svm_clf, 0, 5.5)
plt.plot(X[:, 0][y == 1], X[:, 1][y == 1], 'bs')
plt.plot(X[:, 0][y == 0], X[:, 1][y == 0], 'yo')
plt.xlabel('Petal length', fontsize=14)
plt.axis([0, 5.5, 0, 2])
plt.show()
Sensitivity to feature scaling
Xs = np.array([[1, 50], [5, 20], [3, 80], [5, 60]]).astype(np.float64)
ys = np.array([0, 0, 1, 1])
svm_clf = SVC(kernel='linear', C=100)
svm_clf.fit(Xs, ys)
plt.figure(figsize=(12, 3.2))
plt.subplot(121)
plt.plot(Xs[:, 0][ys == 1], Xs[:, 1][ys == 1], 'bo')
plt.plot(Xs[:, 0][ys == 0], Xs[:, 1][ys == 0], 'ms')
plot_svc_decision_boundary(svm_clf, 0, 6)
plt.xlabel('$x_0$', fontsize=20)
plt.ylabel('$x_1$ ', fontsize=20, rotation=0)
plt.title('Unscaled', fontsize=16)
plt.axis([0, 6, 0, 90])
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_scaled = scaler.fit_transform(Xs)
svm_clf.fit(X_scaled, ys)
plt.subplot(122)
plt.plot(X_scaled[:, 0][ys == 1], X_scaled[:, 1][ys == 1], 'bo')
plt.plot(X_scaled[:, 0][ys == 0], X_scaled[:, 1][ys == 0], 'ms')
plot_svc_decision_boundary(svm_clf, -2, 2)
plt.xlabel('$x_0$', fontsize=20)
plt.title('Scaled', fontsize=16)
plt.axis([-2, 2, -2, 2])
plt.show()
Hard margin vs. soft margin classification
X_outliers = np.array([[3.4, 1.3], [3.2, 0.8]])
y_outliers = np.array([0, 0])
Xo1 = np.concatenate([X, X_outliers[:1]], axis=0)
yo1 = np.concatenate([y, y_outliers[:1]], axis=0)
Xo2 = np.concatenate([X, X_outliers[1:]], axis=0)
yo2 = np.concatenate([y, y_outliers[1:]], axis=0)
svm_clf2 = SVC(kernel='linear', C=10**9)
svm_clf2.fit(Xo2, yo2)
plt.figure(figsize=(12, 2.7))
plt.subplot(121)
plt.plot(Xo1[:, 0][yo1 == 1], Xo1[:, 1][yo1 == 1], 'bs')
plt.plot(Xo1[:, 0][yo1 == 0], Xo1[:, 1][yo1 == 0], 'yo')
plt.text(0.3, 1.0, 'Impossible!', fontsize=24, color='red')
plt.xlabel('Petal length', fontsize=14)
plt.ylabel('Petal width', fontsize=14)
plt.annotate(
'Outlier',
xy=(X_outliers[0][0], X_outliers[0][1]),
xytext=(2.5, 1.7),
ha='center',
arrowprops=dict(facecolor='black', shrink=0.1),
fontsize=16,
)
plt.axis([0, 5.5, 0, 2])
plt.subplot(122)
plt.plot(Xo2[:, 0][yo2 == 1], Xo2[:, 1][yo2 == 1], 'bs')
plt.plot(Xo2[:, 0][yo2 == 0], Xo2[:, 1][yo2 == 0], 'yo')
plot_svc_decision_boundary(svm_clf2, 0, 5.5)
plt.xlabel('Petal length', fontsize=14)
plt.annotate(
'Outlier',
xy=(X_outliers[1][0], X_outliers[1][1]),
xytext=(3.2, 0.08),
ha='center',
arrowprops=dict(facecolor='black', shrink=0.1),
fontsize=16,
)
plt.axis([0, 5.5, 0, 2])
plt.show()
from sklearn.pipeline import Pipeline
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=100, noise=0.15, random_state=42)
def plot_predictions(clf, axes):
x0s = np.linspace(axes[0], axes[1], 100)
x1s = np.linspace(axes[2], axes[3], 100)
x0, x1 = np.meshgrid(x0s, x1s)
X = np.c_[x0.ravel(), x1.ravel()]
y_pred = clf.predict(X).reshape(x0.shape)
y_decision = clf.decision_function(X).reshape(x0.shape)
plt.contourf(x0, x1, y_pred, cmap=plt.cm.brg, alpha=0.2)
plt.contourf(x0, x1, y_decision, cmap=plt.cm.brg, alpha=0.1)
def plot_dataset(X, y, axes):
plt.plot(X[:, 0][y==0], X[:, 1][y==0], 'bs')
plt.plot(X[:, 0][y==1], X[:, 1][y==1], 'g^')
plt.axis(axes)
plt.grid(True, which='both')
plt.xlabel(r'$x_1$', fontsize=20)
plt.ylabel(r'$x_2$', fontsize=20, rotation=0)
from sklearn.svm import SVC
gamma1, gamma2 = 0.1, 5
C1, C2 = 0.001, 1000
hyperparams = (gamma1, C1), (gamma1, C2), (gamma2, C1), (gamma2, C2)
svm_clfs = []
for gamma, C in hyperparams:
rbf_kernel_svm_clf = Pipeline([('scaler', StandardScaler()),
('svm_clf',
SVC(kernel='rbf', gamma=gamma, C=C))])
rbf_kernel_svm_clf.fit(X, y)
svm_clfs.append(rbf_kernel_svm_clf)
plt.figure(figsize=(12, 7))
for i, svm_clf in enumerate(svm_clfs):
plt.subplot(221 + i)
plot_predictions(svm_clf, [-1.5, 2.5, -1, 1.5])
plot_dataset(X, y, [-1.5, 2.5, -1, 1.5])
gamma, C = hyperparams[i]
plt.title(r'$\gamma = {}, C = {}$'.format(gamma, C), fontsize=12)
plt.show()
SVM derivation
Separating hyperplane: $w^Tx+b=0$

Distance from a point to the hyperplane: $r=\frac{|w^Tx+b|}{\|w\|_2}$, where $\|w\|_2$ is the 2-norm: $\|w\|_2=\sqrt{\sum_{i=1}^{n}w_i^2}$

Taking this hyperplane as the decision surface, the samples satisfy:

$$w^Tx_i+b\geq+1 \text{ if } y_i=+1,\qquad w^Tx_i+b\leq-1 \text{ if } y_i=-1$$

margin:

Functional margin: $\hat{\gamma}_i=y_i(w^Tx_i+b)$

Geometric margin: $\gamma_i=\frac{y_i(w^Tx_i+b)}{\|w\|_2}$; when a sample is correctly classified, its geometric margin is exactly its distance to the hyperplane.

To maximize the geometric margin, the basic SVM problem can be written as (here $\frac{\hat{\gamma}}{\|w\|}$ is the geometric margin and $\hat{\gamma}$ the functional margin):

$$\max_{w,b}\ \frac{\hat{\gamma}}{\|w\|}\qquad s.t.\ y_i(w^Tx_i+b)\geq\hat{\gamma},\ i=1,2,\dots,m$$

That is, maximize the geometric margin of the points while classifying them all correctly. This problem is not convex as written, so we first (1) convert it into a convex problem, and then (2) solve the dual problem with Lagrange multipliers and the KKT conditions.

(1) Convert to a convex problem:

First set $\hat{\gamma}=1$ for convenience (this only rescales $w$ and $b$ and does not change the solution):

$$\max_{w,b}\ \frac{1}{\|w\|}\qquad s.t.\ y_i(w^Tx_i+b)\geq1,\ i=1,2,\dots,m$$

Then replace $\max\ \frac{1}{\|w\|}$ by the equivalent convex problem $\min\ \frac{1}{2}\|w\|^2$ (the factor $\frac{1}{2}$ just makes the derivative cleaner):

$$\min_{w,b}\ \frac{1}{2}\|w\|^2\qquad s.t.\ y_i(w^Tx_i+b)\geq1,\ i=1,2,\dots,m$$

(2) Solve with Lagrange multipliers and the KKT conditions:

Combine the objective and constraints into the Lagrangian:

$$L(w,b,\alpha)=\frac{1}{2}\|w\|^2+\sum_{i=1}^{m}\alpha_i\left(1-y_i(w^Tx_i+b)\right),\qquad \alpha_i\geq0$$

Derivation: the primal problem $\min_{w,b}\max_{\alpha}L(w,b,\alpha)$ is lower-bounded by the dual $\max_{\alpha}\min_{w,b}L(w,b,\alpha)$ (weak duality; under the KKT conditions the two coincide).

From the KKT stationarity conditions:

$$\frac{\partial L}{\partial w}=w-\sum_{i=1}^{m}\alpha_iy_ix_i=0\ \Rightarrow\ w=\sum_{i=1}^{m}\alpha_iy_ix_i$$

$$\frac{\partial L}{\partial b}=-\sum_{i=1}^{m}\alpha_iy_i=0\ \Rightarrow\ \sum_{i=1}^{m}\alpha_iy_i=0$$

Substituting these back into $L(w,b,\alpha)$ gives the dual objective:

$$\max_{\alpha}\ \sum_{i=1}^{m}\alpha_i-\frac{1}{2}\sum_{i=1}^{m}\sum_{j=1}^{m}\alpha_i\alpha_jy_iy_j(x_i^Tx_j)$$

Converting the max problem into a min problem:

$$\min_{\alpha}\ \frac{1}{2}\sum_{i=1}^{m}\sum_{j=1}^{m}\alpha_i\alpha_jy_iy_j(x_i^Tx_j)-\sum_{i=1}^{m}\alpha_i\qquad s.t.\ \sum_{i=1}^{m}\alpha_iy_i=0,\ \alpha_i\geq0,\ i=1,2,\dots,m$$

This is the dual form of the SVM optimization problem.
kernel
A kernel lets us do the computation in the low-dimensional input space while obtaining the result of an inner product in a high-dimensional feature space; it is in that high-dimensional space that the data become linearly separable.
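As a small illustration of this point (a sketch using the degree-2 polynomial kernel that also appears in the SMO code further below): the kernel value computed in the original 2-D space equals an ordinary inner product in an explicit 6-dimensional feature space.
# Kernel trick check: for K(a, b) = (a.b + 1)^2 the explicit feature map is
# phi(x) = (1, sqrt(2)*x1, sqrt(2)*x2, x1^2, x2^2, sqrt(2)*x1*x2),
# and K(a, b) equals the 6-dimensional inner product <phi(a), phi(b)>.
def phi(x):
    return np.array([1, np.sqrt(2) * x[0], np.sqrt(2) * x[1],
                     x[0]**2, x[1]**2, np.sqrt(2) * x[0] * x[1]])

a = np.array([1.0, 2.0])
b = np.array([3.0, 4.0])
print((a.dot(b) + 1)**2, phi(a).dot(phi(b)))  # both give 144.0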
soft margin & slack variable
Introduce slack variables $\xi_i\geq0$, which measure how much each data point is allowed to violate the functional margin.

Objective: $\min_{w,b,\xi}\ \frac{1}{2}\|w\|^2+C\sum_{i=1}^{m}\xi_i\qquad s.t.\ y_i(w^Tx_i+b)\geq1-\xi_i,\ \xi_i\geq0,\ i=1,2,\dots,m$

Dual problem: $\min_{\alpha}\ \frac{1}{2}\sum_{i=1}^{m}\sum_{j=1}^{m}\alpha_i\alpha_jy_iy_j(x_i^Tx_j)-\sum_{i=1}^{m}\alpha_i\qquad s.t.\ \sum_{i=1}^{m}\alpha_iy_i=0,\ 0\leq\alpha_i\leq C,\ i=1,2,\dots,m$; the only difference from the hard-margin dual is the upper bound $C$ on each $\alpha_i$.
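A small illustration of the role of C (a sketch reusing the moons data X, y generated earlier with make_moons): a small C tolerates more margin violations, which typically leaves more training points as support vectors, while a large C tolerates very little slack.
# Sketch: count support vectors for a loose (C=0.1) and a tight (C=1000) soft margin.
for C in (0.1, 1000):
    clf = SVC(kernel='rbf', gamma=1, C=C).fit(X, y)
    print('C =', C, '-> support vectors:', len(clf.support_))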
Sequential Minimal Optimization
First define the output function mapping features to the prediction: $u=w^Tx+b$.

Since $w=\sum_{i=1}^{m}\alpha_iy_ix_i$,

we have $u=\sum_{i=1}^{m}\alpha_iy_iK(x_i,x)+b$, which is exactly what the _g method computes in the implementation below.
import numpy as np
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
%matplotlib inline
# data
def create_data():
iris = load_iris()
df = pd.DataFrame(iris.data, columns=iris.feature_names)
df['label'] = iris.target
df.columns = ['sepal length', 'sepal width', 'petal length', 'petal width', 'label']
data = np.array(df.iloc[:100, [0, 1, -1]])
for i in range(len(data)):
if data[i,-1] == 0:
data[i,-1] = -1
# print(data)
return data[:,:2], data[:,-1]
X, y = create_data()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
plt.scatter(X[:50,0],X[:50,1], label='0')
plt.scatter(X[50:,0],X[50:,1], label='1')
plt.legend()
<matplotlib.legend.Legend at 0x164f688b670>
class SVM:
def __init__(self, max_iter=100, kernel='linear'):
self.max_iter = max_iter
self._kernel = kernel
def init_args(self, features, labels):
self.m, self.n = features.shape
self.X = features
self.Y = labels
self.b = 0.0
# initialize alpha to ones and keep the error terms E_i in a list
self.alpha = np.ones(self.m)
self.E = [self._E(i) for i in range(self.m)]
# penalty parameter C of the soft margin (fixed to 1.0 here)
self.C = 1.0
def _KKT(self, i):
y_g = self._g(i) * self.Y[i]
if self.alpha[i] == 0:
return y_g >= 1
elif 0 < self.alpha[i] < self.C:
return y_g == 1
else:
return y_g <= 1
# g(x): decision value for sample X[i]
def _g(self, i):
r = self.b
for j in range(self.m):
r += self.alpha[j] * self.Y[j] * self.kernel(self.X[i], self.X[j])
return r
# kernel function
def kernel(self, x1, x2):
if self._kernel == 'linear':
return sum([x1[k] * x2[k] for k in range(self.n)])
elif self._kernel == 'poly':
return (sum([x1[k] * x2[k] for k in range(self.n)]) + 1)**2
return 0
# E(i): difference between the prediction g(x_i) and the label y_i
def _E(self, i):
return self._g(i) - self.Y[i]
def _init_alpha(self):
# outer loop: first scan the samples with 0 < alpha < C and check the KKT conditions
index_list = [i for i in range(self.m) if 0 < self.alpha[i] < self.C]
# then fall back to the rest of the training set
non_satisfy_list = [i for i in range(self.m) if i not in index_list]
index_list.extend(non_satisfy_list)
for i in index_list:
if self._KKT(i):
continue
E1 = self.E[i]
# if E1 is positive, choose j with the smallest E[j]; if E1 is negative, choose the largest
if E1 >= 0:
j = min(range(self.m), key=lambda x: self.E[x])
else:
j = max(range(self.m), key=lambda x: self.E[x])
return i, j
def _compare(self, _alpha, L, H):
if _alpha > H:
return H
elif _alpha < L:
return L
else:
return _alpha
def fit(self, features, labels):
self.init_args(features, labels)
for t in range(self.max_iter):
# train
i1, i2 = self._init_alpha()
# clipping bounds L and H for alpha2
if self.Y[i1] == self.Y[i2]:
L = max(0, self.alpha[i1] + self.alpha[i2] - self.C)
H = min(self.C, self.alpha[i1] + self.alpha[i2])
else:
L = max(0, self.alpha[i2] - self.alpha[i1])
H = min(self.C, self.C + self.alpha[i2] - self.alpha[i1])
E1 = self.E[i1]
E2 = self.E[i2]
# eta=K11+K22-2K12
eta = self.kernel(self.X[i1], self.X[i1]) + self.kernel(
self.X[i2],
self.X[i2]) - 2 * self.kernel(self.X[i1], self.X[i2])
if eta <= 0:
# print('eta <= 0')
continue
alpha2_new_unc = self.alpha[i2] + self.Y[i2] * (
E1 - E2) / eta  # corrected to E1 - E2, following the book (Statistical Learning Methods, pp. 130-131)
alpha2_new = self._compare(alpha2_new_unc, L, H)
alpha1_new = self.alpha[i1] + self.Y[i1] * self.Y[i2] * (
self.alpha[i2] - alpha2_new)
b1_new = -E1 - self.Y[i1] * self.kernel(self.X[i1], self.X[i1]) * (
alpha1_new - self.alpha[i1]) - self.Y[i2] * self.kernel(
self.X[i2],
self.X[i1]) * (alpha2_new - self.alpha[i2]) + self.b
b2_new = -E2 - self.Y[i1] * self.kernel(self.X[i1], self.X[i2]) * (
alpha1_new - self.alpha[i1]) - self.Y[i2] * self.kernel(
self.X[i2],
self.X[i2]) * (alpha2_new - self.alpha[i2]) + self.b
if 0 < alpha1_new < self.C:
b_new = b1_new
elif 0 < alpha2_new < self.C:
b_new = b2_new
else:
# take the midpoint
b_new = (b1_new + b2_new) / 2
# update the parameters
self.alpha[i1] = alpha1_new
self.alpha[i2] = alpha2_new
self.b = b_new
self.E[i1] = self._E(i1)
self.E[i2] = self._E(i2)
return 'train done!'
def predict(self, data):
r = self.b
for i in range(self.m):
r += self.alpha[i] * self.Y[i] * self.kernel(data, self.X[i])
return 1 if r > 0 else -1
def score(self, X_test, y_test):
right_count = 0
for i in range(len(X_test)):
result = self.predict(X_test[i])
if result == y_test[i]:
right_count += 1
return right_count / len(X_test)
def _weight(self):
# linear model
yx = self.Y.reshape(-1, 1) * self.X
self.w = np.dot(yx.T, self.alpha)
return self.w
svm = SVM(max_iter=100)
svm.fit(X_train, y_train)
'train done!'
svm.score(X_test, y_test)
0.6
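For reference (a sketch, not part of the original notebook), scikit-learn's SVC on the same split shows what a fully converged solver achieves; the toy SMO above runs only max_iter=100 outer iterations with a fixed C=1.0, and its score varies from run to run because the train/test split is random.
# Reference point: scikit-learn's SVC with a linear kernel on the same split.
from sklearn.svm import SVC
sk_clf = SVC(kernel='linear').fit(X_train, y_train)
sk_clf.score(X_test, y_test)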
References
- Prof. Andrew Ng. Machine Learning. Stanford University
- 李航 (Li Hang), 《统计学习方法》 (Statistical Learning Methods), Tsinghua University Press