A Brief Introduction to Parallelized Parameter Tuning with Pipeline and GridSearchCV


This example constructs a pipeline that does dimensionality reduction followed by prediction with a support vector classifier. It demonstrates the use of GridSearchCV and Pipeline to optimize over different classes of estimators in a single CV run: unsupervised PCA and NMF dimensionality reductions are compared to univariate feature selection during the grid search.

# coding:utf-8

from __future__ import print_function,division
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC
from sklearn.decomposition import PCA,NMF
from sklearn.feature_selection import SelectKBest,chi2
import matplotlib as mpl
import matplotlib.font_manager
import matplotlib.pyplot as plt

pipe = Pipeline([
    ('reduce_dim',PCA()),('classify',LinearSVC())
])

N_FEATURES_OPTIONS = [2,4,8]
C_OPTIONS = [1,10,100,1000]
param_grid = [
    {
        'reduce_dim': [PCA(iterated_power=7), NMF()],
        'reduce_dim__n_components': N_FEATURES_OPTIONS,
        'classify__C': C_OPTIONS
    },
    {
        'reduce_dim': [SelectKBest(chi2)],
        'reduce_dim__k': N_FEATURES_OPTIONS,
        'classify__C': C_OPTIONS
    },
]
reducer_labels = [u'主成分分析(PCA)', u'非负矩阵分解(NMF)', u'KBest(chi2)']

grid = GridSearchCV(pipe,cv=3,n_jobs=2,param_grid=param_grid)
digits = load_digits()
grid.fit(digits.data,digits.target)

mean_scores = np.array(grid.cv_results_['mean_test_score'])
# scores come back in param_grid iteration order, which here is alphabetical
mean_scores = mean_scores.reshape(len(C_OPTIONS),-1,len(N_FEATURES_OPTIONS))
# keep, for each reducer and n_features value, the score of the best C
mean_scores = mean_scores.max(axis=0)
bar_offsets = (np.arange(len(N_FEATURES_OPTIONS)) *
               (len(reducer_labels) + 1) + .5)

myfont = matplotlib.font_manager.FontProperties(fname="Microsoft-Yahei-UI-Light.ttc")
mpl.rcParams['axes.unicode_minus'] = False

plt.figure()
COLORS = 'bgrcmyk'
for i,(label,reducer_scores) in enumerate(zip(reducer_labels,mean_scores)):
    plt.bar(bar_offsets + i,reducer_scores,label=label,color=COLORS[i])

plt.title(u"特征降维技术的比较",fontproperties=myfont)
plt.xlabel(u'特征减少的数量',fontproperties=myfont)
plt.xticks(bar_offsets + len(reducer_labels) / 2,N_FEATURES_OPTIONS)
plt.ylabel(u'数字的分类精度',fontproperties=myfont)
plt.ylim((0,1))
plt.legend(loc='upper left',prop=myfont)
plt.show()
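
After fitting, GridSearchCV also exposes the winning configuration directly. A minimal sketch of how to inspect it (the attributes below are standard GridSearchCV API; the printout is only illustrative):

# best mean cross-validated accuracy and the parameter combination that achieved it
print(grid.best_score_)
print(grid.best_params_)
# the pipeline refitted on the full data with the best parameters (refit=True is the default)
best_pipe = grid.best_estimator_
print(best_pipe.predict(digits.data[:5]))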



Pipeline chains several algorithm models together, for example organizing feature extraction, normalization, and classification into one typical machine-learning workflow. This brings two main benefits:
1. Calling fit and predict once runs training and prediction through every model in the pipeline.
2. It can be combined with grid search to select parameters (see the sketch right after this list).
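
A minimal sketch of both points, using a scaler followed by a logistic regression (this snippet is not from the original article; the step names 'scaler' and 'clf' and the C grid are chosen only for illustration):

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)

pipe = Pipeline([('scaler', StandardScaler()), ('clf', LogisticRegression())])
pipe.fit(X, y)                # one call fits the scaler and then the classifier
print(pipe.predict(X[:3]))    # one call scales and then predicts

# grid search over a step's parameter via the '<step name>__<parameter>' convention
search = GridSearchCV(pipe, param_grid={'clf__C': [0.1, 1, 10]}, cv=3)
search.fit(X, y)
print(search.best_params_)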

Below is an example from the official documentation:

 
    >>> from sklearn.pipeline import Pipeline
    >>> from sklearn.svm import SVC
    >>> from sklearn.decomposition import PCA
    >>> estimators = [('reduce_dim', PCA()), ('svm', SVC())]
    >>> clf = Pipeline(estimators)
    >>> clf
    Pipeline(steps=[('reduce_dim', PCA(copy=True, n_components=None, whiten=False)),
                    ('svm', SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
                                decision_function_shape=None, degree=3, gamma='auto',
                                kernel='rbf', max_iter=-1, probability=False,
                                random_state=None, shrinking=True, tol=0.001,
                                verbose=False))])

    estimators defines the two models in the pipeline: a PCA transformer named 'reduce_dim' and an SVC classifier named 'svm'.
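
    The steps can be looked up by name, and, as a side note not in the original post, sklearn also provides a make_pipeline helper that generates the step names automatically:

    >>> step = clf.named_steps['reduce_dim']   # look up a step by its name; this is the PCA instance
    >>> from sklearn.pipeline import make_pipeline
    >>> pipe2 = make_pipeline(PCA(), SVC())    # step names are generated automatically: 'pca' and 'svc'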

      
      
    >>> clf.set_params(svm__C=10)

    set_params sets a parameter on an individual model inside the pipeline using the '<step name>__<parameter>' naming; the line above sets the C parameter of the svm step to 10.
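
    The same '<step name>__<parameter>' keys also work as a GridSearchCV parameter grid over the whole pipeline; a sketch (the value ranges here are only illustrative):

    >>> from sklearn.model_selection import GridSearchCV
    >>> param_grid = dict(reduce_dim__n_components=[2, 5, 10],
    ...                   svm__C=[0.1, 10, 100])
    >>> grid_search = GridSearchCV(clf, param_grid=param_grid)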

    Another example:

    >>> from sklearn import svm
    >>> from sklearn.datasets import samples_generator
    >>> from sklearn.feature_selection import SelectKBest
    >>> from sklearn.feature_selection import f_regression
    >>> # generate some data to play with
    >>> X,y = samples_generator.make_classification(
    ...     n_informative=5,n_redundant=0,random_state=42)
    # ANOVA SVM-C
    >>> anova_filter = SelectKBest(f_regression,k=5)
    >>> clf = svm.SVC(kernel='linear')
    >>> anova_svm = Pipeline([('anova', anova_filter), ('svc', clf)])
    # You can set the parameters using the names issued
    # For instance,fit using a k of 10 in the SelectKBest
    # and a parameter 'C' of the svm
    >>> anova_svm.set_params(anova__k=10,svc__C=.1).fit(X,y)
    ...                                              
    Pipeline(steps=[...])
    >>> prediction = anova_svm.predict(X)
    >>> anova_svm.score(X,y)                        
    0.77...
    # getting the selected features chosen by anova_filter
    >>> anova_svm.named_steps['anova'].get_support()
    ...
    array([...], dtype=bool)    # boolean mask over the 20 input features; True marks the 10 selected by SelectKBest
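
    To get column indices instead of a boolean mask, get_support also accepts indices=True; a short usage sketch (assuming the make_classification defaults of 100 samples and 20 features):

    >>> anova_svm.named_steps['anova'].get_support(indices=True)   # integer indices of the kept columns
    array([...])
    >>> anova_svm.named_steps['anova'].transform(X).shape          # X reduced to the k=10 selected columns
    (100, 10)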
