ipioneer 2019-01-27
When working on a project, it is best to test different models in order to identify the machine learning model that strikes a good balance between accuracy, complexity, and execution efficiency for the problem at hand. Some software products, such as RapidMiner, provide this capability. However, relying on such a product means taking a black-box approach to tuning the models and exploring their intricacies. Instead, we can write a simple Python script, with enough modularity and parameterization, to test and tune many widely used regression algorithms.
The goal is to test, tune, and compare various regression models in Python with minimal manual intervention.
The machine learning models covered by this module are: linear models (LinearRegression, Ridge, Lasso, BayesianRidge), k-nearest neighbours (KNeighborsRegressor), decision trees (DecisionTreeRegressor), support vector regression (SVR), and ensemble models (BaggingRegressor, RandomForestRegressor, GradientBoostingRegressor, XGBRegressor).
The key inputs are: the dataset (feature matrix X and target Y), the test set size, the type of normalization (none, z-score, or min-max), the list of model forms to run, the number of cross-validation folds, the model selection (scoring) criterion, and the verbosity of terminal output.
After taking these inputs, the script performs the following for every model form under consideration: forward feature selection to find the best feature subset, a cross-validated grid search over the model's hyperparameter grid, and evaluation of the resulting best estimator on the train and test sets.
A pandas DataFrame named 'results' is created that records the following for every model you test: the fitted model object (ModelForm), TrainRMSE, TestRMSE, TrainMAPE, and TestMAPE.
This table makes it easy to compare the various model forms, and the gap between the train and test metrics is a good indicator of overfitting.
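As a quick illustration (a sketch that assumes the 'results' DataFrame built in the execution section at the end of this post), the train/test gap can be turned into an explicit flag; the 1.5 threshold is an arbitrary assumption:

# Flag models whose test RMSE is much worse than their train RMSE
results['OverfitRatio'] = results['TestRMSE'].astype(float) / results['TrainRMSE'].astype(float)
suspects = results[results['OverfitRatio'] > 1.5]
print(suspects[['ModelForm', 'TrainRMSE', 'TestRMSE']])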
This module does not do any feature engineering; it only performs feature selection on the input data as given. Effective feature engineering is important for improving the results of any model. A user may observe that one model does better than another, but as the predictor variables improve, the overall performance of every model can improve significantly.
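As an illustration of what such a step might look like (not part of the module; it assumes X is the feature DataFrame defined in the control panel below), interaction terms can be added to X before the pipeline ever sees it, here via scikit-learn's PolynomialFeatures:

from sklearn.preprocessing import PolynomialFeatures

# Example only: add pairwise interaction terms to the feature matrix.
# Whether this helps depends entirely on the dataset at hand.
# (Newer scikit-learn versions rename get_feature_names to get_feature_names_out.)
poly = PolynomialFeatures(degree=2, interaction_only=True, include_bias=False)
X_engineered = pd.DataFrame(poly.fit_transform(X),
                            columns=poly.get_feature_names(X.columns))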
# importing general purpose libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sb
import dfply as dp
import math
import random
import warnings

from sklearn import datasets

# importing model selection and evaluation libraries
# train-test-validation dataset creation
from sklearn.model_selection import train_test_split
# data normalization
from sklearn.preprocessing import MinMaxScaler, StandardScaler
# Pipeline
from sklearn.pipeline import Pipeline
# feature selection
from mlxtend.feature_selection import SequentialFeatureSelector
from mlxtend.plotting import plot_sequential_feature_selection
# hyperparameter tuning
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
# crossvalidation
from sklearn.model_selection import cross_val_score, KFold
# accuracy testing
from sklearn.metrics import mean_absolute_error, r2_score, mean_squared_error

# Importing models
# linear models
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.linear_model import BayesianRidge
# non-parametric models
from sklearn.neighbors import KNeighborsRegressor
# Decision tree
from sklearn.tree import DecisionTreeRegressor
# Support vector machine
from sklearn.svm import SVR

# ensemble models
# bagging
from sklearn.ensemble import BaggingRegressor, RandomForestRegressor
# tree based boosting
from sklearn.ensemble import GradientBoostingRegressor
from xgboost import XGBRegressor
# stacking
from mlxtend.regressor import StackingRegressor
The first function creates the pipeline used for normalization and grid search, based on the criteria the user specifies in the control panel.
def create_pipeline(norm, model):
    if norm == 1:
        scale = StandardScaler()
        pipe = Pipeline([('norm', scale), ('reg', model)])
    elif norm == 2:
        scale = MinMaxScaler()
        pipe = Pipeline([('norm', scale), ('reg', model)])
    else:
        pipe = Pipeline([('reg', model)])
    return pipe
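A quick usage sketch (not part of the module; it assumes the X_train/Y_train split created in the execution section below): with norm=2, the regressor is preceded by a min-max scaling step.

# Example: a Ridge regressor preceded by min-max scaling (norm=2)
pipe = create_pipeline(2, Ridge())
pipe.fit(X_train, Y_train)            # scales X_train, then fits Ridge
print(pipe.named_steps['reg'].coef_)  # access the fitted regressor step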
The second function performs forward feature selection (via mlxtend's SequentialFeatureSelector) and returns the indices of the best features.
def select_features(model, X_train, Y_train, selection, score_criteria,
                    see_details, norm=0):
    pipe = create_pipeline(norm, model)
    sfs = SequentialFeatureSelector(pipe,
                                    forward=selection,
                                    k_features='best',
                                    scoring=score_criteria,
                                    verbose=see_details)
    sfs = sfs.fit(X_train, Y_train)
    return list(sfs.k_feature_idx_)
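For example (a sketch, again assuming the train/test split created further below), forward selection for a plain linear regression looks like this:

# Forward-select the best feature subset for LinearRegression
idx = select_features(LinearRegression(), X_train, Y_train,
                      selection=True,
                      score_criteria='neg_mean_absolute_error',
                      see_details=0)
print(X_train.columns[idx])  # names of the selected features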
The function below performs a grid search over the supplied parameter grid and returns the best machine learning model object.
def run_model(model, param_grid, X_train, Y_train, X, Y,
              score_criteria, folds, see_details, norm=0):
    pipe = create_pipeline(norm, model)
    model_grid = GridSearchCV(pipe, param_grid,
                              cv=folds,
                              scoring=score_criteria,
                              verbose=see_details)
    model_grid.fit(X_train, Y_train)
    return model_grid.best_estimator_
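A usage sketch (it assumes the PARAM_DICT dictionary defined below and the train/test split from the execution section):

# Tune Ridge over its PARAM_DICT grid with 5-fold crossvalidation
best_ridge = run_model(Ridge(), PARAM_DICT[Ridge],
                       X_train, Y_train, X, Y,
                       score_criteria='neg_mean_absolute_error',
                       folds=5, see_details=0)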
The last function computes all the relevant metrics for the best hyperparameter combination and returns them as a pandas Series.
def get_model_eval(model, X_train, Y_train, X_test, Y_test):
    # np.sqrt converts sklearn's MSE into RMSE, matching the
    # TrainRMSE/TestRMSE column names used in the results table
    return pd.Series([model,
                      np.sqrt(mean_squared_error(Y_train, model.predict(X_train))),
                      np.sqrt(mean_squared_error(Y_test, model.predict(X_test))),
                      (abs(model.predict(X_train) - Y_train) / Y_train).mean(),
                      (abs(model.predict(X_test) - Y_test) / Y_test).mean()])
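The two metrics are RMSE (root of the mean squared error) and MAPE (mean absolute percentage error, expressed as a fraction). A quick sanity check of the definitions on toy values:

# errors are 10, -10, 30
y_true = np.array([100.0, 200.0, 300.0])
y_pred = np.array([110.0, 190.0, 330.0])
rmse = np.sqrt(mean_squared_error(y_true, y_pred))  # sqrt((100+100+900)/3) ~ 19.15
mape = (abs(y_pred - y_true) / y_true).mean()       # (0.10+0.05+0.10)/3 ~ 0.083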
This is the global dictionary of model parameters for all the models in this module. The code is pre-filled with some default values based on typical ranges for the diabetes dataset. The dictionary covers a few key hyperparameters for each model; users can consult the scikit-learn documentation for the full list of parameters and add them to the dictionary below as required.
PARAM_DICT = {
    LinearRegression: {'reg__copy_X': [True, False],
                       'reg__fit_intercept': [True, False],
                       'reg__n_jobs': [10, 20]},
    Ridge: {'reg__alpha': [0.1, 1, 100],
            'reg__copy_X': [True, False],
            'reg__fit_intercept': [True, False],
            'reg__tol': [0.1, 1],
            'reg__solver': ['auto', 'svd', 'cholesky', 'lsqr',
                            'sparse_cg', 'sag', 'saga']},
    Lasso: {'reg__alpha': [0.1, 1, 100],
            'reg__copy_X': [True, False],
            'reg__fit_intercept': [True, False],
            'reg__tol': [0.1, 1]},
    KNeighborsRegressor: {'reg__n_neighbors': [5, 30, 100]},
    BayesianRidge: {'reg__alpha_1': [10**-6, 10**-3],
                    'reg__alpha_2': [10**-6, 10**-3],
                    'reg__copy_X': [True, False],
                    'reg__fit_intercept': [True, False],
                    'reg__lambda_1': [10**-6, 10**-3],
                    'reg__lambda_2': [10**-6, 10**-3],
                    'reg__n_iter': [300, 500, 1000],
                    'reg__tol': [0.001, 0.01, 0.1]},
    DecisionTreeRegressor: {'reg__max_depth': [5, 10, 20],
                            'reg__max_features': [0.3, 0.7, 1.0],
                            'reg__max_leaf_nodes': [10, 50, 100],
                            'reg__splitter': ['best', 'random']},
    BaggingRegressor: {'reg__bootstrap': [True, False],
                       'reg__bootstrap_features': [True, False],
                       'reg__max_features': [0.3, 0.7, 1.0],
                       'reg__max_samples': [0.3, 0.7, 1.0],
                       'reg__n_estimators': [10, 50, 100]},
    RandomForestRegressor: {'reg__bootstrap': [True, False],
                            'reg__max_depth': [5, 10, 20],
                            'reg__max_features': [0.3, 0.7, 1.0],
                            'reg__max_leaf_nodes': [10, 50, 100],
                            'reg__min_impurity_decrease': [0, 0.1, 0.2],
                            'reg__n_estimators': [10, 50, 100]},
    SVR: {'reg__C': [10**-3, 1, 1000],
          'reg__kernel': ['linear', 'poly', 'rbf'],
          'reg__shrinking': [True, False]},
    GradientBoostingRegressor: {'reg__learning_rate': [0.1, 0.2, 0.5],
                                'reg__loss': ['ls', 'lad', 'huber', 'quantile'],
                                'reg__max_depth': [10, 20, 50],
                                'reg__max_features': [0.5, 0.8, 1.0],
                                'reg__max_leaf_nodes': [10, 50, 100],
                                'reg__min_impurity_decrease': [0, 0.1, 0.2],
                                'reg__min_samples_leaf': [5, 10, 20],
                                'reg__min_samples_split': [5, 10, 20],
                                'reg__n_estimators': [10, 50, 100]},
    XGBRegressor: {'reg__booster': ['gbtree', 'gblinear', 'dart'],
                   'reg__learning_rate': [0.2, 0.5, 0.8],
                   'reg__max_depth': [5, 10, 20],
                   'reg__n_estimators': [10, 50, 100],
                   'reg__reg_alpha': [0.1, 1, 10],
                   'reg__reg_lambda': [0.1, 1, 10],
                   'reg__subsample': [0.3, 0.5, 0.8]},
}
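Extending the dictionary with a new model only requires an import and a new entry using the same 'reg__' prefix expected by the pipeline. For instance (a sketch: ElasticNet is not in the module's default list, and the ranges are illustrative, not tuned recommendations):

from sklearn.linear_model import ElasticNet

# Example entry for an additional model form
PARAM_DICT[ElasticNet] = {'reg__alpha': [0.1, 1, 100],
                          'reg__l1_ratio': [0.2, 0.5, 0.8],
                          'reg__fit_intercept': [True, False]}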
The module's inputs can be changed here. This is the script's control panel: all the variables mentioned in the introduction can be changed here to test various scenarios. The Python code follows:
# ----------------------------------------------------------
# USER CONTROL PANEL, CHANGE THE VARIABLES, MODEL FORMS ETC. HERE

# Read data here, define X (features) and Y (Target variable)
data = datasets.load_diabetes()
X = pd.DataFrame(data['data'])
X.columns = data['feature_names']
Y = data['target']

# Specify size of test data (%)
size = 0.3

# Set random seed for sampling consistency
random.seed(100)

# Set type of normalization you want to perform
# (matches create_pipeline above)
# 0 - No Normalization, 1 - Z-score scaling, 2 - Min-max scaling
norm = 0

# Mention all model forms you want to run - Model Objects
to_run = [LinearRegression,
          Ridge,
          Lasso,
          KNeighborsRegressor,
          DecisionTreeRegressor,
          BaggingRegressor,
          SVR,
          XGBRegressor]

# Specify number of crossvalidation folds
folds = 5

# Specify model selection criteria
# Possible values are:
# 'explained_variance'
# 'neg_mean_absolute_error'
# 'neg_mean_squared_error'
# 'neg_mean_squared_log_error'
# 'neg_median_absolute_error'
# 'r2'
score_criteria = 'neg_mean_absolute_error'

# Specify details of terminal output you'd like to see
# 0 - No output, 1 - All details, 2 - Progress bar
# Outputs might vary based on individual functions
see_details = 1
# ----------------------------------------------------------
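To run the module on your own data instead of the built-in diabetes set, only the data-loading lines need to change. A sketch (the file name 'my_data.csv' and the column name 'target' are placeholders):

# Hypothetical replacement for the diabetes loader:
# any CSV with a numeric target column works
df = pd.read_csv('my_data.csv')
Y = df['target'].values
X = df.drop(columns=['target'])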
This section iterates over the user-specified model forms, finds the best set of hyperparameters for each, computes the metrics, and populates the results table for further analysis/experimentation.
# Model execution part, results will be stored in the dataframe 'results'
# Best model can be selected based on these criteria
results = pd.DataFrame(columns=['ModelForm', 'TrainRMSE', 'TestRMSE',
                                'TrainMAPE', 'TestMAPE'])
# random_state makes the split reproducible
# (random.seed alone does not control sklearn's sampling)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=size,
                                                    random_state=100)

for model in to_run:
    with warnings.catch_warnings():
        warnings.simplefilter('ignore')
        best_feat = select_features(model(), X_train, Y_train, True,
                                    score_criteria, see_details, norm)
        best_model = run_model(model(), PARAM_DICT[model],
                               X_train.iloc[:, best_feat], Y_train,
                               X.iloc[:, best_feat], Y,
                               score_criteria, folds, see_details, norm)
        stats = get_model_eval(best_model,
                               X_train.iloc[:, best_feat], Y_train,
                               X_test.iloc[:, best_feat], Y_test)
        stats.index = results.columns
        results = results.append(stats, ignore_index=True)

print(results)
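Once the loop finishes, the winning model form can be pulled out of the table directly; for example, ranking by test RMSE (a sketch):

# Pick the model form with the lowest test RMSE
best_row = results.loc[results['TestRMSE'].astype(float).idxmin()]
print(best_row['ModelForm'])  # the fitted pipeline of the best model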
The results table shows that, of all the model forms tested in this scenario, the most basic linear regression model delivered the best and most consistent performance. This also underlines the importance of feature engineering, since we would expect the ensemble models to do better overall. The XGBoost regressor, on the other hand, shows signs of overfitting based on its train and test metrics. All the other models delivered similar performance. This suggests that different ranges of hyperparameters should also be tested.
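One inexpensive way to explore wider hyperparameter ranges without the cost of an exhaustive grid is to swap GridSearchCV for RandomizedSearchCV (already imported above). A sketch of an alternative to run_model, assuming distributions are supplied instead of fixed lists (the function name and n_iter value are illustrative):

from scipy.stats import uniform

def run_model_random(model, param_dist, X_train, Y_train,
                     score_criteria, folds, see_details, norm=0,
                     n_iter=50):
    # Sample n_iter random combinations instead of the full grid
    pipe = create_pipeline(norm, model)
    search = RandomizedSearchCV(pipe, param_dist, n_iter=n_iter,
                                cv=folds, scoring=score_criteria,
                                verbose=see_details, random_state=100)
    search.fit(X_train, Y_train)
    return search.best_estimator_

# Example distribution for Ridge: alpha sampled uniformly over (0.01, 100.01)
ridge_dist = {'reg__alpha': uniform(0.01, 100)}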