


Python Data Analysis: Building Models with scikit-learn

Author: 阿呆小記 | Updated: 2022-10-12

I. Processing Data with sklearn Transformers

sklearn provides the model_selection module for model selection, the preprocessing module for data preprocessing, and the decomposition module for feature decomposition. Together these three modules cover the work that precedes model building: standardizing and binarizing the data, splitting datasets, cross-validation, and PCA dimensionality reduction.

1. Loading Datasets from the datasets Module

The datasets module of sklearn bundles a number of classic data-analysis datasets that can be used to practice preprocessing and modeling.

Commonly used dataset loader functions:

Loader function              Task type
load_digits                  classification
load_wine                    classification
load_iris                    classification, clustering
load_breast_cancer           classification, clustering
load_boston                  regression
fetch_california_housing     regression

A loaded dataset can be treated as a dictionary. Almost every sklearn dataset can be queried with the keys data, target, feature_names, and DESCR to obtain its data, labels, feature names, and description respectively.

Take load_breast_cancer as an example:

from sklearn.datasets import load_breast_cancer

cancer = load_breast_cancer() ## assign the dataset to the cancer variable

print('Length of the breast_cancer dataset:',len(cancer))
print('Type of the breast_cancer dataset:',type(cancer))
#Length of the breast_cancer dataset: 6
#Type of the breast_cancer dataset: <class 'sklearn.utils.Bunch'>

cancer_data = cancer['data']
print('Data of the breast_cancer dataset:','\n',cancer_data)
#Data of the breast_cancer dataset:
[[1.799e+01 1.038e+01 1.228e+02 ... 2.654e-01 4.601e-01 1.189e-01]
[2.057e+01 1.777e+01 1.329e+02 ... 1.860e-01 2.750e-01 8.902e-02]
[1.969e+01 2.125e+01 1.300e+02 ... 2.430e-01 3.613e-01 8.758e-02]
...
[1.660e+01 2.808e+01 1.083e+02 ... 1.418e-01 2.218e-01 7.820e-02]
[2.060e+01 2.933e+01 1.401e+02 ... 2.650e-01 4.087e-01 1.240e-01]
[7.760e+00 2.454e+01 4.792e+01 ... 0.000e+00 2.871e-01 7.039e-02]]

cancer_target = cancer['target'] ## extract the labels of the dataset
print('Labels of the breast_cancer dataset:\n',cancer_target)
#Labels of the breast_cancer dataset:
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1 0 0 0 0 0 0 0 0 1 0 1 1 1 1 1 0 0 1 0 0 1 1 1 1 0 1 0 0 1 1 1 1 0 1 0 0
1 0 1 0 0 1 1 1 0 0 1 0 0 0 1 1 1 0 1 1 0 0 1 1 1 0 0 1 1 1 1 0 1 1 0 1 1
1 1 1 1 1 1 0 0 0 1 0 0 1 1 1 0 0 1 0 1 0 0 1 0 0 1 1 0 1 1 0 1 1 1 1 0 1
1 1 1 1 1 1 1 1 0 1 1 1 1 0 0 1 0 1 1 0 0 1 1 0 0 1 1 1 1 0 1 1 0 0 0 1 0
1 0 1 1 1 0 1 1 0 0 1 0 0 0 0 1 0 0 0 1 0 1 0 1 1 0 1 0 0 0 0 1 1 0 0 1 1
1 0 1 1 1 1 1 0 0 1 1 0 1 1 0 0 1 0 1 1 1 1 0 1 1 1 1 1 0 1 0 0 0 0 0 0 0
0 0 0 0 0 0 0 1 1 1 1 1 1 0 1 0 1 1 0 1 1 0 1 0 0 1 1 1 1 1 1 1 1 1 1 1 1
1 0 1 1 0 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 0 1 0 1 1 1 1 0 0 0 1 1
1 1 0 1 0 1 0 1 1 1 0 1 1 1 1 1 1 1 0 0 0 1 1 1 1 1 1 1 1 1 1 1 0 0 1 0 0
0 1 0 0 1 1 1 1 1 0 1 1 1 1 1 0 1 1 1 0 1 1 0 0 1 1 1 1 1 1 0 1 1 1 1 1 1
1 0 1 1 1 1 1 0 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 0 1 0 0 1 0 1 1 1 1 1 0 1 1
0 1 0 1 1 0 1 0 1 1 1 1 1 1 1 1 0 0 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 0 1
1 1 1 1 1 1 0 1 0 1 1 0 1 1 1 1 1 0 0 1 0 1 0 1 1 1 1 1 0 1 1 0 1 0 1 0 0
1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 0 1 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 0 0 0 0 0 0 1]

cancer_names = cancer['feature_names'] ## extract the feature names of the dataset
print('Feature names of the breast_cancer dataset:\n',cancer_names)
#Feature names of the breast_cancer dataset:
['mean radius' 'mean texture' 'mean perimeter' 'mean area'
'mean smoothness' 'mean compactness' 'mean concavity'
'mean concave points' 'mean symmetry' 'mean fractal dimension'
'radius error' 'texture error' 'perimeter error' 'area error'
'smoothness error' 'compactness error' 'concavity error'
'concave points error' 'symmetry error' 'fractal dimension error'
'worst radius' 'worst texture' 'worst perimeter' 'worst area'
'worst smoothness' 'worst compactness' 'worst concavity'
'worst concave points' 'worst symmetry' 'worst fractal dimension']

cancer_desc = cancer['DESCR'] ## extract the description of the dataset
print('Description of the breast_cancer dataset:\n',cancer_desc)
#Description of the breast_cancer dataset:
.. _breast_cancer_dataset:

Breast cancer wisconsin (diagnostic) dataset
--------------------------------------------

**Data Set Characteristics:**

:Number of Instances: 569

:Number of Attributes: 30 numeric, predictive attributes and the class

:Attribute Information:
- radius (mean of distances from center to points on the perimeter)
- texture (standard deviation of gray-scale values)
- perimeter
- area
- smoothness (local variation in radius lengths)
- compactness (perimeter^2 / area - 1.0)
- concavity (severity of concave portions of the contour)
- concave points (number of concave portions of the contour)
- symmetry
- fractal dimension ("coastline approximation" - 1)

The mean, standard error, and "worst" or largest (mean of the three
largest values) of these features were computed for each image,
resulting in 30 features. For instance, field 3 is Mean Radius, field
13 is Radius SE, field 23 is Worst Radius.

- class:
- WDBC-Malignant
- WDBC-Benign

:Summary Statistics:

===================================== ====== ======
Min Max
===================================== ====== ======
radius (mean): 6.981 28.11
texture (mean): 9.71 39.28
perimeter (mean): 43.79 188.5
area (mean): 143.5 2501.0
smoothness (mean): 0.053 0.163
compactness (mean): 0.019 0.345
concavity (mean): 0.0 0.427
concave points (mean): 0.0 0.201
symmetry (mean): 0.106 0.304
fractal dimension (mean): 0.05 0.097
radius (standard error): 0.112 2.873
texture (standard error): 0.36 4.885
perimeter (standard error): 0.757 21.98
area (standard error): 6.802 542.2
smoothness (standard error): 0.002 0.031
compactness (standard error): 0.002 0.135
concavity (standard error): 0.0 0.396
concave points (standard error): 0.0 0.053
symmetry (standard error): 0.008 0.079
fractal dimension (standard error): 0.001 0.03
radius (worst): 7.93 36.04
texture (worst): 12.02 49.54
perimeter (worst): 50.41 251.2
area (worst): 185.2 4254.0
smoothness (worst): 0.071 0.223
compactness (worst): 0.027 1.058
concavity (worst): 0.0 1.252
concave points (worst): 0.0 0.291
symmetry (worst): 0.156 0.664
fractal dimension (worst): 0.055 0.208
===================================== ====== ======

:Missing Attribute Values: None

:Class Distribution: 212 - Malignant, 357 - Benign

:Creator: Dr. William H. Wolberg, W. Nick Street, Olvi L. Mangasarian

:Donor: Nick Street

:Date: November, 1995

This is a copy of UCI ML Breast Cancer Wisconsin (Diagnostic) datasets.
https://goo.gl/U2Uwz2

Features are computed from a digitized image of a fine needle
aspirate (FNA) of a breast mass. They describe
characteristics of the cell nuclei present in the image.

Separating plane described above was obtained using
Multisurface Method-Tree (MSM-T) [K. P. Bennett, "Decision Tree
Construction Via Linear Programming." Proceedings of the 4th
Midwest Artificial Intelligence and Cognitive Science Society,
pp. 97-101, 1992], a classification method which uses linear
programming to construct a decision tree. Relevant features
were selected using an exhaustive search in the space of 1-4
features and 1-3 separating planes.

The actual linear program used to obtain the separating plane
in the 3-dimensional space is that described in:
[K. P. Bennett and O. L. Mangasarian: "Robust Linear
Programming Discrimination of Two Linearly Inseparable Sets",
Optimization Methods and Software 1, 1992, 23-34].

This database is also available through the UW CS ftp server:

ftp ftp.cs.wisc.edu
cd math-prog/cpo-dataset/machine-learn/WDBC/

.. topic:: References

- W.N. Street, W.H. Wolberg and O.L. Mangasarian. Nuclear feature extraction
for breast tumor diagnosis. IS&T/SPIE 1993 International Symposium on
Electronic Imaging: Science and Technology, volume 1905, pages 861-870,
San Jose, CA, 1993.
- O.L. Mangasarian, W.N. Street and W.H. Wolberg. Breast cancer diagnosis and
prognosis via linear programming. Operations Research, 43(4), pages 570-577,
July-August 1995.
- W.H. Wolberg, W.N. Street, and O.L. Mangasarian. Machine learning techniques
to diagnose breast cancer from fine-needle aspirates. Cancer Letters 77 (1994)
163-171.

2. Splitting the Dataset into Training and Test Sets

In data analysis, to ensure that a model performs as expected in a real system, the samples are generally split into three independent parts (a two-step sketch of such a split follows this list):
Training set (50%): used to fit the model
Validation set (25%): used to choose the network structure or the parameters that control model complexity
Test set (25%): used to assess the performance of the final model
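As a minimal sketch of such a three-way split (applying the 50/25/25 proportions above to the cancer_data and cancer_target arrays loaded earlier), two consecutive calls to train_test_split, which is introduced formally below, will do:

from sklearn.model_selection import train_test_split

# First cut: 50% training, 50% temporary holdout
x_train, x_tmp, y_train, y_tmp = train_test_split(
    cancer_data, cancer_target, test_size=0.5, random_state=42)
# Second cut: split the holdout in half, giving 25% validation and 25% test
x_val, x_test, y_val, y_test = train_test_split(
    x_tmp, y_tmp, test_size=0.5, random_state=42)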

When the total amount of data is small, such a split is no longer appropriate. A common alternative is to set aside a small portion as the test set and apply K-fold cross-validation to the remaining N samples: shuffle the samples and divide them evenly into K folds; in turn, train on K-1 folds and validate on the remaining fold, computing the sum of squared prediction errors; finally take the mean of the K error sums as the criterion for choosing the best model structure.
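A minimal sketch of this K-fold procedure using sklearn's KFold class; the choices of K=5, the boston data, and the LinearRegression model are illustrative assumptions, not part of the original text:

import numpy as np
from sklearn.datasets import load_boston
from sklearn.model_selection import KFold
from sklearn.linear_model import LinearRegression

X, y = load_boston(return_X_y=True)
kf = KFold(n_splits=5, shuffle=True, random_state=42) # shuffle, then split into 5 folds
sse = []
for train_idx, val_idx in kf.split(X):
    # train on 4 folds, validate on the held-out fold
    model = LinearRegression().fit(X[train_idx], y[train_idx])
    pred = model.predict(X[val_idx])
    sse.append(np.sum((pred - y[val_idx]) ** 2)) # sum of squared prediction errors
# the mean of the K error sums is the model-selection criterion described above
print('Mean SSE over 5 folds:', np.mean(sse))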

sklearn.model_selection.train_test_split(*arrays,**options)

Its parameters:

*arrays: one or more datasets to be split. For classification or regression, pass the data and the labels; for clustering, pass only the data.

test_size: the size of the test set. A float must lie between 0 and 1 and gives the test set's proportion of the total; an int gives the absolute number of test records. Only one of test_size and train_size needs to be passed.

train_size: the same as test_size, but for the training set.

random_state: an int; the random seed. The same seed always produces the same random split.

shuffle: a boolean; whether to shuffle the data before splitting. If shuffle=False, stratify must be None.

stratify: an array or None. If not None, the split is stratified, using the passed array as class labels.

print('Shape of the original data:',cancer_data.shape)
print('Shape of the original labels:',cancer_target.shape)
Shape of the original data: (569, 30)
Shape of the original labels: (569,)

from sklearn.model_selection import train_test_split

cancer_data_train,cancer_data_test,cancer_target_train,cancer_target_test = \
    train_test_split(cancer_data,cancer_target,test_size=0.2,random_state=42)
print('Shape of the training data:',cancer_data_train.shape)
print('Shape of the training labels:',cancer_target_train.shape)
print('Shape of the test data:',cancer_data_test.shape)
print('Shape of the test labels:',cancer_target_test.shape)
Shape of the training data: (455, 30)
Shape of the training labels: (455,)
Shape of the test data: (114, 30)
Shape of the test labels: (114,)

The function splits each array passed in into a training part and a test part. If one array is passed in, it returns that array's randomly split training and test sets, two arrays in total; if two arrays are passed in, it returns a training and a test split for each, four arrays in total. train_test_split is only the most commonly used splitting method; the model_selection module contains other splitters, such as PredefinedSplit and ShuffleSplit, one of which is sketched below.
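As a small illustration of one of those alternatives (a sketch; n_splits=3 is an arbitrary choice), ShuffleSplit draws several independent random train/test splits instead of a single one:

from sklearn.model_selection import ShuffleSplit

ss = ShuffleSplit(n_splits=3, test_size=0.2, random_state=42)
for train_idx, test_idx in ss.split(cancer_data):
    # each iteration is an independent random 80/20 split of the row indices
    print('train size:', len(train_idx), 'test size:', len(test_idx))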

3. Preprocessing and Dimensionality Reduction with sklearn Transformers

sklearn wraps this functionality in transformers. A transformer has three main methods: fit, transform, and fit_transform:

import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Learn the scaling rule from the training set
Scaler = MinMaxScaler().fit(cancer_data_train)
# Apply the rule to the training set
cancer_trainScaler = Scaler.transform(cancer_data_train)
# Apply the rule to the test set
cancer_testScaler = Scaler.transform(cancer_data_test)

print('Minimum of the training data before min-max scaling:',cancer_data_train.min())
print('Minimum of the training data after min-max scaling:',np.min(cancer_trainScaler))
print('Maximum of the training data before min-max scaling:',np.max(cancer_data_train))
print('Maximum of the training data after min-max scaling:',np.max(cancer_trainScaler))
print('Minimum of the test data before min-max scaling:',np.min(cancer_data_test))
print('Minimum of the test data after min-max scaling:',np.min(cancer_testScaler))
print('Maximum of the test data before min-max scaling:',np.max(cancer_data_test))
print('Maximum of the test data after min-max scaling:',np.max(cancer_testScaler))
Minimum of the training data before min-max scaling: 0.0
Minimum of the training data after min-max scaling: 0.0
Maximum of the training data before min-max scaling: 4254.0
Maximum of the training data after min-max scaling: 1.0000000000000002
Minimum of the test data before min-max scaling: 0.0
Minimum of the test data after min-max scaling: -0.057127602776294695
Maximum of the test data before min-max scaling: 3432.0
Maximum of the test data after min-max scaling: 1.3264399566986453

With sklearn, a NumPy array passed in can be standardized, normalized, binarized, or reduced with PCA. As with the pandas-based standardization covered earlier, in everyday data analysis every feature-processing operation must be performed separately on the training and test sets, applying the rules and weights learned from the training set to the test set; doing this with pandas is cumbersome, while sklearn transformers make it straightforward.
Besides the min-max scaler MinMaxScaler demonstrated above, the preprocessing module provides a series of further preprocessing functions:
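A minimal sketch of three commonly used ones (the selection and the Binarizer threshold are illustrative choices; note that fit_transform is shorthand for fit followed by transform on the same data):

from sklearn.preprocessing import StandardScaler, Normalizer, Binarizer

# Standardization: zero mean and unit variance for each feature
cancer_train_std = StandardScaler().fit_transform(cancer_data_train)
# Normalization: rescale each sample (row) to unit norm
cancer_train_norm = Normalizer().fit_transform(cancer_data_train)
# Binarization: map each value to 0 or 1 around a threshold
cancer_train_bin = Binarizer(threshold=10.0).fit_transform(cancer_data_train)

In practice the rule would still be fit on the training set and then applied to the test set with transform, exactly as in the MinMaxScaler example above.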

PCA dimensionality reduction:

sklearn.decomposition.PCA(n_components=None, *, copy=True, whiten=False, svd_solver='auto', tol=0.0, iterated_power='auto', random_state=None)

from sklearn.decomposition import PCA
# Learn the projection rule from the scaled training set
pca_model = PCA(n_components=10).fit(cancer_trainScaler)
# Apply the rule to the training set
cancer_trainPca = pca_model.transform(cancer_trainScaler)
# Apply the rule to the test set
cancer_testPca = pca_model.transform(cancer_testScaler)

print('Shape of the training data before PCA:',cancer_trainScaler.shape)
print('Shape of the training data after PCA:',cancer_trainPca.shape)
print('Shape of the test data before PCA:',cancer_testScaler.shape)
print('Shape of the test data after PCA:',cancer_testPca.shape)
Shape of the training data before PCA: (455, 30)
Shape of the training data after PCA: (455, 10)
Shape of the test data before PCA: (114, 30)
Shape of the test data after PCA: (114, 10)
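To check how much information the 10 retained components keep, one can inspect the fitted model's explained_variance_ratio_ attribute; a small sketch:

import numpy as np

# fraction of the total variance captured by each principal component
print('Variance ratio per component:', pca_model.explained_variance_ratio_)
print('Total variance retained:', np.sum(pca_model.explained_variance_ratio_))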

II. Building and Evaluating Clustering Models

Cluster analysis is a way of grouping samples by similarity when no class labels are given in advance.

1. Building a Clustering Model with sklearn Estimators

The input to clustering is a set of unlabeled samples, which are divided into groups according to the distance or similarity between them. The guiding principle: minimize the distance within groups and maximize the distance between groups.

sklearn's cluster module provides the commonly used clustering algorithms.

Implementing a clustering algorithm requires an sklearn estimator, which has two methods, fit and predict:

fit: mainly used to train the algorithm. It accepts a training set together with its labels for supervised learning, or the data alone for unsupervised learning.

predict: used to predict the labels of a supervised test set; it can also assign the data passed in to clusters.

Using the iris data as an example, build a K-Means clustering model with an sklearn estimator:

from sklearn.datasets import load_iris
from sklearn.preprocessing import MinMaxScaler
from sklearn.cluster import KMeans

iris = load_iris() # load the iris dataset
iris_data = iris['data'] # extract the features of the iris dataset
iris_target = iris['target'] # extract the labels of the iris dataset
iris_feature_names = iris['feature_names'] # extract the feature names of the iris dataset

scale = MinMaxScaler().fit(iris_data) # learn the scaling rule from the features
iris_dataScale = scale.transform(iris_data) # apply the rule

kmeans = KMeans(n_clusters=3,random_state=123).fit(iris_dataScale) # build and train the model
print('The fitted K-Means model:\n',kmeans)
#The fitted K-Means model:
KMeans(algorithm='auto', copy_x=True, init='k-means++', max_iter=300,
n_clusters=3, n_init=10, n_jobs=None, precompute_distances='auto',
random_state=123, tol=0.0001, verbose=0)
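The estimator's predict method can then assign a new observation to one of the three clusters; a sketch with a made-up sample (the four feature values are illustrative, not from the original):

import numpy as np

# a hypothetical iris measurement: sepal length, sepal width, petal length, petal width
new_sample = np.array([[5.6, 2.8, 4.9, 2.0]])
# apply the same scaling rule learned above, then predict the cluster
print('Predicted cluster:', kmeans.predict(scale.transform(new_sample)))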

Once clustering is done, the TSNE class in sklearn's manifold module can be used to visualize the high-dimensional data.

import pandas as pd
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# Use TSNE to reduce the data to 2 dimensions
tsne = TSNE(n_components=2,init='random',random_state=177).fit(iris_data)

df = pd.DataFrame(tsne.embedding_) # wrap the embedded data in a DataFrame
df['labels'] = kmeans.labels_ # store the cluster labels in the DataFrame

# Extract the rows belonging to each cluster
df1 = df[df['labels']==0]
df2 = df[df['labels']==1]
df3 = df[df['labels']==2]

# Plot
# Set the figure size
fig = plt.figure(figsize=(9,6))
# Use a different color and marker for each cluster
plt.plot(df1[0],df1[1],'bo',df2[0],df2[1],'r*',df3[0],df3[1],'gD')
# Save the figure
plt.savefig('tmp/clustering_result.png')
# Show it
plt.show()

2. Evaluating Clustering Models

The criterion for evaluating clustering is that objects within a group should be similar to one another while objects in different groups should differ: the greater the within-group similarity and the between-group difference, the better the clustering.

Note:

1. The first four methods require ground-truth labels to assess a clustering algorithm, which makes them more convincing; in practice, when ground truth is available, evaluating a clustering method is equivalent to evaluating a classification algorithm.

2. For every evaluation method other than the silhouette coefficient, a higher score is better regardless of the business scenario, with 1 as the maximum; with the silhouette coefficient, one instead examines how the score evolves with the number of clusters to find the optimal cluster count.

The FMI (Fowlkes-Mallows index) method

from sklearn.datasets import load_iris
from sklearn.metrics import fowlkes_mallows_score
from sklearn.cluster import KMeans

iris = load_iris() # load the iris dataset
iris_data = iris['data'] # extract the features
iris_target = iris['target'] # extract the labels

for i in range(2,7):
    # build and train the model
    kmeans = KMeans(n_clusters=i,random_state=123).fit(iris_data)
    score = fowlkes_mallows_score(iris_target,kmeans.labels_)
    print('FMI score for %d clusters on the iris data: %f' %(i,score))
FMI score for 2 clusters on the iris data: 0.750473
FMI score for 3 clusters on the iris data: 0.820808
FMI score for 4 clusters on the iris data: 0.756593
FMI score for 5 clusters on the iris data: 0.725483
FMI score for 6 clusters on the iris data: 0.614345

The results show that the FMI score is highest when clustering into 3 groups, so the K-Means model is best with 3 clusters.

The silhouette coefficient method

from sklearn.datasets import load_iris
from sklearn.metrics import silhouette_score
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt

iris = load_iris() # load the iris dataset
iris_data = iris['data'] # extract the features

silhouetteScore = []
for i in range(2,15):
    # build and train the model
    kmeans = KMeans(n_clusters=i,random_state=123).fit(iris_data)
    score = silhouette_score(iris_data,kmeans.labels_)
    silhouetteScore.append(score)
plt.figure(figsize=(10,6))
plt.plot(range(2,15),silhouetteScore,linewidth=1.5, linestyle="-")
plt.show()

The plot shows that the average distortion is greatest when the number of clusters is 2-3 and 5-6. Since the iris data itself records the petal and sepal lengths and widths of three iris species, this indirectly confirms that 3 clusters give the best result.

The Calinski-Harabasz index method

from sklearn.datasets import load_iris
from sklearn.metrics import calinski_harabasz_score
from sklearn.cluster import KMeans

iris = load_iris() # load the iris dataset
iris_data = iris['data'] # extract the features

for i in range(2,7):
    # build and train the model
    kmeans = KMeans(n_clusters=i,random_state=123).fit(iris_data)
    score = calinski_harabasz_score(iris_data,kmeans.labels_)
    print('Calinski-Harabasz index for %d clusters on the iris data: %f' %(i,score))
Calinski-Harabasz index for 2 clusters on the iris data: 513.924546
Calinski-Harabasz index for 3 clusters on the iris data: 561.627757
Calinski-Harabasz index for 4 clusters on the iris data: 530.765808
Calinski-Harabasz index for 5 clusters on the iris data: 495.541488
Calinski-Harabasz index for 6 clusters on the iris data: 469.836633

Again, the K-Means model is best with 3 clusters. Taking these evaluation methods together: when ground-truth labels are available, each of them can effectively assess a clustering model; when no ground truth is available, the silhouette coefficient and the Calinski-Harabasz index can be used in combination.

III. Building and Evaluating Classification Models

Classification builds a model that takes a sample's feature values as input and outputs its class, mapping each sample to a predefined category. A classification model is trained on a dataset that already has class labels, so it is supervised learning. In practice, classification algorithms are applied to behavior analysis, object recognition, image detection, and more.

1. Building a Classification Model with sklearn Estimators

Using the breast_cancer dataset as an example, build a support vector machine (SVM) model with an sklearn estimator:

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

cancer = load_breast_cancer()
cancer_data = cancer['data']
cancer_target = cancer['target']
cancer_names = cancer['feature_names']

## Split the data into training and test sets
cancer_data_train,cancer_data_test,cancer_target_train,cancer_target_test = \
train_test_split(cancer_data,cancer_target,test_size = 0.2,random_state = 22)

## Standardize the data
stdScaler = StandardScaler().fit(cancer_data_train) # learn the standardization rule
cancer_trainStd = stdScaler.transform(cancer_data_train) # apply the rule to the training set
cancer_testStd = stdScaler.transform(cancer_data_test) # apply the rule to the test set

## Build the SVM model
svm = SVC().fit(cancer_trainStd,cancer_target_train)
print('The fitted SVM model:\n',svm)
#The fitted SVM model:
SVC(C=1.0, break_ties=False, cache_size=200, class_weight=None, coef0=0.0,
decision_function_shape='ovr', degree=3, gamma='scale', kernel='rbf',
max_iter=-1, probability=False, random_state=None, shrinking=True,
tol=0.001, verbose=False)

## Predict the test set
cancer_target_pred = svm.predict(cancer_testStd)
print('The first 20 predictions:\n',cancer_target_pred[:20])
#The first 20 predictions:
[1 0 0 0 1 1 1 1 1 1 1 1 0 1 1 1 0 0 1 1]
## Count the predictions that match the true labels
true = np.sum(cancer_target_pred == cancer_target_test)
print('Number of correct predictions:', true)
print('Number of wrong predictions:', cancer_target_test.shape[0]-true)
print('Prediction accuracy:', true/cancer_target_test.shape[0])
Number of correct predictions: 111
Number of wrong predictions: 3
Prediction accuracy: 0.9736842105263158

2. Evaluating Classification Models

The accuracy a classification model achieves on the test set does not fully reflect its performance. To judge a predictive model properly, metrics such as precision, recall, the F1 score, and Cohen's kappa must be computed against the true labels.

Using individual evaluation metrics (precision, recall, F1 score, Cohen's kappa)

from sklearn.metrics import accuracy_score,precision_score,recall_score,f1_score,cohen_kappa_score

print('Accuracy of the SVM on breast_cancer:', accuracy_score(cancer_target_test,cancer_target_pred))
print('Precision of the SVM on breast_cancer:', precision_score(cancer_target_test,cancer_target_pred))
print('Recall of the SVM on breast_cancer:', recall_score(cancer_target_test,cancer_target_pred))
print('F1 score of the SVM on breast_cancer:', f1_score(cancer_target_test,cancer_target_pred))
print("Cohen's kappa of the SVM on breast_cancer:", cohen_kappa_score(cancer_target_test,cancer_target_pred))
Accuracy of the SVM on breast_cancer: 0.9736842105263158
Precision of the SVM on breast_cancer: 0.9594594594594594
Recall of the SVM on breast_cancer: 1.0
F1 score of the SVM on breast_cancer: 0.9793103448275862
Cohen's kappa of the SVM on breast_cancer: 0.9432082364662903

Besides single metrics such as precision, the sklearn.metrics module also provides classification_report, a function that prints a complete evaluation report for a classification model:

sklearn.metrics.classification_report(y_true, y_pred, *, labels=None, target_names=None, sample_weight=None, digits=2, output_dict=False, zero_division='warn')

from sklearn.metrics import classification_report

print('Classification report for the SVM on breast_cancer:\n',
      classification_report(cancer_target_test,cancer_target_pred))
#Classification report for the SVM on breast_cancer:
#              precision    recall  f1-score   support
#
#           0       1.00      0.93      0.96        43
#           1       0.96      1.00      0.98        71
#
#    accuracy                           0.97       114
#   macro avg       0.98      0.97      0.97       114
#weighted avg       0.97      0.97      0.97       114

Plotting the ROC curve

from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt

## Compute the x axis (FPR) and y axis (TPR) of the ROC curve
# (the predicted labels are used as scores here, as in the original;
#  continuous scores such as svm.decision_function(cancer_testStd) are more usual)
fpr, tpr, thresholds = roc_curve(cancer_target_test,cancer_target_pred)
# Set up the figure
plt.figure(figsize=(10,6))
plt.xlim(0,1) ## set the x-axis range
plt.ylim(0.0,1.1) ## set the y-axis range
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
x = [0,0.2,0.4,0.6,0.8,1]
y = [0,0.2,0.4,0.6,0.8,1]
# Plot the diagonal reference line and the ROC curve
plt.plot(x,y,linestyle='-.',color='green')
plt.plot(fpr,tpr,linewidth=2, linestyle="-",color='red')
# Show it
plt.show()

Both coordinates of the ROC curve range over [0,1]. In general, the larger the area between the ROC curve and the x axis, the better the model performs. When the ROC curve runs along the dashed diagonal, the model's results are essentially random and the model contributes almost nothing.
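That area can be computed directly with roc_auc_score; a short sketch using the same predicted labels as above (continuous scores would give a smoother estimate):

from sklearn.metrics import roc_auc_score

# area under the ROC curve: 1.0 is perfect, 0.5 is as good as random guessing
print('AUC of the SVM on breast_cancer:', roc_auc_score(cancer_target_test, cancer_target_pred))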

IV. Building and Evaluating Regression Models

Implementing a regression algorithm is similar to implementing a classification algorithm, and the underlying principles differ little. The main difference is that classification labels are discrete while regression labels are continuous. Regression plays a large role in transportation, logistics, social networks, and other fields.

1. Building a Regression Model with sklearn Estimators

In a regression model, the independent and dependent variables are correlated; the values of the independent variables are known, and the value of the dependent variable is what we want to predict. A regression algorithm is implemented in the same two steps as classification:
Learning: fit a regression equation to the training samples
Prediction: plug the test data into the fitted equation to obtain the predicted values.

from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

# Load the boston dataset
boston = load_boston()
# Extract the data
x = boston['data']
y = boston['target']
names = boston['feature_names']

# Split the data into training and test sets
x_train,x_test,y_train,y_test=train_test_split(x,y,test_size=0.2,random_state=125)

# Build the linear regression model
clf = LinearRegression().fit(x_train,y_train)
print('The fitted LinearRegression model:\n',clf)
#The fitted LinearRegression model:
LinearRegression(copy_X=True, fit_intercept=True, n_jobs=None, normalize=False)

# Predict the test set
y_pred = clf.predict(x_test)
print('The first 20 predictions:\n',y_pred[:20])
#The first 20 predictions:
[21.16289134 19.67630366 22.02458756 24.61877465 14.44016461 23.32107187
16.64386997 14.97085403 33.58043891 17.49079058 25.50429987 36.60653092
25.95062329 28.49744469 19.35133847 20.17145783 25.97572083 18.26842082
16.52840639 17.08939063]

Visualizing the regression results

# Visualize the regression results
import matplotlib.pyplot as plt
from matplotlib import rcParams

# Set the default font (SimHei; only needed when labels contain Chinese)
rcParams['font.sans-serif'] = 'SimHei'

# Set up the figure
plt.figure(figsize=(10,6))

# Plot the true values and the predictions
plt.plot(range(y_test.shape[0]),y_test,color='blue',linewidth=1.5,linestyle='-')
plt.plot(range(y_test.shape[0]),y_pred,color='red',linewidth=1.5,linestyle='-.')

# Set the axis limits and the legend
plt.xlim((0,102))
plt.ylim((0,55))
plt.legend(['true values','predicted values'])

# Save the figure
plt.savefig('tmp/regression_result.png')

# Show it
plt.show()

2. Evaluating Regression Models

Evaluating a regression model differs from evaluating a classification model. Although both compare predictions against true values, a regression model's predictions and true values are both continuous, so precision, recall, and the F1 score cannot be used.

Evaluating the regression with explained_variance_score, mean_absolute_error, mean_squared_error, r2_score, and median_absolute_error

from sklearn.metrics import explained_variance_score,mean_absolute_error,\
    mean_squared_error,median_absolute_error,r2_score

print('Mean absolute error of the linear model on the Boston data:', mean_absolute_error(y_test,y_pred))
print('Mean squared error of the linear model on the Boston data:', mean_squared_error(y_test,y_pred))
print('Median absolute error of the linear model on the Boston data:', median_absolute_error(y_test,y_pred))
print('Explained variance of the linear model on the Boston data:', explained_variance_score(y_test,y_pred))
print('R-squared of the linear model on the Boston data:', r2_score(y_test,y_pred))
#Mean absolute error of the linear model on the Boston data: 3.3775517360082032
#Mean squared error of the linear model on the Boston data: 31.15051739031563
#Median absolute error of the linear model on the Boston data: 1.7788996425420773
#Explained variance of the linear model on the Boston data: 0.710547565009666
#R-squared of the linear model on the Boston data: 0.7068961686076838

Original article: https://blog.51cto.com/u_15749390/5577207
