Original link:
http://hubqoaing.github.io/2016/03/03/SparkMLlibClassification
Feel free to follow my blog.
Excerpt:
[BigData-Spark] Classification using Spark. By Boqiang Hu on 03 March 2016 | View on Github
Classification using Spark
Learning note for Machine Learning with Spark.
Also, thanks to Zeppelin: although it is not as user-friendly as RStudio or Jupyter, it makes learning Spark much easier.
1. Data Loading from HDFS
First, download the data from https://www.kaggle.com/c/stumbleupon.
Then upload data to HDFS:
tail -n +2 train.tsv > train_noheader.tsv
hdfs dfs -mkdir hdfs://tanglab1:9000/user/hadoop/stumbleupon
hdfs dfs -put train_noheader.tsv hdfs://tanglab1:9000/user/hadoop/stumbleupon
val rawData = sc.textFile("/user/hadoop/stumbleupon/train_noheader.tsv")
val records = rawData.map(line => line.split("\t"))
records.first()
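records.first() lets you eyeball the raw fields. As a quick sanity check (a sketch; the counts below are the commonly cited figures for this dataset, so verify them locally):

// Sketch: confirm the load. The StumbleUpon training set is usually
// reported as 7,395 rows of 27 tab-separated fields.
println(records.count)         // expected: 7395
println(records.first().size)  // expected: 27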
2. Data Processing
Select the label column (the last column) and the feature columns (the 5th through the second-to-last column). Clean the data, converting missing values ("?") to 0.0. Store each label with its feature vector as an MLlib LabeledPoint.
Since the naive Bayes model does not accept negative input values, also convert negative inputs to 0.0.
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.linalg.Vectors

val data = records.map { r =>
  val trimmed = r.map(_.replaceAll("\"", ""))
  val label = trimmed(r.size - 1).toInt
  val features = trimmed.slice(4, r.size - 1).map(d => if (d == "?") 0.0 else d.toDouble)
  LabeledPoint(label, Vectors.dense(features))
}

val nbData = records.map { r =>
  val trimmed = r.map(_.replaceAll("\"", ""))
  val label = trimmed(r.size - 1).toInt
  val features = trimmed.slice(4, r.size - 1)
    .map(d => if (d == "?") 0.0 else d.toDouble)
    .map(d => if (d < 0.0) 0.0 else d)
  LabeledPoint(label, Vectors.dense(features))
}

data.cache
data.count
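Before training, it is worth peeking at one parsed point to confirm the conversion (a sketch; with 27 input columns, the slice above should yield 22 numeric features):

// Sketch: inspect the first LabeledPoint produced above.
val firstPoint = data.first
println(firstPoint.label)          // 0 or 1 (the evergreen label)
println(firstPoint.features.size)  // expected: 22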
3. Model training
Import the required modules, then define the parameters required by the models.
import org.apache.spark.mllib.classification.LogisticRegressionWithSGD
import org.apache.spark.mllib.classification.SVMWithSGD
import org.apache.spark.mllib.classification.NaiveBayes
import org.apache.spark.mllib.tree.DecisionTree
import org.apache.spark.mllib.tree.configuration.Algo
import org.apache.spark.mllib.tree.impurity.Entropy

val numIterations = 10
val maxTreeDepth = 5
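As a preview of how these imports and parameters are used (a sketch following the standard MLlib 1.x entry points; logistic regression itself is covered next in 3.1):

// Sketch: training calls for the other imported classifiers.
val svmModel = SVMWithSGD.train(data, numIterations)
val nbModel = NaiveBayes.train(nbData)  // naive Bayes needs the non-negative nbData
val dtModel = DecisionTree.train(data, Algo.Classification, Entropy, maxTreeDepth)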
3.1 Training logistic regression
val lrModel = LogisticRegressionWithSGD.train(data, numIterations)
val dataPoint = data.first
val prediction = lrModel.predict(dataPoint.features)
val trueLabel = dataPoint.label
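Checking one prediction against one true label is only a spot check. A rough sketch of overall training-set accuracy (the usual pattern; lrModel applies SGD's default 0.5 threshold):

// Sketch: fraction of training points the logistic regression model gets right.
val lrTotalCorrect = data.map { point =>
  if (lrModel.predict(point.features) == point.label) 1 else 0
}.sum
val lrAccuracy = lrTotalCorrect / data.count
println(lrAccuracy)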