2016-03-03
Original link:
http://hubqoaing.github.io/2016/03/03/SparkMLlibClassification

You are welcome to follow my blog.

Excerpt:

[BigData-Spark] Classification using Spark. By Boqiang Hu on 03 March 2016 | View on GitHub

Classification using Spark

Learning notes for Machine Learning with Spark.

Besides, thanks to Zeppelin. Although it is not as user-friendly as RStudio or Jupyter, it really makes learning Spark much easier.

1. Data Loading from HDFS

First, download the data from https://www.kaggle.com/c/stumbleupon.

Then upload data to HDFS:

tail -n +2 train.tsv > train_noheader.tsv
hdfs dfs -mkdir hdfs://tanglab1:9000/user/hadoop/stumbleupon
hdfs dfs -put train_noheader.tsv hdfs://tanglab1:9000/user/hadoop/stumbleupon
Then load the data in Spark:

val rawData = sc.textFile("/user/hadoop/stumbleupon/train_noheader.tsv")
val records = rawData.map(line => line.split("\t"))
records.first()
2. Data Processing

Select the label column (the last column) and the feature columns (column 5 through the second-to-last column). Clean the data, converting NA values ("?") to 0.0, and store each label together with its feature vector as an MLlib LabeledPoint.

Since the naive Bayes model does not accept negative input values, additionally convert negative values to 0.0.

import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.linalg.Vectors

val data = records.map { r =>
    val trimmed = r.map(_.replaceAll("\"", ""))
    val label = trimmed(r.size - 1).toInt
    val features = trimmed.slice(4, r.size - 1).map(d =>
        if (d == "?") 0.0 else d.toDouble)
    LabeledPoint(label, Vectors.dense(features))
}

val nbData = records.map { r =>
    val trimmed = r.map(_.replaceAll("\"", ""))
    val label = trimmed(r.size - 1).toInt
    val features = trimmed.slice(4, r.size - 1).map(d =>
        if (d == "?") 0.0 else d.toDouble).map(d => if (d < 0.0) 0.0 else d)
    LabeledPoint(label, Vectors.dense(features))
}

data.cache
data.count
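As a standalone illustration of the cleaning logic above, the per-row transformation can be sketched without Spark. The helper name and the short sample row are hypothetical (the real StumbleUpon data has many more columns), but the "?" handling and the negative-value clamping for naive Bayes follow the code above:

```scala
// A minimal, hypothetical sketch of the per-row cleaning step.
object CleaningSketch {
  // Strip quotes, map "?" to 0.0, and optionally clamp
  // negative values to 0.0 (needed for naive Bayes).
  def cleanFeatures(row: Array[String], forNaiveBayes: Boolean = false): Array[Double] = {
    val trimmed = row.map(_.replaceAll("\"", ""))
    val features = trimmed.slice(4, row.length - 1).map(d =>
      if (d == "?") 0.0 else d.toDouble)
    if (forNaiveBayes) features.map(d => if (d < 0.0) 0.0 else d)
    else features
  }

  def main(args: Array[String]): Unit = {
    // Hypothetical 8-column row: 4 metadata columns, 3 features, 1 label.
    val row = Array("url", "id", "boilerplate", "category", "0.5", "?", "-1.2", "1")
    assert(cleanFeatures(row).sameElements(Array(0.5, 0.0, -1.2)))
    assert(cleanFeatures(row, forNaiveBayes = true).sameElements(Array(0.5, 0.0, 0.0)))
    println("ok")
  }
}
```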
3. Model training

Import the required modules, then define the parameters the models need.

import org.apache.spark.mllib.classification.LogisticRegressionWithSGD
import org.apache.spark.mllib.classification.SVMWithSGD
import org.apache.spark.mllib.classification.NaiveBayes
import org.apache.spark.mllib.tree.DecisionTree
import org.apache.spark.mllib.tree.configuration.Algo
import org.apache.spark.mllib.tree.impurity.Entropy

val numIterations = 10
val maxTreeDepth = 5
3.1 Training logistic regression

val lrModel = LogisticRegressionWithSGD.train(data, numIterations)
val dataPoint = data.first
val prediction = lrModel.predict(dataPoint.features)
val trueLabel = dataPoint.label
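Predicting a single point only exercises the API; to judge the model you would compare predictions against true labels across the whole set. A minimal sketch of that accuracy computation, using a plain-Scala helper (hypothetical name, plain sequences standing in for the RDD of predictions and labels):

```scala
// Hypothetical helper: fraction of predictions that match the true labels.
// In Spark this corresponds to
//   data.map(p => if (lrModel.predict(p.features) == p.label) 1 else 0).sum / data.count
object AccuracySketch {
  def accuracy(predicted: Seq[Double], actual: Seq[Double]): Double = {
    require(predicted.length == actual.length, "sequences must align")
    val correct = predicted.zip(actual).count { case (p, a) => p == a }
    correct.toDouble / actual.length
  }

  def main(args: Array[String]): Unit = {
    // Three of four predictions agree with the labels -> accuracy 0.75.
    assert(accuracy(Seq(1.0, 0.0, 1.0, 1.0), Seq(1.0, 0.0, 0.0, 1.0)) == 0.75)
    println("ok")
  }
}
```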



