2017-04-18
Redshift Data Source for Apache Spark




A library to load data into Spark SQL DataFrames from Amazon Redshift, and to write them back to Redshift tables. Amazon S3 is used to transfer data efficiently in and out of Redshift, and JDBC is used to automatically trigger the appropriate COPY and UNLOAD commands on Redshift.
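
For concreteness, here is a minimal read/write sketch in Scala. The format name and the url, dbtable, and tempdir options follow the upstream spark-redshift documentation; the host, credentials, table names, and S3 paths are placeholders to replace with your own.

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("redshift-example").getOrCreate()

    // Read a Redshift table into a DataFrame; the library UNLOADs to S3 behind the scenes.
    val df = spark.read
      .format("com.databricks.spark.redshift")
      .option("url", "jdbc:redshift://redshifthost:5439/database?user=username&password=pass")
      .option("dbtable", "my_table")
      .option("tempdir", "s3n://path/for/temp/data")
      .load()

    // Write a DataFrame back to Redshift; data is staged in S3, then COPYed into the table.
    df.write
      .format("com.databricks.spark.redshift")
      .option("url", "jdbc:redshift://redshifthost:5439/database?user=username&password=pass")
      .option("dbtable", "my_table_copy")
      .option("tempdir", "s3n://path/for/temp/data")
      .mode("error")
      .save()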

This library is more suited to ETL than interactive queries, since large amounts of data could be extracted to S3 for each query execution. If you plan to perform many queries against the same Redshift tables, then we recommend saving the extracted data in a format such as Parquet.
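
As a sketch of that pattern (continuing the example above; paths are placeholders), the extract can be persisted once and all further queries run against the Parquet copy instead of Redshift:

    // Save the extracted data once...
    df.write.parquet("s3n://path/for/extracted/data")

    // ...then query the Parquet copy rather than hitting Redshift again.
    val cached = spark.read.parquet("s3n://path/for/extracted/data")
    cached.createOrReplaceTempView("my_table_cached")
    spark.sql("SELECT count(*) FROM my_table_cached").show()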

Installation

This library requires Apache Spark 2.0+ and Amazon Redshift 1.0.963+.

For a version that works with Spark 1.x, please check the 1.x branch.

You may use this library in your applications with the following dependency information:

Scala 2.10

groupId: com.databricks
artifactId: spark-redshift_2.10
version: 3.0.0-preview1

Scala 2.11

groupId: com.databricks
artifactId: spark-redshift_2.11
version: 3.0.0-preview1
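
In SBT, for example, the Scala 2.11 coordinates above reduce to a single line (the %% operator appends your build's Scala version to the artifact name):

    libraryDependencies += "com.databricks" %% "spark-redshift" % "3.0.0-preview1"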

You will also need to provide a JDBC driver that is compatible with Redshift. Amazon recommends that you use their driver, which is distributed as a JAR hosted on Amazon's website. This library has also been successfully tested using the Postgres JDBC driver.
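
Once the driver JAR is on the classpath (for example via spark-submit --jars), the data source can be pointed at it explicitly through its jdbcdriver option. A hedged sketch, continuing the earlier example; the driver class name below is an assumption and varies by driver version, so check the documentation of the driver you install:

    val df = spark.read
      .format("com.databricks.spark.redshift")
      .option("url", "jdbc:redshift://redshifthost:5439/database?user=username&password=pass")
      .option("jdbcdriver", "com.amazon.redshift.jdbc41.Driver") // assumption: match your driver JAR
      .option("dbtable", "my_table")
      .option("tempdir", "s3n://path/for/temp/data")
      .load()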

Note on Hadoop versions: This library depends on spark-avro, which should be downloaded automatically because it is declared as a dependency. However, you may need to provide the avro-mapred dependency that matches your Hadoop distribution. In most deployments this dependency is provided by your cluster's Spark assemblies and no additional action is required.
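
If your environment does not provide it, the dependency can be added in build.sbt. A sketch under assumptions: the version and classifier below (1.7.7, hadoop2) are placeholders for a Hadoop 2.x setup and must be matched to your distribution:

    // Only needed if the cluster does not already provide avro-mapred.
    libraryDependencies += "org.apache.avro" % "avro-mapred" % "1.7.7" classifier "hadoop2"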

Note on Amazon SDK dependency: This library declares a provided dependency on components of the AWS Java SDK. In most cases, these libraries will be provided by your deployment environment. However, if you get ClassNotFoundExceptions for Amazon SDK classes, you will need to add explicit dependencies on com.amazonaws.aws-java-sdk-core and com.amazonaws.aws-java-sdk-s3 as part of your build / runtime configuration. See the comments in project/SparkRedshiftBuild.scala for more details.
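
In SBT, those explicit dependencies would look like the following sketch (the SDK version is an assumption; align it with whatever your environment expects):

    // Only needed if Amazon SDK classes are missing at runtime.
    libraryDependencies ++= Seq(
      "com.amazonaws" % "aws-java-sdk-core" % "1.10.22",
      "com.amazonaws" % "aws-java-sdk-s3"   % "1.10.22"
    )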

Snapshot builds

Master snapshot builds of this library are built using jitpack.io. In order to use these snapshots in your build, you'll need to add the JitPack repository to your build file.

  • In Maven:

    <repositories>
      <repository>
        <id>jitpack.io</id>
        <url>https://jitpack.io</url>
      </repository>
    </repositories>

    then

    <dependency>
      <groupId>com.github.databricks</groupId>
      <artifactId>spark-redshift_2.10</artifactId>
      <!-- For Scala 2.11, use spark-redshift_2.11 instead -->
      <version>master-SNAPSHOT</version>
    </dependency>
  • In SBT:

    resolvers += "jitpack" at "https://jitpack.io"

    then

    libraryDependencies += "com.github.databricks" %% "spark-redshift" % "master-SNAPSHOT"


All replies

2017-4-18 07:07:11
Thanks to the OP for sharing!

