0778-7.0.3-How to Implement Your First Spark Example in CDP

2020-06-04 10:03:13

Purpose of the Document

This article mainly describes how to develop a Spark program on CDP 7.0.3.

Create a New Maven Project in IntelliJ IDEA

Add Dependencies to the POM File

<properties>
    <maven.compiler.source>1.5</maven.compiler.source>
    <maven.compiler.target>1.5</maven.compiler.target>
    <encoding>UTF-8</encoding>
    <scala.version>2.11.8</scala.version>
    <spark.version>2.4.0</spark.version>
    <hadoop.version>3.1.1</hadoop.version>
</properties>

<dependencies>

    <!--scala-->
    <dependency>
        <groupId>org.scala-lang</groupId>
        <artifactId>scala-library</artifactId>
        <version>${scala.version}</version>
        <!--
        <scope>provided</scope>
        -->
    </dependency>

    <dependency>
        <groupId>org.scala-lang</groupId>
        <artifactId>scala-xml</artifactId>
        <version>2.11.0-M4</version>
    </dependency>

    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.11</artifactId>
        <version>${spark.version}</version>
    </dependency>

    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs</artifactId>
        <version>${hadoop.version}</version>
    </dependency>

    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>${hadoop.version}</version>
    </dependency>

    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <version>5.1.38</version>
    </dependency>


</dependencies>

<build>
    <sourceDirectory>src/main/scala</sourceDirectory>
    <testSourceDirectory>src/test/scala</testSourceDirectory>
    <plugins>
        <plugin>
            <groupId>org.scala-tools</groupId>
            <artifactId>maven-scala-plugin</artifactId>
            <version>2.15.0</version>
            <executions>
                <execution>
                    <goals>
                        <goal>compile</goal>
                        <goal>testCompile</goal>
                    </goals>
                    <configuration>
                        <args>
                            <arg>-dependencyfile</arg>
                            <arg>${project.build.directory}/.scala_dependencies</arg>
                        </args>
                    </configuration>
                </execution>
            </executions>
        </plugin>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-surefire-plugin</artifactId>
            <version>2.6</version>
            <configuration>
                <useFile>false</useFile>
                <disableXmlReport>true</disableXmlReport>
                <!-- If you have classpath issue like NoDefClassError,... -->
                <!-- useManifestOnlyJar>false</useManifestOnlyJar -->
                <includes>
                    <include>**/*Test.*</include>
                    <include>**/*Suite.*</include>
                </includes>
            </configuration>
        </plugin>

        <plugin>
            <artifactId>maven-assembly-plugin</artifactId>
            <configuration>
                <archive>
                    <manifest>
                        <mainClass></mainClass>
                    </manifest>
                </archive>
                <descriptorRefs>
                    <descriptorRef>jar-with-dependencies</descriptorRef>
                </descriptorRefs>
            </configuration>
        </plugin>
    </plugins>
</build>
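
A side note on scopes: the commented-out <scope>provided</scope> above hints at a common practice. The cluster already ships the Scala library, Spark and Hadoop classes at runtime, so those dependencies can be marked provided to keep the assembled jar small. A sketch of what that could look like for spark-core (only for cluster runs; running locally from the IDE needs the default compile scope):

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.11</artifactId>
    <version>${spark.version}</version>
    <!-- provided by the cluster at runtime, only needed for compilation -->
    <scope>provided</scope>
</dependency>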

Add the Scala Source Directories under main and test

First use New -> Directory to create src/main/scala and src/test/scala, then use Mark Directory as -> Sources Root.
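
After this step the project layout should look roughly like the following sketch (sparkdemo as the project name is inferred from the jar built later, and WordCount.scala from the code below):

sparkdemo
├── pom.xml
└── src
    ├── main
    │   └── scala
    │       └── com
    │           └── WordCount.scala
    └── test
        └── scala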

添加Scala代码

Create a new Scala object. The sample code below simply reads a file from HDFS, performs a word count, and writes the result back to HDFS:

package com

import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      //      .setMaster("local[2]")
      .setAppName("WordCount")
    val sc = new SparkContext(conf)
    // Read the input file from HDFS, split each line into words, and count each word
    val wordCount = sc.textFile("hdfs://cdh2.macro.com:8020/tmp/The_Man_of_Property.txt")
      .flatMap(line => line.split(" "))
      .map(word => (word, 1))
      .reduceByKey(_ + _)
    // Write the (word, count) pairs back to HDFS as a SequenceFile
    wordCount.saveAsSequenceFile("hdfs://cdh2.macro.com:8020/user/shengwen/output")
    sc.stop()
  }
}
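
For a quick local test you can uncomment setMaster("local[2]") above. To check the result after a run, a minimal sketch like the following reads the SequenceFile back from the same output path, for example in spark-shell where sc is already defined:

// Read the (word, count) pairs back from the output SequenceFile and print a sample.
// Assumes the same output path as in WordCount above.
val counts = sc.sequenceFile[String, Int]("hdfs://cdh2.macro.com:8020/user/shengwen/output")
counts.take(10).foreach { case (word, count) => println(s"$word -> $count") }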

Package with Maven and Upload

Run the following mvn command in the project directory to package the project:

mvn assembly:assembly

The jar file is generated in the target directory.

Upload sparkdemo-1.0-SNAPSHOT.jar to the server.
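
For example with scp (the root account is an assumption, and cdh2.macro.com is taken from the HDFS URI above; use whichever node you actually submit jobs from):

scp target/sparkdemo-1.0-SNAPSHOT.jar root@cdh2.macro.com:/data/sparkdemo/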

Run the Spark Job

Submit the job to YARN with spark-submit:

spark-submit --master yarn --deploy-mode cluster --class com.WordCount /data/sparkdemo/sparkdemo-1.0-SNAPSHOT.jar

The job runs successfully and the output files are generated in the specified HDFS directory.
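
A quick way to verify, assuming the output path used in the code above (part-* is the default Spark output file naming):

hdfs dfs -ls /user/shengwen/output
hdfs dfs -text /user/shengwen/output/part-*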

The completed application is also visible on the YARN web UI.
