Spark: Word Count with the Java API
Published: 2019-06-28


Approach 1: Using reduceByKey

Data file word.txt:

张三
李四
王五
李四
王五
李四
王五
李四
王五
王五
李四
李四
李四
李四
李四

Code:

import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;
import org.apache.spark.rdd.RDD;
import org.apache.spark.sql.SparkSession;
import scala.Tuple2;

public class HelloWord {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().master("local[*]").appName("Spark").getOrCreate();
        final JavaSparkContext ctx = JavaSparkContext.fromSparkContext(spark.sparkContext());

        // Read the file (one word per line) as an RDD of strings
        RDD<String> rdd = spark.sparkContext().textFile("C:\\Users\\boco\\Desktop\\word.txt", 1);
        JavaRDD<String> javaRDD = rdd.toJavaRDD();

        // Map each word to a (word, 1) pair
        JavaPairRDD<String, Integer> javaRDDMap = javaRDD.mapToPair(new PairFunction<String, String, Integer>() {
            public Tuple2<String, Integer> call(String s) {
                return new Tuple2<String, Integer>(s, 1);
            }
        });

        // Sum the counts for each word
        JavaPairRDD<String, Integer> result = javaRDDMap.reduceByKey(new Function2<Integer, Integer, Integer>() {
            @Override
            public Integer call(Integer integer, Integer integer2) throws Exception {
                return integer + integer2;
            }
        });

        System.out.println(result.collect());
    }
}

Output:

[(张三,1), (李四,9), (王五,5)]
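
The anonymous inner classes above can also be written as Java 8 lambdas. Here is a minimal sketch of the same job in that style (the class name HelloWordLambda is illustrative, not from the original code; the file path is the same as above):

import java.util.Arrays;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.SparkSession;
import scala.Tuple2;

public class HelloWordLambda {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().master("local[*]").appName("Spark").getOrCreate();
        JavaSparkContext ctx = JavaSparkContext.fromSparkContext(spark.sparkContext());

        // Read the file, one word per line
        JavaRDD<String> words = ctx.textFile("C:\\Users\\boco\\Desktop\\word.txt");

        // Map each word to (word, 1) and sum the counts per word
        JavaPairRDD<String, Integer> counts = words
                .mapToPair(w -> new Tuple2<>(w, 1))
                .reduceByKey((a, b) -> a + b);

        System.out.println(counts.collect());
        spark.stop();
    }
}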

Approach 2: Using Spark SQL

Implementation with Spark SQL:

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;

import java.util.ArrayList;

public class HelloWord {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().master("local[*]").appName("Spark").getOrCreate();
        final JavaSparkContext ctx = JavaSparkContext.fromSparkContext(spark.sparkContext());

        // Read the file as rows, one word per row
        JavaRDD<Row> rows = spark.read().text("C:\\Users\\boco\\Desktop\\word.txt").toJavaRDD();

        // Build a one-column schema named "key"
        ArrayList<StructField> fields = new ArrayList<StructField>();
        StructField field = DataTypes.createStructField("key", DataTypes.StringType, true);
        fields.add(field);
        StructType schema = DataTypes.createStructType(fields);

        // Register the DataFrame as a temporary view and aggregate with SQL
        Dataset<Row> ds = spark.createDataFrame(rows, schema);
        ds.createOrReplaceTempView("words");
        Dataset<Row> result = spark.sql("select key,count(0) as key_count from words group by key");
        result.show();
    }
}

Result:

+---+---------+
|key|key_count|
+---+---------+
| 王五|        5|
| 李四|        9|
| 张三|        1|
+---+---------+
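
The same aggregation can be expressed without a SQL string by using the Dataset API directly. A minimal sketch (not in the original post), assuming the same SparkSession `spark` and file path as above:

// Read the file as a single-column Dataset (column "value") and rename it to "key"
Dataset<Row> words = spark.read()
        .text("C:\\Users\\boco\\Desktop\\word.txt")
        .withColumnRenamed("value", "key");

// groupBy + count produces the same word frequencies as the SQL query
words.groupBy("key").count().show();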

Approach 3: Real-time stream analysis with Spark Streaming

Reference: http://spark.apache.org/docs/latest/streaming-programming-guide.html

First, we create a JavaStreamingContext object, which is the main entry point for all streaming functionality. We create a local StreamingContext with two execution threads, and a batch interval of 1 second.

import org.apache.spark.*;
import org.apache.spark.api.java.function.*;
import org.apache.spark.streaming.*;
import org.apache.spark.streaming.api.java.*;
import scala.Tuple2;

// Create a local StreamingContext with two working threads and batch interval of 1 second
SparkConf conf = new SparkConf().setMaster("local[2]").setAppName("NetworkWordCount");
JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(1));

Using this context, we can create a DStream that represents streaming data from a TCP source, specified as hostname (e.g. localhost) and port (e.g. 9999).

// Create a DStream that will connect to hostname:port, like localhost:9999
JavaReceiverInputDStream<String> lines = jssc.socketTextStream("localhost", 9999);

This lines DStream represents the stream of data that will be received from the data server. Each record in this stream is a line of text. Then, we want to split the lines by space into words.

// Split each line into words
JavaDStream<String> words = lines.flatMap(x -> Arrays.asList(x.split(" ")).iterator());

flatMap is a DStream operation that creates a new DStream by generating multiple new records from each record in the source DStream. In this case, each line will be split into multiple words and the stream of words is represented as the words DStream. Note that we defined the transformation using a FlatMapFunction object. As we will discover along the way, there are a number of such convenience classes in the Java API that help define DStream transformations.

Next, we want to count these words.

// Count each word in each batch
JavaPairDStream<String, Integer> pairs = words.mapToPair(s -> new Tuple2<>(s, 1));
JavaPairDStream<String, Integer> wordCounts = pairs.reduceByKey((i1, i2) -> i1 + i2);

// Print the first ten elements of each RDD generated in this DStream to the console
wordCounts.print();

The words DStream is further mapped (one-to-one transformation) to a DStream of (word, 1) pairs, using a PairFunction object. Then, it is reduced to get the frequency of words in each batch of data, using a Function2 object. Finally, wordCounts.print() will print a few of the counts generated every second.

Note that when these lines are executed, Spark Streaming only sets up the computation it will perform after it is started, and no real processing has started yet. To start the processing after all the transformations have been set up, we finally call the start method.

jssc.start();              // Start the computation
jssc.awaitTermination();   // Wait for the computation to terminate

The complete code can be found in the Spark Streaming example JavaNetworkWordCount.

If you have already downloaded and built Spark, you can run this example as follows. You will first need to run Netcat (a small utility found in most Unix-like systems) as a data server by using

$ nc -lk 9999

Then, in a different terminal, you can start the example by using

$ ./bin/run-example streaming.JavaNetworkWordCount localhost 9999

Complete code:

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaPairDStream;
import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import scala.Tuple2;

public class HelloWord {
    public static void main(String[] args) throws InterruptedException {
        // Create a local StreamingContext with a batch interval of 60 seconds
        SparkConf conf = new SparkConf().setMaster("local[*]").setAppName("NetworkWordCount");
        JavaSparkContext jsc = new JavaSparkContext(conf);
        jsc.setLogLevel("WARN");
        JavaStreamingContext jssc = new JavaStreamingContext(jsc, Durations.seconds(60));

        // Create a DStream that will connect to hostname:port, like localhost:9999
        JavaReceiverInputDStream<String> lines = jssc.socketTextStream("xx.xx.xx.xx", 19999);

        // Split each line into words
        JavaDStream<String> words = lines.flatMap(x -> Arrays.asList(x.split(" ")).iterator());

        // Count each word in each batch
        JavaPairDStream<String, Integer> pairs = words.mapToPair(s -> new Tuple2<>(s, 1));
        JavaPairDStream<String, Integer> wordCounts = pairs.reduceByKey((i1, i2) -> i1 + i2);

        // Print the first ten elements of each RDD generated in this DStream to the console
        wordCounts.print();

        jssc.start();              // Start the computation
        jssc.awaitTermination();   // Wait for the computation to terminate
    }
}

Test:

[root@abced dx]# nc -lk 19999
hellow wrd
hello word
hello word
hello dkkhl
hello
hello
hello word
hello word
hello java
hello c@
hello hadoop]
hello spark
hello word
hello kafka
hello c
hello c#
hello .net core
net cre
workd
hle
hello words
hke hjh
hek 23
hel 23
hl3 323
hhk 68
hke 84

Program output:

-------------------------------------------
Time: 1533781920000 ms
-------------------------------------------
(c,1)
(spark,1)
(kafka,1)
(c#,1)
(hello,9)
(java,1)
(c@,1)
(hadoop],1)
(word,2)

18/08/09 10:32:05 WARN RandomBlockReplicationPolicy: Expecting 1 replicas with only 0 peer/s.
18/08/09 10:32:05 WARN BlockManager: Block input-0-1533781925200 replicated to only 0 peer(s) instead of 1 peers
18/08/09 10:32:08 WARN RandomBlockReplicationPolicy: Expecting 1 replicas with only 0 peer/s.
18/08/09 10:32:08 WARN BlockManager: Block input-0-1533781928000 replicated to only 0 peer(s) instead of 1 peers
18/08/09 10:32:11 WARN RandomBlockReplicationPolicy: Expecting 1 replicas with only 0 peer/s.
18/08/09 10:32:11 WARN BlockManager: Block input-0-1533781931200 replicated to only 0 peer(s) instead of 1 peers
18/08/09 10:32:14 WARN RandomBlockReplicationPolicy: Expecting 1 replicas with only 0 peer/s.
18/08/09 10:32:14 WARN BlockManager: Block input-0-1533781934600 replicated to only 0 peer(s) instead of 1 peers
-------------------------------------------
Time: 1533781980000 ms
-------------------------------------------
(hle,1)
(words,1)
(.net,1)
(hello,2)
(workd,1)
(cre,1)
(net,1)
(core,1)

18/08/09 10:33:08 WARN RandomBlockReplicationPolicy: Expecting 1 replicas with only 0 peer/s.
18/08/09 10:33:08 WARN BlockManager: Block input-0-1533781988000 replicated to only 0 peer(s) instead of 1 peers
18/08/09 10:33:11 WARN RandomBlockReplicationPolicy: Expecting 1 replicas with only 0 peer/s.
18/08/09 10:33:11 WARN BlockManager: Block input-0-1533781991000 replicated to only 0 peer(s) instead of 1 peers
18/08/09 10:33:14 WARN RandomBlockReplicationPolicy: Expecting 1 replicas with only 0 peer/s.
18/08/09 10:33:14 WARN BlockManager: Block input-0-1533781994200 replicated to only 0 peer(s) instead of 1 peers
18/08/09 10:33:17 WARN RandomBlockReplicationPolicy: Expecting 1 replicas with only 0 peer/s.
18/08/09 10:33:17 WARN BlockManager: Block input-0-1533781997400 replicated to only 0 peer(s) instead of 1 peers
18/08/09 10:33:20 WARN RandomBlockReplicationPolicy: Expecting 1 replicas with only 0 peer/s.
18/08/09 10:33:20 WARN BlockManager: Block input-0-1533782000400 replicated to only 0 peer(s) instead of 1 peers
18/08/09 10:33:25 WARN RandomBlockReplicationPolicy: Expecting 1 replicas with only 0 peer/s.
18/08/09 10:33:25 WARN BlockManager: Block input-0-1533782005600 replicated to only 0 peer(s) instead of 1 peers
-------------------------------------------
Time: 1533782040000 ms
-------------------------------------------
(68,1)
(hhk,1)
(hek,1)
(hel,1)
(84,1)
(hjh,1)
(23,2)
(hke,2)
(323,1)
(hl3,1)

Conclusion: the data is processed batch by batch and the counts are not accumulated. Each batch's statistics do not add to those of earlier batches; they cover only the data received in the current batch.
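
If a running total across batches is needed, Spark Streaming's updateStateByKey can keep per-word state between batches. Below is a minimal sketch (not part of the original post); it assumes the `pairs` DStream and `jssc` context from the complete code above, and the checkpoint path is a placeholder:

import java.util.List;
import org.apache.spark.api.java.Optional;
import org.apache.spark.streaming.api.java.JavaPairDStream;

// Stateful counting requires a checkpoint directory (placeholder path; any writable location works)
jssc.checkpoint("/tmp/spark-checkpoint");

// For each word, add the counts from the current batch to the previous running total
JavaPairDStream<String, Integer> runningCounts = pairs.updateStateByKey(
        (List<Integer> batchValues, Optional<Integer> state) -> {
            int sum = state.orElse(0);
            for (Integer v : batchValues) {
                sum += v;
            }
            return Optional.of(sum);
        });

// Prints the cumulative counts seen so far, not just the current batch
runningCounts.print();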

 

