I want to include null values in an Apache Spark join. By default, Spark does not match rows whose join keys are null.
Here is the default Spark behavior.
val numbersDf = Seq( ("123"),("456"),(null),("") ).toDF("numbers") val lettersDf = Seq( ("123","abc"),("456","def"),(null,"zzz"),("","hhh") ).toDF("numbers","letters") val joinedDf = numbersDf.join(lettersDf,Seq("numbers"))
Here is the output of joinedDf.show():
+-------+-------+
|numbers|letters|
+-------+-------+
|    123|    abc|
|    456|    def|
|       |    hhh|
+-------+-------+
Here is the output I would like:
+-------+-------+
|numbers|letters|
+-------+-------+
|    123|    abc|
|    456|    def|
|       |    hhh|
|   null|    zzz|
+-------+-------+
Solution
Spark provides a special NULL-safe equality operator:
numbersDf
  .join(lettersDf, numbersDf("numbers") <=> lettersDf("numbers"))
  .drop(lettersDf("numbers"))
+-------+-------+
|numbers|letters|
+-------+-------+
|    123|    abc|
|    456|    def|
|   null|    zzz|
|       |    hhh|
+-------+-------+
Be careful not to use it with Spark 1.5 or earlier. Prior to Spark 1.6 it required a Cartesian product (SPARK-11111 – Fast null-safe join).
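If you want to check which plan your Spark version actually chooses for the null-safe join, you can print the physical plan; a minimal sketch using the standard explain() call (the exact join strategy shown will depend on your version and configuration):

// Inspect the physical plan of the null-safe join; on Spark 1.6+ this should
// show a regular equi-join strategy rather than a Cartesian product.
numbersDf
  .join(lettersDf, numbersDf("numbers") <=> lettersDf("numbers"))
  .explain()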
With Spark 2.3.0 or later you can use Column.eqNullSafe in PySpark:
numbers_df = sc.parallelize([
    ("123", ), ("456", ), (None, ), ("", )
]).toDF(["numbers"])

letters_df = sc.parallelize([
    ("123", "abc"), ("456", "def"), (None, "zzz"), ("", "hhh")
]).toDF(["numbers", "letters"])

numbers_df.join(letters_df, numbers_df.numbers.eqNullSafe(letters_df.numbers))
+-------+-------+-------+
|numbers|numbers|letters|
+-------+-------+-------+
|    456|    456|    def|
|   null|   null|    zzz|
|       |       |    hhh|
|    123|    123|    abc|
+-------+-------+-------+
and %<=>% in SparkR:
numbers_df <- createDataFrame(data.frame(numbers = c("123","456",NA,""))) letters_df <- createDataFrame(data.frame( numbers = c("123",""),letters = c("abc","def","zzz","hhh") )) head(join(numbers_df,letters_df,numbers_df$numbers %<=>% letters_df$numbers))
  numbers numbers letters
1     456     456     def
2    <NA>    <NA>     zzz
3                     hhh
4     123     123     abc
With SQL (Spark 2.2.0+) you can use IS NOT DISTINCT FROM:
SELECT * FROM numbers JOIN letters ON numbers.numbers IS NOT DISTINCT FROM letters.numbers
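For this query to see the example data, the DataFrames have to be registered as temporary views first; a minimal sketch, assuming the Scala DataFrames defined above and a SparkSession named spark:

// Register the DataFrames as temporary views so the SQL query can reference them.
numbersDf.createOrReplaceTempView("numbers")
lettersDf.createOrReplaceTempView("letters")

spark.sql(
  "SELECT * FROM numbers JOIN letters " +
  "ON numbers.numbers IS NOT DISTINCT FROM letters.numbers"
).show()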
This can be used with the DataFrame API as well:
numbersDf.alias("numbers") .join(lettersDf.alias("letters")) .where("numbers.numbers IS NOT DISTINCT FROM letters.numbers")