


Fixing a write error when Spark Structured Streaming writes to a Hive table

Author: qq_32457341 · Updated: 2022-08-13 · Programming Languages

Problem

I am currently using Python with Structured Streaming to write into a Hive table, but the job consistently fails for lack of write permission, with the following error:

Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):Permission denied : user=xxx, access="WRITE", inode="/":hdfs:supergroup:drwxr-xr-x

From what I found, when Structured Streaming persists output it needs the checkpointLocation option to be configured; otherwise it falls back to a default location under the root directory, which this user has no permission to write, hence the error.
Here is the relevant passage I found:

This scenario is ideal for long-term persistence of output. Unlike memory and console sinks, files and directories are fault-tolerant. As such, this option requires a checkpoint directory, where state is maintained for fault-tolerance.

Source of the quoted passage: https://databricks.com/blog/2017/04/04/real-time-end-to-end-integration-with-apache-kafka-in-apache-sparks-structured-streaming.html
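
For reference, here is a minimal sketch of how checkpointLocation can be set explicitly when streaming into a Hive table with PySpark. The source, topic, table, and path names are placeholders (not from the original post), and it assumes Spark 3.1+ with Hive support enabled:

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("stream-to-hive-demo")   # hypothetical application name
    .enableHiveSupport()              # needed to write into Hive tables
    .getOrCreate()
)

# Hypothetical streaming source; the original post does not show its input.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "events")                      # placeholder topic
    .load()
)

query = (
    events.selectExpr("CAST(value AS STRING) AS value")
    .writeStream
    .outputMode("append")
    # Explicit checkpoint directory under a path the streaming user owns;
    # without it the query may try to keep its checkpoint/state under a
    # location (such as "/") that the user cannot write, which produces the
    # AccessControlException shown above.
    .option("checkpointLocation", "/user/xxx/checkpoints/events")
    .toTable("demo_db.events_sink")   # placeholder Hive table (Spark 3.1+)
)

query.awaitTermination()

Setting spark.sql.streaming.checkpointLocation in the session configuration gives a default checkpoint root for all queries; a per-query checkpointLocation, as above, is usually preferred so each query keeps its own state directory.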

Original article: https://blog.csdn.net/qq_32457341/article/details/125514822
