SELECT * FROM db_name."table_name$partitions" ORDER BY column_name DESC
https://docs.aws.amazon.com/ko_kr/athena/latest/ug/show-partitions.html (SHOW PARTITIONS - Amazon Athena)
https://github.com/awsdocs/amazon-athena-user-guide/pull/89 (feat: add db_name by seunggabi, #88)
variables = ['first', 'second', 'third']

def run_dag_task(variable):
    task = dag_task(variable)
    return task

task_arr = []
task_arr.append(run_dag_task(variables[0]))  # was variable[0]: a typo, the list is `variables`
for variable in variables[1:]:
    task = run_dag_task(variable)
    task_arr[-1] >> task
    task_arr.append(task)

https://stackoverflow.com/questions/70002086/how-to-run-tasks-sequentially-in-a-loop-in-an-airflow-dag (How to run tasks sequentially in a loop in an Airflow DAG)
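The loop works because Airflow's `BaseOperator` overloads `>>` to register a downstream dependency. A minimal pure-Python sketch of that wiring (no Airflow installed; `Task` is a hypothetical stand-in for an operator):

```python
class Task:
    """Hypothetical stand-in for an Airflow operator: `>>` records a downstream edge."""
    def __init__(self, name):
        self.name = name
        self.downstream = []

    def __rshift__(self, other):
        self.downstream.append(other)
        return other

variables = ['first', 'second', 'third']
tasks = [Task(v) for v in variables]

# Wire each task to the next one, exactly like `task_arr[-1] >> task` above
for upstream, task in zip(tasks, tasks[1:]):
    upstream >> task

print([t.name for t in tasks[0].downstream])  # ['second']
```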
https://stackoverflow.com/questions/36747268/why-does-conf-setspark-app-name-appname-not-set-the-name-in-the-ui (Why does conf.set("spark.app.name", appName) not set the name in the UI?)
grep -v 'exclude_word' file
egrep -v '(main|master)' file
https://stackoverflow.com/questions/4538253/how-can-i-exclude-one-word-with-grep (How can I exclude one word with grep?)
https://www.warp.dev/terminus/grep-exclude (How To Exclude Patterns or Files With Grep)
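The same inverted matching can be sketched in Python (an approximation of `grep -v`, not a replacement; the sample lines are made up):

```python
import re

lines = ["main branch", "feature/login", "master copy", "develop"]

# grep -v 'exclude_word': keep lines that do NOT contain the word
no_main = [line for line in lines if "main" not in line]

# egrep -v '(main|master)': keep lines matching neither alternative
pattern = re.compile(r"main|master")
kept = [line for line in lines if not pattern.search(line)]
print(kept)  # ['feature/login', 'develop']
```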
PARTITIONED BY (dt string)
CLUSTERED BY (user_key) SORTED BY (user_key ASC) INTO 256 BUCKETS

Even with CLUSTERED BY ~ SORTED BY ~ INTO {size} BUCKETS, the Spark SQL plan's partitioning work is unaffected. The high cost seems to come from the large size of the data being loaded; going forward, it could be optimized by merging small files.
https://sparkbyexamples.com/apache-hive/hive-partitioning-vs-bucketing-with-examples/ (Hive Partitioning vs Bucketing)
scp -i keypair-asdf.pem -r hadoop@asdf:~/tez.tar.gz .
scp -i keypair-qwer.pem -r tez.tar.gz hadoop@qwer-emr:~/tez.tar.gz
https://doheejin.github.io/linux/2021/03/03/linux-scp.html ([Linux] transferring files between local and server with the scp command)
https://stackoverflow.com/questions/51933568/how-to-retrieve-hive-table-partition-location (How to retrieve Hive table Partition Location?)
https://www.projectpro.io/recipes/explain-study-of-spark-query-execution-plans-using-explain (Study of Spark query execution plans using explain())
aws s3 sync . s3://asdf/a/b/c/ --delete

aws s3 sync s3://my-bucket s3://my-other-bucket \
  --exclude 'customers/*' \
  --exclude 'orders/*' \
  --exclude 'reportTemplate/*'

https://stackoverflow.com/questions/32393026/exclude-multiple-folders-using-aws-s3-sync (Exclude multiple folders using AWS S3 sync)
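The `--exclude` flags filter object keys with glob patterns; a rough sketch of how such patterns reject keys, using `fnmatch` (the CLI has its own matching rules, so this is only an approximation, and the keys are made up):

```python
from fnmatch import fnmatch

# Patterns mirror the --exclude flags above; keys are hypothetical.
excludes = ["customers/*", "orders/*", "reportTemplate/*"]
keys = ["customers/1.csv", "orders/2.csv", "inventory/3.csv"]

# Keep only keys that match none of the exclude patterns
synced = [k for k in keys if not any(fnmatch(k, pat) for pat in excludes)]
print(synced)  # ['inventory/3.csv']
```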
--conf spark.driver.maxResultSize=4g
https://wooono.tistory.com/41 ([Spark] spark.driver.maxResultSize error: total size of serialized results is bigger than spark.driver.maxResultSize, typically triggered by collect() pulling distributed RDD data to the driver)
https://stackoverflow.com/questions/27932345/downloading-folders-from-aws-s3-cp-or-sync (Downloading folders from aws s3, cp or sync?)
https://community.cloudera.com/t5/Support-Questions/How-to-set-yarn-application-name-of-hive-job/td-p/185524 (How to set yarn application name of hive job)
hdfs dfsadmin -report
hdfs fsck / -list-corruptfileblocks
hdfs fsck / -delete

fsck checks HDFS for inconsistencies (missing or under-replicated blocks); it reports errors but does not repair them (the NameNode fixes recoverable errors automatically), and it ignores open files.
https://118k.tistory.com/469 ([hadoop][fsck] commands for checking HDFS health)
fun main(args: Array<String>) {
    val array: Array<String> = arrayOf("a", "b", "c", "d", "e")
    val list: List<String> = array.toList()
    list.forEach { println(it) }
}

https://codechacha.com/ko/kotlin-convert-list-to-array/ (Kotlin - converting between Array and List: toList() returns a List, toMutableList() returns a MutableList)
aws s3 ls --summarize --human-readable --recursive s3://bucket-name/
https://serverfault.com/questions/84815/how-can-i-get-the-size-of-an-amazon-s3-bucket (How can I get the size of an Amazon S3 bucket?)
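Without `--human-readable`, each listing line is `date time size key`, so the total can also be computed by summing the third field (a sketch over made-up listing output):

```python
# Hypothetical output of `aws s3 ls --recursive` (sizes are plain byte counts)
listing = """\
2021-01-01 12:00:00      1024 logs/a.gz
2021-01-01 12:00:01      2048 logs/b.gz
"""

# Sum the size column (third whitespace-separated field) per object
total = sum(int(line.split()[2]) for line in listing.splitlines() if line.strip())
print(total)  # 3072
```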
aws emr list-clusters --active | jq -r ".Clusters[].Id" | while read id ; do
  dns=$(aws emr describe-cluster --cluster-id $id | jq -r ".Cluster.MasterPublicDnsName")
  dns=$(echo $dns | sed -r "s/ip-([0-9]+)-([0-9]+)-([0-9]+)-([0-9]+)\.ap-northeast-2\.compute\.internal/\1.\2.\3.\4/g")
  name=$(aws emr describe-cluster --cluster-id $id | jq -r ".Cluster.Name")
  echo $dns $name
done
# sudo vi /etc/hosts..
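The `sed` expression above rewrites an EMR internal hostname into its dotted IP; the same transform in Python (the sample hostname is made up):

```python
import re

# Hypothetical EMR internal DNS name; the pattern mirrors the sed expression above.
dns = "ip-10-0-12-34.ap-northeast-2.compute.internal"
ip = re.sub(
    r"ip-(\d+)-(\d+)-(\d+)-(\d+)\.ap-northeast-2\.compute\.internal",
    r"\1.\2.\3.\4",
    dns,
)
print(ip)  # 10.0.12.34
```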
location.href = document.querySelector('#reload-button')
  .url
  .replace(/ip-(\d+)-(\d+)-(\d+)-(\d+)/, "$1.$2.$3.$4")
  .replace(".ap-northeast-2.compute.internal", "")

https://stackoverflow.com/questions/29989031/getting-the-current-domain-name-in-chrome-when-the-page-fails-to-load (Getting the current domain name in Chrome when the page fails to load)
cmd="rm .gitignore"
echo "$cmd"
eval "$cmd"
https://unix.stackexchange.com/questions/356534/how-to-run-string-with-values-as-a-command-in-bash (How to run string with values as a command in bash?)
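Running a command held in a string has the same log-then-execute pattern in Python; `shlex.split` plus `subprocess.run` avoids `eval`-style word-splitting surprises (a sketch using a harmless `echo` instead of the `rm` above):

```python
import shlex
import subprocess

cmd = "echo hello world"   # harmless stand-in for the rm command above
print(cmd)                 # log the command first, like `echo "$cmd"`

# shlex.split tokenizes the string the way a shell would, without invoking one
result = subprocess.run(shlex.split(cmd), capture_output=True, text=True)
print(result.stdout.strip())  # hello world
```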
- Total
- Today
- Yesterday
- 할인
- 유투브
- wlw
- 테슬라 리퍼럴 코드 혜택
- 연애학개론
- 테슬라 리퍼럴 코드
- 책그림
- 모델 Y 레퍼럴
- 테슬라 레퍼럴
- 테슬라
- 테슬라 크레딧 사용
- 김달
- 인스타그램
- COUNT
- 테슬라 레퍼럴 적용 확인
- 메디파크 내과 전문의 의학박사 김영수
- 팔로워 수 세기
- 모델y
- Kluge
- 레퍼럴
- 어떻게 능력을 보여줄 것인가?
- 테슬라 추천
- 테슬라 레퍼럴 코드 확인
- follower
- Bot
- 개리마커스
- 테슬라 리퍼럴 코드 생성
- 클루지
| Sun | Mon | Tue | Wed | Thu | Fri | Sat |
|-----|-----|-----|-----|-----|-----|-----|
|     | 1   | 2   | 3   | 4   | 5   | 6   |
| 7   | 8   | 9   | 10  | 11  | 12  | 13  |
| 14  | 15  | 16  | 17  | 18  | 19  | 20  |
| 21  | 22  | 23  | 24  | 25  | 26  | 27  |
| 28  | 29  | 30  |     |     |     |     |