git remote set-url origin https://github.com/username/repository.git

https://stackoverflow.com/questions/25927914/git-error-please-make-sure-you-have-the-correct-access-rights-and-the-reposito
Git error: "Please make sure you have the correct access rights and the repository exists" (TortoiseGit on Windows)
https://stackoverflow.com/questions/12464636/how-to-set-variables-in-hive-scripts
How to set variables in HIVE scripts (the SQL equivalent of SET varname = value in HiveQL)
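A minimal sketch of the accepted approach, using the `hivevar` namespace (the variable and table names are illustrative):

```sql
SET hivevar:CURRENT_DATE='2012-09-16';
-- Reference the variable with ${hivevar:NAME}; Hive substitutes it before execution.
SELECT * FROM foo WHERE day >= '${hivevar:CURRENT_DATE}';
```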
```python
import tableauserverclient as TSC

SITE_LUID = "..."

auth = TSC.PersonalAccessTokenAuth(
    TABLEAU["TOKEN_NAME"],
    TABLEAU["TOKEN_VALUE"],
)
server = TSC.Server(TABLEAU["SERVER_URL"], use_server_version=True)
server.auth.sign_in(auth)

# Walk every workbook and find the ones connected to the target datasource.
for w in TSC.Pager(server.workbooks):
    server.workbooks.populate_connections(w)
    for c in w.connections:
        if c.datasource_id == SITE_LUID:
            server.workbooks.populate_views(w)  # original snippet is truncated here
```
https://stackoverflow.com/questions/35789412/spark-sql-difference-between-gzip-vs-snappy-vs-lzo-compression-formats
Spark SQL - difference between gzip vs snappy vs lzo compression formats (when writing Parquet files)
https://dojang.io/mod/page/view.php?id=2400
Python Coding Dojang: 38.3 Raising exceptions — so far we have only handled Python's built-in exceptions (division by zero, index out of range); this section raises exceptions directly.

raise BadRequest(f"[Error] name: {name}; not found")
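The `raise` line above presumes a custom exception class; a minimal self-contained sketch (`BadRequest` and `find_user` are illustrative names, not stdlib or framework APIs):

```python
class BadRequest(Exception):
    """Hypothetical application-level error raised when a lookup fails."""

def find_user(name, users):
    if name not in users:
        # Raise our own exception instead of relying on a built-in one.
        raise BadRequest(f"[Error] name: {name}; not found")
    return users[name]

try:
    find_user("bob", {"alice": 1})
except BadRequest as e:
    print(e)
```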
https://www.bangseongbeom.com/sys-path-pythonpath.html
sys.path, PYTHONPATH: Python's module search paths — when importing another Python file, Python searches the paths listed in sys.path and PYTHONPATH to locate it; adjusting these lets you import from arbitrary directories.
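A minimal sketch of extending the search path at runtime (the directory is hypothetical; `PYTHONPATH` entries are merged into `sys.path` at interpreter startup):

```python
import sys

# Prepend a directory so modules there win over same-named modules elsewhere.
sys.path.insert(0, "/tmp/my_modules")

print(sys.path[0])
```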
sudo systemctl stop zeppelin
sudo systemctl status zeppelin
sudo systemctl start zeppelin
sudo systemctl status zeppelin

https://aws.amazon.com/ko/premiumsupport/knowledge-center/restart-service-emr/
How do I restart a service in Amazon EMR?
```python
from config import TABLEAU
import tableauserverclient as TSC
from util import timestamp

auth = TSC.PersonalAccessTokenAuth(
    TABLEAU["TOKEN_NAME"],
    TABLEAU["TOKEN_VALUE"],
)
server = TSC.Server(TABLEAU["SERVER_URL"], use_server_version=True)

target = "asdf"
with server.auth.sign_in(auth):
    for v in TSC.Pager(server.views):
        if target == v.name:
            view = v
            break
    # original snippet truncated: server.views.populate_pdf(view, TSC.I..
```
https://www.linkedin.com/pulse/orc-vs-parquet-vivek-singh/
ORC vs Parquet (benchmark of the two columnar formats on compression and performance, on a public dataset)

https://medium.com/@dhareshwarganesh/benchmarking-parquet-vs-orc-d52c39849aef
Benchmarking PARQUET v..
SELECT * FROM db_name."table_name$partitions" ORDER BY column_name DESC

https://docs.aws.amazon.com/ko_kr/athena/latest/ug/show-partitions.html
SHOW PARTITIONS - Amazon Athena

https://github.com/awsdocs/amazon-athena-user-guide/pull/89
(#88) feat: add db_name by seunggabi · Pull Request..
```python
variables = ['first', 'second', 'third']

def run_dag_task(variable):
    task = dag_task(variable)  # dag_task is assumed to be defined elsewhere in the DAG file
    return task

task_arr = []
task_arr.append(run_dag_task(variables[0]))
for variable in variables[1:]:
    task = run_dag_task(variable)
    task_arr[-1] >> task  # chain each task after the previous one
    task_arr.append(task)
```

https://stackoverflow.com/questions/70002086/how-to-run-tasks-sequentially-in-a-loop-in-an-airflow-dag
How to run tasks sequentially in a loop in an Airflow DAG
https://stackoverflow.com/questions/36747268/why-does-conf-setspark-app-name-appname-not-set-the-name-in-the-ui
Why does conf.set("spark.app.name", appName) not set the name in the UI?
grep -v 'exclude_word' file
egrep -v '(main|master)' file

https://stackoverflow.com/questions/4538253/how-can-i-exclude-one-word-with-grep
How can I exclude one word with grep?

https://www.warp.dev/terminus/grep-exclude
How To Exclude Patterns or Files With Grep
PARTITIONED BY (dt string)
CLUSTERED BY (user_key) SORTED BY (user_key ASC) INTO 256 BUCKETS

Even with CLUSTERED BY ~ SORTED BY ~ INTO {size} BUCKETS, the Spark SQL plan's partitioning stage is unaffected. The high cost appears to come from the large size of the data being loaded; merging small files should reduce it going forward.

https://sparkbyexamples.com/apache-hive/hive-partitioning-vs-bucketing-with-examples/
Hive Partitioning vs Bucketing with Examples
scp -i keypair-asdf.pem -r hadoop@asdf:~/tez.tar.gz .
scp -i keypair-qwer.pem -r tez.tar.gz hadoop@qwer-emr:~/tez.tar.gz

https://doheejin.github.io/linux/2021/03/03/linux-scp.html
[Linux] Transferring files (local ↔ server) with the scp command — scp (SecureCopy) copies files and directories to or from a remote server over the ssh protocol, on the same port 22 as ssh.
https://stackoverflow.com/questions/51933568/how-to-retrieve-hive-table-partition-location
How to retrieve Hive table Partition Location? (SHOW PARTITIONS in Hive/Spark lists only the partitions, not their HDFS/S3 locations)
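One approach from that thread, sketched with illustrative database/table/partition names: ask Hive to describe a single partition, which includes its storage location.

```sql
-- The "Location:" field of the output shows the hdfs/s3 path of this partition.
DESCRIBE FORMATTED db_name.table_name PARTITION (dt='2021-11-01');
```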
https://www.projectpro.io/recipes/explain-study-of-spark-query-execution-plans-using-explain
Study of Spark query execution plans using explain()
aws s3 sync . s3://asdf/a/b/c/ --delete

aws s3 sync s3://my-bucket s3://my-other-bucket \
  --exclude 'customers/*' \
  --exclude 'orders/*' \
  --exclude 'reportTemplate/*'

https://stackoverflow.com/questions/32393026/exclude-multiple-folders-using-aws-s3-sync
Exclude multiple folders using AWS S3 sync
--conf spark.driver.maxResultSize=4g

https://wooono.tistory.com/41
[Spark] spark.driver.maxResultSize error — org.apache.spark.SparkException: Job aborted due to stage failure: Total size of serialized results of XXXX tasks (X.0 GB) is bigger than spark.driver.maxResultSize (X.0 GB). Cause: data distributed across RDDs is pulled back to the driver via collect() and similar calls.
https://stackoverflow.com/questions/27932345/downloading-folders-from-aws-s3-cp-or-sync
Downloading folders from aws s3, cp or sync?
https://community.cloudera.com/t5/Support-Questions/How-to-set-yarn-application-name-of-hive-job/td-p/185524
How to set yarn application name of hive job (on HDP 2.4 with Hive on Tez, jobs show up in the YARN RM page as HIVE-&lt;uuid&gt;; how to give them a readable name)
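A sketch of the usual settings, hedged: `hive.query.name` sets the Tez DAG name on newer Hive versions (it may not exist on Hive 1.x as shipped with HDP 2.4), while `mapreduce.job.name` applies to the MapReduce engine. The job name below is illustrative.

```sql
-- Hive on Tez (newer Hive versions):
SET hive.query.name=my_readable_job_name;
-- Hive on MapReduce:
SET mapreduce.job.name=my_readable_job_name;
```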
hdfs dfsadmin -report
hdfs fsck / -list-corruptfileblocks
hdfs fsck / -delete
# fsck requires a target path; / checks the whole filesystem

https://118k.tistory.com/469
[hadoop][fsck] Commands for checking HDFS health — the HDFS fsck command detects inconsistencies (missing blocks, under-replicated blocks) but does not repair them (the NameNode fixes recoverable errors automatically), and it ignores open files.
```kotlin
fun main(args: Array<String>) {
    val array: Array<String> = arrayOf("a", "b", "c", "d", "e")
    val list: List<String> = array.toList()
    list.forEach { println(it) }
}
```

https://codechacha.com/ko/kotlin-convert-list-to-array/
Kotlin - Converting an Array to a List: toList() converts an Array to a List; toMutableList() returns a MutableList instead of a read-only List.
aws s3 ls --summarize --human-readable --recursive s3://bucket-name/

https://serverfault.com/questions/84815/how-can-i-get-the-size-of-an-amazon-s3-bucket
How can I get the size of an Amazon S3 bucket?
```shell
aws emr list-clusters --active | jq -r ".Clusters[].Id" | while read id; do
  dns=$(aws emr describe-cluster --cluster-id $id | jq -r ".Cluster.MasterPublicDnsName")
  dns=$(echo $dns | sed -r "s/ip-([0-9]+)-([0-9]+)-([0-9]+)-([0-9]+)\.ap-northeast-2\.compute\.internal/\1.\2.\3.\4/g")
  name=$(aws emr describe-cluster --cluster-id $id | jq -r ".Cluster.Name")
  echo $dns $name
done
# sudo vi /etc/hosts..
```
```javascript
location.href = document.querySelector('#reload-button')
  .url
  .replace(/ip-(\d+)-(\d+)-(\d+)-(\d+)/, "$1.$2.$3.$4")
  .replace(".ap-northeast-2.compute.internal", "");
```

https://stackoverflow.com/questions/29989031/getting-the-current-domain-name-in-chrome-when-the-page-fails-to-load
Getting the current domain name in Chrome when the page fails to load
cmd="rm .gitignore"
echo "$cmd"
eval "$cmd"

https://unix.stackexchange.com/questions/356534/how-to-run-string-with-values-as-a-command-in-bash
How to run string with values as a command in bash?
- Total
- Today
- Yesterday
- Referral
- Tesla referral application check
- Tesla recommendation
- 김달
- Model Y referral
- Kluge (클루지)
- YouTube
- Kluge
- 연애학개론
- Bot
- Follower counting
- wlw
- Instagram
- Tesla referral code benefits
- 메디파크 내과 전문의 의학박사 김영수
- COUNT
- Tesla referral code
- Tesla referral code check
- How will you demonstrate your ability?
- Tesla referral
- 책그림
- Gary Marcus
- Discount
- Tesla
- Model Y
- Tesla referral code generation
- follower
- Tesla credit usage
| Sun | Mon | Tue | Wed | Thu | Fri | Sat |
| --- | --- | --- | --- | --- | --- | --- |
|     | 1   | 2   | 3   | 4   | 5   | 6   |
| 7   | 8   | 9   | 10  | 11  | 12  | 13  |
| 14  | 15  | 16  | 17  | 18  | 19  | 20  |
| 21  | 22  | 23  | 24  | 25  | 26  | 27  |
| 28  | 29  | 30  |     |     |     |     |