```bash
S3=s3://bucket/asdf/

aws s3 rm ${S3} --recursive

BUCKET=$(echo ${S3} | egrep -o 's3://[^/]*' | sed -e 's/s3:\/\///g')
PREFIX=$(echo ${S3} | sed -e "s/s3:\/\/${BUCKET}\///g")

# A versioned bucket keeps old object versions after "rm"; delete them explicitly.
aws s3api list-object-versions \
  --bucket ${BUCKET} \
  --prefix ${PREFIX} \
  | jq -r '.Versions[] | .Key + " " + .VersionId' \
  | while read key id; do
      aws s3api delete-object \
        --bucket ${BUCKET} \
        --key ${key} \
        --version-id ${id}
    done
```
```bash
cp -r asdf/* qwer/
cp -r asdf/.[^.]* qwer/ 2>/dev/null || true
```
https://superuser.com/questions/61611/how-to-copy-with-cp-to-include-hidden-files-and-hidden-directories-and-their-con/1761062#1761062 (How to copy with cp to include hidden files and hidden directories and their contents?)
```dockerfile
FROM amazonlinux:latest as app

WORKDIR /app
COPY . /app/

ARG SH
ENV SH=${SH}

RUN yum install -y awscli
RUN curl -L https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64 -o ./jq
RUN chmod a+x ./jq

CMD ./static/sh/${SH}.sh
```
https://stackoverflow.com/questions/64164308/why-is-the-aws-cli-not-found-on-amazonlinux2-ami (Why is the aws cli not found on amazonlinux2 AMI?)
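A usage sketch, assuming the image is tagged `app` and a script `static/sh/my_script.sh` exists in the build context (both names hypothetical):

```bash
# SH selects which script under ./static/sh/ the container runs at start.
docker build --build-arg SH=my_script -t app .
docker run --rm app
```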
```bash
cp ./src/*/*.h ./aaa || true
```
https://serverfault.com/questions/153875/how-to-let-cp-command-dont-fire-an-error-when-source-file-does-not-exist (How to let 'cp' command don't fire an error when source file does not exist? On macOS, `cp ./src/*/*.h ./aaa` in a build script fails when no `.h` file matches.)
https://joodev.tistory.com/19 ("It runs fine, but not from cron" / Python - BeautifulSoup): a Python script that uses the BeautifulSoup library (1) works when run directly, but (2) the same command placed in cron fails.
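The usual culprit is cron's minimal environment; a hedged sketch of a crontab entry that sets PATH explicitly and uses absolute paths (all paths are placeholders):

```bash
# crontab -e; cron does not inherit your shell's PATH, so set it here.
PATH=/usr/local/bin:/usr/bin:/bin
0 * * * * /usr/bin/python3 /home/user/scrape.py >> /home/user/scrape.log 2>&1
```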
```sql
-- Hive has no row-level DELETE; rewrite the table without the unwanted rows.
INSERT OVERWRITE TABLE your_table
SELECT * FROM your_table WHERE id <> 1;
```
https://stackoverflow.com/questions/17810537/how-to-delete-and-update-a-record-in-hive (How to delete and update a record in Hive)
```bash
# stdout only (overwrite / append)
cmd >  stdout.txt
cmd >> stdout.txt

# stderr only (overwrite / append)
cmd 2>  stderr.txt
cmd 2>> stderr.txt

# stdout and stderr together; 2>&1 must come after the stdout redirection
cmd >  all.txt 2>&1
cmd >> all.txt 2>&1
```
https://june98.tistory.com/102 (How to keep crontab logs, feat. output redirection and the meaning of 2>&1)
[ { "Classification": "spark-env", "Configurations": [ { "Classification": "export", "Properties": { "JAVA_HOME": "/usr/lib/jvm/java-11-amazon-corretto.x86_64" } } ] }, { "Classification": "spark-defaults", "Properties": { "spark.executorEnv.JAVA_HOME": "/usr/lib/jvm/java-11-amazon-corretto.x86_64", "spark.dynamicAllocation.enabled": "True" } }, { "Classification": "hive-site", "Properties": { "..
```bash
# Replace every newline in varName with the two-character sequence \n
echo "${varName//$'\n'/\\n}"

# Double quotes let the shell expand ${replace} inside the sed expression;
# with single quotes, the literal text $replace is substituted instead.
echo $LINE | sed -e "s/12345678/${replace}/g"
```
https://stackoverflow.com/questions/3306007/replace-a-string-in-shell-script-using-a-variable (Replace a string in shell script using a variable)
```bash
git remote set-url origin https://github.com/username/repository.git
```
https://stackoverflow.com/questions/25927914/git-error-please-make-sure-you-have-the-correct-access-rights-and-the-reposito (Git error: "Please make sure you have the correct access rights and the repository exists" — cloning with TortoiseGit on Windows fails with this error; pointing the remote at the HTTPS URL is one fix.)
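To confirm the remote now points where you expect:

```bash
git remote -v
```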
https://stackoverflow.com/questions/12464636/how-to-set-variables-in-hive-scripts (How to set variables in HIVE scripts — looking for the HiveQL equivalent of `SET varname = value`, e.g. `SET CURRENT_DATE = '2012-09-16'; SELECT * FROM foo WHERE day >= @CURRENT_DATE`.)
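A hedged sketch of the hivevar mechanism, driven from the shell (table and variable names are illustrative):

```bash
# --hivevar defines a substitution variable; reference it as ${hivevar:NAME}.
# Single quotes keep the shell from expanding it before Hive sees it.
hive --hivevar CURRENT_DATE='2012-09-16' -e \
  'SELECT * FROM foo WHERE day >= "${hivevar:CURRENT_DATE}";'
```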
```python
from config import TABLEAU
import tableauserverclient as TSC

SITE_LUID = "..."

auth = TSC.PersonalAccessTokenAuth(
    TABLEAU["TOKEN_NAME"],
    TABLEAU["TOKEN_VALUE"],
)
server = TSC.Server(
    TABLEAU["SERVER_URL"],
    use_server_version=True
)
server.auth.sign_in(auth)

# Find workbooks whose connections use the given datasource
for w in TSC.Pager(server.workbooks):
    server.workbooks.populate_connections(w)
    for c in w.connections:
        if c.datasource_id == SITE_LUID:
            server.workbooks.populate_views(w)
```
https://stackoverflow.com/questions/35789412/spark-sql-difference-between-gzip-vs-snappy-vs-lzo-compression-formats (Spark SQL - difference between gzip vs snappy vs lzo compression formats — Spark SQL writes parquet with gzip by default, but also supports formats like snappy and lzo.)
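A hedged sketch of choosing the codec explicitly when writing Parquet from Spark (the job file name is a placeholder):

```bash
# spark.sql.parquet.compression.codec accepts snappy, gzip, lzo, zstd, ...
spark-submit \
  --conf spark.sql.parquet.compression.codec=snappy \
  your_job.py
```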
https://dojang.io/mod/page/view.php?id=2400 (Python Coding Dojang 38.3, "Raising exceptions": so far we have only handled Python's predefined exceptions, such as dividing by zero or indexing past the end of a list; this section raises an exception directly.)

```python
# Raise a domain-specific exception (BadRequest is defined elsewhere)
raise BadRequest(f"[Error] name: {name}; not found")
```
https://www.bangseongbeom.com/sys-path-pythonpath.html (sys.path, PYTHONPATH: Python's module search path — when an import statement loads another Python file, Python searches the paths in sys.path and PYTHONPATH; adjusting these two variables lets you import files from arbitrary directories.)
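A minimal sketch (the module directory and entry point are placeholders):

```bash
# One-off: prepend a directory to the module search path for this run only
PYTHONPATH=/path/to/modules:${PYTHONPATH} python3 main.py

# Or persistently for the current shell session
export PYTHONPATH=/path/to/modules:${PYTHONPATH}
```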
```bash
sudo systemctl stop zeppelin
sudo systemctl status zeppelin
sudo systemctl start zeppelin
sudo systemctl status zeppelin
```
https://aws.amazon.com/ko/premiumsupport/knowledge-center/restart-service-emr/ (How do I restart a service in Amazon EMR?)
```python
from config import TABLEAU
import tableauserverclient as TSC
from util import timestamp

auth = TSC.PersonalAccessTokenAuth(
    TABLEAU["TOKEN_NAME"],
    TABLEAU["TOKEN_VALUE"],
)
server = TSC.Server(
    TABLEAU["SERVER_URL"],
    use_server_version=True
)

target = "asdf"
with server.auth.sign_in(auth):
    # Find the view by name, then export it as a PDF
    for v in TSC.Pager(server.views):
        if target == v.name:
            view = v
            break
    server.views.populate_pdf(view, TSC.I..
```
https://www.linkedin.com/pulse/orc-vs-parquet-vivek-singh/ (ORC vs Parquet — both are columnar formats; the post benchmarks their compression and performance on a publicly available dataset.)
https://medium.com/@dhareshwarganesh/benchmarking-parquet-vs-orc-d52c39849aef (Benchmarking PARQUET vs ORC)
```sql
SELECT * FROM db_name."table_name$partitions" ORDER BY column_name DESC
```
https://docs.aws.amazon.com/ko_kr/athena/latest/ug/show-partitions.html (SHOW PARTITIONS - Amazon Athena)
https://github.com/awsdocs/amazon-athena-user-guide/pull/89 ((#88) feat: add db_name by seunggabi · Pull Request #89 · awsdocs/amazon-athena-user-guide)
```python
variables = ['first', 'second', 'third']

def run_dag_task(variable):
    # dag_task is the task factory defined elsewhere in the DAG file
    task = dag_task(variable)
    return task

task_arr = []
task_arr.append(run_dag_task(variables[0]))

# Chain each task to the previous one so they run sequentially
for variable in variables[1:]:
    task = run_dag_task(variable)
    task_arr[-1] >> task
    task_arr.append(task)
```
https://stackoverflow.com/questions/70002086/how-to-run-tasks-sequentially-in-a-loop-in-an-airflow-dag (How to run tasks sequentially in a loop in an Airflow DAG)
https://stackoverflow.com/questions/36747268/why-does-conf-setspark-app-name-appname-not-set-the-name-in-the-ui (Why does conf.set("spark.app.name", appName) not set the name in the UI? — setting spark.app.name on the SparkConf does not change the application name shown in the UI.)
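On YARN the application name is registered at submission time, so one commonly cited fix is to pass the name to spark-submit rather than setting it on an already-created context; a sketch with placeholder names:

```bash
# --name sets the app name the YARN UI displays
spark-submit --name "my-app" your_job.py
```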
```bash
grep -v 'exclude_word' file
egrep -v '(main|master)' file
```
https://stackoverflow.com/questions/4538253/how-can-i-exclude-one-word-with-grep (How can I exclude one word with grep?)
https://www.warp.dev/terminus/grep-exclude (How To Exclude Patterns or Files With Grep)
```sql
PARTITIONED BY (dt string)
CLUSTERED BY (user_key) SORTED BY (user_key ASC) INTO 256 BUCKETS
```
Even when `CLUSTERED BY ~ SORTED BY ~ INTO {size} BUCKETS` is used, it has no effect on the partitioning work in the Spark SQL plan. The high cost appears to come from the large size of the data being loaded; going forward, the cost can be optimized by merging small files.
https://sparkbyexamples.com/apache-hive/hive-partitioning-vs-bucketing-with-examples/ (Hive Partitioning vs Bucketing with Examples)