hdfs dfsadmin -report
hdfs fsck / -list-corruptfileblocks
hdfs fsck &lt;path&gt; -delete
https://118k.tistory.com/469
[hadoop][fsck] Commands for checking HDFS health. The fsck command detects various inconsistencies in HDFS (missing blocks, under-replicated blocks), but it only reports errors and does not repair them (the NameNode automatically fixes recoverable errors); open files are ignored. Example: hadoop fsck /
fun main() {
    val array: Array&lt;String&gt; = arrayOf("a", "b", "c", "d", "e")
    val list: List&lt;String&gt; = array.toList()
    list.forEach { println(it) }
}
https://codechacha.com/ko/kotlin-convert-list-to-array/
Kotlin - Converting an Array to a List. `toList()` converts an Array to a List; `toMutableList()` returns a MutableList instead of an immutable List; `listOf()` can also be used for the conversion.
aws s3 ls --summarize --human-readable --recursive s3://bucket-name/
https://serverfault.com/questions/84815/how-can-i-get-the-size-of-an-amazon-s3-bucket
How can I get the size of an Amazon S3 bucket? — graphing the size (in bytes and number of objects) of a bucket, looking for an efficient way to get the data.
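The CLI command above sums sizes client-side while listing. A minimal Python sketch of the same aggregation, assuming pages shaped like boto3's `list_objects_v2` responses (the bucket name and the fake pages here are illustrative only):

```python
def summarize_pages(pages):
    """Aggregate object count and total bytes from list_objects_v2-style pages."""
    count, total = 0, 0
    for page in pages:
        for obj in page.get("Contents", []):
            count += 1
            total += obj["Size"]
    return count, total

# With real credentials this would be fed by a boto3 paginator, e.g.:
# pages = boto3.client("s3").get_paginator("list_objects_v2").paginate(Bucket="bucket-name")
fake_pages = [
    {"Contents": [{"Key": "a", "Size": 100}, {"Key": "b", "Size": 250}]},
    {"Contents": [{"Key": "c", "Size": 650}]},
]
print(summarize_pages(fake_pages))  # (3, 1000)
```

Paginating is what makes this work on buckets with more than 1000 objects, which a single list call would truncate.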
aws emr list-clusters --active | jq -r ".Clusters[].Id" | while read id; do
    dns=$(aws emr describe-cluster --cluster-id $id | jq -r ".Cluster.MasterPublicDnsName")
    dns=$(echo $dns | sed -r "s/ip-([0-9]+)-([0-9]+)-([0-9]+)-([0-9]+)\.ap-northeast-2\.compute\.internal/\1.\2.\3.\4/g")
    name=$(aws emr describe-cluster --cluster-id $id | jq -r ".Cluster.Name")
    echo $dns $name
done
# sudo vi /etc/hosts
location.href = document.querySelector('#reload-button')
    .url
    .replace(/ip-(\d+)-(\d+)-(\d+)-(\d+)/, "$1.$2.$3.$4")
    .replace(".ap-northeast-2.compute.internal", "")
https://stackoverflow.com/questions/29989031/getting-the-current-domain-name-in-chrome-when-the-page-fails-to-load
Getting the current domain name in Chrome when the page fails to load
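Both the sed command and the JavaScript snippet above rewrite an EC2 internal DNS name into a dotted IP. The same transformation as a small Python sketch (the region suffix is hard-coded to ap-northeast-2, matching the snippets):

```python
import re

def internal_dns_to_ip(host: str) -> str:
    """Turn ip-10-0-12-34.ap-northeast-2.compute.internal into 10.0.12.34."""
    # Capture the four octets embedded in the hostname and re-join with dots.
    host = re.sub(r"ip-(\d+)-(\d+)-(\d+)-(\d+)", r"\1.\2.\3.\4", host)
    return host.replace(".ap-northeast-2.compute.internal", "")

print(internal_dns_to_ip("ip-10-0-12-34.ap-northeast-2.compute.internal"))  # 10.0.12.34
```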
cmd="rm .gitignore"
echo "$cmd"
eval "$cmd"
https://unix.stackexchange.com/questions/356534/how-to-run-string-with-values-as-a-command-in-bash
How to run string with values as a command in bash? — with i=5; command='echo $i', running $command prints the literal $i; eval "$command" expands the variable and prints 5.
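A Python analogue of running a command kept in a string, sketched with the stdlib: `shlex.split` tokenizes the string the way a POSIX shell would, so the command can be run without invoking a shell (and without eval-style re-interpretation of its contents):

```python
import shlex
import subprocess

cmd = "echo hello world"
# Tokenize like a shell would: ['echo', 'hello', 'world'].
args = shlex.split(cmd)
# Run the command directly, no shell involved.
result = subprocess.run(args, capture_output=True, text=True)
print(result.stdout.strip())  # hello world
```

Avoiding `shell=True` here sidesteps the quoting/injection pitfalls that make bash `eval` risky.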
INSERT OVERWRITE TABLE target PARTITION (YEAR, MONTH)
SELECT A, B, C, YEAR, MONTH FROM temp
https://stackoverflow.com/questions/40143249/hive-insert-overwrite-into-a-partitioned-table
HIVE Insert overwrite into a partitioned Table — ran INSERT OVERWRITE on a partitioned table, creating partitions a, b, c, d, e; asks what happens to existing partitions when the statement is rerun with different data.
// The "header" option applies only to CSV; ORC files embed their own schema
val df = spark.read.orc("/DATA/UNIVERSITY/DEPT/STUDENT/part-00000.orc")
df.printSchema()
https://stackoverflow.com/questions/58288941/how-to-get-the-schema-columns-and-their-types-of-orc-files-stored-in-hdfs
How to get the schema (columns and their types) of ORC files stored in HDFS?
https://docs.snowflake.com/ko/sql-reference/sql/create-function.html
CREATE FUNCTION — Snowflake Documentation: specifies whether the function can return NULL or must return only NON-NULL values. The default is NULL (i.e., the function may return NULL). Note: the NOT NULL clause is currently not enforced for SQL UDFs.
https://stackoverflow.com/questions/51743367/hive-create-function-if-not-exists
Hive: Create function if not exists
https://leetcode.com/problems/decode-ways/description/
Decode Ways - LeetCode
https://www.techiedelight.com/ko/count-decodings-sequence-of-digits/
Count decodings of a given digit sequence — given a positive number, map its digits to letters using the table [(1, 'A'), (2, 'B'), … (26, 'Z')] and count the possible decodings.
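The standard approach to both linked problems is a linear DP over the digit string, keeping only the counts for the previous two prefixes. A minimal Python sketch:

```python
def num_decodings(s: str) -> int:
    """Count decodings of a digit string under the mapping 1->A ... 26->Z."""
    if not s or s[0] == "0":
        return 0
    prev, cur = 1, 1  # ways to decode s[:i-1] and s[:i]
    for i in range(1, len(s)):
        nxt = 0
        if s[i] != "0":               # take s[i] as a single digit
            nxt += cur
        if 10 <= int(s[i-1:i+1]) <= 26:  # take s[i-1:i+1] as a pair
            nxt += prev
        prev, cur = cur, nxt
    return cur

print(num_decodings("226"))  # 3  ("2 26", "22 6", "2 2 6")
```

Zeros are the tricky part: a "0" contributes nothing on its own and is only valid as the tail of "10" or "20".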
https://simon-aubury.medium.com/kafka-with-avro-vs-kafka-with-protobuf-vs-kafka-with-json-schema-667494cbb2af
Kafka with AVRO vs. Kafka with Protobuf vs. Kafka with JSON Schema — experiments with Kafka serialisation schemes (AVRO, Protobuf, JSON Schema) in Confluent Streaming Platform.
from inspect import signature

def my_func(a, b, c, param_name='apple'):
    pass

value = signature(my_func).parameters['param_name'].default
print(value == 'apple')  # True
https://stackoverflow.com/questions/12627118/get-a-function-arguments-default-value
Get a function argument's default value?
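The same `signature` API can collect every default at once; parameters without a default report the sentinel `Parameter.empty`, which the comprehension below filters out (`defaults_of` is an illustrative helper name, not part of `inspect`):

```python
from inspect import signature, Parameter

def defaults_of(func):
    """Map each parameter that has a default to its default value."""
    return {
        name: p.default
        for name, p in signature(func).parameters.items()
        if p.default is not Parameter.empty
    }

def my_func(a, b, c, param_name='apple'):
    pass

print(defaults_of(my_func))  # {'param_name': 'apple'}
```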
master, region server, zookeeper, hfile, row key, memstore, wal, rolling, replay, flush
https://it-sunny-333.tistory.com/175
[HBase] The data read/write path (memstore, WAL, HFile) — HDFS reads are full scans rather than random access, and files are append-only (update is not possible). HBase provides random access and updates on top of data stored in HDFS.
https://medium.com/@limgyumin/%EC%BD%94%ED%8B%80%EB%A6%B0-%EC%9D%98-apply-with-let-also-run-%EC%9D%80-%EC%96%B8%EC%A0%9C-%EC%82%AC%EC%9A%A9%ED%95%98%EB%8A%94%EA%B0%80-4a517292df29
When to use Kotlin's apply, with, let, also, and run — translation of "Kotlin Scoping Functions apply vs. with, let, also, and run"
https://support.atlassian.com/bitbucket-cloud/docs/set-up-or-run-parallel-steps/
Set up or run parallel steps | Bitbucket Cloud | Atlassian Support — parallel steps let you build and test faster by running a set of self-contained steps at the same time.
https://dydwnsekd.tistory.com/62
Using Jinja templates in Airflow — Airflow ships with Jinja2 templating built in; for details on Jinja2 templates, see the Jinja documentation: https://jinja.palletsprojects.com/en/3.0.x/

yesterday = date_utils.date_to_string(kwargs['logical_date'].in_timezone("Asia/Seoul"))
today = date_utils.add_date(yesterday, 1)
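The snippet above relies on an in-house `date_utils` helper (not shown here). A stdlib sketch of the same yesterday/today computation, assuming `logical_date` is a timezone-aware UTC datetime as Airflow provides:

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

# Airflow's logical_date is in UTC; convert to Asia/Seoul before formatting,
# otherwise runs near midnight land on the wrong calendar day.
logical_date = datetime(2024, 1, 1, 20, 0, tzinfo=timezone.utc)
kst = logical_date.astimezone(ZoneInfo("Asia/Seoul"))  # 2024-01-02 05:00 KST
yesterday = kst.strftime("%Y-%m-%d")
today = (kst + timedelta(days=1)).strftime("%Y-%m-%d")
print(yesterday, today)  # 2024-01-02 2024-01-03
```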
https://stackoverflow.com/questions/49234471/when-to-execute-refresh-table-my-table-in-spark/72970987#72970987
When to execute REFRESH TABLE my_table in Spark? Consider code such as:
import org.apache.spark.sql.hive.orc._
import org.apache.spark.sql._
val path = ...
val dataFrame: DataFrame = ...
val hiveContext = new org.apache.spark.sql.hive.HiveContext(...)
SELECT column
FROM table AS T1
INNER JOIN Params AS P1
    ON T1.column LIKE '%' + P1.param + '%';
https://stackoverflow.com/questions/4612282/dynamic-like-statement-in-sql
Dynamic Like Statement in SQL
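The `'%' + P1.param + '%'` concatenation above is T-SQL syntax; standard SQL (and SQLite) uses `||`. A runnable sketch of the same dynamic-LIKE pattern with Python's stdlib `sqlite3`, concatenating wildcards around a bound parameter instead of interpolating user input into the query string (the table and values are made up for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [("apple pie",), ("banana",), ("pineapple",)])

# Wildcards are concatenated around the bound parameter inside SQL,
# so the search term itself never touches the query text.
rows = conn.execute(
    "SELECT col FROM t WHERE col LIKE '%' || ? || '%'", ("apple",)
).fetchall()
print([r[0] for r in rows])  # ['apple pie', 'pineapple']
```

Binding the parameter keeps the query safe from injection while still allowing a dynamic pattern.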
https://stackoverflow.com/questions/42723604/how-to-set-spark-job-staging-location
How to set Spark job staging location — the job fails because the user lacks access to the directory where Spark tries to write its staging/temp dataset.
https://doc.hcs.huawei.com/en-us/usermanual/mrs/mrs_03_0298.html