tasks.withType<Jar> {
    enabled = true
    isZip64 = true
    duplicatesStrategy = DuplicatesStrategy.EXCLUDE
    archiveFileName.set("${project.name}.jar")
    from(sourceSets.main.get().output)
    dependsOn(configurations.compileClasspath)
    from({
        configurations.compileClasspath.get()
            .filter { it.name.endsWith("jar") }
            .map { zipTree(it) }
    }) {
        exclude("META-INF/*.RSA", "META-INF/*.SF", "META-INF/*.DSA")
    }
}
https://github.com/johnre..
pip install lxml html5lib beautifulsoup4

import pandas as pd
url = 'https://en.wikipedia.org/wiki/History_of_Python'
dfs = pd.read_html(url)
print(len(dfs))
print(dfs[0]['Version'])
print(dfs[0]['Release date'])

# Load pandas
import pandas as pd
# Webpage url
url = 'https://en.wikipedia.org/wiki/History_of_Python'
# Extract tables
dfs = pd.read_html(url)
# Get first table
df = dfs[0]
# Extract c..
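A self-contained variant of the `read_html` call above, parsing a literal HTML table instead of the live Wikipedia page (the table contents here are made up for the demo, not the article's real data):

```python
from io import StringIO
import pandas as pd

# read_html() returns one DataFrame per <table> found in the markup;
# the first <tr> of <th> cells becomes the column header.
html = """
<table>
  <tr><th>Version</th><th>Release date</th></tr>
  <tr><td>0.9.0</td><td>1991-02-20</td></tr>
  <tr><td>1.0</td><td>1994-01-26</td></tr>
</table>
"""

dfs = pd.read_html(StringIO(html))
df = dfs[0]
print(len(dfs))           # 1
print(list(df.columns))   # ['Version', 'Release date']
```

Wrapping the literal HTML in `StringIO` avoids the deprecation warning newer pandas versions emit for passing raw HTML strings.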
https://stackoverflow.com/questions/26546299/result-type-of-an-implicit-conversion-must-be-more-specific-than-anyref/26549898#26549898
Result type of an implicit conversion must be more specific than AnyRef
Let
def h(a: AnyRef*) = a.mkString(",")
h: (a: AnyRef*)String
and so
h("1","2")
res: String = 1,2
However,
h(1,2)
error: the result type of an implicit conversion must be more specific than A..
fun flatMap(
    map: Map<String, Any>,
    flat: MutableMap<String, Any> = mutableMapOf(),
    prefix: String? = null
): Map<String, Any> {
    for (k in map.keys) {
        val key = if (prefix == null) k else "$prefix.$k"
        val value = map.getValue(k)
        if (value is Map<*, *>) {
            @Suppress("UNCHECKED_CAST")
            flatMap(value as Map<String, Any>, flat, key)
            continue
        }
        flat[key] = value
    }
    return flat
}
https://itstory.tk/entry/Spring-Webflux-JDBC%ED%98%B9%EC%9D%80-blocking-call-%ED%95%B8%EB%93%A4%EB%A7%81-%EB%B0%A9%EB%B2%95
Handling JDBC (or other blocking calls) in Spring Webflux
Since Spring 5, reactive development has been possible through Spring Webflux. Unlike the old paradigm where each request occupied its own thread, Webflux makes a non-blocking system possible. However, non-blocking..
itstory.tk
import time
ts = time.time()
print(ts)  # 1594819641.9622827

import datetime
ct = datetime.datetime.now()
print(ct)  # 2020-07-15 14:30:26.159446
ts = ct.timestamp()
print(ts)  # 1594823426.159446

import time
gmt = time.gmtime()
print(gmt)  # time.struct_time(tm_year=2020, tm_mon=7, tm_mday=15, tm_hour=19, tm_min=21, tm_sec=6, tm_wday=2, tm_yday=197, tm_isdst=0)
https://www.geeksforgeeks.org/get-c..
https://stackoverflow.com/questions/54875767/intellij-runs-kotlin-tests-annotated-with-ignore
IntelliJ runs Kotlin tests annotated with @Ignore
I have a Kotlin project that uses JUnit 5.2.0. When I use IntelliJ to run tests, it runs all tests, even those annotated with @org.junit.Ignore.
package my.package
import org.junit.Ignore
import ...
stackoverflow.com
Map<String, String> map = new LinkedHashMap<>();
map.put("key", "value");

Properties properties = new Properties();
properties.putAll(map);
https://stackoverflow.com/questions/8036332/converting-a-java-map-object-to-a-properties-object
Converting a Java Map object to a Properties object
Is anyone able to provide me with a better way than the below for converting a Java Map object to a Properties object? Map map =..
https://stackoverflow.com/questions/43664110/kotlin-if-item-not-in-list-proper-syntax
Kotlin: "if item not in list" proper syntax
Given Kotlin's list lookup syntax, if (x in myList), as opposed to idiomatic Java, if (myList.contains(x)), how can one express negation? The compiler doesn't like either of these: if (x not in
stackoverflow.com
StringWriter sw = new StringWriter();
e.printStackTrace(new PrintWriter(sw));
String exceptionAsString = sw.toString();
https://stackoverflow.com/questions/1149703/how-can-i-convert-a-stack-trace-to-a-string
How can I convert a stack trace to a string?
What is the easiest way to convert the result of Throwable.getStackTrace() to a string that depicts the stacktrace?
stackoverflow.com
fun Excepti..
permissions:
  actions: read|write|none
  checks: read|write|none
  contents: read|write|none
  deployments: read|write|none
  id-token: read|write|none
  issues: read|write|none
  discussions: read|write|none
  packages: read|write|none
  pages: read|write|none
  pull-requests: read|write|none
  repository-projects: read|write|none
  security-events: read|write|none
  statuses: read|write|none

permissions: read-all|writ..
@Transactional
@Modifying(clearAutomatically = true)
@Query(
    value = "" +
        "UPDATE recommendation_history SET " +
        "request_count = request_count + 1, " +
        "updated_date_time = now() " +
        "WHERE email = :#{#response.email} ",
    nativeQuery = true
)
int increase(@Param("response") Response response);
https://stackoverflow.com/questions/17121620/spring-data-jpa-update-query-not-updating
for index, row in rche_df.iterrows():
    if isinstance(row.wgs1984_latitude, float):
        row = row.copy()
        target = row.address_chi
        dict_temp = geocoding(target)
        rche_df.loc[index, 'wgs1984_latitude'] = dict_temp['lat']
        rche_df.loc[index, 'wgs1984_longitude'] = dict_temp['long']
https://stackoverflow.com/questions/25478528/updating-value-in-iterrow-for-pandas
Updating value in iterrow for pandas
I am do..
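A minimal sketch of why the df.loc[...] form is needed: iterrows() hands back per-row copies, so writes into the row Series are silently lost (column names here are invented for the demo):

```python
import pandas as pd

df = pd.DataFrame({'lat': [float('nan'), 10.0], 'lon': [float('nan'), 20.0]})

# Assigning into `row` mutates a per-row copy, not the DataFrame itself.
for index, row in df.iterrows():
    row['lat'] = 99.0
print(df['lat'].tolist())  # unchanged: [nan, 10.0]

# Writing through df.loc[index, col] updates the DataFrame in place.
for index, row in df.iterrows():
    df.loc[index, 'lat'] = 99.0
print(df['lat'].tolist())  # [99.0, 99.0]
```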
import org.hibernate.annotations.CreationTimestamp;
import org.hibernate.annotations.UpdateTimestamp;

import javax.persistence.*;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

@Entity
@Table(name = "table", uniqueConstraints = {
    @UniqueConstraint(
        name = "table__email",
        columnNames = {"email"}
    )
})
public class Response {
    private static final DateTimeFormatter forma..
import pandas as pd
import numpy as np

# create DataFrame
df = pd.DataFrame({'team': ['A', 'A', 'A', 'B', 'B'],
                   'position': ['Guard', 'Guard', np.nan, 'Guard', 'Forward'],
                   'points': [22, 28, 14, 13, 19]})

# view DataFrame
print(df)
#   team position  points
# 0    A    Guard      22
# 1    A    Guard      28
# 2    A      NaN      14
# 3    B    Guard      13
# 4    B  Forward      19
https://www.statology.org/cannot-mask-with-non-boolean-array-containing-na-nan-val..
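A sketch of the masking problem the linked article covers: with a NaN in the column, str.contains() yields a mask that is not purely boolean, and passing na=False is the usual fix:

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({'team': ['A', 'A', 'A', 'B', 'B'],
                   'position': ['Guard', 'Guard', np.nan, 'Guard', 'Forward'],
                   'points': [22, 28, 14, 13, 19]})

# Without na=False, str.contains() propagates NaN, so df[mask] raises
# "Cannot mask with non-boolean array containing NA / NaN values".
# na=False treats missing values as non-matches, giving a clean boolean mask.
mask = df['position'].str.contains('Guard', na=False)
print(df[mask]['points'].tolist())  # [22, 28, 13]
```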
@ActiveProfiles("local")
@SpringBootTest
@AutoConfigureMockMvc
class DefaultControllerTest {

    @Autowired
    private MockMvc mockMvc;

    @Test
    void recommend_400() throws Exception {
        // given
        String body = "{\n" +
            "  \"name\": \"test\"\n" +
            "}";

        // then
        mockMvc.perform(MockMvcRequestBuilders.post(DefaultController.PATH)
                .contentType(MediaType.APPLICATION_JSON)
                .content(body)
            )
            .andExpect(status().isBadRe..
df.columns = df.columns.str.lower()
https://stackoverflow.com/questions/19726029/how-can-i-make-pandas-dataframe-column-headers-all-lowercase
How can I make pandas dataframe column headers all lowercase?
I want to make all column headers in my pandas data frame lower case. Example: If I have:
data =
   country country isocode  year     XRAT  tcgdp
0  Canada             CAN  2001  1.54876
stackoverflow.com
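The same one-liner on a throwaway frame (column names invented for the demo):

```python
import pandas as pd

df = pd.DataFrame({'Country': ['Canada'], 'XRAT': [1.54876]})

# df.columns is an Index; its .str accessor applies lower() to every label at once.
df.columns = df.columns.str.lower()
print(list(df.columns))  # ['country', 'xrat']
```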
def version_name = '1.0.0'
def (major, minor, patch) = version_name.tokenize('.')
https://mrgamza.tistory.com/493
gradle: splitting a string
As in most languages, you can cut up a string and work with the pieces. In Gradle you can split a string like this: def version_name = '1.0.0'; def (major, minor, patch) =..
mrgamza.tistory.com
@Service
public class Sha256CipherService {

    private String bytesToHex(byte[] bytes) {
        StringBuilder builder = new StringBuilder();
        for (byte b : bytes) {
            builder.append(String.format("%02x", b));
        }
        return builder.toString();
    }

    public String encrypt(String plain) throws NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        md.update(plain.getBytes());
        return bytesToH..
docker build --build-arg HTTP_PROXY=http://10.20.30.2:1234 --build-arg FTP_PROXY=http://40.50.60.5:4567 .
https://docs.docker.com/engine/reference/commandline/build/#set-build-time-variables---build-arg
docker build: The `docker build` command builds Docker images from a Dockerfile and a "context". A build's context is the set of files located in the specified `PATH` or `URL`. ... do..
https://stackoverflow.com/questions/26994025/whats-the-meaning-of-locality-levelon-spark-cluster
What's the meaning of "Locality Level" on Spark cluster
What's the meaning of the title "Locality Level" and the 5 statuses Data local --> process local --> node local --> rack local --> Any?
stackoverflow.com
https://dark0096.github.io/spark/2018/09/04/spark-data-locality.html
Dark Tech Blog
This blog ..
import pandas as pd

df1 = pd.DataFrame({'a': ['a0', 'a1', 'a2', 'a3'],
                    'b': ['b0', 'b1', 'b2', 'b3'],
                    'c': ['c0', 'c1', 'c2', 'c3']},
                   index=[0, 1, 2, 3])
df2 = pd.DataFrame({'a': ['a2', 'a3', 'a4', 'a5'],
                    'b': ['b2', 'b3', 'b4', 'b5'],
                    'c': ['c2', 'c3', 'c4', 'c5'],
                    'd': ['d2', 'd3', 'd4', 'd5']},
                   index=[2, 3, 4, 5])
print(df1, '\n')
print(df2)

result1 = pd.concat([df1, df2])
print(result1)
https://yganalyst.github.io/data_ha..
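A trimmed-down version of the concat above, showing what happens to the non-shared column 'd' (smaller frames, same idea):

```python
import pandas as pd

df1 = pd.DataFrame({'a': ['a0', 'a1'], 'b': ['b0', 'b1']}, index=[0, 1])
df2 = pd.DataFrame({'a': ['a2', 'a3'], 'b': ['b2', 'b3'], 'd': ['d2', 'd3']}, index=[2, 3])

# Row-wise concat keeps the union of columns; cells with no source become NaN.
result = pd.concat([df1, df2])
print(result.shape)                  # (4, 3)
print(result['d'].isna().tolist())   # [True, True, False, False]

# join='inner' keeps only the columns shared by every frame instead.
inner = pd.concat([df1, df2], join='inner')
print(list(inner.columns))           # ['a', 'b']
```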
@Profile("test | local")
@Profile("!dev & !prof1 & !prof2")
https://stackoverflow.com/questions/43168881/can-i-negate-a-collection-of-spring-profiles
Can I negate (!) a collection of spring profiles?
Is it possible to configure a bean in such a way that it won't be used by a group of profiles? Currently I can do this (I believe): @Profile("!dev, !qa, !local") Is there a neater notation to achi..