
In PowerShell, change the VPN connection's SplitTunneling setting from false to true, then reconnect.

PS C:\Users\hskim> Get-VpnConnection                                                    


Name                  : VPN_NAME
ServerAddress         : XXX.XXX.XXX.XXX
AllUserConnection     : False
Guid                  : {12345678-ABCD-1234-ABCD-1234567890AB}
TunnelType            : L2tp
AuthenticationMethod  : {MsChapv2}
EncryptionLevel       : Optional
L2tpIPsecAuth         : Psk
UseWinlogonCredential : False
EapConfigXmlStream    :
ConnectionStatus      : Disconnected
RememberCredential    : True
SplitTunneling        : False
DnsSuffix             :
IdleDisconnectSeconds : 0


PS C:\Users\hskim> Set-VpnConnection -name VPN_NAME -splitTunneling $true
PS C:\Users\hskim> Get-VpnConnection


Name                  : VPN_NAME
ServerAddress         : XXX.XXX.XXX.XXX
AllUserConnection     : False
Guid                  : {12345678-ABCD-1234-ABCD-1234567890AB}
TunnelType            : L2tp
AuthenticationMethod  : {MsChapv2}
EncryptionLevel       : Optional
L2tpIPsecAuth         : Psk
UseWinlogonCredential : False
EapConfigXmlStream    :
ConnectionStatus      : Disconnected
RememberCredential    : True
SplitTunneling        : True
DnsSuffix             :
IdleDisconnectSeconds : 0



PS C:\Users\hskim> 
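
With split tunneling enabled, only traffic destined for the VPN's own network goes through the tunnel. If some remote subnet is no longer reachable after reconnecting, a per-connection route can be added; a minimal sketch, assuming a hypothetical remote subnet of 10.0.0.0/24:

```powershell
# Assumption: 10.0.0.0/24 stands in for the remote subnet that should use the tunnel
Add-VpnConnectionRoute -ConnectionName "VPN_NAME" -DestinationPrefix "10.0.0.0/24"
```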

One-line summary: solved it by downloading and installing the latest driver and enabling the "Wake on Magic Packet from power off state" setting.

 

My environment was as follows; even after configuring Wake On LAN according to what googling turned up, the feature did not work.

 

Motherboard: ASRock FATAL1TY X470 Gaming K4

Onboard NIC: Intel I211

OS: Windows 10

 

In the ASRock BIOS I set the following:

Advanced > ACPI Configuration > PCI Devices Power On: Enabled
Boot > Boot From Onboard LAN: Enabled

 

In Device Manager, I set the WOL-related properties as follows:

Wake on Magic Packet: Enabled

Wake on Pattern Match: Enabled

Reduce Speed On Power Down: Disabled

(Image: the properties visible in Device Manager)

WOL still did not work, so I downloaded the latest driver.

 

https://www.intel.com/content/www/us/en/products/details/ethernet/gigabit-controllers/i211-controllers/downloads.html

 


 

Googling also turns up the Korean-language site, but out of habit I went to the English site to download.

(I later noticed that searching the Korean site returns an older version, while the English site has the latest.)

 

From the "Intel Network Adapter Driver for Windows 10" entry, I downloaded Wired_PROSet_26.6_x64.zip.

 

Extracted and installed it.

 

Ran the Intel PROSet Adapter Configuration Utility.

 

Wake on Magic Packet from power off state: Enabled

After that, powering the PC on remotely worked.

I did the actual power-on from Android with an app called Wake On Lan.
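
Any such tool just broadcasts a "magic packet": six 0xFF bytes followed by the target MAC address repeated 16 times. A minimal PowerShell sketch (the MAC below is a placeholder; substitute the NIC's real address):

```powershell
# Magic packet = 6 x 0xFF, then the target MAC repeated 16 times (placeholder MAC)
$mac = "AA-BB-CC-DD-EE-FF" -split "-" | ForEach-Object { [byte]("0x" + $_) }
$packet = [byte[]]((,[byte]0xFF * 6) + ($mac * 16))

# Broadcast it to UDP port 9, the conventional WOL port
$udp = New-Object System.Net.Sockets.UdpClient
$udp.Connect([System.Net.IPAddress]::Broadcast, 9)
[void]$udp.Send($packet, $packet.Length)
$udp.Close()
```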


# [Apache Flink](https://flink.apache.org/)

## [Architecture](https://flink.apache.org/flink-architecture.html)

* Written in [Scala](https://www.scala-lang.org/)
* An Apache top-level project
* A framework for stateful computations over data streams
* A distributed processing engine for unbounded and bounded data streams
* Can run on Hadoop YARN, Apache Mesos, Kubernetes, or standalone

## [Distributed Runtime Environment](https://ci.apache.org/projects/flink/flink-docs-release-1.9/concepts/runtime.html)

### Tasks and Operator Chains

* For distributed execution, operator subtasks are chained together into tasks
* Each task is executed by one thread

### Job Managers, Task Managers, Clients

|Role|Process|
|:---:|:---:|
|master|JobManager|
|slave|TaskManager|

#### Job Managers

* Also called the master
* Coordinates the distributed execution
* Responsible for scheduling tasks, coordinating checkpoints, and coordinating recovery on failures
* At least one must exist
* In an HA setup there can be several
* Exactly one is always the leader
* The rest are all standbys

#### Task Managers

* Also called workers
* Execute the tasks (subtasks) of a dataflow
* Buffer and exchange the data streams
* At least one must exist

#### Clients

* Not part of the runtime or of the program execution
* Prepares the dataflow
* Sends the dataflow to the JobManager, then either disconnects or stays connected to receive progress reports
* Runs as a Java or Scala program, or from the CLI via `./bin/flink` (example below)
* As in Spark or TensorFlow, submitting a job builds a graph, hands it to the master, and the workers do the work
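
For example, submitting the streaming WordCount example bundled with the distribution (paths assume a standard install):

```shell script
cd ${FLINK_HOME}
# the client builds the dataflow graph and ships it to the JobManager
./bin/flink run examples/streaming/WordCount.jar
# list running and scheduled jobs
./bin/flink list
```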

### Task Slots and Resources

* A TaskManager is a JVM process
* It executes one or more subtasks in threads
* It has at least one task slot, which governs how many tasks it accepts
* Changing the number of task slots defines how subtasks are isolated from one another
* A good value for the number of task slots is the number of CPU cores (config sketch below)
* With hyper-threading, each slot occupies two or more hardware thread contexts
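
A minimal `flink-conf.yaml` sketch of that rule of thumb, assuming hypothetical 4-core TaskManager hosts:

```yaml
# one slot per CPU core on a 4-core machine
taskmanager.numberOfTaskSlots: 4
```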

### State Backends

* In-memory, filesystem, and RocksDB backends are available
* Takes a snapshot of the state at a point in time
* Stores that snapshot as part of a checkpoint (config sketch below)
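
A sketch of a filesystem-backed setup (paths are examples, matching the cluster config later on this page):

```yaml
# keep working state on the JVM heap, snapshot it to a shared filesystem
state.backend: filesystem
state.checkpoints.dir: file:///data/flink/checkpoints
```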

### Savepoints

* A program written with the DataStream API can resume from a savepoint
* Both the program and the Flink cluster can be updated without losing state
* A manually triggered checkpoint
* Regular checkpoints, by contrast, run periodically on the workers and keep only the latest; apart from that, a savepoint is the same mechanism
* Can be created from the command line (CLI sketch below)
* Also created when a job is cancelled via the REST API
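
A sketch of the CLI workflow (the job ID, jar name, and paths are placeholders):

```shell script
# trigger a savepoint for a running job
./bin/flink savepoint <jobId> file:///data/flink/savepoints
# or cancel the job while taking a savepoint
./bin/flink cancel -s file:///data/flink/savepoints <jobId>
# resume an (updated) program from the savepoint
./bin/flink run -s file:///data/flink/savepoints/savepoint-xxxx myjob.jar
```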

## [HA](https://ci.apache.org/projects/flink/flink-docs-release-1.9/ops/jobmanager_high_availability.html)

* Available on standalone, YARN, Mesos, K8s, ...

## [Downloads](https://flink.apache.org/downloads.html)

### Apache Flink 1.9.1

* [Apache Flink 1.9.1 for Scala 2.12](https://www.apache.org/dyn/closer.lua/flink/flink-1.9.1/flink-1.9.1-bin-scala_2.12.tgz)

### Optional components

* [Avro SQL Format](https://repo.maven.apache.org/maven2/org/apache/flink/flink-avro/1.9.1/flink-avro-1.9.1.jar)
* [CSV SQL Format](https://repo.maven.apache.org/maven2/org/apache/flink/flink-csv/1.9.1/flink-csv-1.9.1.jar)
* [JSON SQL Format](https://repo.maven.apache.org/maven2/org/apache/flink/flink-json/1.9.1/flink-json-1.9.1.jar)
* [Pre-bundled Hadoop 2.7.5](https://repo.maven.apache.org/maven2/org/apache/flink/flink-shaded-hadoop-2-uber/2.7.5-7.0/flink-shaded-hadoop-2-uber-2.7.5-7.0.jar)
* Required when using HDFS as storage

Copy the Optional components into `${FLINK_HOME}/lib`.
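
For example, assuming the jars were downloaded to the current directory:

```shell script
# make the shaded Hadoop uber jar visible to Flink
cp flink-shaded-hadoop-2-uber-2.7.5-7.0.jar ${FLINK_HOME}/lib/
```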

### masters

```shell script
cd ${FLINK_HOME}/conf
cat masters
```

### slaves

```shell script
cat slaves
```
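
For a hypothetical three-node HA cluster (matching host1..host3 used later on this page), `masters` lists `host:webui-port` pairs and `slaves` lists one worker hostname per line:

```
# conf/masters
host1:8081
host2:8081
host3:8081

# conf/slaves
host1
host2
host3
```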

### Key exchange

Passwordless SSH from the master to every slave is required.
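
A typical setup sketch, run on the master (hostnames are the hypothetical ones above):

```shell script
# generate a key pair if one does not exist yet
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# push the public key to every node, including the master itself
for h in host1 host2 host3; do ssh-copy-id ${h}; done
```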

### [Flink configs](https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html)

#### [Common configs](https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#common-options)

```yaml
# default: 1
parallelism.default: 1
```

#### [JobManager](https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#jobmanager)

```yaml
# default: none
jobmanager.archive.fs.dir:
# default: null; full|region. How to recover the computation from task failures. full: restart all tasks; region: restart only the tasks that could be affected by the failed task. https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/task_failure_recovery.html#restart-pipelined-region-failover-strategy
jobmanager.execution.failover-strategy: full
# default: 1024m
jobmanager.heap.size: 1024m
# default: none
jobmanager.rpc.address:
# default: 6123
jobmanager.rpc.port: 6123
```

#### [TaskManager](https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#taskmanager)

```yaml
# default: 30000; interval in milliseconds between task cancellation attempts
task.cancellation.interval: 30000
# default: 180000; time in milliseconds after which a task cancellation times out and triggers a fatal TaskManager error; 0 disables it
task.cancellation.timeout: 180000
# default: 7500; time in milliseconds to wait for all timer threads to finish when a stream task is cancelled
task.cancellation.timers.timeout: 7500
# default: -1; maximum number of bytes buffered during checkpoint alignment; the alignment is abandoned if exceeded; -1 means unlimited
task.checkpoint.alignment.max-size: -1
# default: 1024m
taskmanager.heap.size: 1024m
# default: 1; number of parallel user function instances a TaskManager can run. https://ci.apache.org/projects/flink/flink-docs-release-1.9/ops/config.html#configuring-taskmanager-processing-slots
taskmanager.numberOfTaskSlots: 1
```

#### [Distributed Coordination](https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#distributed-coordination)

#### [Distributed Coordination (via Akka)](https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#distributed-coordination-via-akka)

#### [REST](https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#rest)

```yaml
rest.address: 0.0.0.0
rest.port: 8081
```

#### [Blob Server](https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#blob-server)

#### [Heartbeat Manager](https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#heartbeat-manager)

#### [SSL Settings](https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#ssl-settings)

#### [Netty Shuffle Environment](https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#netty-shuffle-environment)

#### [Network Communication (via Netty)](https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#network-communication-via-netty)

#### [Web Frontend](https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#web-frontend)

#### [File Systems](https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#file-systems)

```yaml
# default: none; used when a path does not declare a file system scheme explicitly. hdfs or file
fs.default-scheme:
```

#### [Compiler/Optimizer](https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#compileroptimizer)

#### [Runtime Algorithms](https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#runtime-algorithms)

#### [Resource Manager](https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#resource-manager)

#### [Shuffle Service](https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#shuffle-service)

#### [YARN](https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#yarn)

#### [Mesos](https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#mesos)

#### [Mesos TaskManager](https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#mesos-taskmanager)

#### [High Availability (HA)](https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#high-availability-ha)

```yaml
# default: "NONE", NONE|ZOOKEEPER|<specify FQN of factory class>
high-availability: NONE
# default: /default
high-availability.cluster-id: /default
# default: none; file system path where Flink persists its metadata in an HA setup
high-availability.storageDir:
# ZooKeeper-based HA Mode https://ci.apache.org/projects/flink/flink-docs-release-1.9/ops/config.html#zookeeper-based-ha-mode
# default: /flink; root path under which Flink stores its entries in ZooKeeper
high-availability.zookeeper.path.root: /flink
# default: none
high-availability.zookeeper.quorum:
```

#### [ZooKeeper Security](https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#zooKeeper-security)

#### [Kerberos-based Security](https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#kerberos-based-security)

#### [Environment](https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#environment)

```yaml
# default: none
env.java.opts.jobmanager:
# default: none
env.java.opts.taskmanager:
```

#### [Checkpointing](https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#checkpointing)

```yaml
# default: none, none|jobmanager|filesystem|rocksdb|<class-name-of-factory>
state.backend:
# default: none; directory for checkpoint data files and metadata; must be accessible from every TaskManager and JobManager
state.checkpoints.dir:
# default: none; default directory used by state backends to write savepoints to a file system
state.savepoints.dir:
```

#### [RocksDB State Backend](https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#rocksdb-state-backend)

#### [RocksDB Configurable Options](https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#rocksdb-configurable-options)

#### [Queryable State](https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#queryable-state)

#### [Metrics](https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#metrics)

* As of 2019-12-11 there is no flink module for metricbeat yet

#### [RocksDB Native Metrics](https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#rocksdb-native-metrics)

#### [History Server](https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#history-server)

```yaml
# default: none, Comma separated list of directories to monitor for completed jobs.
historyserver.archive.fs.dir: hdfs:///completed-jobs/
historyserver.web.address: 0.0.0.0
historyserver.web.port: 8082
```

#### [Legacy](https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#legacy)

#### [Background](https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#background)

### cluster configs

```shell script
vim ${FLINK_HOME}/conf/flink-conf.yaml
```

```yaml
taskmanager.heap.size: 1024m
parallelism.default: 1
jobmanager.archive.fs.dir: file:///data/flink/completed-jobs/
jobmanager.execution.failover-strategy: region
jobmanager.heap.size: 1024m
jobmanager.rpc.address: localhost
jobmanager.rpc.port: 6123
taskmanager.numberOfTaskSlots: 2
rest.address: 0.0.0.0
rest.port: 8081
fs.default-scheme: file:///data
high-availability: zookeeper
high-availability.cluster-id: /myflink
high-availability.storageDir: file:///data/flink/ha
high-availability.zookeeper.path.root: /flink
high-availability.zookeeper.quorum: host1:2181,host2:2181,host3:2181
env.hadoop.conf.dir: /etc/hadoop/conf
env.pid.dir: /usr/local/flink/default
state.backend: filesystem
state.checkpoints.dir: file:///data/flink/checkpoints
state.savepoints.dir: file:///data/flink/checkpoints
historyserver.web.address: 0.0.0.0
historyserver.web.port: 8082
historyserver.archive.fs.dir: file:///data/flink/completed-jobs/
```

### start flink cluster

```shell script
./bin/start-cluster.sh
```

```
[hskimsky@host1:/usr/local/flink/default]$ ./bin/start-cluster.sh
Starting HA cluster with 3 masters.
Starting standalonesession daemon on host host1.
Starting standalonesession daemon on host host2.
Starting standalonesession daemon on host host3.
Starting taskexecutor daemon on host host1.
Starting taskexecutor daemon on host host2.
Starting taskexecutor daemon on host host3.
[hskimsky@host1:/usr/local/flink/default]$
```

#### troubleshooting

##### Could not find a file system implementation for scheme 'hdfs'.

```
...
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 'hdfs'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded.
...
```

Check that the `Pre-bundled Hadoop X.X.X` jar from `Optional components` was copied into `${FLINK_HOME}/lib`.

### Verify startup

Open http://${JobManager hostname}:${rest.port}/ in a web browser; if the UI loads, the cluster is up.

Check that every hostname specified in `masters` is reachable:

* http://host1:8081/
* http://host2:8081/
* http://host3:8081/
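
Or from the shell; the `/overview` REST endpoint returns a cluster summary:

```shell script
for h in host1 host2 host3; do curl -s http://${h}:8081/overview; echo; done
```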

## [Seoul Apache Flink Meetup](https://www.meetup.com/ko-KR/Seoul-Apache-Flink-Meetup/)

https://www.meetup.com/ko-KR/Seoul-Apache-Flink-Meetup/events/266824815/

### Date and venue
* Date: Tuesday, December 17, 2019, 18:00 - 21:00
* Venue: NAVER D2 STARTUP FACTORY (16F Meritz Tower, 382 Gangnam-daero, Gangnam-gu, Seoul)
* Map: http://naver.me/Fa4qOxfD
* The venue was sponsored by LINE.

### Session Program
* 18:00 ~ 18:20 Doors open, registration
* 18:20 ~ 18:40 Cracking open interval joins - a case study from an ad data processing pipeline (염지혜, Kakao)
* 18:40 ~ 19:00 Driving habits: the start of an evolution - porting Flink into T map (오승철, SKT)
* 19:00 ~ 19:20 Break
* 19:20 ~ 19:40 Do Flink on Web with FLOW (김동원, SKT)
* 19:40 ~ 20:00 From Source to Sink (강한구, Kakao Mobility)
* 20:00 ~ 21:00 Q&A, Networking

#### Applying Flink to the ad platform
* 염지혜, Kakao
* kafka -> flink -> kafka
* Interval joins support event time only
* Implemented a new LeftIntervalJoin
* Applied a sideOutputTag
* If one stream has no data, memory fills up and the job dies; made the timestamp assigner inject the current time instead
* If the time gap between the streams is large, one side's buffer keeps filling up

#### Flink in T map
* 오승철, SKT
* kafka -> flink -> kafka
* Schema evolution makes it possible to cope with schema version upgrades

#### Do Flink on web with FLOW (FLink On Web)
* 김동원, SKT
* Moves what used to be coded in SQL onto the web

#### Flink from source to sink
* 강한구, Kakao Mobility
* Actor
* Having a trigger gives it an extra edge
* Source
* Watermarks are embedded into the elements
* Transformation

#### Q&A
* How can FLOW be used? It is not open source yet
* Started out in Berlin only, now spreading worldwide
* Having been abroad: usage there is mostly SQL-centric
* In 2.0 the DataSet API will go away in favor of DataStream and Table

## pom.xml

```xml
<?xml version="1.0" encoding="UTF-8"?>

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>

<groupId>com.tistory.hskimsky</groupId>
<artifactId>flinktest</artifactId>
<version>1.0-SNAPSHOT</version>

<properties>
<kafka.scala.version>2.13</kafka.scala.version>
<kafka.version>2.4.0</kafka.version>
</properties>

<dependencies>
<dependency>
<groupId>ch.qos.logback</groupId>
<artifactId>logback-classic</artifactId>
<version>1.2.3</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>log4j-over-slf4j</artifactId>
<version>1.7.29</version>
</dependency>


<dependency>
<groupId>org.scala-lang</groupId>
<artifactId>scala-library</artifactId>
<version>2.13.1</version>
</dependency>

<dependency>
<groupId>org.scala-lang</groupId>
<artifactId>scala-library</artifactId>
<version>2.12.10</version>
</dependency>

<dependency>
<groupId>org.scala-lang</groupId>
<artifactId>scala-library</artifactId>
<version>2.11.12</version>
</dependency>


<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_${kafka.scala.version}</artifactId>
<version>${kafka.version}</version>
</dependency>

<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>${kafka.version}</version>
</dependency>

<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-streams</artifactId>
<version>${kafka.version}</version>
</dependency>

<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-core</artifactId>
<version>1.9.1</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-java</artifactId>
<version>1.9.1</version>
<exclusions>
<exclusion>
<groupId>log4j</groupId>
<artifactId>*</artifactId>
</exclusion>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-scala_2.11</artifactId>
<version>1.9.1</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-scala_2.12</artifactId>
<version>1.9.1</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-streaming-scala_2.11</artifactId>
<version>1.9.1</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-streaming-scala_2.12</artifactId>
<version>1.9.1</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-streaming-java_2.11</artifactId>
<version>1.9.1</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-streaming-java_2.12</artifactId>
<version>1.9.1</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-clients_2.11</artifactId>
<version>1.9.1</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-clients_2.12</artifactId>
<version>1.9.1</version>
</dependency>

<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-table</artifactId>
<version>1.9.1</version>
<type>pom</type>
<scope>provided</scope>
</dependency>


<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-shaded-jackson</artifactId>
<version>2.9.8-8.0</version>
</dependency>

<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-avro</artifactId>
<version>1.9.1</version>
</dependency>

<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-table-planner_2.11</artifactId>
<version>1.9.1</version>
<!--<scope>provided</scope>-->
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-table-planner_2.12</artifactId>
<version>1.9.1</version>
<!--<scope>provided</scope>-->
</dependency>

<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-shaded-hadoop2</artifactId>
<version>2.6.5-1.8.2</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-shaded-hadoop2</artifactId>
<version>2.7.5-1.8.2</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-shaded-hadoop2</artifactId>
<version>2.8.3-1.8.2</version>
</dependency>

<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-shaded-guava</artifactId>
<version>18.0-8.0</version>
</dependency>

<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-metrics-core</artifactId>
<version>1.9.1</version>
<!--<scope>provided</scope>-->
</dependency>

<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-json</artifactId>
<version>1.9.1</version>
</dependency>

<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-table-common</artifactId>
<version>1.9.1</version>
<!--<scope>provided</scope>-->
</dependency>

<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-table-planner-blink_2.11</artifactId>
<version>1.9.1</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-table-planner-blink_2.12</artifactId>
<version>1.9.1</version>
</dependency>


<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-statebackend-rocksdb_2.11</artifactId>
<version>1.9.1</version>
<!--<scope>test</scope>-->
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-statebackend-rocksdb_2.12</artifactId>
<version>1.9.1</version>
<!--<scope>test</scope>-->
</dependency>

<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-table-api-java-bridge_2.11</artifactId>
<version>1.9.1</version>
<!--<scope>provided</scope>-->
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-table-api-java-bridge_2.12</artifactId>
<version>1.9.1</version>
<!--<scope>provided</scope>-->
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-table-api-scala-bridge_2.11</artifactId>
<version>1.9.1</version>
<!--<scope>provided</scope>-->
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-table-api-scala-bridge_2.12</artifactId>
<version>1.9.1</version>
<!--<scope>provided</scope>-->
</dependency>

<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-hadoop-compatibility_2.11</artifactId>
<version>1.9.1</version>
<!--<scope>test</scope>-->
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-hadoop-compatibility_2.12</artifactId>
<version>1.9.1</version>
<!--<scope>test</scope>-->
</dependency>

<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-annotations</artifactId>
<version>1.9.1</version>
<!--<scope>provided</scope>-->
</dependency>

<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-optimizer_2.11</artifactId>
<version>1.9.1</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-optimizer_2.12</artifactId>
<version>1.9.1</version>
</dependency>

<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-shaded-hadoop-2</artifactId>
<version>2.6.5-7.0</version>
<!--<scope>provided</scope>-->
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-shaded-hadoop-2</artifactId>
<version>2.6.5-8.0</version>
<!--<scope>provided</scope>-->
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-shaded-hadoop-2</artifactId>
<version>2.7.5-8.0</version>
<!--<scope>provided</scope>-->
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-shaded-hadoop-2</artifactId>
<version>2.8.3-8.0</version>
<!--<scope>provided</scope>-->
</dependency>

<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-shaded-netty</artifactId>
<version>4.1.32.Final-7.0</version>
</dependency>

<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-gelly_2.11</artifactId>
<version>1.9.1</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-gelly_2.12</artifactId>
<version>1.9.1</version>
</dependency>

<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-cep_2.11</artifactId>
<version>1.9.1</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-cep_2.12</artifactId>
<version>1.9.1</version>
</dependency>

<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-connector-elasticsearch-base_2.11</artifactId>
<version>1.9.1</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-connector-elasticsearch-base_2.12</artifactId>
<version>1.9.1</version>
</dependency>

<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-hbase_2.11</artifactId>
<version>1.9.1</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-hbase_2.12</artifactId>
<version>1.9.1</version>
</dependency>

<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-jdbc_2.11</artifactId>
<version>1.9.1</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-jdbc_2.12</artifactId>
<version>1.9.1</version>
</dependency>

<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-connector-kafka-base_2.11</artifactId>
<version>1.9.1</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-connector-kafka-base_2.12</artifactId>
<version>1.9.1</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-connector-kafka-0.11_2.11</artifactId>
<version>1.9.1</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-connector-kafka-0.11_2.12</artifactId>
<version>1.9.1</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-connector-kafka_2.11</artifactId>
<version>1.9.1</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-connector-kafka_2.12</artifactId>
<version>1.9.1</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-csv</artifactId>
<version>1.9.1</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-json</artifactId>
<version>1.9.1</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-jdbc_2.11</artifactId>
<version>1.9.1</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-jdbc_2.12</artifactId>
<version>1.9.1</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-hbase_2.11</artifactId>
<version>1.9.1</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-hbase_2.12</artifactId>
<version>1.9.1</version>
</dependency>
</dependencies>

<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.8.1</version>
<configuration>
<source>1.8</source>
<target>1.8</target>
<encoding>UTF-8</encoding>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<version>3.1.1</version>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>shade</goal>
</goals>
<configuration>
<artifactSet>
<excludes>
<exclude>com.google.code.findbugs:jsr305</exclude>
<exclude>org.slf4j:*</exclude>
<exclude>log4j:*</exclude>
</excludes>
</artifactSet>
<filters>
<filter>
<!-- Do not copy the signatures in the META-INF folder.
Otherwise, this might cause SecurityExceptions when using the JAR. -->
<artifact>*:*</artifact>
<excludes>
<exclude>META-INF/*.SF</exclude>
<exclude>META-INF/*.DSA</exclude>
<exclude>META-INF/*.RSA</exclude>
</excludes>
</filter>
</filters>
<!--<transformers>
<transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
<mainClass>my.programs.main.clazz</mainClass>
</transformer>
</transformers>-->
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
</project>
```
