
Types

There seem to be two types of the program; I went for the Solutions Architect Champion.

Requirements

  • At least two Databricks PoCs/projects
  • At least one Databricks certification
    • Associate Data Engineer Cert
    • Associate Machine Learning Cert
    • Data Engineer Learning Plan
    • Data Scientist Learning Plan
    • Professional Data Engineer Cert
    • Professional Machine Learning Cert
  • Completion of the PSA (Partner Solutions Architect) team's Partner Champions Program
    • Solutions Architect Essentials Badge
  • Panel interview

Benefits

  • Badge
  • Jacket
  • Invitation to the Data and AI Summit every year
  • Invitation to the Databricks Champion Slack channel

Process

  • At least two Databricks PoCs/projects
    • As I mentioned in an earlier post, Databricks PoCs and projects came up naturally through my day-to-day work.
  • At least one Databricks certification
    • I studied whenever I had spare time and earned the Professional Data Engineer and Associate Data Engineer certs.
  • Completion of the PSA (Partner Solutions Architect) team's Partner Champions Program
    • It is only held once in a while; perhaps thanks to the professional cert, I was accepted.
  • Panel interview

Interview

  • The panel interview took about an hour.
  • It covers a few questions in each of roughly 10 domains.
  • The panel consisted only of Koreans (Databricks partner SAs and Databricks SAs), and the interview was conducted in Korean.

Passed

https://www.credential.net/83d3e46d-944b-4770-a492-ed4a1f96f17c#gs.4nx42v

 


Why I took the exam

At a Databricks event in Korea in April, a Databricks employee told me he thought I could pass, so I took the exam.

https://www.databricks.com/learn/certification/data-engineer-professional

Background

The company I work for specializes in Big Data and is a Databricks partner.

This is my ninth year as a developer. My Spark experience consists of three six-month Spark projects and eight months of Databricks PoC and project work.

Last February I passed the Databricks Certified Associate Developer for Apache Spark 3.0 (Scala) exam.

https://www.credential.net/1dd9273f-e73f-4eb3-ab92-c93552b8ab8b#gs.1i4uv3

 


Study

I studied with the Udemy courses below. Four or five questions appeared exactly as in the practice sets, and the rest were similar in style, but the actual exam's question stems were much longer.

https://www.udemy.com/course/databricks-certified-data-engineer-professional/

https://www.udemy.com/course/practice-exams-databricks-data-engineer-professional-k/

Result

After four attempts, I finally became the first person in Korea to earn the Professional certification!

It feels like the biggest achievement of my life!!

https://www.credential.net/bcaf7b1e-b237-4140-9cfe-cef896a8f3b7#gs.1i4uij

 



TB to PB

[BigData] 2023. 3. 30. 00:46

From terabytes to petabytes

 

NoSQL stores such as HBase, Cassandra, Kudu, and Redis have long been used to provide the random access that SQL-on-Hadoop engines like Hive, Spark, and Impala cannot.
Much has changed since then: containers, streaming, cloud, table formats, and many other technologies have appeared.

As recently as five years ago we ran tables of 700~800 TB on HBase without trouble, and I delivered several projects on Kudu as well.
Capacity has since grown more than tenfold, far beyond the projection that it would merely double in three years.
Operating costs grew with it, and so did the number of things to watch.
The issue common to most SQL-on-Hadoop and NoSQL systems was that even metadata lookups became slow.

Redshift, Synapse, BigQuery, Databricks Lakehouse, Snowflake and the like are said to handle large-scale analytics,
but the uncomfortable truth, one everyone now knows yet nobody volunteers, is that moving to cloud storage such as S3, Blob Storage, or GCS can actually cost more.
Ask ChatGPT and it merely lists the technologies mentioned above; as expected, little real insight comes out of it.
It is time to try new things, both to keep operating PB-scale tables and to prepare for the next ten years.


What I already knew

I had already worked on several AWS-related projects, both indirectly (integrating with AWS-compatible software) and directly (running software on AWS); the stints add up to about a year in total.

  • Integrating with an AWS IAM-compatible server someone else had built
  • Integrating with an AWS S3-compatible server someone else had built
  • Connecting S3-compatible storage (Hitachi, Dell, ...) to the Hadoop ecosystem
  • Data processing with Cloudera Hadoop and NiFi on AWS
  • Data processing with Databricks Lakehouse on AWS

Preparation

From the Udemy course "【한글자막】 AWS Certified Solutions Architect Associate 시험합격!", I watched only the sections below:

  • Section 4: IAM & AWS CLI
  • Section 5: EC2 Fundamentals
  • Section 6: EC2 - Solutions Architect Associate Level
  • Section 7: EC2 Instance Storage
  • Section 8: High Availability and Scalability: ELB & ASG
  • Section 9: AWS Fundamentals: RDS + Aurora + ElastiCache
  • Section 10: Route 53
  • Section 11: Classic Solutions Architecture Discussions
  • Section 12: Amazon S3 Introduction
  • Section 13: AWS CLI, SDK, IAM Roles & Policies
  • Section 14: Advanced Amazon S3
  • Section 15: Amazon S3 Security
  • Section 16: CloudFront & AWS Global Accelerator
  • Section 28: Networking - VPC
  • Plus the hands-on parts for any service I first encountered on Examtopics

Of the 55 pages of Examtopics SAA-C03 questions available for free, I went through only the first 27 pages, twice each.

For the first 7 days or so I only watched the Udemy course; for the last 3 days or so I worked through Examtopics and went back to the course only for services I was seeing for the first time.

Booking the exam

  • 2022-11-22 10:56: a colleague shared that AWS was handing out 50% discount vouchers, so I applied right away
  • 2022-12-05 10:54: received the voucher
  • 2023-01-09 23:55: booked the exam for 2023-01-17 14:00
    • Booked through Pearson VUE
    • Requested the extra 30 minutes offered to non-native English speakers
    • Online proctoring is possible
      • Requires a private room
      • You must show the proctor via camera that nothing is around you
      • Many reviews say the proctors speak English but are mostly Indian or Middle Eastern
    • My home is too small for a private room, so I looked for an offline option
    • There is a test center right outside Gangnam Station exit 12
    • Set the exam language to Korean
    • Booked the 2023-01-17 14:00 slot
    • Registered the voucher I had received and paid 85,000 KRW after the 50% discount

Exam day

  • 2023-01-17 13:10: arrived at the test center
    • Got off the elevator on the 10th floor; the test center to the left of a Korean medicine clinic was locked
    • There was a notice
      • Admission for the 14:00 exam starts at 13:30
    • After a short wait a staff member arrived and opened the door
  • 2023-01-17 13:15: checked in
    • Presented my ID and a credit card bearing my English name (the card used for booking), took off my mask for a face photo, and completed registration
    • From this point on, studying in the waiting room is prohibited, and I was told I could start the exam right away
  • 2023-01-17 13:22: exam started
    • Stored the parka I wore and all belongings in a locker
    • Confirmed my pockets were empty and entered with only my ID and the locker key
  • The exam
    • A few questions were identical to Examtopics down to the answer choices, and about half seemed to be the same questions with different choices, though that may just be because I only covered half the dump
    • 65 questions in total
      • 4~5 questions were choose 2 of 5
      • The rest were choose 1 of 4
    • Questions and choices appear in Korean, but the English button at the top left shows the English original
    • A flag button at the top right lets you mark questions for review and revisit them after answering everything
    • After finishing all questions, a confirmation dialog warns that no further changes can be made
    • A survey screen follows
      • How difficult was the exam
      • How was the test center environment
      • ...
    • The result is not shown immediately
  • 2023-01-17 14:50: finished the exam and left

Receiving the result

  • 2023-01-17 19:27: received an email titled "Haneul! You've earned a badge from Amazon Web Services Training and Certification 🎉"

After passing

  • You can download your certificate as a PDF here
  • You can check your exact score here
  • You can get a discount code for your next exam registration here
  • You can claim your digital badge here

EC2 instance user data not working after modification

https://stackoverflow.com/questions/61989020/aws-ec2-user-data-doesnt-work-after-modifying-it

# User data runs only once per instance; cloud-init records that fact in a semaphore file.
# Renaming (or deleting) the semaphore makes the user data script run again on the next boot:
sudo mv /var/lib/cloud/instance/sem/config_scripts_user /var/lib/cloud/instance/sem/config_scripts_user_bak
# /var/lib/cloud/instance is a symlink to the per-instance directory:
# lrwxrwxrwx  1 root root   44 Jan 11 10:42 instance -> /var/lib/cloud/instances/i-01b234bab56cd78b9/
# so the same move works against the per-instance path as well:
# sudo mv /var/lib/cloud/instances/i-01b234bab56cd78b9/sem/config_scripts_user /var/lib/cloud/instances/i-01b234bab56cd78b9/sem/config_scripts_user_bak
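The once-per-instance behavior can be sketched locally: a hypothetical `run_user_data` function guarded by a semaphore file (simulated under a temp directory, not the real cloud-init paths) runs once, skips while the semaphore exists, and runs again after the semaphore is renamed, which is exactly what the `mv` above exploits.

```shell
# Simulate cloud-init's per-instance semaphore (temp dir stands in for /var/lib/cloud)
SEM_DIR=$(mktemp -d)
SEM="$SEM_DIR/config_scripts_user"
run_user_data() {
  if [ -e "$SEM" ]; then
    echo "skipped"          # semaphore present: user data does not run again
  else
    echo "ran"              # first run: execute and drop the semaphore
    touch "$SEM"
  fi
}
FIRST=$(run_user_data)      # first boot
SECOND=$(run_user_data)     # later boots
mv "$SEM" "${SEM}_bak"      # the fix from the post: rename the semaphore
THIRD=$(run_user_data)      # next boot runs user data again
echo "$FIRST $SECOND $THIRD"   # ran skipped ran
```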

Installing GPDB

GPDB

Preparation

templatePath: E:\vm\linux\template79
displayName: mdw
hostname: mdw.sky.local
path: E:\vm\linux\gpdb
description: mdw
ip: 192.168.181.231
numvcpus: 2
coresPerSocket: 2
memsize: 4096
---
templatePath: E:\vm\linux\template79
displayName: smdw
hostname: smdw.sky.local
path: F:\vm\linux\gpdb
description: smdw
ip: 192.168.181.232
numvcpus: 2
coresPerSocket: 2
memsize: 4096
---
templatePath: E:\vm\linux\template79
displayName: sdw1
hostname: sdw1.sky.local
path: E:\vm\linux\gpdb
description: sdw1
ip: 192.168.181.233
numvcpus: 2
coresPerSocket: 2
memsize: 4096
---
templatePath: E:\vm\linux\template79
displayName: sdw2
hostname: sdw2.sky.local
path: F:\vm\linux\gpdb
description: sdw2
ip: 192.168.181.234
numvcpus: 2
coresPerSocket: 2
memsize: 4096
---
templatePath: E:\vm\linux\template79
displayName: sdw3
hostname: sdw3.sky.local
path: E:\vm\linux\gpdb
description: sdw3
ip: 192.168.181.235
numvcpus: 2
coresPerSocket: 2
memsize: 4096
---
templatePath: E:\vm\linux\template79
displayName: sdw4
hostname: sdw4.sky.local
path: F:\vm\linux\gpdb
description: sdw4
ip: 192.168.181.236
numvcpus: 2
coresPerSocket: 2
memsize: 4096
java -jar E:\vm\CopyVMWare-1.1.0.jar `
 --force `
 --yaml E:\vm\conf\gpdb_hms.yaml

Configuring Your Systems

cat >> /etc/hosts <<EOF

# GPDB HMS
192.168.181.231    mdw.sky.local    mdw
192.168.181.232   smdw.sky.local    smdw
192.168.181.233   sdw1.sky.local    sdw1
192.168.181.234   sdw2.sky.local    sdw2
192.168.181.235   sdw3.sky.local    sdw3
192.168.181.236   sdw4.sky.local    sdw4
EOF
cat >> /etc/bashrc <<EOF

export JAVA_HOME=/usr/lib/jvm/java
export PATH=\${JAVA_HOME}/bin:\${PATH}
EOF

. /etc/bashrc
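One detail worth noting in the heredoc above: `\$` writes a literal `$` into the target file, so `${JAVA_HOME}` is expanded when /etc/bashrc is later sourced, not at the moment the heredoc runs. A small sketch of the same mechanism, writing to a temp file instead of /etc/bashrc:

```shell
# \$ in an unquoted heredoc defers expansion: the file receives the literal text ${JAVA_HOME}
DEMO=$(mktemp)
cat > "$DEMO" <<EOF
export JAVA_HOME=/usr/lib/jvm/java
export PATH=\${JAVA_HOME}/bin:\${PATH}
EOF
cat "$DEMO"   # shows unexpanded ${JAVA_HOME}, to be expanded at source time
```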

# yum install -y sshpass
# ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
# export SSHPASS="PASSWORD"

# for i in {1..1} ; do sshpass -e ssh -o StrictHostKeyChecking=no root@192.168.181.23${i} "mkdir -p ~/.ssh ; chmod 700 ~/.ssh ; touch ~/.ssh/authorized_keys ; echo '$(cat ~/.ssh/id_rsa.pub)' >> ~/.ssh/authorized_keys ; chmod 600 ~/.ssh/authorized_keys" ; done
# for i in {2..8} ; do sshpass -e ssh -o StrictHostKeyChecking=no root@192.168.181.23${i} "rm -rf ~/.ssh ; mkdir -p ~/.ssh ; chmod 700 ~/.ssh ; touch ~/.ssh/authorized_keys ; echo '$(cat ~/.ssh/id_rsa.pub)' >> ~/.ssh/authorized_keys ; chmod 600 ~/.ssh/authorized_keys ; echo SUCCESS" ; done

for i in {1..1} ; do echo     mdw.sky.local ; done | xargs -P 2 -I {} ssh {} -o StrictHostKeyChecking=no "hostname"
for i in {1..1} ; do echo    smdw.sky.local ; done | xargs -P 2 -I {} ssh {} -o StrictHostKeyChecking=no "hostname"
for i in {1..4} ; do echo sdw${i}.sky.local ; done | xargs -P 5 -I {} ssh {} -o StrictHostKeyChecking=no "hostname"
for i in {1..1} ; do echo     mdw ; done | xargs -P 2 -I {} ssh {} -o StrictHostKeyChecking=no "hostname"
for i in {1..1} ; do echo    smdw ; done | xargs -P 2 -I {} ssh {} -o StrictHostKeyChecking=no "hostname"
for i in {1..4} ; do echo sdw${i} ; done | xargs -P 5 -I {} ssh {} -o StrictHostKeyChecking=no "hostname"

for i in {2..6} ; do echo 192.168.181.23${i} ; done | xargs -P 7 -I {} scp /etc/{bashrc,hosts} {}:/etc
for i in {1..6} ; do echo 192.168.181.23${i} ; done | xargs -P 4 -I {} ssh {} "yum install -y net-tools gcc* git vim wget zip unzip tar curl dstat ntp java-1.8.0-openjdk-devel"
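The `for ... | xargs -P N -I {} ssh {} ...` idiom used throughout this guide fans the same command out to every host in parallel. A local sketch with `echo` standing in for `ssh` (output is sorted because parallel completion order is not deterministic):

```shell
# Generate host names, then run one command per host with up to 4 parallel jobs
OUT=$(for i in {1..4} ; do echo sdw${i} ; done | xargs -P 4 -I {} echo "hostname via {}" | sort)
echo "$OUT"
```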

Disable or Configure Firewall Software

for i in {1..6} ; do echo 192.168.181.23${i} ; done | xargs -P 8 -I {} ssh {} "systemctl stop firewalld && systemctl disable firewalld"

Synchronizing System Clocks

for i in {1..6} ; do echo 192.168.181.23${i} ; done | xargs -P 8 -I {} ssh {} "systemctl enable ntpd ; systemctl start ntpd ; ntpq -p"
for i in {1..6} ; do echo 192.168.181.23${i} ; done | xargs -P 8 -I {} ssh {} "ntpq -p"

Setting Greenplum Environment Variables

cat >> ~/.bashrc << EOF

# 20220826 hskimsky for gpdb
source /usr/local/greenplum-db/greenplum_path.sh
export MASTER_DATA_DIRECTORY=/data/master/gpseg-1
export PGPORT=5432
export PGUSER=gpadmin
export PGDATABASE=gpadmin
export LD_PRELOAD=/lib64/libz.so.1
EOF
for i in {2..6} ; do echo 192.168.181.23${i} ; done | xargs -P 7 -I {} scp ~/.bashrc {}:~
for i in {1..6} ; do echo 192.168.181.23${i} ; done | xargs -P 8 -I {} ssh {} "mkdir -p ~/Downloads/gpdb"
cd ~/Downloads/gpdb
wget https://github.com/greenplum-db/gpdb/releases/download/6.21.1/open-source-greenplum-db-6.21.1-rhel7-x86_64.rpm
for i in {2..6} ; do echo 192.168.181.23${i} ; done | xargs -P 7 -I {} scp ~/Downloads/gpdb/open-source-greenplum-db-6.21.1-rhel7-x86_64.rpm {}:~/Downloads/gpdb

Disable or Configure SELinux

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
for i in {2..6} ; do echo 192.168.181.23${i} ; done | xargs -P 7 -I {} scp /etc/selinux/config {}:/etc/selinux

Recommended OS Parameters Settings

The sysctl.conf File

cat >> /etc/sysctl.conf << EOF

# 20220826 for gpdb
# kernel.shmall = _PHYS_PAGES / 2 # See Shared Memory Pages
kernel.shmall = $(echo $(expr $(getconf _PHYS_PAGES) / 2))
# kernel.shmmax = kernel.shmall * PAGE_SIZE 
kernel.shmmax = $(echo $(expr $(getconf _PHYS_PAGES) / 2 \* $(getconf PAGE_SIZE)))
kernel.shmmni = 4096
# See Segment Host Memory
vm.overcommit_memory = 2
vm.overcommit_ratio = 95

# See Port Settings
net.ipv4.ip_local_port_range = 10000 65535
kernel.sem = 250 2048000 200 8192
kernel.sysrq = 1
kernel.core_uses_pid = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.msgmni = 2048
net.ipv4.tcp_syncookies = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.conf.all.arp_filter = 1
net.core.netdev_max_backlog = 10000
net.core.rmem_max = 2097152
net.core.wmem_max = 2097152
vm.swappiness = 10
vm.zone_reclaim_mode = 0
vm.dirty_expire_centisecs = 500
vm.dirty_writeback_centisecs = 100
# for hosts with 64GB of memory or more
# vm.dirty_background_ratio = 0
# vm.dirty_ratio = 0
# vm.dirty_background_bytes = 1610612736
# vm.dirty_bytes = 4294967296
# for hosts with less than 64GB of memory
vm.dirty_background_ratio = 3
vm.dirty_ratio = 10

$(awk 'BEGIN {OFMT = "%.0f";} /MemTotal/ {print "vm.min_free_kbytes =", $2 * .03;}' /proc/meminfo)
EOF
for i in {2..6} ; do echo 192.168.181.23${i} ; done | xargs -P 7 -I {} scp /etc/sysctl.conf {}:/etc
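The commented formulas in the sysctl heredoc can be verified on their own. This sketch recomputes them outside the heredoc; actual values depend on the host's RAM and page size, and /proc/meminfo assumes a Linux host:

```shell
# kernel.shmall = half the physical pages; kernel.shmmax = shmall * page size
PAGES=$(getconf _PHYS_PAGES)
PAGE_SIZE=$(getconf PAGE_SIZE)
SHMALL=$(( PAGES / 2 ))
SHMMAX=$(( SHMALL * PAGE_SIZE ))
# vm.min_free_kbytes = 3% of MemTotal (in kB)
MIN_FREE=$(awk 'BEGIN {OFMT = "%.0f"} /MemTotal/ {print $2 * .03}' /proc/meminfo)
echo "kernel.shmall = ${SHMALL}"
echo "kernel.shmmax = ${SHMMAX}"
echo "vm.min_free_kbytes = ${MIN_FREE}"
```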

System Resources Limits

cat >> /etc/security/limits.conf << EOF

# 20220827 hskimsky for gpdb
* soft nofile 524288
* hard nofile 524288
* soft nproc 131072
* hard nproc 131072
EOF
for i in {2..6} ; do echo 192.168.181.23${i} ; done | xargs -P 7 -I {} scp /etc/security/limits.conf {}:/etc/security
vim /etc/default/grub

...
GRUB_CMDLINE_LINUX="... transparent_hugepage=never"
...

for i in {2..6} ; do echo 192.168.181.23${i} ; done | xargs -P 7 -I {} scp /etc/default/grub {}:/etc/default

Creating the Greenplum Administrative User

for i in {1..6} ; do echo 192.168.181.23${i} ; done | xargs -P 6 -I {} ssh {} "groupadd gpadmin"
for i in {1..6} ; do echo 192.168.181.23${i} ; done | xargs -P 6 -I {} ssh {} "useradd gpadmin -r -m -g gpadmin"
for i in {1..6} ; do echo 192.168.181.23${i} ; done | xargs -P 6 -I {} ssh {} "echo 'changeme' | passwd gpadmin --stdin"
cat >> /etc/sudoers << EOF

# 20220827 for gpdb
gpadmin ALL=(ALL) NOPASSWD: ALL
EOF
for i in {2..6} ; do echo 192.168.181.23${i} ; done | xargs -P 7 -I {} scp /etc/sudoers {}:/etc

Installing the Greenplum Database Software

for i in {1..6} ; do echo 192.168.181.23${i} ; done | xargs -P 6 -I {} ssh {} "yum install -y ~/Downloads/gpdb/open-source-greenplum-db-6.21.1-rhel7-x86_64.rpm"
for i in {1..6} ; do echo 192.168.181.23${i} ; done | xargs -P 6 -I {} ssh {} "chown -R gpadmin:gpadmin /usr/local/greenplum*"
for i in {1..6} ; do echo 192.168.181.23${i} ; done | xargs -P 6 -I {} ssh {} "chgrp -R gpadmin /usr/local/greenplum*"

Enabling Passwordless SSH

su - gpadmin
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
export SSHPASS="changeme"
for i in {1..1} ; do sshpass -e ssh -o StrictHostKeyChecking=no 192.168.181.23${i} "mkdir -p ~/.ssh ; chmod 700 ~/.ssh ; touch ~/.ssh/authorized_keys ; echo '$(cat ~/.ssh/id_rsa.pub)' >> ~/.ssh/authorized_keys ; chmod 600 ~/.ssh/authorized_keys" ; done
for i in {2..6} ; do sshpass -e ssh -o StrictHostKeyChecking=no 192.168.181.23${i} "rm -rf ~/.ssh ; mkdir -p ~/.ssh ; chmod 700 ~/.ssh ; touch ~/.ssh/authorized_keys ; echo '$(cat ~/.ssh/id_rsa.pub)' >> ~/.ssh/authorized_keys ; chmod 600 ~/.ssh/authorized_keys ; echo SUCCESS" ; done

for i in {1..1} ; do echo     mdw.sky.local ; done | xargs -P 2 -I {} ssh {} -o StrictHostKeyChecking=no "hostname"
for i in {1..1} ; do echo    smdw.sky.local ; done | xargs -P 2 -I {} ssh {} -o StrictHostKeyChecking=no "hostname"
for i in {1..4} ; do echo sdw${i}.sky.local ; done | xargs -P 5 -I {} ssh {} -o StrictHostKeyChecking=no "hostname"
for i in {1..1} ; do echo     mdw ; done | xargs -P 2 -I {} ssh {} -o StrictHostKeyChecking=no "hostname"
for i in {1..1} ; do echo    smdw ; done | xargs -P 2 -I {} ssh {} -o StrictHostKeyChecking=no "hostname"
for i in {1..4} ; do echo sdw${i} ; done | xargs -P 5 -I {} ssh {} -o StrictHostKeyChecking=no "hostname"

Setting Greenplum Environment Variables

cat >> ~/.bashrc << EOF

# 20220826 hskimsky for gpdb
source /usr/local/greenplum-db/greenplum_path.sh
export MASTER_DATA_DIRECTORY=/data/master/gpseg-1
export PGPORT=5432
export PGUSER=gpadmin
export PGDATABASE=gpadmin
export LD_PRELOAD=/lib64/libz.so.1
EOF
for i in {2..6} ; do echo 192.168.181.23${i} ; done | xargs -P 7 -I {} scp ~/.bashrc {}:~

Confirming Your Installation

gpssh -f hostfile_exkeys -e 'ls -alF /usr/local/greenplum-db/greenplum_path.sh'
[gpadmin@mdw:~]$ gpssh -f hostfile_exkeys -e 'ls -alF /usr/local/greenplum-db/greenplum_path.sh'
[sdw1] ls -alF /usr/local/greenplum-db/greenplum_path.sh
[sdw1] -rw-r--r--. 1 gpadmin gpadmin 650 Aug  6 04:51 /usr/local/greenplum-db/greenplum_path.sh
[sdw3] ls -alF /usr/local/greenplum-db/greenplum_path.sh
[sdw3] -rw-r--r--. 1 gpadmin gpadmin 650 Aug  6 04:51 /usr/local/greenplum-db/greenplum_path.sh
[ mdw] ls -alF /usr/local/greenplum-db/greenplum_path.sh
[ mdw] -rw-r--r--. 1 gpadmin gpadmin 650 Aug  6 04:51 /usr/local/greenplum-db/greenplum_path.sh
[sdw2] ls -alF /usr/local/greenplum-db/greenplum_path.sh
[sdw2] -rw-r--r--. 1 gpadmin gpadmin 650 Aug  6 04:51 /usr/local/greenplum-db/greenplum_path.sh
[smdw] ls -alF /usr/local/greenplum-db/greenplum_path.sh
[smdw] -rw-r--r--. 1 gpadmin gpadmin 650 Aug  6 04:51 /usr/local/greenplum-db/greenplum_path.sh
[sdw4] ls -alF /usr/local/greenplum-db/greenplum_path.sh
[sdw4] -rw-r--r--. 1 gpadmin gpadmin 650 Aug  6 04:51 /usr/local/greenplum-db/greenplum_path.sh
[gpadmin@mdw:~]$

Creating the Data Storage Areas

Creating Data Storage Areas on the Master and Standby Master Hosts

To create the data directory location on the master

mkdir -p /data/master
chown gpadmin:gpadmin /data/master
source /usr/local/greenplum-db/greenplum_path.sh 
gpssh -h smdw -e 'mkdir -p /data/master'
gpssh -h smdw -e 'chown gpadmin:gpadmin /data/master'
[root@mdw:~]# mkdir -p /data/master
[root@mdw:~]# chown gpadmin:gpadmin /data/master
[root@mdw:~]# source /usr/local/greenplum-db/greenplum_path.sh
[root@mdw:~]# gpssh -h smdw -e 'mkdir -p /data/master'
[smdw] mkdir -p /data/master
[root@mdw:~]# gpssh -h smdw -e 'chown gpadmin:gpadmin /data/master'
[smdw] chown gpadmin:gpadmin /data/master
[root@mdw:~]#

Creating Data Storage Areas on Segment Hosts

To create the data directory locations on all segment hosts

cat >> hostfile_gpssh_segonly << EOF
sdw1
sdw2
sdw3
sdw4
EOF
source /usr/local/greenplum-db/greenplum_path.sh 
gpssh -f hostfile_gpssh_segonly -e 'mkdir -p /data/primary'
gpssh -f hostfile_gpssh_segonly -e 'mkdir -p /data/mirror'
gpssh -f hostfile_gpssh_segonly -e 'chown -R gpadmin /data/*'
[root@mdw:~]# cat >> hostfile_gpssh_segonly << EOF
> sdw1
> sdw2
> sdw3
> sdw4
> EOF
[root@mdw:~]# source /usr/local/greenplum-db/greenplum_path.sh
[root@mdw:~]# gpssh -f hostfile_gpssh_segonly -e 'mkdir -p /data/primary'
[sdw2] mkdir -p /data/primary
[sdw1] mkdir -p /data/primary
[sdw3] mkdir -p /data/primary
[sdw4] mkdir -p /data/primary
[root@mdw:~]# gpssh -f hostfile_gpssh_segonly -e 'mkdir -p /data/mirror'
[sdw1] mkdir -p /data/mirror
[sdw4] mkdir -p /data/mirror
[sdw2] mkdir -p /data/mirror
[sdw3] mkdir -p /data/mirror
[root@mdw:~]# gpssh -f hostfile_gpssh_segonly -e 'chown -R gpadmin /data/*'
[sdw4] chown -R gpadmin /data/*
[sdw2] chown -R gpadmin /data/*
[sdw1] chown -R gpadmin /data/*
[sdw3] chown -R gpadmin /data/*
[root@mdw:~]#

Initializing a Greenplum Database System

Initializing Greenplum Database

Creating the Initialization Host File

  • The following example assumes three unbonded NICs per segment node
  • NIC bonding is recommended to create a load-balanced or fault-tolerant network
ssh gpadmin@mdw
cd ~
mkdir ~/gpconfigs
cd ~/gpconfigs
cat > hostfile_gpinitsystem << EOF
sdw1
sdw2
sdw3
sdw4
EOF

Creating the Greenplum Database Configuration File

# cp $GPHOME/docs/cli_help/gpconfigs/gpinitsystem_config ~/gpconfigs/gpinitsystem_config
cat > ~/gpconfigs/gpinitsystem_config << EOF
ARRAY_NAME="Greenplum Data Platform"
SEG_PREFIX=gpseg
PORT_BASE=6000
declare -a DATA_DIRECTORY=(/data/primary /data/primary)
MASTER_HOSTNAME=mdw.sky.local
MASTER_DIRECTORY=/data/master
MASTER_PORT=5432
TRUSTED_SHELL=ssh
CHECK_POINT_SEGMENTS=8
ENCODING=UNICODE
MIRROR_PORT_BASE=7000
declare -a MIRROR_DATA_DIRECTORY=(/data/mirror /data/mirror)
#DATABASE_NAME=name_of_database
#MACHINE_LIST_FILE=/home/gpadmin/gpconfigs/hostfile_gpinitsystem
EOF
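A quick sanity check on this config: gpinitsystem creates one primary segment per DATA_DIRECTORY entry on each host listed in hostfile_gpinitsystem, so 4 hosts with 2 entries each should yield the 8 primaries (and, with MIRROR_DATA_DIRECTORY, 8 mirrors) reported later in the gpinitsystem log. A sketch of the arithmetic:

```shell
# One primary segment per DATA_DIRECTORY entry per host in hostfile_gpinitsystem
declare -a DATA_DIRECTORY=(/data/primary /data/primary)
HOSTS=4                                # sdw1..sdw4
PER_HOST=${#DATA_DIRECTORY[@]}
TOTAL=$(( HOSTS * PER_HOST ))
echo "${TOTAL} primary segments"       # matches "Total Database segments = 8" in the log
```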

Running the Initialization Utility

To run the initialization utility

cd ~
# gpinitsystem -c gpconfigs/gpinitsystem_config -h gpconfigs/hostfile_gpinitsystem
gpinitsystem -c gpconfigs/gpinitsystem_config -h gpconfigs/hostfile_gpinitsystem -s smdw --mirror-mode=spread
[gpadmin@mdw:~/gpconfigs]$ cd ~
[gpadmin@mdw:~]$ gpinitsystem -c gpconfigs/gpinitsystem_config -h gpconfigs/hostfile_gpinitsystem -s smdw --mirror-mode=spread
20220828:18:44:11:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Checking configuration parameters, please wait...
20220828:18:44:11:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Reading Greenplum configuration file gpconfigs/gpinitsystem_config
20220828:18:44:11:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Locale has not been set in gpconfigs/gpinitsystem_config, will set to default value
20220828:18:44:11:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Locale set to en_US.utf8
20220828:18:44:11:020997 gpinitsystem:mdw:gpadmin-[INFO]:-No DATABASE_NAME set, will exit following template1 updates
20220828:18:44:11:020997 gpinitsystem:mdw:gpadmin-[INFO]:-MASTER_MAX_CONNECT not set, will set to default value 250
20220828:18:44:12:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Checking configuration parameters, Completed
20220828:18:44:12:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Commencing multi-home checks, please wait...
....
20220828:18:44:13:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Configuring build for standard array
20220828:18:44:13:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Sufficient hosts for spread mirroring request
20220828:18:44:13:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Commencing multi-home checks, Completed
20220828:18:44:13:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Building primary segment instance array, please wait...
........
20220828:18:44:18:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Building spread mirror array type , please wait...
........
20220828:18:44:22:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Checking Master host
20220828:18:44:23:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Checking new segment hosts, please wait...
................
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Checking new segment hosts, Completed
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Greenplum Database Creation Parameters
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:---------------------------------------
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Master Configuration
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:---------------------------------------
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Master instance name       = Greenplum Data Platform
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Master hostname            = mdw.sky.local
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Master port                = 5432
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Master instance dir        = /data/master/gpseg-1
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Master LOCALE              = en_US.utf8
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Greenplum segment prefix   = gpseg
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Master Database            =
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Master connections         = 250
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Master buffers             = 128000kB
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Segment connections        = 750
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Segment buffers            = 128000kB
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Checkpoint segments        = 12
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Encoding                   = UNICODE
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Postgres param file        = Off
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Initdb to be used          = /usr/local/greenplum-db-6.21.1/bin/initdb
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-GP_LIBRARY_PATH is         = /usr/local/greenplum-db-6.21.1/lib
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-HEAP_CHECKSUM is           = on
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-HBA_HOSTNAMES is           = 0
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Ulimit check               = Passed
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Array host connect type    = Single hostname per node
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Master IP address [1]      = ::1
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Master IP address [2]      = 192.168.181.231
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Master IP address [3]      = fe80::3f77:4886:8cc0:25ba
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Standby Master             = smdw
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Number of primary segments = 2
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Standby IP address         = ::1
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Standby IP address         = 192.168.181.232
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Standby IP address         = fe80::1958:6310:7a95:7422
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Standby IP address         = fe80::3f77:4886:8cc0:25ba
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Standby IP address         = fe80::7934:f85b:a866:6599
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Total Database segments    = 8
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Trusted shell              = ssh
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Number segment hosts       = 4
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Mirror port base           = 7000
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Number of mirror segments  = 2
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Mirroring config           = ON
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Mirroring type             = Spread
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:----------------------------------------
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Greenplum Primary Segment Configuration
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:----------------------------------------
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-sdw1.sky.local        6000    sdw1    /data/primary/gpseg0    2
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-sdw1.sky.local        6001    sdw1    /data/primary/gpseg1    3
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-sdw2.sky.local        6000    sdw2    /data/primary/gpseg2    4
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-sdw2.sky.local        6001    sdw2    /data/primary/gpseg3    5
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-sdw3.sky.local        6000    sdw3    /data/primary/gpseg4    6
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-sdw3.sky.local        6001    sdw3    /data/primary/gpseg5    7
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-sdw4.sky.local        6000    sdw4    /data/primary/gpseg6    8
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-sdw4.sky.local        6001    sdw4    /data/primary/gpseg7    9
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:---------------------------------------
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Greenplum Mirror Segment Configuration
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:---------------------------------------
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-sdw2.sky.local        7000    sdw2    /data/mirror/gpseg0     10
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-sdw3.sky.local        7001    sdw3    /data/mirror/gpseg1     11
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-sdw3.sky.local        7000    sdw3    /data/mirror/gpseg2     12
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-sdw4.sky.local        7001    sdw4    /data/mirror/gpseg3     13
20220828:18:44:43:020997 gpinitsystem:mdw:gpadmin-[INFO]:-sdw4.sky.local        7000    sdw4    /data/mirror/gpseg4     14
20220828:18:44:44:020997 gpinitsystem:mdw:gpadmin-[INFO]:-sdw1.sky.local        7001    sdw1    /data/mirror/gpseg5     15
20220828:18:44:44:020997 gpinitsystem:mdw:gpadmin-[INFO]:-sdw1.sky.local        7000    sdw1    /data/mirror/gpseg6     16
20220828:18:44:44:020997 gpinitsystem:mdw:gpadmin-[INFO]:-sdw2.sky.local        7001    sdw2    /data/mirror/gpseg7     17

Continue with Greenplum creation Yy|Nn (default=N):
> Y
20220828:18:44:51:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Building the Master instance database, please wait...
20220828:18:44:56:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Starting the Master in admin mode
20220828:18:44:57:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Commencing parallel build of primary segment instances
20220828:18:44:57:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Spawning parallel processes    batch [1], please wait...
........
20220828:18:44:57:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Waiting for parallel processes batch [1], please wait...
......................
20220828:18:45:19:020997 gpinitsystem:mdw:gpadmin-[INFO]:------------------------------------------------
20220828:18:45:19:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Parallel process exit status
20220828:18:45:19:020997 gpinitsystem:mdw:gpadmin-[INFO]:------------------------------------------------
20220828:18:45:19:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Total processes marked as completed           = 8
20220828:18:45:19:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Total processes marked as killed              = 0
20220828:18:45:19:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Total processes marked as failed              = 0
20220828:18:45:19:020997 gpinitsystem:mdw:gpadmin-[INFO]:------------------------------------------------
20220828:18:45:19:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Removing back out file
20220828:18:45:19:020997 gpinitsystem:mdw:gpadmin-[INFO]:-No errors generated from parallel processes
20220828:18:45:19:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Restarting the Greenplum instance in production mode
20220828:18:45:19:028443 gpstop:mdw:gpadmin-[INFO]:-Starting gpstop with args: -a -l /home/gpadmin/gpAdminLogs -m -d /data/master/gpseg-1
20220828:18:45:19:028443 gpstop:mdw:gpadmin-[INFO]:-Gathering information and validating the environment...
20220828:18:45:19:028443 gpstop:mdw:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20220828:18:45:19:028443 gpstop:mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20220828:18:45:19:028443 gpstop:mdw:gpadmin-[INFO]:-Greenplum Version: 'postgres (Greenplum Database) 6.21.1 build commit:fff63ec5cc64f2adc033fc1203afbc5fbb9ad7d9 Open Source'
20220828:18:45:19:028443 gpstop:mdw:gpadmin-[INFO]:-Commencing Master instance shutdown with mode='smart'
20220828:18:45:19:028443 gpstop:mdw:gpadmin-[INFO]:-Master segment instance directory=/data/master/gpseg-1
20220828:18:45:19:028443 gpstop:mdw:gpadmin-[INFO]:-Stopping master segment and waiting for user connections to finish ...
server shutting down
20220828:18:45:20:028443 gpstop:mdw:gpadmin-[INFO]:-Attempting forceful termination of any leftover master process
20220828:18:45:20:028443 gpstop:mdw:gpadmin-[INFO]:-Terminating processes for segment /data/master/gpseg-1
20220828:18:45:20:028466 gpstart:mdw:gpadmin-[INFO]:-Starting gpstart with args: -a -l /home/gpadmin/gpAdminLogs -d /data/master/gpseg-1
20220828:18:45:20:028466 gpstart:mdw:gpadmin-[INFO]:-Gathering information and validating the environment...
20220828:18:45:20:028466 gpstart:mdw:gpadmin-[INFO]:-Greenplum Binary Version: 'postgres (Greenplum Database) 6.21.1 build commit:fff63ec5cc64f2adc033fc1203afbc5fbb9ad7d9 Open Source'
20220828:18:45:20:028466 gpstart:mdw:gpadmin-[INFO]:-Greenplum Catalog Version: '301908232'
20220828:18:45:20:028466 gpstart:mdw:gpadmin-[INFO]:-Starting Master instance in admin mode
20220828:18:45:20:028466 gpstart:mdw:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20220828:18:45:20:028466 gpstart:mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20220828:18:45:20:028466 gpstart:mdw:gpadmin-[INFO]:-Setting new master era
20220828:18:45:20:028466 gpstart:mdw:gpadmin-[INFO]:-Master Started...
20220828:18:45:21:028466 gpstart:mdw:gpadmin-[INFO]:-Shutting down master
20220828:18:45:21:028466 gpstart:mdw:gpadmin-[INFO]:-Commencing parallel segment instance startup, please wait...
.
20220828:18:45:23:028466 gpstart:mdw:gpadmin-[INFO]:-Process results...
20220828:18:45:23:028466 gpstart:mdw:gpadmin-[INFO]:-----------------------------------------------------
20220828:18:45:23:028466 gpstart:mdw:gpadmin-[INFO]:-   Successful segment starts                                            = 8
20220828:18:45:23:028466 gpstart:mdw:gpadmin-[INFO]:-   Failed segment starts                                                = 0
20220828:18:45:23:028466 gpstart:mdw:gpadmin-[INFO]:-   Skipped segment starts (segments are marked down in configuration)   = 0
20220828:18:45:23:028466 gpstart:mdw:gpadmin-[INFO]:-----------------------------------------------------
20220828:18:45:23:028466 gpstart:mdw:gpadmin-[INFO]:-Successfully started 8 of 8 segment instances
20220828:18:45:23:028466 gpstart:mdw:gpadmin-[INFO]:-----------------------------------------------------
20220828:18:45:23:028466 gpstart:mdw:gpadmin-[INFO]:-Starting Master instance mdw.sky.local directory /data/master/gpseg-1
20220828:18:45:23:028466 gpstart:mdw:gpadmin-[INFO]:-Command pg_ctl reports Master mdw.sky.local instance active
20220828:18:45:23:028466 gpstart:mdw:gpadmin-[INFO]:-Connecting to dbname='template1' connect_timeout=15
20220828:18:45:23:028466 gpstart:mdw:gpadmin-[INFO]:-No standby master configured.  skipping...
20220828:18:45:23:028466 gpstart:mdw:gpadmin-[INFO]:-Database successfully started
20220828:18:45:23:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Completed restart of Greenplum instance in production mode
20220828:18:45:23:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Commencing parallel build of mirror segment instances
20220828:18:45:23:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Spawning parallel processes    batch [1], please wait...
........
20220828:18:45:23:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Waiting for parallel processes batch [1], please wait...
...........
20220828:18:45:34:020997 gpinitsystem:mdw:gpadmin-[INFO]:------------------------------------------------
20220828:18:45:34:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Parallel process exit status
20220828:18:45:34:020997 gpinitsystem:mdw:gpadmin-[INFO]:------------------------------------------------
20220828:18:45:34:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Total processes marked as completed           = 8
20220828:18:45:34:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Total processes marked as killed              = 0
20220828:18:45:34:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Total processes marked as failed              = 0
20220828:18:45:34:020997 gpinitsystem:mdw:gpadmin-[INFO]:------------------------------------------------
20220828:18:45:35:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Starting initialization of standby master smdw
20220828:18:45:35:030546 gpinitstandby:mdw:gpadmin-[INFO]:-Validating environment and parameters for standby initialization...
20220828:18:45:35:030546 gpinitstandby:mdw:gpadmin-[INFO]:-Checking for data directory /data/master/gpseg-1 on smdw
20220828:18:45:35:030546 gpinitstandby:mdw:gpadmin-[INFO]:------------------------------------------------------
20220828:18:45:35:030546 gpinitstandby:mdw:gpadmin-[INFO]:-Greenplum standby master initialization parameters
20220828:18:45:35:030546 gpinitstandby:mdw:gpadmin-[INFO]:------------------------------------------------------
20220828:18:45:35:030546 gpinitstandby:mdw:gpadmin-[INFO]:-Greenplum master hostname               = mdw.sky.local
20220828:18:45:35:030546 gpinitstandby:mdw:gpadmin-[INFO]:-Greenplum master data directory         = /data/master/gpseg-1
20220828:18:45:35:030546 gpinitstandby:mdw:gpadmin-[INFO]:-Greenplum master port                   = 5432
20220828:18:45:35:030546 gpinitstandby:mdw:gpadmin-[INFO]:-Greenplum standby master hostname       = smdw
20220828:18:45:35:030546 gpinitstandby:mdw:gpadmin-[INFO]:-Greenplum standby master port           = 5432
20220828:18:45:35:030546 gpinitstandby:mdw:gpadmin-[INFO]:-Greenplum standby master data directory = /data/master/gpseg-1
20220828:18:45:35:030546 gpinitstandby:mdw:gpadmin-[INFO]:-Greenplum update system catalog         = On
20220828:18:45:35:030546 gpinitstandby:mdw:gpadmin-[INFO]:-Syncing Greenplum Database extensions to standby
20220828:18:45:36:030546 gpinitstandby:mdw:gpadmin-[INFO]:-The packages on smdw are consistent.
20220828:18:45:36:030546 gpinitstandby:mdw:gpadmin-[INFO]:-Adding standby master to catalog...
20220828:18:45:36:030546 gpinitstandby:mdw:gpadmin-[INFO]:-Database catalog updated successfully.
20220828:18:45:36:030546 gpinitstandby:mdw:gpadmin-[INFO]:-Updating pg_hba.conf file...
20220828:18:45:38:030546 gpinitstandby:mdw:gpadmin-[INFO]:-pg_hba.conf files updated successfully.
20220828:18:45:39:030546 gpinitstandby:mdw:gpadmin-[INFO]:-Starting standby master
20220828:18:45:39:030546 gpinitstandby:mdw:gpadmin-[INFO]:-Checking if standby master is running on host: smdw  in directory: /data/master/gpseg-1
20220828:18:45:43:030546 gpinitstandby:mdw:gpadmin-[INFO]:-Cleaning up pg_hba.conf backup files...
20220828:18:45:44:030546 gpinitstandby:mdw:gpadmin-[INFO]:-Backup files of pg_hba.conf cleaned up successfully.
20220828:18:45:44:030546 gpinitstandby:mdw:gpadmin-[INFO]:-Successfully created standby master on smdw
20220828:18:45:44:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Successfully completed standby master initialization
20220828:18:45:44:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Scanning utility log file for any warning messages
20220828:18:45:44:020997 gpinitsystem:mdw:gpadmin-[WARN]:-*******************************************************
20220828:18:45:44:020997 gpinitsystem:mdw:gpadmin-[WARN]:-Scan of log file indicates that some warnings or errors
20220828:18:45:44:020997 gpinitsystem:mdw:gpadmin-[WARN]:-were generated during the array creation
20220828:18:45:44:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Please review contents of log file
20220828:18:45:44:020997 gpinitsystem:mdw:gpadmin-[INFO]:-/home/gpadmin/gpAdminLogs/gpinitsystem_20220828.log
20220828:18:45:44:020997 gpinitsystem:mdw:gpadmin-[INFO]:-To determine level of criticality
20220828:18:45:44:020997 gpinitsystem:mdw:gpadmin-[INFO]:-These messages could be from a previous run of the utility
20220828:18:45:44:020997 gpinitsystem:mdw:gpadmin-[INFO]:-that was called today!
20220828:18:45:44:020997 gpinitsystem:mdw:gpadmin-[WARN]:-*******************************************************
20220828:18:45:44:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Greenplum Database instance successfully created
20220828:18:45:44:020997 gpinitsystem:mdw:gpadmin-[INFO]:-------------------------------------------------------
20220828:18:45:44:020997 gpinitsystem:mdw:gpadmin-[INFO]:-To complete the environment configuration, please
20220828:18:45:44:020997 gpinitsystem:mdw:gpadmin-[INFO]:-update gpadmin .bashrc file with the following
20220828:18:45:44:020997 gpinitsystem:mdw:gpadmin-[INFO]:-1. Ensure that the greenplum_path.sh file is sourced
20220828:18:45:44:020997 gpinitsystem:mdw:gpadmin-[INFO]:-2. Add "export MASTER_DATA_DIRECTORY=/data/master/gpseg-1"
20220828:18:45:44:020997 gpinitsystem:mdw:gpadmin-[INFO]:-   to access the Greenplum scripts for this instance:
20220828:18:45:44:020997 gpinitsystem:mdw:gpadmin-[INFO]:-   or, use -d /data/master/gpseg-1 option for the Greenplum scripts
20220828:18:45:44:020997 gpinitsystem:mdw:gpadmin-[INFO]:-   Example gpstate -d /data/master/gpseg-1
20220828:18:45:44:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Script log file = /home/gpadmin/gpAdminLogs/gpinitsystem_20220828.log
20220828:18:45:44:020997 gpinitsystem:mdw:gpadmin-[INFO]:-To remove instance, run gpdeletesystem utility
20220828:18:45:44:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Standby Master smdw has been configured
20220828:18:45:44:020997 gpinitsystem:mdw:gpadmin-[INFO]:-To activate the Standby Master Segment in the event of Master
20220828:18:45:44:020997 gpinitsystem:mdw:gpadmin-[INFO]:-failure review options for gpactivatestandby
20220828:18:45:44:020997 gpinitsystem:mdw:gpadmin-[INFO]:-------------------------------------------------------
20220828:18:45:44:020997 gpinitsystem:mdw:gpadmin-[INFO]:-The Master /data/master/gpseg-1/pg_hba.conf post gpinitsystem
20220828:18:45:44:020997 gpinitsystem:mdw:gpadmin-[INFO]:-has been configured to allow all hosts within this new
20220828:18:45:44:020997 gpinitsystem:mdw:gpadmin-[INFO]:-array to intercommunicate. Any hosts external to this
20220828:18:45:44:020997 gpinitsystem:mdw:gpadmin-[INFO]:-new array must be explicitly added to this file
20220828:18:45:44:020997 gpinitsystem:mdw:gpadmin-[INFO]:-Refer to the Greenplum Admin support guide which is
20220828:18:45:44:020997 gpinitsystem:mdw:gpadmin-[INFO]:-located in the /usr/local/greenplum-db-6.21.1/docs directory
20220828:18:45:44:020997 gpinitsystem:mdw:gpadmin-[INFO]:-------------------------------------------------------
[gpadmin@mdw:~]$

Start

Restart

gpstop -r

Apply pg_hba.conf (reload configuration without a restart)

gpstop -u

Stop

gpstop -M fast

Start

gpstart -a

Recovery

ssh gpadmin@smdw
cat > ${MASTER_DATA_DIRECTORY}/recovery.conf << EOF
standby_mode = 'on'
primary_conninfo = 'user=gpadmin host=mdw.sky.local port=5432 sslmode=prefer sslcompression=1 krbsrvname=postgres application_name=gp_walreceiver'
EOF
gpactivatestandby
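Before touching the real standby, the heredoc above can be dry-run against a scratch directory to confirm the generated recovery.conf. The scratch path below is an assumption for illustration; on the actual standby, MASTER_DATA_DIRECTORY points at the real data directory (e.g. /data/master/gpseg-1).

```shell
# Dry run of the recovery.conf heredoc against a scratch directory.
# On the real standby, MASTER_DATA_DIRECTORY is the actual data
# directory; here it is a throwaway temp dir.
MASTER_DATA_DIRECTORY=$(mktemp -d)

cat > ${MASTER_DATA_DIRECTORY}/recovery.conf << EOF
standby_mode = 'on'
primary_conninfo = 'user=gpadmin host=mdw.sky.local port=5432 sslmode=prefer sslcompression=1 krbsrvname=postgres application_name=gp_walreceiver'
EOF

# Both required settings should be present in the generated file
grep -q "standby_mode = 'on'" ${MASTER_DATA_DIRECTORY}/recovery.conf && echo "standby_mode OK"
grep -q "application_name=gp_walreceiver" ${MASTER_DATA_DIRECTORY}/recovery.conf && echo "primary_conninfo OK"
```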

reboot

for i in {6..1} ; do ssh 192.168.181.23${i} "reboot" ; done

shutdown

for i in {6..1} ; do ssh 192.168.181.23${i} "shutdown -h now" ; done
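Note the reversed brace expansion `{6..1}`: hosts are handled from .236 down to .231, presumably so the master host goes down last. Substituting `echo` for `ssh` gives a harmless dry run of the ordering:

```shell
# Dry run: print the shutdown commands in the order they would run.
# The reversed range {6..1} means 192.168.181.236 is handled first
# and 192.168.181.231 last.
for i in {6..1} ; do echo ssh 192.168.181.23${i} "shutdown -h now" ; done
```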


At work I needed to run Java on our Synology NAS for a test,

but Java is no longer supported from DSM 7 onward, so I'm writing down the workaround.

Our office Synology NAS was upgraded to DSM 7.1 a little while ago.

 

Package Center > Settings
Package Sources > Add

Name: Java

Location: https://get.filebot.net/syno/ 

OK

Add

 

The source added under Settings
Community > install Java Installer
Accept the user agreement
Installing...
Installation complete
Verify from the Terminal


While showering before work on July 18, 2022, a thought suddenly came to me.

Until last year, my goal was to keep our CEO from having to code.
Then around May this year, as a project for another company was winding down, he sat down at the seat to my right and said:
"Coding again after so long made me feel alive."
At the time I brushed it off with "fair enough," but in the shower today those words flashed through my mind and seemed to strike the right side of my brain.

He is someone I have watched for all eight years of my life as a developer...
The first person who ever showed me, from the next seat over, what a real developer looks like while coding...
It struck me that for him, coding was work and "happy coding" at the same time.

At sixteen I gave up majoring in violin and it became a hobby; my next hobby was coding.
When I decided to change careers and was studying, I met my current CEO, and coding became my profession as well.
Come to think of it, at some point I too began separating coding at work from coding at home.
I was doing happy coding too.
And yet I was trying to keep a person like that from coding...
It felt like I was sharpening a knife for whoever will come behind me.

From now on, I've decided to work for our CEO's "happy coding."
Because that way, I'll be able to keep doing happy coding too...


Apache Doris Installation

[BigData] 2022. 7. 15. 23:33

Apache Doris Installation

Apache Doris

Preparation

templatePath: E:\vm\linux\template79
displayName: dm1
hostname: dm1.sky.local
path: E:\vm\linux\doris
description: frontend
ip: 192.168.181.231
numvcpus: 2
coresPerSocket: 2
memsize: 4096
---
templatePath: E:\vm\linux\template79
displayName: dm2
hostname: dm2.sky.local
path: F:\vm\linux\doris
description: frontend
ip: 192.168.181.232
numvcpus: 2
coresPerSocket: 2
memsize: 4096
---
templatePath: E:\vm\linux\template79
displayName: dm3
hostname: dm3.sky.local
path: E:\vm\linux\doris
description: frontend
ip: 192.168.181.233
numvcpus: 2
coresPerSocket: 2
memsize: 4096
---
templatePath: E:\vm\linux\template79
displayName: dw1
hostname: dw1.sky.local
path: F:\vm\linux\doris
description: backend
ip: 192.168.181.234
numvcpus: 2
coresPerSocket: 2
memsize: 4096
---
templatePath: E:\vm\linux\template79
displayName: dw2
hostname: dw2.sky.local
path: E:\vm\linux\doris
description: backend
ip: 192.168.181.235
numvcpus: 2
coresPerSocket: 2
memsize: 4096
---
templatePath: E:\vm\linux\template79
displayName: dw3
hostname: dw3.sky.local
path: F:\vm\linux\doris
description: backend
ip: 192.168.181.236
numvcpus: 2
coresPerSocket: 2
memsize: 4096
java -jar E:\vm\CopyVMWare-1.1.0.jar `
 --force `
 --yaml E:\vm\conf\doris.yaml
cat >> /etc/hosts <<EOF

# Apache Doris
192.168.181.231    dm1.sky.local    dm1
192.168.181.232    dm2.sky.local    dm2
192.168.181.233    dm3.sky.local    dm3
192.168.181.234    dw1.sky.local    dw1
192.168.181.235    dw2.sky.local    dw2
192.168.181.236    dw3.sky.local    dw3
EOF
cat >> /etc/bashrc <<EOF

export JAVA_HOME=/usr/lib/jvm/java
export PATH=\${JAVA_HOME}/bin:\${PATH}
EOF

. /etc/bashrc

# yum install -y sshpass
# ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
# export SSHPASS="PASSWORD"

# for i in {1..1} ; do sshpass -e ssh -o StrictHostKeyChecking=no root@192.168.181.23${i} "mkdir -p ~/.ssh ; chmod 700 ~/.ssh ; touch ~/.ssh/authorized_keys ; echo '$(cat ~/.ssh/id_rsa.pub)' >> ~/.ssh/authorized_keys ; chmod 600 ~/.ssh/authorized_keys" ; done
# for i in {2..8} ; do sshpass -e ssh -o StrictHostKeyChecking=no root@192.168.181.23${i} "rm -rf ~/.ssh ; mkdir -p ~/.ssh ; chmod 700 ~/.ssh ; touch ~/.ssh/authorized_keys ; echo '$(cat ~/.ssh/id_rsa.pub)' >> ~/.ssh/authorized_keys ; chmod 600 ~/.ssh/authorized_keys ; echo SUCCESS" ; done

for i in {1..3} ; do echo dm${i}.sky.local  ; done | xargs -P 2 -I {} ssh {} -o StrictHostKeyChecking=no "hostname"
for i in {1..3} ; do echo dw${i}.sky.local  ; done | xargs -P 5 -I {} ssh {} -o StrictHostKeyChecking=no "hostname"
for i in {1..3} ; do echo dm${i}  ; done | xargs -P 2 -I {} ssh {} -o StrictHostKeyChecking=no "hostname"
for i in {1..3} ; do echo dw${i}  ; done | xargs -P 5 -I {} ssh {} -o StrictHostKeyChecking=no "hostname"

for i in {2..6} ; do echo 192.168.181.23${i} ; done | xargs -P 7 -I {} scp /etc/{bashrc,hosts} {}:/etc
for i in {1..6} ; do echo 192.168.181.23${i} ; done | xargs -P 4 -I {} ssh {} "yum install -y net-tools gcc* git vim wget zip unzip tar curl dstat ntp java-1.8.0-openjdk-devel"
for i in {1..6} ; do echo 192.168.181.23${i} ; done | xargs -P 8 -I {} ssh {} "systemctl stop firewalld && systemctl disable firewalld ; systemctl enable ntpd ; systemctl start ntpd ; ntpq -p"
for i in {1..6} ; do echo 192.168.181.23${i} ; done | xargs -P 8 -I {} ssh {} "ntpq -p"
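The `for … | xargs -P N -I {}` idiom used throughout fans one command out to every host, up to N at a time. A local sketch with `echo` standing in for `ssh` shows the substitution without contacting any host:

```shell
# Local sketch of the parallel fan-out idiom: the loop emits one host
# per line, and xargs substitutes each line for {} while running up to
# 4 jobs concurrently. echo stands in for ssh here.
for i in {1..6} ; do echo 192.168.181.23${i} ; done \
  | xargs -P 4 -I {} echo "would run: ssh {} hostname"
```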

kernel

cat >> /etc/security/limits.conf << EOF

# 20220715 hskimsky for apache doris
* soft nofile 65536
* hard nofile 65536
EOF
for i in {2..6} ; do echo 192.168.181.23${i} ; done | xargs -P 7 -I {} scp /etc/security/limits.conf {}:/etc/security
cat >> /etc/sysctl.conf << EOF

# 20220715 hskimsky for apache doris
vm.swappiness = 0
EOF
for i in {2..6} ; do echo 192.168.181.23${i} ; done | xargs -P 7 -I {} scp /etc/sysctl.conf {}:/etc

deployment

  • Only one FE instance per node
  • Multiple BE instances can be deployed on one node
  • FE disk usage is a few hundred MB to a few GB
  • BE disks store the user data
    • total user data * 3, plus an extra 40% of space for intermediate data
  • Clock skew across all FE servers must stay within 5 seconds
  • swap must be disabled on every node
  • FE roles
    • leader: elected from the follower group
    • follower
    • observer
    • online services: 3 followers + 1~3 observers
    • offline services: 1 follower + 1~3 observers
  • broker
    • a process for accessing external data sources such as HDFS
    • deployed on each machine
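As a worked example of the BE sizing rule above (user data × 3 replicas, plus roughly 40% headroom for intermediate data) — the 2 TB figure below is hypothetical, not from the post:

```shell
# BE disk estimate for a hypothetical 2 TB of raw user data:
# 3x replication, then ~40% extra for intermediate data.
raw_tb=2
replicated_tb=$(( raw_tb * 3 ))                       # 6 TB after replication
total_tb=$(awk "BEGIN { print ${replicated_tb} * 1.4 }")   # 8.4 TB with headroom
echo "plan for about ${total_tb} TB of BE disk for ${raw_tb} TB of user data"
```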
for i in {1..6} ; do echo 192.168.181.23${i} ; done | xargs -P 7 -I {} ssh {} "mkdir -p ~/Downloads/doris"
cd ~/Downloads/doris
wget https://dist.apache.org/repos/dist/release/doris/1.0/1.0.0-incubating/apache-doris-1.0.0-incubating-bin.tar.gz
for i in {2..6} ; do echo 192.168.181.23${i} ; done | xargs -P 7 -I {} scp ~/Downloads/doris/apache-doris-1.0.0-incubating-bin.tar.gz {}:~/Downloads/doris

for i in {1..6} ; do echo 192.168.181.23${i} ; done | xargs -P 7 -I {} ssh {} "mkdir -p /opt/doris"
for i in {1..6} ; do echo 192.168.181.23${i} ; done | xargs -P 7 -I {} ssh {} "tar zxf ~/Downloads/doris/apache-doris-1.0.0-incubating-bin.tar.gz -C /opt/doris"
for i in {1..6} ; do echo 192.168.181.23${i} ; done | xargs -P 7 -I {} ssh {} "chown -R root:root /opt/doris/apache-doris-1.0.0-incubating-bin"
for i in {1..6} ; do echo 192.168.181.23${i} ; done | xargs -P 7 -I {} ssh {} "cd /opt/doris ; ln -s apache-doris-1.0.0-incubating-bin default"

cat >> /opt/doris/default/fe/conf/fe.conf << EOF

# 20220715 hskimsky for apache doris
priority_networks = 192.168.0.0/16
EOF
for i in {2..6} ; do echo 192.168.181.23${i} ; done | xargs -P 7 -I {} scp /opt/doris/default/fe/conf/fe.conf {}:/opt/doris/default/fe/conf

cat > /etc/profile.d/doris.sh << EOF
export DORIS_HOME=/opt/doris/default
export PATH=\${PATH}:\${DORIS_HOME}/fe/bin:\${DORIS_HOME}/be/bin
EOF
for i in {2..6} ; do echo 192.168.181.23${i} ; done | xargs -P 7 -I {} scp /etc/profile.d/doris.sh {}:/etc/profile.d

# save and source
source /etc/profile.d/doris.sh
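The effect of the profile script can be checked before rolling it out: write the same file to a temp path, source it in a subshell, and confirm PATH picked up the fe/be bin directories (a sanity-check sketch, using the same paths as the post):

```shell
# Sanity check: write the same doris.sh to a temp file, source it in a
# subshell, and confirm DORIS_HOME and PATH are set as expected.
tmp=$(mktemp)
cat > ${tmp} << EOF
export DORIS_HOME=/opt/doris/default
export PATH=\${PATH}:\${DORIS_HOME}/fe/bin:\${DORIS_HOME}/be/bin
EOF
(
  . ${tmp}
  echo "DORIS_HOME=${DORIS_HOME}"
  case ":${PATH}:" in
    *:/opt/doris/default/fe/bin:*) echo "fe/bin on PATH" ;;
  esac
)
```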

Start

ssh 192.168.181.231 "/opt/doris/default/fe/bin/start_fe.sh --daemon"
ssh 192.168.181.232 "/opt/doris/default/fe/bin/start_fe.sh --daemon --helper 192.168.181.231:9010"
ssh 192.168.181.233 "/opt/doris/default/fe/bin/start_fe.sh --daemon --helper 192.168.181.231:9010"
ssh 192.168.181.234 "/opt/doris/default/be/bin/start_be.sh --daemon"
ssh 192.168.181.235 "/opt/doris/default/be/bin/start_be.sh --daemon"
ssh 192.168.181.236 "/opt/doris/default/be/bin/start_be.sh --daemon"
for i in {1..6} ; do ssh 192.168.181.23${i} "hostname ; /opt/doris/default/apache_hdfs_broker/bin/start_broker.sh --daemon" ; done

The initial Frontend Web UI appears; clicking OK moves on to the login screen

Initial Frontend Web UI

Login screen

  • Username: root
  • Password:

Login UI

First screen after login

First screen

Add all BE nodes to FE

mkdir ~/Downloads/MySQL-5
cd ~/Downloads/MySQL-5
wget https://dev.mysql.com/get/Downloads/MySQL-5.7/mysql-5.7.38-linux-glibc2.12-x86_64.tar.gz
tar zxf mysql-5.7.38-linux-glibc2.12-x86_64.tar.gz
rm -f mysql-5.7.38-linux-glibc2.12-x86_64.tar.gz
ln -s mysql-5.7.38-linux-glibc2.12-x86_64/ default

add nodes

ALTER SYSTEM ADD FOLLOWER "follower_host:edit_log_port"
ALTER SYSTEM ADD OBSERVER "observer_host:edit_log_port"
cat > 192.168.181.231.sql << EOF
ALTER SYSTEM ADD FOLLOWER "192.168.181.232:9010";
ALTER SYSTEM ADD OBSERVER "192.168.181.233:9010";
ALTER SYSTEM ADD BACKEND  "192.168.181.234:9050";
ALTER SYSTEM ADD BACKEND  "192.168.181.235:9050";
ALTER SYSTEM ADD BACKEND  "192.168.181.236:9050";
ALTER SYSTEM ADD BROKER hdfs
 "192.168.181.231:8000"
,"192.168.181.232:8000"
,"192.168.181.233:8000"
,"192.168.181.234:8000"
,"192.168.181.235:8000"
,"192.168.181.236:8000";
EOF
~/Downloads/MySQL-5/default/bin/mysql -h 192.168.181.231 -P 9030 -uroot < 192.168.181.231.sql
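Since the BACKEND entries only differ in the last octet, that part of the SQL file can also be generated with a loop instead of written by hand (a convenience sketch producing the same statements for dw1..dw3):

```shell
# Generate the ADD BACKEND statements for the three backend hosts
# (.234-.236) with a loop rather than typing them out.
for i in {4..6} ; do
  echo "ALTER SYSTEM ADD BACKEND  \"192.168.181.23${i}:9050\";"
done
```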

Frontend list after adding the Follower and Observer

Frontend List

stop-all

for i in {1..3} ; do echo 192.168.181.23${i} ; done | xargs -P 7 -I {} ssh {} "/opt/doris/default/fe/bin/stop_fe.sh"
for i in {4..6} ; do echo 192.168.181.23${i} ; done | xargs -P 7 -I {} ssh {} "/opt/doris/default/be/bin/stop_be.sh"
for i in {1..6} ; do echo 192.168.181.23${i} ; done | xargs -P 7 -I {} ssh {} "/opt/doris/default/apache_hdfs_broker/bin/stop_broker.sh"

reboot

for i in {6..1} ; do ssh 192.168.181.23${i} "reboot" ; done

shutdown

for i in {6..1} ; do ssh 192.168.181.23${i} "shutdown -h now" ; done


In PowerShell, change the VPN connection's SplitTunneling setting from false to true, then reconnect.

PS C:\Users\hskim> Get-VpnConnection                                                    


Name                  : VPN_NAME
ServerAddress         : XXX.XXX.XXX.XXX
AllUserConnection     : False
Guid                  : {12345678-ABCD-1234-ABCD-1234567890AB}
TunnelType            : L2tp
AuthenticationMethod  : {MsChapv2}
EncryptionLevel       : Optional
L2tpIPsecAuth         : Psk
UseWinlogonCredential : False
EapConfigXmlStream    :
ConnectionStatus      : Disconnected
RememberCredential    : True
SplitTunneling        : False
DnsSuffix             :
IdleDisconnectSeconds : 0


PS C:\Users\hskim> Set-VpnConnection -name VPN_NAME -splitTunneling $true
PS C:\Users\hskim> Get-VpnConnection


Name                  : VPN_NAME
ServerAddress         : XXX.XXX.XXX.XXX
AllUserConnection     : False
Guid                  : {12345678-ABCD-1234-ABCD-1234567890AB}
TunnelType            : L2tp
AuthenticationMethod  : {MsChapv2}
EncryptionLevel       : Optional
L2tpIPsecAuth         : Psk
UseWinlogonCredential : False
EapConfigXmlStream    :
ConnectionStatus      : Disconnected
RememberCredential    : True
SplitTunneling        : True
DnsSuffix             :
IdleDisconnectSeconds : 0



PS C:\Users\hskim> 