tonglin0325's Personal Homepage

The Adapter Design Pattern in Java

In the adapter design pattern, an interface is first implemented by an abstract class (usually called the adapter class, such as the WindowAdapter below), which provides implementations for all of the interface's methods, but with empty method bodies. Subclasses then extend this abstract class directly and selectively override only the methods they need.

interface Window {                      // the Window interface defines the window operations
    public void open();                 // open the window
    public void close();                // close the window
    public void activated();            // activate the window
    public void iconified();            // minimize the window
    public void deiconified();          // restore the window size
}

abstract class WindowAdapter implements Window {    // abstract adapter class: implements the interface, but every method body is empty
    public void open() {}
    public void close() {}
    public void activated() {}
    public void iconified() {}
    public void deiconified() {}
}

class WindowImp1 extends WindowAdapter {            // subclasses extend WindowAdapter and override only the methods they need
    public void open() {
        System.out.println("window opened");
    }

    public void close() {
        System.out.println("window closed");
    }
}

public class Adapter_demo {

    public static void main(String[] args) {
        Window win = new WindowImp1();              // instantiate through the interface type
        win.open();
        win.close();
    }

}
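Since Java 8, the same effect can be achieved without a separate adapter class, because interfaces may declare default methods with empty bodies. A minimal sketch (the Window2 and WindowImp2 names are illustrative, not part of the original example):

```java
// Java 8+: the interface itself supplies empty default implementations,
// so implementing classes override only what they need.
interface Window2 {
    default void open() {}
    default void close() {}
    default void activated() {}
    default void iconified() {}
    default void deiconified() {}
}

class WindowImp2 implements Window2 {
    @Override
    public void open() {
        System.out.println("window opened");    // override only open()
    }
}

public class DefaultMethodDemo {
    public static void main(String[] args) {
        Window2 win = new WindowImp2();
        win.open();     // prints "window opened"
        win.close();    // inherited empty default body: does nothing
    }
}
```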

Full text >>

Custom Exception Classes in Java

You can define your own exception classes as needed; a custom exception class only has to extend the Exception class.

class MyException extends Exception {       // custom exception class extending Exception
    public MyException(String msg) {        // the constructor accepts the exception message
        super(msg);                         // pass it to the superclass constructor
    }
}


// Main class
// Function : MyException_demo
public class MyException_demo {

    public static void main(String[] args) {
        try {
            throw new MyException("custom exception");  // throw the exception
        } catch (Exception e) {                         // handle the exception
            System.out.println(e);
        }

    }

}
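If the exception should be unchecked, it can extend RuntimeException instead, and the constructor can also forward the underlying cause. A sketch (the class and message names are illustrative, not from the original):

```java
// Unchecked variant: extending RuntimeException means callers are not forced
// to declare or catch it, and the cause is preserved for the stack trace.
class MyRuntimeException extends RuntimeException {
    public MyRuntimeException(String msg, Throwable cause) {
        super(msg, cause);      // forward both the message and the cause
    }
}

public class UncheckedDemo {
    public static void main(String[] args) {
        try {
            try {
                Integer.parseInt("not a number");
            } catch (NumberFormatException e) {
                throw new MyRuntimeException("bad input", e);   // wrap and rethrow
            }
        } catch (MyRuntimeException e) {
            System.out.println(e.getMessage());                           // bad input
            System.out.println(e.getCause().getClass().getSimpleName());  // NumberFormatException
        }
    }
}
```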

Full text >>

k8s Study Notes: Basic Commands

1. Enter a pod: get an interactive TTY and run /bin/bash
kubectl exec -it <pod-name> -n <namespace> -- /bin/bash

Reference: k8s command operations

2. Create a namespace

kubectl create ns xxxx

3. List pods in all namespaces

kubectl get pod -A

List pods in a specific namespace

kubectl get pod -n kube-system

List all namespaces

kubectl get namespace
NAME              STATUS   AGE
default           Active   4d
kube-node-lease   Active   4d
kube-public       Active   4d
kube-system       Active   4d

List all services

kubectl get svc -n kube-system
NAME                                    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE
chart-1645713368-kubernetes-dashboard   NodePort    10.109.3.120   <none>        443:31392/TCP            4d
kube-dns                                ClusterIP   10.96.0.10     <none>        53/UDP,53/TCP,9153/TCP   4d

Edit a service

kubectl edit svc kubernetes-dashboard -n kube-system

List all deployments. Note that deleting a deployment also deletes its pods automatically.

kubectl get deployment -A

List all secrets

kubectl get secrets -A

List all k8s nodes

kubectl get nodes --show-labels

List all k8s roles

kubectl get role -A
NAMESPACE              NAME                                             CREATED AT
kube-public            kubeadm:bootstrap-signer-clusterinfo             2022-02-24T13:55:25Z
kube-public            system:controller:bootstrap-signer               2022-02-24T13:55:23Z
kube-system            extension-apiserver-authentication-reader        2022-02-24T13:55:23Z
kube-system            kube-proxy                                       2022-02-24T13:55:25Z
kube-system            kubeadm:kubelet-config-1.21                      2022-02-24T13:55:24Z
kube-system            kubeadm:nodes-kubeadm-config                     2022-02-24T13:55:24Z
kube-system            system::leader-locking-kube-controller-manager   2022-02-24T13:55:23Z
kube-system            system::leader-locking-kube-scheduler            2022-02-24T13:55:23Z
kube-system            system:controller:bootstrap-signer               2022-02-24T13:55:23Z
kube-system            system:controller:cloud-provider                 2022-02-24T13:55:23Z
kube-system            system:controller:token-cleaner                  2022-02-24T13:55:23Z
kube-system            system:persistent-volume-provisioner             2022-02-24T13:55:27Z
kubernetes-dashboard   kubernetes-dashboard                             2022-03-02T16:10:23Z

List all serviceaccounts

kubectl get serviceaccount -A

Full text >>

Installing Multiple Python Versions on a Mac

1. Install pyenv

brew install pyenv

2. Check that the installation succeeded

pyenv -v
pyenv 2.0.6

3. Install Python 3.8.10, 2.7.15, and miniconda3-4.7.12

pyenv install 3.8.10
pyenv install 2.7.15
pyenv install miniconda3-4.7.12

List the versions available to install

pyenv install --list

4. List the installed Python versions

pyenv versions

  system
  2.7.15
  3.7.10
* 3.8.10 (set by /Users/lintong/.python-version)
  miniconda3-4.7.12

You can then pick one of the installed interpreters.

5. Switch the interpreter for a directory

pyenv local 3.8.10   # switch for the current directory and its subdirectories
python -V            # verify the switch worked
pyenv local --unset  # remove the local setting

Switch the interpreter globally

pyenv global 3.8.10  # switching globally is not recommended
python -V            # verify the switch worked
pyenv global system  # switch back to the system version

Switch the interpreter for the current shell

pyenv shell 3.8.10   # switch for the current shell session
python -V            # verify the switch worked
pyenv shell --unset  # remove the shell setting

If switching does not take effect, add the following to ~/.bash_profile:

# pyenv
export PYENV_ROOT="$HOME/.pyenv"
export PATH="$PYENV_ROOT/shims:$PATH"
if command -v pyenv 1>/dev/null 2>&1; then
    eval "$(pyenv init -)"
fi

6. Use the USTC mirror with pip install

pip install -r ./requirements.txt -i https://pypi.mirrors.ustc.edu.cn/simple

Or use the Tsinghua mirror

pip install -i https://pypi.tuna.tsinghua.edu.cn/simple some-package

Reference: PyPI mirror usage help

7. To create virtual environments, first install pyenv-virtualenv

brew install pyenv-virtualenv

8. Create and delete a virtualenv

pyenv virtualenv 3.8.10 env3.8.10
pyenv uninstall env3.8.10

Full text >>

Parsing SQL with the Impala Parser

Impala's SQL dialect differs slightly from native Hive's. Although Hive's parser can partially handle Impala queries, Impala's parser is built with flex (the Fast Lexical Analyzer Generator) and Java CUP (Java Constructor of Useful Parsers, a parser-generator tool), so when parsing Impala queries it is best to use Impala's native parser.

1. On a machine where Impala is installed, find the impala-frontend jar (the Impala version in this environment is 2.12.0+cdh5.15.1+0)

lintong@master:/opt/cloudera/parcels/CDH/jars$ ls | grep impala-frontend
impala-frontend-0.1-SNAPSHOT.jar

2. Install it into the local Maven repository with mvn install, or upload it to a private repository

mvn install:install-file -Dfile=/home/lintong/下载/impala-frontend-0.1-SNAPSHOT.jar -DgroupId=org.apache.impala -DartifactId=impala-frontend -Dversion=0.1-SNAPSHOT -Dpackaging=jar

3. Add impala-frontend and java-cup to the project; the java-cup version can be confirmed by opening the impala-frontend jar with a decompiler

<dependency>
    <groupId>org.apache.impala</groupId>
    <artifactId>impala-frontend</artifactId>
    <version>0.1-SNAPSHOT</version>
</dependency>
<dependency>
    <groupId>net.sourceforge.czt.dev</groupId>
    <artifactId>java-cup</artifactId>
    <version>0.11-a-czt02-cdh</version>
</dependency>

If parsing a select statement throws the following error:

java.lang.NoClassDefFoundError: org/apache/sentry/core/model/db/DBModelAction

	at org.apache.impala.analysis.TableRef.<init>(TableRef.java:138)
	at org.apache.impala.analysis.CUP$SqlParser$actions.case421(SqlParser.java:18035)
	at org.apache.impala.analysis.CUP$SqlParser$actions.CUP$SqlParser$do_action(SqlParser.java:5976)
	at org.apache.impala.analysis.SqlParser.do_action(SqlParser.java:1349)
	at java_cup.runtime.lr_parser.parse(lr_parser.java:587)
	at com.xxxx.xx.core.parser.XXXXTest.getLineageInfo(XXXXTest.java:41)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
	at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
	at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:69)
	at com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:33)
	at com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:220)
	at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:53)
Caused by: java.lang.ClassNotFoundException: org.apache.sentry.core.model.db.DBModelAction
	at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	... 28 more


Process finished with exit code 255

Add the following to the pom:

<dependency>
    <groupId>org.apache.sentry</groupId>
    <artifactId>sentry-core-model-db</artifactId>
    <version>1.5.1-cdh5.15.1</version>
</dependency>

4. Refer to the parser demo in the Impala source code

https://github.com/cloudera/Impala/blob/master/fe/src/test/java/com/cloudera/impala/analysis/ParserTest.java

Parsing select, create kudu table, and other statements:

import org.apache.impala.analysis.*;
import java.io.StringReader;


String impalaSelectQuery = "SELECT `ds` FROM `db1`.`table1` WHERE (`ds`='test') OR (`ds`='2020-08-02') OR (`ds`='2020-08-01') LIMIT 100"; // select statement
String hiveSelectQuery = "select city,array_contains(city, 'Paris') from default.arraydemo limit 5";
String kuduCreateTableQuery = "CREATE TABLE `db1`.`my_first_table`\n" +
        "(\n" +
        "  id BIGINT,\n" +
        "  name STRING,\n" +
        "  PRIMARY KEY(id)\n" +
        ")\n" +
        "PARTITION BY HASH PARTITIONS 16\n" +
        "STORED AS KUDU\n" +
        "TBLPROPERTIES (\n" +
        "  'kudu.master_addresses' = 'hadoop01:7051,hadoop02:7051,hadoop03:7051', \n" +
        "  'kudu.table_name' = 'my_first_table'\n" +
        ");"; // kudu create-table statement
String invalidQuery = "INVALIDATE METADATA db1.tb1"; // metadata invalidation statement
String refreshQuery = "REFRESH db1.tb1 partition(ds='2021-05-02')"; // metadata refresh statement
String computeQuery = "COMPUTE INCREMENTAL STATS db1.tb1"; // compute stats statement
String describeQuery = "Describe db1.tb1;"; // describe statement
String renameQuery = "ALTER TABLE my_db.customers RENAME TO my_db.users;"; // rename statement
String addColQuery = "ALTER TABLE db1.tb1 ADD COLUMNS (col1 string)"; // add col statement
String alterColQuery = "ALTER TABLE db1.tb1 CHANGE col1 col2 bigint"; // alter col statement
String setQuery = "set mem_limit = 5gb";
String useQuery = "use default";
String query = impalaSelectQuery;
SqlScanner input = new SqlScanner(new StringReader(query));
SqlParser parser = new SqlParser(input);
ParseNode node = null;
try {
    node = (ParseNode) parser.parse().value;
    if (node instanceof SelectStmt) {
        System.out.println("query statement"); // with statements also count as query statements
        SelectStmt selectStmt = (SelectStmt) node;
        String databaseName = selectStmt.getTableRefs().get(0).getPath().get(0);
        String tableName = selectStmt.getTableRefs().get(0).getPath().get(1);
        System.out.println(databaseName);
        System.out.println(tableName);
    } else if (node instanceof CreateTableStmt) {
        System.out.println("create table statement");
        CreateTableStmt createTableStmt = (CreateTableStmt) node;
        System.out.println(createTableStmt.getTbl());
        for (ColumnDef def : createTableStmt.getColumnDefs()) {
            System.out.println(def.getColName() + " " + def.getTypeDef());
        }
    } else if (node instanceof ResetMetadataStmt) {
        System.out.println("metadata refresh statement");
    } else if (node instanceof ComputeStatsStmt) {
        System.out.println("compute stats statement");
    } else if (node instanceof DescribeTableStmt) {
        System.out.println("describe statement");
    } else if (node instanceof AlterTableOrViewRenameStmt) {
        System.out.println("rename statement");
    } else if (node instanceof AlterTableAddReplaceColsStmt) {
        System.out.println("add col statement");
    } else if (node instanceof AlterTableAlterColStmt) {
        System.out.println("alter col statement");
    } else if (node instanceof UseStmt) {
        System.out.println("use statement");
    } else if (node instanceof SetStmt) {
        System.out.println("set statement");
    } else {
        System.out.println(node.getClass());
    }
} catch (Exception e) {
    e.printStackTrace();
    fail("\nParser error:\n" + parser.getErrorMsg(query));
}

Output (for the kudu create-table query):

create table statement
my_first_table
id BIGINT
name STRING

Impala statement for creating a textfile table:

create table IF NOT EXISTS default.bbb (
    column1 string,
    column2 int,
    column3 bigint
);

With no extra clauses, this creates a TEXTFILE-format Hive table by default:

CREATE TABLE default.bbb (   column1 STRING,   column2 INT,   column3 BIGINT ) STORED AS TEXTFILE LOCATION 'hdfs://xx-nameservice/user/hive/warehouse/bbb'

Impala statement for creating a parquet table:

create table IF NOT EXISTS default.bbb (
    column1 string,
    column2 int,
    column3 bigint
)
stored as parquet;

The resulting table structure:

CREATE TABLE default.bbb (   column1 STRING,   column2 INT,   column3 BIGINT ) STORED AS PARQUET LOCATION 'hdfs://xx-nameservice/user/hive/warehouse/bbb'

  

Full text >>

Polymorphism of Java Objects (Type Casting)

Polymorphism in object-oriented programming mainly shows up in two ways:

<1> Method overloading and overriding

<2> Object polymorphism

Upcasting: subclass object -> superclass object; this happens automatically.

Downcasting: superclass object -> subclass object; when downcasting, the target subclass type must be stated explicitly.
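The two casts can be shown in a short example (the Animal and Dog class names are illustrative):

```java
class Animal {
    public void speak() { System.out.println("animal"); }
}

class Dog extends Animal {
    public void speak() { System.out.println("dog"); }      // overrides the parent method
    public void fetch() { System.out.println("fetch"); }    // only exists on the subclass
}

public class CastDemo {
    public static void main(String[] args) {
        Animal a = new Dog();   // upcast: happens automatically
        a.speak();              // prints "dog": the overridden method is called
        // a.fetch();           // compile error: the parent reference cannot see fetch()

        Dog d = (Dog) a;        // downcast: the subclass type must be stated explicitly
        d.fetch();              // prints "fetch"
    }
}
```

Note that an invalid downcast (casting an Animal that is not actually a Dog) throws a ClassCastException at runtime, which is why downcasts are usually guarded with instanceof.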

Full text >>

airflow Study Notes: Sensors

A sensor is also a kind of airflow operator, used to check whether some condition is met. If the condition is met, the sensor succeeds; if not, it keeps retrying until it times out, at which point the task's state becomes skipped (this requires soft_fail to be enabled; by default a timed-out sensor is marked failed).

Here are a few commonly used sensors:

  • The FileSensor: Waits for a file or folder to land in a filesystem.
  • The S3KeySensor: Waits for a key to be present in a S3 bucket.
  • The SqlSensor: Runs a sql statement repeatedly until a criteria is met.
  • The HivePartitionSensor: Waits for a partition to show up in Hive.
  • The ExternalTaskSensor: Waits for a different DAG or a task in a different DAG to complete for a specific execution date. (Pretty useful that one 🤓 )
  • The DateTimeSensor: Waits until the specified datetime (Useful to add some delay to your DAGs)
  • The TimeDeltaSensor: Waits for a timedelta after the task’s execution_date + schedule interval (Looks similar to the previous one no?)

Reference: Airflow Sensors : What you need to know

As well as:

Full text >>

Java Data Structures: Red-Black Trees

Binary tree: lookup time complexity: best O(lg n), worst O(n). The worst case occurs when all the data ends up on one side.

Binary search tree (also called a binary sorted tree or binary lookup tree): lookup time complexity: best O(lg n), worst O(n). The worst case again occurs when all the data ends up on one side, degenerating into a linked list.

Balanced binary tree: lookup time complexity: O(lg n).

Red-black tree: lookup, deletion, and insertion time complexity: O(lg n).
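The JDK's own TreeMap (and TreeSet) are implemented as red-black trees, so these O(lg n) guarantees can be relied on directly; a minimal sketch:

```java
import java.util.TreeMap;

public class RedBlackDemo {
    public static void main(String[] args) {
        // TreeMap is backed by a red-black tree: put/get/remove are all O(lg n)
        TreeMap<Integer, String> map = new TreeMap<>();
        for (int i = 1; i <= 1000; i++) {
            map.put(i, "v" + i);    // inserting sorted keys would degrade a plain
        }                           // BST to O(n), but rebalancing keeps height O(lg n)
        System.out.println(map.firstKey()); // 1
        System.out.println(map.lastKey());  // 1000
        System.out.println(map.get(500));   // v500
    }
}
```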

Full text >>