167 Commits

Author SHA1 Message Date
wujiawei
4d2152650a [fix] images version change 1.3.0 2025-06-21 14:24:58 +08:00
wujiawei
e82a0c36d3 [fix] images version change 2025-06-21 14:23:43 +08:00
wujiawei
6f8e6cd8a9 【fix】 2025-05-27 11:11:39 +08:00
wujiawei
cf22211114 【fix】1.3.0-JDK17 2025-05-26 15:35:19 +08:00
wujiawei
d8a8b630ae 【fix】 optimize the proxy 2025-04-08 16:52:07 +08:00
wujiawei
e0f461460a 【fix】 optimize the proxy 2025-04-08 16:43:29 +08:00
wujiawei
75acddadca 【fix】 optimize the proxy 2025-04-08 16:41:14 +08:00
wujiawei
50e67f54e3 【fix】 so nice serve proxy client is easy 2025-04-08 12:52:30 +08:00
wujiawei
3444e90d3d 【fix】 so nice serve proxy client is easy 2025-04-07 21:50:05 +08:00
wujiawei
b1b329aae1 【fix】 bug fix 2025-04-06 23:03:12 +08:00
wujiawei
26283de998 【fix】 bug fix 2025-04-06 22:54:57 +08:00
wujiawei
9c5000f995 【fix】 bug fix 2025-04-06 22:43:01 +08:00
wujiawei
532d05d4a8 【fix】 bug fix 2025-04-06 22:24:01 +08:00
wujiawei
402f6f0ac1 【fix】 bug fix 2025-04-06 22:18:22 +08:00
wujiawei
a5a3d7d6f2 【fix】 bug fix 2025-04-06 21:59:48 +08:00
wujiawei
e4c33f34c1 【fix】 bug fix 2025-04-06 21:28:55 +08:00
wujiawei
394dfdf337 【fix】 bug fix 2025-04-06 21:27:22 +08:00
wujiawei
9b44a199be 【fix】 bug fix 2025-04-06 21:24:12 +08:00
wujiawei
fe8a2c865a 【fix】 add client-to-client proxying 2025-04-06 18:52:31 +08:00
wujiawei
ba8ca7bcd9 【fix】 add server-side and client-side route management APIs 2025-04-06 01:26:21 +08:00
wujiawei
9a211ad200 【fix】 add server-side and client-side route management APIs 2025-04-06 01:07:35 +08:00
wujiawei
40ab82de35 【fix】 add server-side and client-side route management APIs 2025-04-06 00:43:04 +08:00
wujiawei
8728cd5d54 【fix】 add server-side and client-side route management APIs 2025-04-06 00:41:38 +08:00
wujiawei
3f7f10bcd5 【fix】 add server-side and client-side route management APIs 2025-04-05 21:05:57 +08:00
wujiawei
269a0f2ba2 【fix】 add server-side and client-side route management APIs 2025-04-05 18:01:26 +08:00
wujiawei
8f02c03d42 【fix】 add server-side and client-side route management APIs 2025-04-05 18:01:23 +08:00
wujiawei
9e1da9649e 【fix】 support local proxy 2025-04-05 13:09:45 +08:00
wujiawei
074ab4f217 【fix】 ip is null 2025-04-03 11:32:07 +08:00
wujiawei
07e8b20449 【fix】 HTTP proxy verified successfully end to end 2025-03-22 12:40:21 +08:00
wujiawei
2a98ec0589 【fix】 HTTP proxy verified successfully end to end 2025-03-22 00:09:18 +08:00
wujiawei
3321e0dd7b 【fix】 add protocol parsing and handling 2025-03-21 20:57:44 +08:00
wujiawei
cf9438a7da 【fix】 domain name resolution works 2025-03-14 21:07:42 +08:00
wujiawei
4690bc85b3 【fix】 add dns module; 2025-03-08 23:16:32 +08:00
wujiawei
2389a25e11 【fix】 refactor to use EventLoopGroup bossGroup = EventLoopGroupFactory.createBossGroup();
EventLoopGroup workerGroup = EventLoopGroupFactory.createWorkerGroup();
2025-02-18 17:21:00 +08:00
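The commit above only quotes the call sites; as a rough illustration, a factory like the one it names might look like the sketch below (the repository's actual `EventLoopGroupFactory` may pick a different transport or thread counts):

```java
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;

// Hypothetical sketch; not the repository's implementation.
public final class EventLoopGroupFactory {
    private EventLoopGroupFactory() {
    }

    // Boss group accepts incoming connections; a single thread is usually enough.
    public static EventLoopGroup createBossGroup() {
        return new NioEventLoopGroup(1);
    }

    // Worker group handles channel I/O; 0 lets Netty default to 2 * CPU cores.
    public static EventLoopGroup createWorkerGroup() {
        return new NioEventLoopGroup(0);
    }
}
```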
wujiawei
cf10347939 【fix】 keep MySQL memory usage at around 150 MB 2025-02-11 11:26:48 +08:00
wujiawei
98b4701978 This is a Kubernetes configuration file that defines several resources, including:
1. A Pod named `mysql` in the `default` namespace.
2. A Service named `mysql` in the `default` namespace that exposes the Pod's port 3306.
3. An Ingress resource that routes incoming requests to the `mysql` Service.

The configuration also defines several environment variables, including:

1. `MYSQL_CNF`: an environment variable populated from a ConfigMap (`mysql-cnf`) containing a MySQL configuration file (`cnf`) with settings such as `sql_mode`, `log_timestamps`, and `slow_query_log`.

Here is an annotated version of the configuration:

**Pod:**
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysql
spec:
  containers:
  - name: mysql
    image: your-mysql-image
    env:
    - name: MYSQL_CNF
      valueFrom:
        configMapKeyRef:
          name: mysql-cnf
          key: cnf
```
This Pod runs a container from the `your-mysql-image` image and sets the `MYSQL_CNF` environment variable from the `mysql-cnf` ConfigMap to supply the MySQL configuration.

**Service:**
```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  selector:
    app: mysql
  ports:
  - name: mysql
    port: 3306
    targetPort: 3306
```
This Service selects the `mysql` Pod and exposes port 3306. (Persistent storage for the database data is supplied by a PersistentVolumeClaim referenced from the Pod spec; a `persistentVolumeClaim` field is not valid inside a Service spec.)

**Ingress:**
```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: mysql-ingress
spec:
  rules:
  - host: your-ingress-hostname.com
    http:
      paths:
      - path: /
        backend:
          serviceName: mysql
          servicePort: 3306
```
This Ingress resource routes incoming requests to the `mysql` Service's port 3306.

**ConfigMap:**
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-cnf
data:
  cnf: |
    [mysql]
    no-auto-rehash

    [mysqld]
    skip-host-cache
    skip-name-resolve
    max_connections=1000
    lower_case_table_names=0
    default-time-zone='+08:00'
    sql_mode='STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_ENGINE_SUBSTITUTION'
    log_timestamps='SYSTEM'

    binlog_expire_logs_seconds = 2
    innodb_flush_log_at_trx_commit=2
    group_concat_max_len=1024
```
This ConfigMap contains the MySQL configuration file (`cnf`) with various settings and values.
2025-02-11 10:15:48 +08:00
wujiawei
7da5f58263 [fix] fix client load-balancing issue 2025-01-22 10:20:26 +08:00
wujiawei
a019561c06 [fix] adjust whether traffic is counted, to avoid traffic gaps and data loss caused by network issues 2025-01-16 09:52:03 +08:00
wujiawei
f985cdac8f [fix] fix main-thread issue caused by data persistence 2025-01-09 17:26:12 +08:00
wujiawei
7429cff23a [fix] version adjustment 2025-01-03 13:55:41 +08:00
wujiawei
9fa3359569 [fix] optimize TCP/UDP architecture, test 2024-12-21 21:57:14 +08:00
wujiawei
fac28c4fc8 [fix] optimize TCP/UDP architecture, test 2024-12-17 15:14:09 +08:00
wujiawei
f9188037a0 [fix] optimize TCP/UDP architecture, test 2024-12-17 13:22:16 +08:00
wujiawei
f19b1cae68 [fix] optimize TCP/UDP architecture 2024-12-17 10:47:25 +08:00
wujiawei
0df0a42b2c [fix] optimize TCP/UDP architecture 2024-12-16 20:14:45 +08:00
wujiawei
2b3c7cb7c2 [fix] optimize TCP/UDP architecture 2024-12-16 18:59:02 +08:00
wujiawei
daeba59a43 [fix] optimize TCP/UDP architecture 2024-12-16 18:53:08 +08:00
wujiawei
cb4c9c0b41 [fix] optimize TCP/UDP architecture 2024-12-16 18:45:17 +08:00
wujiawei
0ceb88bfe3 [fix] optimize TCP architecture 2024-12-13 14:30:38 +08:00
wujiawei
42bc37c712 [fix] optimize TCP architecture 2024-12-13 14:29:11 +08:00
wujiawei
41a81133f1 [fix] optimize TCP architecture 2024-12-13 13:35:35 +08:00
wujiawei
ddaa5568e0 [fix] optimize TCP architecture 2024-12-13 13:26:15 +08:00
wujiawei
9b64c622cc [fix] optimize TCP architecture 2024-12-13 13:13:17 +08:00
wujiawei
f11200cee9 [fix] UI page improvements 2024-12-03 21:04:48 +08:00
wujiawei
0f04f29cd0 [fix] improve selection of available clients for load balancing 2024-11-25 19:34:25 +08:00
wujiawei
3568292ac2 [fix] improve selection of available clients for load balancing 2024-11-25 10:40:06 +08:00
wujiawei
1924e87c7d [fix] improve selection of available clients for load balancing 2024-11-21 22:54:19 +08:00
wujiawei
30e1141ecf [fix] improve selection of available clients for load balancing 2024-11-19 19:21:12 +08:00
wujiawei
79b09ecfea [fix] improve selection of available clients for load balancing 2024-11-18 20:41:02 +08:00
wujiawei
5a079fcba8 [fix] records 2024-11-13 21:12:28 +08:00
wujiawei
20bb29350c [fix] records 2024-11-13 20:58:35 +08:00
wujiawei
ea2fb01b7e [fix] records 2024-11-12 15:22:18 +08:00
wujiawei
ba960a70ab [fix] records 2024-11-12 14:53:37 +08:00
wujiawei
4de7cdb6be [fix] fix incorrect traffic calculation for client-permeates-server 2024-11-09 20:30:25 +08:00
wujiawei
1c6f56d31e [fix] fix incorrect traffic calculation for client-permeates-server 2024-11-09 20:18:27 +08:00
wujiawei
8952d2277f [fix] add traffic records for client-permeates-client 2024-11-09 20:03:53 +08:00
wujiawei
dab6e580e4 [fix] test 2024-11-09 14:58:53 +08:00
wujiawei
9edb6f3d4d [fix] 2024-11-09 14:57:34 +08:00
WJWJW
3740a4f4ea fix 2024-11-09 14:56:33 +08:00
wujiawei
3b57ee1c23 [fix] adapt interfaces to allow proxying 2024-11-06 21:07:21 +08:00
wujiawei
b76f7a8f62 [fix] adapt interfaces to allow proxying 2024-11-05 21:20:00 +08:00
bug-fix
aa056dbe07 【fix】 2024-10-31 20:48:38 +08:00
bug-fix
810579178f 【fix】 2024-10-31 19:10:44 +08:00
wujiawei
240de53888 [fix] 2024-10-30 21:05:44 +08:00
wujiawei
2eb6dc2775 [fix] // TODO traffic concurrency exception 2024-10-30 20:56:46 +08:00
wujiawei
c665bf687d [fix] // TODO traffic concurrency exception 2024-10-30 18:36:14 +08:00
wujiawei
1e2f58b5a3 [fix] // TODO traffic concurrency exception 2024-10-28 23:17:08 +08:00
wujiawei
381f8a6960 [fix] standardize client permeation table info 2024-10-28 23:06:31 +08:00
wujiawei
a04499ab4b [fix] log 2024-10-28 22:54:29 +08:00
wujiawei
477d8cfaac [fix] log 2024-10-28 22:23:53 +08:00
wujiawei
b2cd1eee0a [fix] add appKey, appSecret, originalIp validation to channel data 2024-10-28 14:24:16 +08:00
wujiawei
b7d571ccc1 [fix] add appKey, appSecret, originalIp validation to channel data 2024-10-19 22:31:41 +08:00
wujiawei
55ce3ff359 [fix] rename tables 2024-10-12 15:16:09 +08:00
wujiawei
42e8e5afec [fix] adjust plan to support multiple clients 2024-10-10 22:20:09 +08:00
wujiawei
4977348113 [fix] add token secret 2024-10-10 21:32:57 +08:00
wujiawei
89f9207367 [fix] add token secret 2024-10-09 20:31:51 +08:00
wujiawei
70cf9fe3ac [fix] add token secret 2024-10-09 20:31:10 +08:00
wujiawei
ecd5288929 [fix] add token secret 2024-10-09 20:29:43 +08:00
wujiawei
571fe7893d [fix] add token secret 2024-10-09 20:28:45 +08:00
wujiawei
1e89813793 [fix] md 2024-10-07 23:16:20 +08:00
wujiawei
0dd5b986a2 [fix] 2024-10-07 22:34:40 +08:00
wujiawei
67c2537823 [fix] 2024-10-07 22:19:05 +08:00
wujiawei
8dc1272109 [fix] 1.2.9-JDK17-SNAPSHOT
spring 3.3.2 fix windows build with native images
2024-09-30 15:49:51 +08:00
wujiawei
9ae2f162e2 [fix] repository 2024-09-29 10:12:39 +08:00
wujiawei
c2a4926888 [fix] 2024-09-28 19:27:13 +08:00
wujiawei
a5176c3cf1 [fix] 2024-09-28 19:14:18 +08:00
wujiawei
600706ccc1 [fix] 2024-09-28 19:01:41 +08:00
wujiawei
fb086d6e95 [fix] 2024-09-28 18:21:29 +08:00
wujiawei
98811a18c3 [fix] 2024-09-28 18:12:08 +08:00
wujiawei
0493c1ce5e [fix] 2024-09-28 16:44:05 +08:00
wujiawei
dd6daa5421 [fix] add client token bucket 2024-09-28 15:54:18 +08:00
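The token-bucket commits here do not show the data structure itself; a minimal sketch of a byte-rate token bucket (illustrative names and policy, not the repository's actual class) could look like this:

```java
// Illustrative token bucket for limiting forwarded bytes per client.
public class ClientTokenBucket {
    private final long capacity;        // maximum tokens (bytes) held
    private final long refillPerSecond; // tokens (bytes) added per second
    private long tokens;
    private long lastRefillNanos = System.nanoTime();

    public ClientTokenBucket(long capacity, long refillPerSecond) {
        this.capacity = capacity;
        this.refillPerSecond = refillPerSecond;
        this.tokens = capacity;
    }

    // Try to consume n tokens, e.g. n = size of the payload about to be forwarded.
    public synchronized boolean tryConsume(long n) {
        refill();
        if (tokens >= n) {
            tokens -= n;
            return true;
        }
        return false;
    }

    private void refill() {
        long now = System.nanoTime();
        long added = (now - lastRefillNanos) * refillPerSecond / 1_000_000_000L;
        if (added > 0) {
            tokens = Math.min(capacity, tokens + added);
            lastRefillNanos = now;
        }
    }
}
```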
wujiawei
6c7f6b807f [fix] add client token bucket 2024-09-28 13:57:57 +08:00
wujiawei
97f50d1701 [fix] add documentation 2024-09-28 12:13:43 +08:00
wujiawei
5e3216a2a4 [fix] update version to 1.2.8-JDK17-SNAPSHOT 2024-09-26 20:07:28 +08:00
wujiawei
68d9713cf9 [fix] update version to 1.2.8-JDK17-SNAPSHOT 2024-09-26 20:07:06 +08:00
wujiawei
2d15abcc9f [fix] update version to 1.2.8-JDK17-SNAPSHOT 2024-09-26 11:16:22 +08:00
wujiawei
ec4c1cd7e4 [fix] update version to 1.2.8-JDK17-SNAPSHOT 2024-09-25 14:52:14 +08:00
wujiawei
bab2ce0242 [fix] update version to 1.2.8-JDK17-SNAPSHOT 2024-09-25 14:48:29 +08:00
wujiawei
6408239029 [fix] update version to 1.2.8-JDK17-SNAPSHOT 2024-09-25 08:27:04 +08:00
wujiawei
485d0602af [fix] bind client ID and visitor ID uniformly to communication channels 2024-09-24 21:54:53 +08:00
wujiawei
9784cc65b6 [fix] bind client ID and visitor ID uniformly to communication channels 2024-09-24 20:19:43 +08:00
wujiawei
576498afc7 [fix] fix position issue 2024-09-24 19:38:53 +08:00
wujiawei
bc1c2b008b 【fix】fix client-permeates-client issue: channel close error 2024-09-22 22:01:28 +08:00
wujiawei
69e761e46f 【fix】fix client-permeates-client issue: channel close error 2024-09-22 21:56:24 +08:00
wujiawei
616ab03495 【fix】fix client-permeates-client issue: channel close error 2024-09-22 21:41:11 +08:00
wujiawei
06c45f4fef 【fix】fix client-permeates-client issue: channel close error 2024-09-22 21:33:17 +08:00
wujiawei
3ec0ab3271 【fix】fix client-permeates-client issue: channel close error 2024-09-22 20:03:47 +08:00
wujiawei
1a2b5b8c29 【fix】fix client-permeates-client issue 2024-09-22 19:22:25 +08:00
wujiawei
c04ba09763 【fix】fix traffic billing bug 2024-09-22 18:13:51 +08:00
wujiawei
4f58a675d0 【fix】fix traffic billing bug 2024-09-22 15:49:16 +08:00
wujiawei
a5a832aa1d 【fix】web 2024-09-21 22:57:24 +08:00
wujiawei
ccca450e73 【fix】web 2024-09-21 17:30:16 +08:00
wujiawei
30dfaa8dd1 [fix] 2024-09-21 17:10:11 +08:00
wujiawei
54848f5bc2 【fix】log 2024-09-21 02:39:49 +08:00
wujiawei
b4a495e2d8 【fix】log 2024-09-21 02:31:27 +08:00
wujiawei
647b1b5f4b 【fix】log 2024-09-21 02:22:18 +08:00
wujiawei
26b72fc540 Merge remote-tracking branch 'origin/master' 2024-09-21 02:03:21 +08:00
wujiawei
80e5e4f628 【fix】log 2024-09-21 02:03:16 +08:00
wujiawei
e9d0f9b904 [fix] 2024-09-21 02:03:04 +08:00
wujiawei
b1bd674ada 【fix】log 2024-09-21 01:45:19 +08:00
wujiawei
37bce25f31 【fix】log 2024-09-21 01:41:50 +08:00
wujiawei
eda7bb1b44 【fix】log 2024-09-21 01:34:59 +08:00
wujiawei
9ae7a5715e 【fix】client-permeates-client implementation 2024-09-21 01:12:52 +08:00
wujiawei
2c139c9ae1 【fix】client-permeates-client implementation 2024-09-21 01:03:37 +08:00
wujiawei
b4daa3ba70 【fix】client-permeates-client implementation 2024-09-21 00:39:35 +08:00
wujiawei
8f1f23eb2b 【fix】client-permeates-client implementation 2024-09-21 00:32:09 +08:00
wujiawei
b4e2f8ec3c 【fix】client-permeates-client implementation 2024-09-20 21:57:02 +08:00
wujiawei
238b2906a0 【fix】client-permeates-client implementation 2024-09-20 21:55:04 +08:00
wujiawei
a0d246a0bf [fix] 2024-09-20 17:21:49 +08:00
wujiawei
c144688a9d [fix] 2024-09-20 11:27:29 +08:00
wujiawei
93549cad90 [fix] fix data download and upload issues 2024-09-18 23:52:26 +08:00
wujiawei
7028e001ea [fix] uniformly use the next channel for data handling 2024-09-18 23:43:56 +08:00
wujiawei
37d9065c5a [fix] uniformly use the next channel for data handling 2024-09-18 22:33:24 +08:00
wujiawei
138752e56d [fix] add client-permeates-server 2024-09-18 22:03:31 +08:00
wujiawei
2166a1eee6 [fix] 2024-09-18 20:28:18 +08:00
wujiawei
15da22ba2d 【fix】add web UI address 2024-09-17 22:19:53 +08:00
wujiawei
7aa2a70a43 【fix】add client-permeates-client mapping 2024-09-17 22:13:38 +08:00
wujiawei
81e2a74cf1 【fix】add client-permeates-client mapping 2024-09-17 21:56:29 +08:00
wujiawei
17aeb842db 【fix】add client-permeates-client mapping 2024-09-17 21:52:24 +08:00
wujiawei
a6ab7d32e3 【fix】add client-permeates-server, UI and API testing 2024-09-17 21:32:32 +08:00
wujiawei
b4ceae4349 【fix】add client-permeates-server 2024-09-17 21:27:24 +08:00
wujiawei
731c61b34a 【fix】add client-permeates-server 2024-09-17 21:25:16 +08:00
wujiawei
2683bb4dee 【fix】the data-transfer failure was because data must be sent as a ByteBuf: ByteBuf realBuf = nextChannel.config().getAllocator().buffer(bytes.length);
realBuf.writeBytes(bytes);
2024-09-17 20:41:42 +08:00
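Expanding the fragment quoted in that commit message into a small, self-contained helper (a sketch only; `nextChannel` and `bytes` are whatever the surrounding handler provides, and the repository's real code may differ):

```java
import io.netty.buffer.ByteBuf;
import io.netty.channel.Channel;

// Wrap raw bytes in a ByteBuf taken from the target channel's allocator,
// then write it out; Netty releases the buffer once the write completes.
final class ByteBufForwarder {
    static void forward(Channel nextChannel, byte[] bytes) {
        ByteBuf realBuf = nextChannel.config().getAllocator().buffer(bytes.length);
        realBuf.writeBytes(bytes);
        nextChannel.writeAndFlush(realBuf);
    }
}
```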
wujiawei
4a7bdb366f 【fix】server intranet permeation, test 2024-09-17 18:21:33 +08:00
wujiawei
234613a76a 【fix】server intranet permeation 2024-09-17 15:19:07 +08:00
wujiawei
5852aad88a 【fix】add port-pool types: server intranet penetration, server intranet permeation, client-permeates-server, client-permeates-client 2024-09-17 01:27:50 +08:00
wujiawei
ee6cafa8f2 [fix] traffic access; add next-step plan for intranet permeation 2024-09-13 09:06:44 +08:00
wujiawei
c9798d7289 [fix] fix refresh failure when taking a client offline or deleting a mapping 2024-09-12 19:25:51 +08:00
wujiawei
10d2d74ca3 【fix】 tune read/write buffers: read buffer 2 MB, write buffer 1 MB, low water mark 1 MB, high water mark 2 MB 2024-09-09 15:51:01 +08:00
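For reference, the sizes named in the commit above map onto standard Netty channel options roughly as in this sketch (illustrative only, not the repository's actual bootstrap code):

```java
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelOption;
import io.netty.channel.WriteBufferWaterMark;

final class BufferTuningSketch {
    // Apply a 2 MB read buffer, 1 MB write buffer, and 1 MB / 2 MB write water marks.
    static Bootstrap tune(Bootstrap bootstrap) {
        return bootstrap
                .option(ChannelOption.SO_RCVBUF, 2 * 1024 * 1024)
                .option(ChannelOption.SO_SNDBUF, 1024 * 1024)
                .option(ChannelOption.WRITE_BUFFER_WATER_MARK,
                        new WriteBufferWaterMark(1024 * 1024, 2 * 1024 * 1024));
    }
}
```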
wujiawei
4ec1401712 【fix】 stop using .option(ChannelOption.TCP_NODELAY, false) to coalesce data into larger packets 2024-09-04 13:05:07 +08:00
wujiawei
f5798114f7 【fix】 comment out NettyRecvByteBufAllocator and use the default receive buffer configuration 2024-09-03 20:18:33 +08:00
wujiawei
9bf26e99d0 【fix】 run on the main thread once pool threads are exhausted, ThreadPoolExecutor.AbortPolicy() 2024-09-03 16:46:20 +08:00
wujiawei
b9d75715de 【fix】 use a thread pool to handle traffic statistics and business logic 2024-09-03 16:37:46 +08:00
wujiawei
58aae7a67d 【fix】 custom Netty receive buffer to limit the size of received data 2024-08-30 17:12:23 +08:00
wujiawei
629910e860 【fix】 custom Netty receive buffer to limit the size of received data 2024-08-28 16:57:31 +08:00
wujiawei
fd364c78e7 【fix】 add traffic control switch 2024-08-26 16:17:20 +08:00
wujiawei
5ddf1337ad 【fix】 1.2.7-JDK17-SNAPSHOT 2024-08-26 14:27:46 +08:00
890 changed files with 39535 additions and 3155 deletions

ClientPermeateClient.md (new file, +25 lines)

@@ -0,0 +1,25 @@
### Background: a National Day holiday problem, how can you reach the Hangzhou office network from your hometown?
#### Approach: cross-site networking
#### Implementation language: Java
#### Environment: three networks, namely a server with a public IP, a server inside the Hangzhou machine room, and a computer on your home network (your own PC is fine)
#### How it works: ![NetworkPermeateClientPermeateClient.png](NetworkPermeateClientPermeateClient.png)
#### Step: on the server with the public IP, open ports 6001 (web) and 7001 (tcp), then start the server with the following command
```shell
docker run -d -it -p 6001:6001 -p 7001:7001 -e spring.profiles.active=prod -e MAIN_DB_HOST=localhost:3306 -e MAIN_DB_PASSWORD=root -e MAIN_DB_PASSWORD=root --name wu-lazy-cloud-heartbeat-server-start registry.cn-hangzhou.aliyuncs.com/wu-lazy/wu-lazy-cloud-heartbeat-server-start:1.3.0-JDK17-SNAPSHOT
```
#### Step: start one client on the server in the Hangzhou machine-room network and another client on your home network, using the commands below
```shell
docker run -d -it --privileged --name hangzhou-client --restart=always -e spring.lazy.netty.client.inet-host=<public-IP> -e spring.lazy.netty.client.inet-port=7001 -e spring.lazy.netty.client.client-id="hangzhou-jifang" registry.cn-hangzhou.aliyuncs.com/wu-lazy/wu-lazy-cloud-heartbeat-client-start:1.3.0-JDK17-SNAPSHOT
```
```shell
docker run -d -it --privileged --name my-home-client --restart=always -e spring.lazy.netty.client.inet-host=<public-IP> -e spring.lazy.netty.client.inet-port=7001 -e spring.lazy.netty.client.client-id="my-home" registry.cn-hangzhou.aliyuncs.com/wu-lazy/wu-lazy-cloud-heartbeat-client-start:1.3.0-JDK17-SNAPSHOT
```
#### Step: configure the port
![client_permeate_port_pool.png](client_permeate_port_pool.png)
#### Step: open the configuration menu in the UI; you will see the screen below, with a from client ID (the computer on your home network) and a to client ID
![client_permeate_client_mapping.png](client_permeate_client_mapping.png)
#### Usage: from your hometown, connect to the IP of the machine running the my-home client on port 13306 to reach the database service listening on 3306 inside the Hangzhou machine room
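To sanity-check the mapping end to end, a small JDBC probe could look like the sketch below; the address 192.168.1.10 and the root/root credentials are placeholders for whatever the my-home machine and your database actually use, and mysql-connector-j must be on the classpath:

```java
import java.sql.Connection;
import java.sql.DriverManager;

// Connect through the client-permeates-client mapping: my-home machine IP + 13306
// should reach the MySQL service on 3306 inside the Hangzhou machine room.
public class TunnelCheck {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://192.168.1.10:13306/mysql";
        try (Connection connection = DriverManager.getConnection(url, "root", "root")) {
            System.out.println("connected: " + connection.getMetaData().getDatabaseProductVersion());
        }
    }
}
```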

LazyDNS.puml (new file, +61 lines)

@@ -0,0 +1,61 @@
@startuml
title 动态DNS
actor 访客 as User
package "服务端(公网)" as dns_server_{
component [服务端私有网络A]{
[mysql:(172.1.1.4:3306)] as dns_server_remote_local_
[clickhouse:(172.1.1.5:3306)]
}
component [服务端私有网络B]{
[mysql:(172.1.2.4:3306)]
[clickhouse:(172.1.2.5:3306)]
}
}
package "客户端(私有网络)" as dns_remote_local_{
component [客户端私有网络A]{
[mysql:(162.1.1.4:3306)] as dns_client_remote_local_
[clickhouse:(162.1.1.5:3306)]
}
component [客户端私有网络B]{
[mysql:(162.1.2.4:3306)]
[clickhouse:(162.1.2.5:3306)]
}
}
package "客户端(用户本地)" as dns_local_ {
}
note "用户本地网络" as local_net_
note "服务端网络" as server_net_
note "客户端私有网络" as remote_net_
'(User) .... local_condition_
'local_condition_ ... (target)
[User] ...right...> dns_local_: DNS连接到本地
dns_local_ ...right...> local_net_: 访问本地网络本地DNS
dns_local_ ...up...> dns_server_remote_local_: 远程DNS解析
dns_server_ ...up...> server_net_: server本地的网络
dns_server_ ...down...> dns_remote_local_: 访问的地址在远程的客户端中
dns_remote_local_ ...down...> dns_client_remote_local_: 远程客户端所在的私有网络
dns_remote_local_ ...down...> remote_net_: 远程客户端中的其他网络
@enduml

NetworkPermeate1.0.puml (new file, +140 lines)

@@ -0,0 +1,140 @@
@startuml
title 服务端渗透客户端
actor 访客 as User
package "服务端(公网)"{
component [服务端开放端口]{
[默认UI页面端口:6001]
[默认tcp端口:7001] as tcp
[开放给客户端访问的端口:13306]
}
}
package "客户端(私有网络)"{
component [客户端端口]{
[默认UI页面端口:6004]
}
database "客户端所在网络中的mysql:3306" as target {
}
}
[User] ...right...> [开放给客户端访问的端口:13306]: 发送请求到 http://ip:13306
[开放给客户端访问的端口:13306] ...down...> [target]: 发送真实二进制请求到真实服务
note "无法直接访问" as N2
(User) .... N2
N2 ... (target)
@enduml
@startuml
title 服务端渗透服务端
actor 访客 as User
package "服务端(局域网)"{
component [服务端开放端口]{
[默认UI页面端口:6001]
[默认tcp端口:7001] as tcp
[开放给服务端访问的端口:13306]
}
database "服务端所在网络中的mysql:3306" as target {
}
}
[User] ...right...> [开放给服务端访问的端口:13306]: 发送请求到 http://ip:13306
[开放给服务端访问的端口:13306] ...down...> [target]: 发送真实二进制请求到真实服务
note "无法直接访问" as N2
(User) .. N2
N2 .. (target)
@enduml
@startuml
title 客户端渗透服务端
actor 访客 as User
package "服务端(公网)"{
component [服务端开放端口]{
[默认UI页面端口:6001]
[默认tcp端口:7001] as tcp
}
database "服务端所在网络中的mysql:3306" as target {
}
}
package "客户端(私有网络)"{
component [客户端端口]{
[默认UI页面端口:6004]
[客户端开放端口:13306]
}
}
[User] ...right...> [客户端开放端口:13306]: 发送请求到 http://ip:13306
[客户端开放端口:13306] ...up...> [target]: 发送真实二进制请求到真实服务
note "无法直接访问" as N2
(User) ...up... N2
N2 ...up.. (target)
@enduml
@startuml
title 客户端渗透客户端
actor 访客杭州 as User
package "服务端(公网)" as server{
component [服务端开放端口]{
[默认UI页面端口:6001]
[默认tcp端口:7001] as tcp
}
}
package "客户端(私有网络--杭州)" as client_hangzhou{
component [客户端端口]{
[默认UI页面端口:6004]
[客户端开放端口:13306]
}
}
package "客户端(私有网络--上海)" as client_shanghai{
component [上海客户端端口]{
[上海默认UI页面端口:6004]
}
database "服务端所在网络中的mysql:3306" as target {
}
}
server ...down...> client_hangzhou
server <...right... client_shanghai
[User] ...right...> [客户端开放端口:13306]: 发送请求到 http://ip:13306
[客户端开放端口:13306] ...up...> [target]: 发送真实二进制请求到真实服务
note "无法直接访问" as N2
(User) ...up... N2
N2 ...up.. (target)
@enduml

Binary file not shown (new image, 60 KiB).

Binary file not shown (new image, 50 KiB).

Binary file not shown (new image, 48 KiB).

Binary file not shown (new image, 41 KiB).

View File

@@ -33,8 +33,19 @@
wu-lazy-cloud-network
is a project incubated from ([wu-framework-parent](https://gitee.com/wujiawei1207537021/wu-framework-parent)); internally it uses Lazy
ORM to operate the database. Its main function is network penetration, mapping services without a public IP onto a public IP
Environment: JDK17, Spring Boot 3.0.2
ORM to operate the database. Environment: JDK17, Spring Boot 3.0.2.
Main features:
- Server permeates client: network penetration, mapping services without a public IP onto a public IP
- ![NetworkPermeateServerPermeateClient.png](NetworkPermeateServerPermeateClient.png)
- Server permeates server: port mapping within the same local LAN
- ![NetworkPermeateServerPermeateServer.png](NetworkPermeateServerPermeateServer.png)
- Client permeates server: map a local port to a LAN port on another server's network
- ![NetworkPermeateClientPermeateServer.png](NetworkPermeateClientPermeateServer.png)
- Client permeates client: map a local port to a port in another LAN
- ![NetworkPermeateClientPermeateClient.png](NetworkPermeateClientPermeateClient.png)
[UI](https://gitee.com/wujiawei1207537021/wu-lazy-cloud-network-server-ui)
#### Project links
@@ -197,7 +208,7 @@ public class NettyClientSocket {
log.info("连接服务端成功");
// 告诉服务端这条连接是client的连接
NettyProxyMsg nettyMsg = new NettyProxyMsg();
nettyMsg.setType(MessageType.REPORT_CLIENT_CONNECT_SUCCESS);
nettyMsg.setType(MessageType.TCP_REPORT_CLIENT_CONNECT_SUCCESS);
nettyMsg.setClientId(clientId);
nettyMsg.setData((clientId).getBytes());
ChannelAttributeKeyUtils.buildClientId(channel, clientId);
@@ -247,24 +258,23 @@ public class NettyClientSocket {
### Project structure
| Module | Version | Description |
|-----------------------------------------------------------------------------------------------------------------------------------------|----------------------|------------------------------|
| [wu-lazy-cloud-heartbeat-common](wu-lazy-cloud-heartbeat-common) | 1.2.6-JDK17-SNAPSHOT | 内网穿透公共模块(声明接口、枚举、常量、适配器、解析器) |
| [wu-lazy-cloud-heartbeat-client](wu-lazy-cloud-heartbeat-client) | 1.2.6-JDK17-SNAPSHOT | 客户端(支持二次开发) |
| [wu-lazy-cloud-heartbeat-server](wu-lazy-cloud-heartbeat-server) | 1.2.6-JDK17-SNAPSHOT | 服务端(支持二次开发) |
| [wu-lazy-cloud-network-ui](wu-lazy-cloud-heartbeat-server-ui) | 1.2.6-JDK17-SNAPSHOT | 服务端页面 |
| [wu-lazy-cloud-heartbeat-client-start](wu-lazy-cloud-heartbeat-sample/wu-lazy-cloud-heartbeat-client-sample) | 1.2.6-JDK17-SNAPSHOT | 客户端样例 |
| [wu-lazy-cloud-heartbeat-server-start](wu-lazy-cloud-heartbeat-sample/wu-lazy-cloud-heartbeat-server-sample) | 1.2.6-JDK17-SNAPSHOT | 服务端样例 |
| Module | Version | Description |
|------------------------------------------------------------------------------------------------------------|----------------------|------------------------------|
| [wu-lazy-cloud-heartbeat-common](wu-lazy-cloud-heartbeat-common) | 1.3.0-JDK17-SNAPSHOT | 内网穿透公共模块(声明接口、枚举、常量、适配器、解析器) |
| [wu-lazy-cloud-heartbeat-client](wu-lazy-cloud-heartbeat-client) | 1.3.0-JDK17-SNAPSHOT | 客户端(支持二次开发) |
| [wu-lazy-cloud-heartbeat-server](wu-lazy-cloud-heartbeat-server) | 1.3.0-JDK17-SNAPSHOT | 服务端(支持二次开发) |
| [wu-lazy-cloud-heartbeat-client-start](wu-lazy-cloud-heartbeat-start/wu-lazy-cloud-heartbeat-server-start) | 1.3.0-JDK17-SNAPSHOT | 客户端样例 |
| [wu-lazy-cloud-heartbeat-server-start](wu-lazy-cloud-heartbeat-start/wu-lazy-cloud-heartbeat-client-start) | 1.3.0-JDK17-SNAPSHOT | 服务端样例 |
### Technology stack
| Framework | Version | Description |
|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------|--------------|
| spring-boot | 3.0.7 | springboot框架 |
| [wu-framework-web](https://gitee.com/wujiawei1207537021/wu-framework-parent/tree/master/wu-framework-web) | 1.2.6-JDK17-SNAPSHOT | web容器 |
| [Lazy -ORM](https://gitee.com/wujiawei1207537021/wu-framework-parent/tree/master/wu-inner-intergration/wu-database-parent) | 1.2.6-JDK17-SNAPSHOT | ORM |
| [wu-framework-web](https://gitee.com/wujiawei1207537021/wu-framework-parent/tree/master/wu-framework-web) | 1.3.0-JDK17-SNAPSHOT | web容器 |
| [Lazy -ORM](https://gitee.com/wujiawei1207537021/wu-framework-parent/tree/master/wu-inner-intergration/wu-database-parent) | 1.3.0-JDK17-SNAPSHOT | ORM |
| mysql-connector-j | 8.0.33 | mysql驱动 |
| [wu-authorization-server-platform-starter](https://gitee.com/wujiawei1207537021/wu-framework-parent/tree/master/wu-smart-platform/wu-authorization-server-platform-starter) | 1.2.6-JDK17-SNAPSHOT | 用户授权体系 |
| [wu-authorization-server-platform-starter](https://gitee.com/wujiawei1207537021/wu-framework-parent/tree/master/wu-smart-platform/wu-authorization-server-platform-starter) | 1.3.0-JDK17-SNAPSHOT | 用户授权体系 |
### Environment
@@ -277,7 +287,7 @@ public class NettyClientSocket {
Start with Docker
docker run -d -it -p 18080:18080 --name wu-lazy-cloud-heartbeat-server registry.cn-hangzhou.aliyuncs.com/wu-lazy/wu-lazy-cloud-heartbeat-server:1.2.6-JDK17-SNAPSHOT
docker run -d -it -p 18080:18080 --name wu-lazy-cloud-heartbeat-server registry.cn-hangzhou.aliyuncs.com/wu-lazy/wu-lazy-cloud-heartbeat-server:1.3.0-JDK17-SNAPSHOT
http://127.0.0.1:18080/swagger-ui/index.html
@@ -286,15 +296,15 @@ public class NettyClientSocket {
#### UI operations
After starting the project, open the server UI
![img.png](url_info.png)
![url_info.png](url_info.png)
Default account/password: admin/admin
![img.png](login.png)
![login.png](login.png)
Initialize the project
![img.png](init_menu.png)
![init_menu.png](init_menu.png)
Add a role
![img.png](init_role.png)
![init_role.png](init_role.png)
Grant the role to a user
![img.png](authRoe2User.png)
@@ -304,12 +314,26 @@ public class NettyClientSocket {
Client management (clients register automatically)
![img.png](cloud_client.png)
Network mapping management (modify or add the client mappings you need)
![img.png](mapping.png)
## Server permeation
- Server port-pool management (ports the server needs to open)
![server_permeate_port_pool.png](server_permeate_port_pool.png)
Visitor port-pool management (ports the server needs to open)
![img.png](visitor_port.png)
- Server permeates client (intranet penetration) (modify or add the client mappings you need)
![server_permeate_client_mapping.png](server_permeate_client_mapping.png)
- Server permeates server
![server_permeate_server_mapping.png](server_permeate_server_mapping.png)
## Client permeation
- Client permeation port-pool management
![client_permeate_port_pool.png](client_permeate_port_pool.png)
- Client permeates client
![client_permeate_client_mapping.png](client_permeate_client_mapping.png)
- Client permeates server
![client_permeate_server_mapping.png](client_permeate_server_mapping.png)
## Reports
Traffic management (traffic used by each client)
![img.png](flow.png)

client-k8s.yaml (new file, +489 lines)

@@ -0,0 +1,489 @@
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: wu-lazy-cloud-heartbeat-local-client-start
namespace: default
labels:
k8s.kuboard.cn/layer: monitor
k8s.kuboard.cn/name: wu-lazy-cloud-heartbeat-local-client-start
annotations:
k8s.kuboard.cn/displayName: 内网穿透服务端(云下)
spec:
replicas: 1
selector:
matchLabels:
k8s.kuboard.cn/layer: monitor
k8s.kuboard.cn/name: wu-lazy-cloud-heartbeat-local-client-start
template:
metadata:
creationTimestamp: null
labels:
k8s.kuboard.cn/layer: monitor
k8s.kuboard.cn/name: wu-lazy-cloud-heartbeat-local-client-start
annotations:
kubectl.kubernetes.io/restartedAt: '2025-01-16T22:39:39+08:00'
spec:
containers:
- name: wu-lazy-cloud-heartbeat-local-client-start
image: >-
registry.cn-hangzhou.aliyuncs.com/wu-lazy/wu-lazy-cloud-heartbeat-client-start:1.3.0-JDK17-SNAPSHOT
env:
- name: spring.lazy.netty.client.inet-host
value: 124.222.48.62
- name: spring.lazy.netty.client.inet-port
value: '30560'
- name: spring.lazy.netty.client.client-id
value: tencent
- name: JAVA_OPTS
value: '-Xms32m -Xmx64m'
- name: logging1.level.root
value: DEBUG
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: Always
securityContext:
privileged: true
runAsUser: 0
restartPolicy: Always
terminationGracePeriodSeconds: 30
dnsPolicy: ClusterFirst
securityContext: {}
schedulerName: default-scheduler
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 25%
maxSurge: 25%
revisionHistoryLimit: 10
progressDeadlineSeconds: 600
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
name: cloud-mysql
namespace: default
labels:
k8s.eip.work/layer: db
k8s.eip.work/name: cloud-mysql
k8s.kuboard.cn/name: cloud-mysql
annotations:
analysis.crane.io/replicas-recommendation: |
replicas: 1
analysis.crane.io/resource-recommendation: |
containers:
- containerName: mysql
target:
cpu: 229m
memory: 2070Mi
k8s.eip.work/displayName: 数据库
k8s.eip.work/ingress: 'false'
k8s.eip.work/service: NodePort
k8s.eip.work/workload: cloud-mysql
k8s.kuboard.cn/workload: cloud-mysql
spec:
replicas: 0
selector:
matchLabels:
k8s.eip.work/layer: db
k8s.eip.work/name: cloud-mysql
k8s.kuboard.cn/name: cloud-mysql
template:
metadata:
creationTimestamp: null
labels:
k8s.eip.work/layer: db
k8s.eip.work/name: cloud-mysql
k8s.kuboard.cn/name: cloud-mysql
annotations:
kubectl.kubernetes.io/restartedAt: '2023-10-07T09:38:51+08:00'
spec:
volumes:
- name: mysql-cnf-map
configMap:
name: mysql-cnf
items:
- key: cnf
path: custom.cnf
defaultMode: 420
- name: tz
hostPath:
path: /usr/share/zoneinfo/Asia/Shanghai
type: File
containers:
- name: mysql
image: 'mysql:8.0.28'
env:
- name: MYSQL_ROOT_PASSWORD
value: wujiawei
resources:
limits:
cpu: 500m
memory: 256Mi
requests:
cpu: 500m
memory: 256Mi
volumeMounts:
- name: mysql-data
mountPath: /var/lib/mysql
- name: mysql-cnf-map
mountPath: /etc/mysql/conf.d/custom.cnf
subPath: custom.cnf
- name: tz
mountPath: /etc/localtime
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
restartPolicy: Always
terminationGracePeriodSeconds: 30
dnsPolicy: ClusterFirst
securityContext: {}
schedulerName: default-scheduler
volumeClaimTemplates:
- kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: mysql-data
creationTimestamp: null
annotations:
k8s.eip.work/pvcType: Dynamic
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 10Gi
storageClassName: nfs-storage
volumeMode: Filesystem
status:
phase: Pending
serviceName: cloud-mysql
podManagementPolicy: OrderedReady
updateStrategy:
type: RollingUpdate
rollingUpdate:
partition: 0
revisionHistoryLimit: 10
persistentVolumeClaimRetentionPolicy:
whenDeleted: Retain
whenScaled: Retain
---
kind: Service
apiVersion: v1
metadata:
name: cloud-mysql
namespace: default
labels:
k8s.eip.work/layer: db
k8s.eip.work/name: cloud-mysql
k8s.kuboard.cn/name: cloud-mysql
spec:
ports:
- name: zmac2s
protocol: TCP
port: 3306
targetPort: 3306
nodePort: 30512
selector:
k8s.eip.work/layer: db
k8s.eip.work/name: cloud-mysql
type: NodePort
sessionAffinity: None
externalTrafficPolicy: Cluster
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
internalTrafficPolicy: Cluster
---
kind: Service
apiVersion: v1
metadata:
name: wu-lazy-cloud-heartbeat-local-client-start
namespace: default
labels:
k8s.kuboard.cn/layer: monitor
k8s.kuboard.cn/name: wu-lazy-cloud-heartbeat-local-client-start
spec:
ports:
- name: net
protocol: TCP
port: 6001
targetPort: 6001
- name: nas
protocol: TCP
port: 100
targetPort: 100
- name: kuboard
protocol: TCP
port: 101
targetPort: 101
- name: netty-client
protocol: TCP
port: 6004
targetPort: 6004
- name: nas-web-dav
protocol: TCP
port: 102
targetPort: 102
- name: rustdesk
protocol: TCP
port: 103
targetPort: 103
- name: nastool
protocol: TCP
port: 104
targetPort: 104
- name: jellyfin
protocol: TCP
port: 105
targetPort: 105
- name: qb
protocol: TCP
port: 106
targetPort: 106
- name: jackett
protocol: TCP
port: 107
targetPort: 107
- name: rustdesk-tcp
protocol: TCP
port: 108
targetPort: 108
- name: rustdesk-ip-tcp
protocol: TCP
port: 109
targetPort: 109
- name: rustdesk-ip-udp
protocol: UDP
port: 109
targetPort: 109
- name: nas-ssh
protocol: TCP
port: 110
targetPort: 110
selector:
k8s.kuboard.cn/layer: monitor
k8s.kuboard.cn/name: wu-lazy-cloud-heartbeat-local-client-start
type: ClusterIP
sessionAffinity: None
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
internalTrafficPolicy: Cluster
---
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
name: wu-lazy-cloud-heartbeat-local-client-start
namespace: default
labels:
k8s.kuboard.cn/layer: monitor
k8s.kuboard.cn/name: wu-lazy-cloud-heartbeat-local-client-start
spec:
ingressClassName: traefik
rules:
- host: net.wu-framework.cn
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: wu-lazy-cloud-heartbeat-local-client-start
port:
number: 6001
- host: nas.wu-framework.cn
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: wu-lazy-cloud-heartbeat-local-client-start
port:
number: 100
- host: kuboard.wu-framework.cn
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: wu-lazy-cloud-heartbeat-local-client-start
port:
number: 101
- host: client.wu-framework.cn
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: wu-lazy-cloud-heartbeat-local-client-start
port:
number: 6004
- host: nas-dav.wu-framework.cn
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: wu-lazy-cloud-heartbeat-local-client-start
port:
number: 102
- host: rustdesk.wu-framework.cn
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: wu-lazy-cloud-heartbeat-local-client-start
port:
number: 103
- host: nastool.wu-framework.cn
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: wu-lazy-cloud-heartbeat-local-client-start
port:
number: 104
- host: jellyfin.wu-framework.cn
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: wu-lazy-cloud-heartbeat-local-client-start
port:
number: 105
- host: qb.wu-framework.cn
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: wu-lazy-cloud-heartbeat-local-client-start
port:
number: 106
- host: jackett.wu-framework.cn
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: wu-lazy-cloud-heartbeat-local-client-start
port:
number: 107
- host: rustdesk-tcp.wu-framework.cn
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: wu-lazy-cloud-heartbeat-local-client-start
port:
number: 108
- host: rustdesk-ip.wu-framework.cn
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: wu-lazy-cloud-heartbeat-local-client-start
port:
number: 109
- host: nas-ssh.wu-framework.cn
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: wu-lazy-cloud-heartbeat-local-client-start
port:
number: 110
---
kind: ConfigMap
apiVersion: v1
metadata:
name: mysql-cnf
namespace: default
data:
cnf: >-
[mysql]
no-auto-rehash
[mysqld]
skip-host-cache
skip-name-resolve
max_connections=100
lower_case_table_names=0
table_open_cache=64
default-time-zone='+08:00'
sql_mode='STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_ENGINE_SUBSTITUTION'
log_timestamps='SYSTEM'
performance_schema_max_table_instances=400 #设置效果不明显
performance_schema=off #效果明显
table_definition_cache=400
slow_query_log=ON
binlog_expire_logs_seconds = 2
innodb_flush_log_at_trx_commit=2
group_concat_max_len=512
# 全局参数
innodb_buffer_pool_size = 20M
innodb_buffer_pool_chunk_size=20M #效果不明显
key_buffer_size = 1M
# 排序和连接参数
sort_buffer_size = 256K
join_buffer_size = 256K
# 线程缓存参数
thread_cache_size = 4
# 日志参数
innodb_log_file_size = 5M
innodb_log_buffer_size = 1M

Binary file not shown (new image, 130 KiB).

Binary file not shown (new image, 114 KiB).

Binary file not shown (new image, 110 KiB).

Binary file not shown (modified image, 113 KiB before, 86 KiB after).

Binary file not shown (modified image, 124 KiB before, 100 KiB after).

Binary file not shown (deleted image, 148 KiB before).

pom.xml (130 lines changed)

@@ -8,12 +8,12 @@
<parent>
<artifactId>wu-framework-parent</artifactId>
<groupId>top.wu2020</groupId>
<version>1.2.6-JDK17-SNAPSHOT</version>
<version>1.3.0-JDK17</version>
</parent>
<artifactId>wu-lazy-cloud-network</artifactId>
<packaging>pom</packaging>
<version>1.2.6-JDK17-SNAPSHOT</version>
<version>1.3.0-JDK17</version>
<description>云上云下</description>
@@ -23,6 +23,8 @@
<module>wu-lazy-cloud-heartbeat-server-cluster</module>
<module>wu-lazy-cloud-heartbeat-client</module>
<module>wu-lazy-cloud-heartbeat-common</module>
<module>wu-lazy-cloud-heartbeat-dns</module>
<module>wu-lazy-cloud-heartbeat-protocol-proxy</module>
<!-- 样例 -->
<module>wu-lazy-cloud-heartbeat-start</module>
@@ -70,19 +72,12 @@
<dependency>
<groupId>top.wu2020</groupId>
<artifactId>wu-framework-dependencies</artifactId>
<version>1.2.6-JDK17-SNAPSHOT</version>
<version>1.3.0-JDK17</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
<repositories>
<repository>
<id>maven_central</id>
<name>Maven Central</name>
<url>https://repo.maven.apache.org/maven2/</url>
</repository>
</repositories>
<profiles>
@@ -182,55 +177,96 @@
<profile>
<id>native</id>
<build>
<pluginManagement>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-jar-plugin</artifactId>
<configuration>
<archive>
<manifestEntries>
<Spring-Boot-Native-Processed>true</Spring-Boot-Native-Processed>
</manifestEntries>
</archive>
</configuration>
</plugin>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<image>
<builder>paketobuildpacks/builder-jammy-tiny:latest</builder>
<env>
<BP_NATIVE_IMAGE>true</BP_NATIVE_IMAGE>
</env>
</image>
</configuration>
<executions>
<execution>
<id>process-aot</id>
<goals>
<goal>process-aot</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.graalvm.buildtools</groupId>
<artifactId>native-maven-plugin</artifactId>
<configuration>
<classesDirectory>${project.build.outputDirectory}</classesDirectory>
<requiredVersion>22.3</requiredVersion>
</configuration>
<executions>
<execution>
<id>add-reachability-metadata</id>
<goals>
<goal>add-reachability-metadata</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</pluginManagement>
</build>
</profile>
<profile>
<id>nativeTest</id>
<dependencies>
<dependency>
<groupId>org.junit.platform</groupId>
<artifactId>junit-platform-launcher</artifactId>
<scope>test</scope>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<executions>
<execution>
<id>process-test-aot</id>
<goals>
<goal>process-test-aot</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.graalvm.buildtools</groupId>
<artifactId>native-maven-plugin</artifactId>
<version>0.9.28</version>
<!-- 使用graalvm提供的可达性元数据很多第三方库就直接可以构建成可执行文件了 -->
<configuration>
<!-- for agent -->
<agent>
<defaultMode>Standard</defaultMode>
<options>
<builtinCallerFilter>true</builtinCallerFilter>
<builtinHeuristicFilter>true</builtinHeuristicFilter>
<enableExperimentalPredefinedClasses>true
</enableExperimentalPredefinedClasses>
<enableExperimentalUnsafeAllocationTracing>true
</enableExperimentalUnsafeAllocationTracing>
<trackReflectionMetadata>true</trackReflectionMetadata>
</options>
<metadataCopy>
<merge>true</merge>
</metadataCopy>
</agent>
<!-- for metadata repository -->
<metadataRepository>
<enabled>true</enabled>
</metadataRepository>
<classesDirectory>${project.build.outputDirectory}</classesDirectory>
<requiredVersion>22.3</requiredVersion>
</configuration>
<executions>
<execution>
<id>add-reachability-metadata</id>
<goals>
<goal>add-reachability-metadata</goal>
</goals>
</execution>
<execution>
<id>test-native</id>
<id>native-test</id>
<goals>
<goal>test</goal>
</goals>
<phase>test</phase>
</execution>
<execution>
<id>build-native</id>
<goals>
<goal>compile-no-fork</goal>
</goals>
<phase>package</phase>
</execution>
</executions>
</plugin>

Binary file not shown (new image, 120 KiB).

Binary file not shown (new image, 95 KiB).

Binary file not shown (new image, 104 KiB).

Binary file not shown (modified image, 428 KiB before, 398 KiB after).

View File

@@ -8,4 +8,26 @@
[fix] remove the add button on the client page
[fix] change the browser title to "intranet penetration"
[fix] HandleChannelTypeAdvanced: add a weight; smaller order values are applied first
[fix] network mapping: support create, modify, delete, and automatic change
[fix] network mapping: support create, modify, delete, and automatic change
#### 1.2.7-JDK17-SNAPSHOT
[fix] fix IO blocking caused by the synchronous network-traffic statistics thread
[fix] change defaults: read buffer 2 MB, write buffer 1 MB
[fix] the OS default read/write buffer size is 212992 (cat /proc/sys/net/core/wmem_max, cat /proc/sys/net/core/rmem_max, cat /proc/sys/net/core/wmem_default, cat /proc/sys/net/core/rmem_default); change it with the following commands
sudo sysctl -w net.core.rmem_default=4194304
sudo sysctl -w net.core.rmem_max=4194304
sudo sysctl -w net.core.wmem_default=4194304
sudo sysctl -w net.core.wmem_max=4194304
[fix] fix refresh failure when taking a client offline or deleting a mapping
#### 1.2.8-JDK17-SNAPSHOT
[change] rename the original "intranet penetration" feature to server permeates client
[change] add server permeates server: port mapping within the same local LAN
[change] add client permeates server: map a local port to a LAN port on another server's network
[change] add client permeates client: map a local port to a port in another LAN
#### 1.3.0-JDK17-SNAPSHOT
[change] add appkey & appsecret validation
[change] allow the same client ID to register multiple times
[change] fix scheduled thread-pool submit exception caused by channel close
[change] record the client IP
[change] 1.2.9 is a major version; data added to the protocol messages is not backward compatible, so keep the server and client on the same version
#### Next version plan: https

View File

@@ -5,7 +5,7 @@
<parent>
<groupId>top.wu2020</groupId>
<artifactId>wu-lazy-cloud-network</artifactId>
<version>1.2.6-JDK17-SNAPSHOT</version>
<version>1.3.0-JDK17</version>
</parent>
<modelVersion>4.0.0</modelVersion>
@@ -23,6 +23,11 @@
<groupId>top.wu2020</groupId>
<artifactId>wu-lazy-cloud-heartbeat-common</artifactId>
</dependency>
<dependency>
<groupId>top.wu2020</groupId>
<artifactId>wu-lazy-cloud-heartbeat-protocol-proxy</artifactId>
<version>1.3.0-JDK17</version>
</dependency>
<dependency>
<groupId>com.alibaba</groupId>
<artifactId>fastjson</artifactId>
@@ -44,13 +49,6 @@
</dependency>
</dependencies>
<repositories>
<repository>
<id>maven_central</id>
<name>Maven Central</name>
<url>https://repo.maven.apache.org/maven2/</url>
</repository>
</repositories>
</project>

View File

@@ -5,5 +5,5 @@ import org.wu.framework.lazy.orm.core.stereotype.LazyScan;
@ComponentScan(basePackages = "org.framework.lazy.cloud.network.heartbeat.client")
@LazyScan(scanBasePackages = "org.framework.lazy.cloud.network.heartbeat.client.infrastructure.entity")
public class EnableHeartbeatClientAutoConfiguration {
public class EnableClientAutoConfiguration {
}

View File

@@ -1,7 +1,7 @@
package org.framework.lazy.cloud.network.heartbeat.client.application;
import org.framework.lazy.cloud.network.heartbeat.client.netty.event.ClientChangeEvent;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.event.ClientChangeEvent;
/**
* 客户端状态变更事件

View File

@@ -66,7 +66,17 @@ public class LazyNettyServerPropertiesQueryListCommand {
*/
@Schema(description ="类型配置、DB",name ="type",example = "")
private PropertiesType type;
/**
* 令牌key
*/
@Schema(description = "令牌key", name = "appKey", example = "")
private String appKey;
/**
* 令牌密钥
*/
@Schema(description = "令牌密钥", name = "appSecret", example = "")
private String appSecret;
/**
*
* 更新时间

View File

@@ -65,7 +65,17 @@ public class LazyNettyServerPropertiesQueryOneCommand {
*/
@Schema(description ="类型配置、DB",name ="type",example = "")
private PropertiesType type;
/**
* 令牌key
*/
@Schema(description = "令牌key", name = "appKey", example = "")
private String appKey;
/**
* 令牌密钥
*/
@Schema(description = "令牌密钥", name = "appSecret", example = "")
private String appSecret;
/**
*
* 更新时间

View File

@@ -66,7 +66,17 @@ public class LazyNettyServerPropertiesRemoveCommand {
*/
@Schema(description ="类型配置、DB",name ="type",example = "")
private PropertiesType type;
/**
* 令牌key
*/
@Schema(description = "令牌key", name = "appKey", example = "")
private String appKey;
/**
* 令牌密钥
*/
@Schema(description = "令牌密钥", name = "appSecret", example = "")
private String appSecret;
/**
*
* 更新时间

View File

@@ -65,7 +65,17 @@ public class LazyNettyServerPropertiesStoryCommand {
*/
@Schema(description ="类型配置、DB",name ="type",example = "")
private PropertiesType type;
/**
* 令牌key
*/
@Schema(description = "令牌key", name = "appKey", example = "")
private String appKey;
/**
* 令牌密钥
*/
@Schema(description = "令牌密钥", name = "appSecret", example = "")
private String appSecret;
/**
*
* 更新时间

View File

@@ -65,7 +65,17 @@ public class LazyNettyServerPropertiesUpdateCommand {
*/
@Schema(description ="类型配置、DB",name ="type",example = "")
private PropertiesType type;
/**
* 令牌key
*/
@Schema(description = "令牌key", name = "appKey", example = "")
private String appKey;
/**
* 令牌密钥
*/
@Schema(description = "令牌密钥", name = "appSecret", example = "")
private String appSecret;
/**
*
* 更新时间

View File

@@ -65,7 +65,17 @@ public class LazyNettyServerPropertiesDTO {
*/
@Schema(description ="类型配置、DB",name ="type",example = "")
private PropertiesType type;
/**
* 令牌key
*/
@Schema(description = "令牌key", name = "appKey", example = "")
private String appKey;
/**
* 令牌密钥
*/
@Schema(description = "令牌密钥", name = "appSecret", example = "")
private String appSecret;
/**
*
* 更新时间

View File

@@ -2,18 +2,21 @@ package org.framework.lazy.cloud.network.heartbeat.client.application.impl;
import jakarta.annotation.Resource;
import lombok.extern.slf4j.Slf4j;
import org.framework.lazy.cloud.network.heartbeat.client.application.assembler.LazyNettyServerPropertiesDTOAssembler;
import org.framework.lazy.cloud.network.heartbeat.client.application.dto.LazyNettyServerPropertiesDTO;
import org.framework.lazy.cloud.network.heartbeat.client.domain.model.lazy.netty.server.properties.LazyNettyServerProperties;
import org.framework.lazy.cloud.network.heartbeat.client.domain.model.lazy.netty.server.properties.LazyNettyServerPropertiesRepository;
import org.framework.lazy.cloud.network.heartbeat.client.netty.socket.NettyClientSocket;
import org.framework.lazy.cloud.network.heartbeat.client.application.LazyNettyServerPropertiesApplication;
import org.framework.lazy.cloud.network.heartbeat.client.application.assembler.LazyNettyServerPropertiesDTOAssembler;
import org.framework.lazy.cloud.network.heartbeat.client.application.command.lazy.netty.server.properties.*;
import org.framework.lazy.cloud.network.heartbeat.client.application.dto.LazyNettyServerPropertiesDTO;
import org.framework.lazy.cloud.network.heartbeat.client.config.NettyClientProperties;
import org.framework.lazy.cloud.network.heartbeat.client.config.PropertiesType;
import org.framework.lazy.cloud.network.heartbeat.client.netty.event.ClientChangeEvent;
import org.framework.lazy.cloud.network.heartbeat.client.domain.model.lazy.netty.server.properties.LazyNettyServerProperties;
import org.framework.lazy.cloud.network.heartbeat.client.domain.model.lazy.netty.server.properties.LazyNettyServerPropertiesRepository;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.NettyClientSocket;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.event.ClientChangeEvent;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.socket.NettyTcpClientSocket;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.udp.socket.NettyUdpClientSocket;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.HandleChannelTypeAdvanced;
import org.framework.lazy.cloud.network.heartbeat.common.enums.NettyClientStatus;
import org.framework.lazy.cloud.network.heartbeat.common.enums.ProtocolType;
import org.wu.framework.core.NormalUsedString;
import org.wu.framework.database.lazy.web.plus.stereotype.LazyApplication;
import org.wu.framework.lazy.orm.database.lambda.domain.LazyPage;
@@ -186,16 +189,35 @@ public class LazyNettyServerPropertiesApplicationImpl implements LazyNettyServer
String inetHost = lazyNettyServerProperties.getInetHost();
Integer inetPort = lazyNettyServerProperties.getInetPort();
String clientId = lazyNettyServerProperties.getClientId();
String appKey = lazyNettyServerProperties.getAppKey();
String appSecret = lazyNettyServerProperties.getAppSecret();
ProtocolType protocolType = lazyNettyServerProperties.getProtocolType();
NettyClientSocket nettyClientSocket;
if (ProtocolType.TCP.equals(protocolType)) {
nettyClientSocket = new
NettyTcpClientSocket(inetHost, inetPort, clientId,
NormalUsedString.DEFAULT, appKey, appSecret,
clientChangeEvent, handleChannelTypeAdvancedList);
} else if (ProtocolType.UDP.equals(protocolType)) {
nettyClientSocket = new
NettyUdpClientSocket(inetHost, inetPort, clientId,
NormalUsedString.DEFAULT, appKey, appSecret,
clientChangeEvent, handleChannelTypeAdvancedList);
} else {
nettyClientSocket = null;
}
if (nettyClientSocket == null) {
return;
}
NettyClientSocket nettyClientSocket = new
NettyClientSocket(inetHost, inetPort, clientId,
NormalUsedString.DEFAULT,
clientChangeEvent, handleChannelTypeAdvancedList);
cacheNettyClientSocketMap.put(lazyNettyServerProperties, nettyClientSocket);
// 更新状态为运行中
lazyNettyServerProperties.setConnectStatus(NettyClientStatus.RUNNING);
lazyNettyServerPropertiesRepository.story(lazyNettyServerProperties);
Thread thread = new Thread(() -> {
try {
nettyClientSocket.newConnect2Server();
@@ -235,7 +257,7 @@ public class LazyNettyServerPropertiesApplicationImpl implements LazyNettyServer
@Override
public void destroyOneClientSocket(LazyNettyServerProperties needCloseLazyNettyServerProperties) {
// 关闭指定socket
cacheNettyClientSocketMap.forEach(((nettyServerProperties, nettyClientSocket) -> {
cacheNettyClientSocketMap.forEach(((nettyServerProperties, nettyTcpClientSocket) -> {
String clientId = nettyServerProperties.getClientId();
String inetHost = nettyServerProperties.getInetHost();
Integer inetPort = nettyServerProperties.getInetPort();
@@ -245,7 +267,7 @@ public class LazyNettyServerPropertiesApplicationImpl implements LazyNettyServer
if (Objects.equals(clientId, needCloseClientId)
&& Objects.equals(inetPort, needCloseInetPort)
&& Objects.equals(inetHost, needCloseInetHost)) {
nettyClientSocket.shutdown();
nettyTcpClientSocket.shutdown();
// 关闭客户端:{}与服务端连接:{}:{}
log.warn("Close client: {} Connect to server: {}: {}", clientId, inetHost, inetPort);
}
@@ -258,8 +280,8 @@ public class LazyNettyServerPropertiesApplicationImpl implements LazyNettyServer
@Override
public void destroyClientSocket() {
// 关闭socket
cacheNettyClientSocketMap.forEach(((nettyServerProperties, nettyClientSocket) -> {
nettyClientSocket.shutdown();
cacheNettyClientSocketMap.forEach(((nettyServerProperties, nettyTcpClientSocket) -> {
nettyTcpClientSocket.shutdown();
String clientId = nettyServerProperties.getClientId();
String inetHost = nettyServerProperties.getInetHost();
Integer inetPort = nettyServerProperties.getInetPort();

View File

@@ -1,82 +1,386 @@
package org.framework.lazy.cloud.network.heartbeat.client.config;
import lombok.extern.slf4j.Slf4j;
import org.framework.lazy.cloud.network.heartbeat.client.netty.socket.NettyClientSocket;
import org.springframework.boot.CommandLineRunner;
import org.framework.lazy.cloud.network.heartbeat.client.context.NettyClientSocketApplicationListener;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.event.ClientChangeEvent;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.advanced.*;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.udp.advanced.*;
import org.framework.lazy.cloud.network.heartbeat.client.netty.proxy.http.advanced.*;
import org.springframework.beans.factory.config.BeanDefinition;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.context.annotation.Bean;
import org.framework.lazy.cloud.network.heartbeat.client.netty.event.ClientChangeEvent;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.HandleChannelTypeAdvanced;
import org.wu.framework.core.NormalUsedString;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Import;
import org.springframework.context.annotation.Role;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
/**
* description 自动配置
*
* @author 吴佳伟
* @date 2023/09/12 18:22
* @see InitConfig
*/
@Deprecated
@Slf4j
public class ClientAutoConfiguration implements CommandLineRunner {
private final NettyClientProperties nettyClientProperties;
private final ClientChangeEvent clientChangeEvent;
private final List<HandleChannelTypeAdvanced> handleChannelTypeAdvancedList; // 处理服务端发送过来的数据类型
@Import({NettyClientSocketApplicationListener.class})
@Role(BeanDefinition.ROLE_INFRASTRUCTURE)
@ConditionalOnProperty(prefix = NettyClientProperties.PREFIX, name = "enabled", havingValue = "true", matchIfMissing = true)
public class ClientAutoConfiguration {
public static final ThreadPoolExecutor NETTY_CLIENT_EXECUTOR = new ThreadPoolExecutor(1, 1, 200, TimeUnit.MILLISECONDS,
new ArrayBlockingQueue<>(1));
@Configuration()
static class ClientTcpConfiguration {
/**
* 服务端 处理客户端心跳
*
* @return ClientHandleTcpChannelHeartbeatTypeAdvanced
*/
@Bean
public ClientHandleTcpChannelHeartbeatTypeAdvanced clientHandleTcpChannelHeartbeatTypeAdvanced() {
return new ClientHandleTcpChannelHeartbeatTypeAdvanced();
}
public ClientAutoConfiguration(NettyClientProperties nettyClientProperties,
ClientChangeEvent clientChangeEvent,
List<HandleChannelTypeAdvanced> handleChannelTypeAdvancedList) {
this.nettyClientProperties = nettyClientProperties;
this.clientChangeEvent = clientChangeEvent;
this.handleChannelTypeAdvancedList = handleChannelTypeAdvancedList;
/**
* 处理 客户端代理的真实端口自动读写
*
* @return ClientHandleTcpDistributeSingleClientRealAutoReadConnectTypeAdvanced
*/
@Bean
public ClientHandleTcpDistributeSingleClientRealAutoReadConnectTypeAdvanced clientHandleTcpDistributeSingleClientRealAutoReadConnectTypeAdvanced() {
return new ClientHandleTcpDistributeSingleClientRealAutoReadConnectTypeAdvanced();
}
/**
* 处理 接收服务端发送过来的聊天信息
*
* @return ClientHandleTcpDistributeSingleClientMessageTypeAdvanced
*/
@Bean
public ClientHandleTcpDistributeSingleClientMessageTypeAdvanced clientHandleTcpDistributeSingleClientMessageTypeAdvanced() {
return new ClientHandleTcpDistributeSingleClientMessageTypeAdvanced();
}
/**
* 处理 客户端渗透服务端数据传输通道连接成功
*
* @return ClientHandleTcpDistributeClientTransferServerPermeateChannelConnectionSuccessfulTypeAdvanced
*/
@Bean
public ClientHandleTcpDistributeClientTransferServerPermeateChannelConnectionSuccessfulTypeAdvanced clientHandleTcpDistributeClientTransferServerPermeateChannelConnectionSuccessfulTypeAdvanced() {
return new ClientHandleTcpDistributeClientTransferServerPermeateChannelConnectionSuccessfulTypeAdvanced();
}
/**
* 处理 客户端渗透客户端数据传输通道连接成功
*
* @return ClientHandleTcpDistributeClientTransferClientPermeateChannelConnectionSuccessfulTypeAdvanced
*/
@Bean
public ClientHandleTcpDistributeClientTransferClientPermeateChannelConnectionSuccessfulTypeAdvanced clientHandleTcpDistributeClientTransferClientPermeateChannelConnectionSuccessfulTypeAdvanced() {
return new ClientHandleTcpDistributeClientTransferClientPermeateChannelConnectionSuccessfulTypeAdvanced();
}
@Bean
public ClientHandleTcpDistributeClientTransferClientPermeateChannelInitSuccessfulTypeAdvanced clientHandleTcpDistributeClientTransferClientPermeateChannelInitSuccessfulTypeAdvanced() {
return new ClientHandleTcpDistributeClientTransferClientPermeateChannelInitSuccessfulTypeAdvanced();
}
@Bean
public ClientHandleTcpDistributeClientTransferClientRequestTypeAdvanced clientHandleTcpDistributeClientTransferClientRequestTypeAdvanced() {
return new ClientHandleTcpDistributeClientTransferClientRequestTypeAdvanced();
}
@Bean
public ClientHandleTcpDistributeServicePermeateClientTransferClientResponseTypeAdvanced clientHandleTcpDistributeServicePermeateClientTransferClientResponseTypeAdvanced() {
return new ClientHandleTcpDistributeServicePermeateClientTransferClientResponseTypeAdvanced();
}
@Bean
public ClientHandleTcpDistributeSingleClientRealCloseVisitorTypeAdvanced clientHandleTcpDistributeSingleClientRealCloseVisitorTypeAdvanced() {
return new ClientHandleTcpDistributeSingleClientRealCloseVisitorTypeAdvanced();
}
@Bean
public ClientHandleTcpChannelTransferTypeAdvancedHandleDistributeTcpDistribute clientHandleTcpChannelTransferTypeAdvancedHandleDistributeTcpDistribute(NettyClientProperties nettyClientProperties) {
return new ClientHandleTcpChannelTransferTypeAdvancedHandleDistributeTcpDistribute(nettyClientProperties);
}
@Bean
public ClientHandleTcpDistributeConnectSuccessNotificationTypeAdvancedHandle clientHandleTcpDistributeConnectSuccessNotificationTypeAdvancedHandle(ClientChangeEvent clientChangeEvent) {
return new ClientHandleTcpDistributeConnectSuccessNotificationTypeAdvancedHandle(clientChangeEvent);
}
@Bean
public ClientHandleTcpClientChannelActiveAdvanced clientHandleTcpClientChannelActiveAdvanced(NettyClientProperties nettyClientProperties) {
return new ClientHandleTcpClientChannelActiveAdvanced(nettyClientProperties);
}
@Bean
public ClientHandleTcpDistributeDisconnectTypeAdvancedHandle clientHandleTcpDistributeDisconnectTypeAdvancedHandle(ClientChangeEvent clientChangeEvent) {
return new ClientHandleTcpDistributeDisconnectTypeAdvancedHandle(clientChangeEvent);
}
@Bean
public ClientHandleTcpDistributeStagingClosedTypeAdvanced clientHandleTcpDistributeStagingClosedTypeAdvanced() {
return new ClientHandleTcpDistributeStagingClosedTypeAdvanced();
}
@Bean
public ClientHandleTcpDistributeStagingOpenedTypeAdvanced clientHandleTcpDistributeStagingOpenedTypeAdvanced() {
return new ClientHandleTcpDistributeStagingOpenedTypeAdvanced();
}
/**
* 处理 客户端渗透服务端init信息
*
* @return ClientHandleTcpDistributeClientPermeateServerInitTypeAdvanced
*/
@Bean
public ClientHandleTcpDistributeClientPermeateServerInitTypeAdvanced clientHandleTcpDistributeClientPermeateServerInitTypeAdvanced(NettyClientProperties nettyClientProperties) {
return new ClientHandleTcpDistributeClientPermeateServerInitTypeAdvanced(nettyClientProperties);
}
/**
* 处理 客户端渗透服务端init close 信息
*
* @return ClientHandleTcpDistributeClientPermeateServerCloseTypeAdvanced
*/
@Bean
public ClientHandleTcpDistributeClientPermeateServerCloseTypeAdvanced clientHandleTcpDistributeClientPermeateServerCloseTypeAdvanced() {
return new ClientHandleTcpDistributeClientPermeateServerCloseTypeAdvanced();
}
@Bean
public ClientHandleTcpDistributeClientPermeateServerTransferTypeAdvanced clientHandleTcpDistributeClientPermeateServerTransferTypeAdvanced() {
return new ClientHandleTcpDistributeClientPermeateServerTransferTypeAdvanced();
}
@Bean
public ClientHandleTcpDistributeClientPermeateClientCloseTypeAdvanced clientHandleTcpDistributeClientPermeateClientCloseTypeAdvanced() {
return new ClientHandleTcpDistributeClientPermeateClientCloseTypeAdvanced();
}
@Bean
public ClientHandleTcpDistributeClientPermeateClientInitTypeAdvanced clientHandleTcpDistributeClientPermeateClientInitTypeAdvanced() {
return new ClientHandleTcpDistributeClientPermeateClientInitTypeAdvanced();
}
@Bean
public ClientHandleTcpDistributeClientPermeateClientTransferCloseTypeAdvanced clientHandleTcpDistributeClientPermeateClientTransferCloseTypeAdvanced() {
return new ClientHandleTcpDistributeClientPermeateClientTransferCloseTypeAdvanced();
}
@Bean
public ClientHandleTcpDistributeServicePermeateClientRealConnectTypeAdvanced clientHandleTcpDistributeServicePermeateClientRealConnectTypeAdvanced(
NettyClientProperties nettyClientProperties) {
return new ClientHandleTcpDistributeServicePermeateClientRealConnectTypeAdvanced(nettyClientProperties);
}
@Bean
public ClientHandleTcpDistributeClientPermeateServerTransferCloseTypeAdvanced clientHandleTcpDistributeClientPermeateServerTransferCloseTypeAdvanced() {
return new ClientHandleTcpDistributeClientPermeateServerTransferCloseTypeAdvanced();
}
}
@Configuration()
static class ClientUdpConfiguration {
/**
* 服务端 处理客户端心跳
*
* @return ClientHandleUdpChannelHeartbeatTypeAdvanced
*/
@Bean
public ClientHandleUdpChannelHeartbeatTypeAdvanced clientHandleUdpChannelHeartbeatTypeAdvanced() {
return new ClientHandleUdpChannelHeartbeatTypeAdvanced();
}
@Bean(destroyMethod = "shutdown")
public NettyClientSocket nettyClientSocket() {
String inetHost = nettyClientProperties.getInetHost();
int inetPort = nettyClientProperties.getInetPort();
String clientId = nettyClientProperties.getClientId();
return new NettyClientSocket(inetHost, inetPort, clientId, NormalUsedString.DEFAULT, clientChangeEvent, handleChannelTypeAdvancedList);
}
/**
* 处理 客户端代理的真实端口自动读写
*
* @return ClientHandleUdpDistributeSingleClientRealAutoReadConnectTypeAdvanced
*/
@Bean
public ClientHandleUdpDistributeSingleClientRealAutoReadConnectTypeAdvanced clientHandleUdpDistributeSingleClientRealAutoReadConnectTypeAdvanced() {
return new ClientHandleUdpDistributeSingleClientRealAutoReadConnectTypeAdvanced();
}
/**
* 处理 接收服务端发送过来的聊天信息
*
* @return ClientHandleUdpDistributeSingleClientMessageTypeAdvanced
*/
@Bean
public ClientHandleUdpDistributeSingleClientMessageTypeAdvanced clientHandleUdpDistributeSingleClientMessageTypeAdvanced() {
return new ClientHandleUdpDistributeSingleClientMessageTypeAdvanced();
}
/**
* 处理 客户端渗透服务端数据传输通道连接成功
*
* @return ClientHandleUdpDistributeClientTransferServerPermeateChannelConnectionSuccessfulTypeAdvanced
*/
@Bean
public ClientHandleUdpDistributeClientTransferServerPermeateChannelConnectionSuccessfulTypeAdvanced clientHandleUdpDistributeClientTransferServerPermeateChannelConnectionSuccessfulTypeAdvanced() {
return new ClientHandleUdpDistributeClientTransferServerPermeateChannelConnectionSuccessfulTypeAdvanced();
}
/**
* 处理 客户端渗透客户端数据传输通道连接成功
*
* @return ClientHandleUdpDistributeClientTransferClientPermeateChannelConnectionSuccessfulTypeAdvanced
*/
@Bean
public ClientHandleUdpDistributeClientTransferClientPermeateChannelConnectionSuccessfulTypeAdvanced clientHandleUdpDistributeClientTransferClientPermeateChannelConnectionSuccessfulTypeAdvanced() {
return new ClientHandleUdpDistributeClientTransferClientPermeateChannelConnectionSuccessfulTypeAdvanced();
}
@Bean
public ClientHandleUdpDistributeClientTransferClientPermeateChannelInitSuccessfulTypeAdvanced clientHandleUdpDistributeClientTransferClientPermeateChannelInitSuccessfulTypeAdvanced() {
return new ClientHandleUdpDistributeClientTransferClientPermeateChannelInitSuccessfulTypeAdvanced();
}
@Bean
public ClientHandleUdpDistributeClientTransferClientRequestTypeAdvanced clientHandleUdpDistributeClientTransferClientRequestTypeAdvanced() {
return new ClientHandleUdpDistributeClientTransferClientRequestTypeAdvanced();
}
@Bean
public ClientHandleUdpDistributeServicePermeateClientTransferClientResponseTypeAdvanced clientHandleUdpDistributeServicePermeateClientTransferClientResponseTypeAdvanced() {
return new ClientHandleUdpDistributeServicePermeateClientTransferClientResponseTypeAdvanced();
}
@Bean
public ClientHandleUdpDistributeSingleClientRealCloseVisitorTypeAdvanced clientHandleUdpDistributeSingleClientRealCloseVisitorTypeAdvanced() {
return new ClientHandleUdpDistributeSingleClientRealCloseVisitorTypeAdvanced();
}
@Bean
public ClientHandleUdpChannelTransferTypeAdvancedHandleDistribute clientHandleUdpChannelTransferTypeAdvancedHandleDistribute(NettyClientProperties nettyClientProperties) {
return new ClientHandleUdpChannelTransferTypeAdvancedHandleDistribute(nettyClientProperties);
}
@Bean
public ClientHandleUdpDistributeConnectSuccessNotificationTypeAdvancedHandle clientHandleUdpDistributeConnectSuccessNotificationTypeAdvancedHandle(ClientChangeEvent clientChangeEvent) {
return new ClientHandleUdpDistributeConnectSuccessNotificationTypeAdvancedHandle(clientChangeEvent);
}
@Bean
public ClientHandleUdpClientChannelActiveAdvanced clientHandleUdpClientChannelActiveAdvanced(NettyClientProperties nettyClientProperties) {
return new ClientHandleUdpClientChannelActiveAdvanced(nettyClientProperties);
}
@Bean
public ClientHandleUdpDistributeDisconnectTypeAdvancedHandle clientHandleUdpDistributeDisconnectTypeAdvancedHandle(ClientChangeEvent clientChangeEvent) {
return new ClientHandleUdpDistributeDisconnectTypeAdvancedHandle(clientChangeEvent);
}
@Bean
public ClientHandleUdpDistributeStagingClosedTypeAdvanced clientHandleUdpDistributeStagingClosedTypeAdvanced() {
return new ClientHandleUdpDistributeStagingClosedTypeAdvanced();
}
@Bean
public ClientHandleUdpDistributeStagingOpenedTypeAdvanced clientHandleUdpDistributeStagingOpenedTypeAdvanced() {
return new ClientHandleUdpDistributeStagingOpenedTypeAdvanced();
}
/**
* 处理 客户端渗透服务端init信息
*
* @return ClientHandleUdpDistributeClientPermeateServerInitTypeAdvanced
*/
@Bean
public ClientHandleUdpDistributeClientPermeateServerInitTypeAdvanced clientHandleUdpDistributeClientPermeateServerInitTypeAdvanced(NettyClientProperties nettyClientProperties) {
return new ClientHandleUdpDistributeClientPermeateServerInitTypeAdvanced(nettyClientProperties);
}
/**
* 处理 客户端渗透服务端init close 信息
*
* @return ClientHandleUdpDistributeClientPermeateServerCloseTypeAdvanced
*/
@Bean
public ClientHandleUdpDistributeClientPermeateServerCloseTypeAdvanced clientHandleUdpDistributeClientPermeateServerCloseTypeAdvanced() {
return new ClientHandleUdpDistributeClientPermeateServerCloseTypeAdvanced();
}
@Bean
public ClientHandleUdpDistributeClientPermeateServerTransferTypeAdvanced clientHandleUdpDistributeClientPermeateServerTransferTypeAdvanced() {
return new ClientHandleUdpDistributeClientPermeateServerTransferTypeAdvanced();
}
@Bean
public ClientHandleUdpDistributeClientPermeateClientCloseTypeAdvanced clientHandleUdpDistributeClientPermeateClientCloseTypeAdvanced() {
return new ClientHandleUdpDistributeClientPermeateClientCloseTypeAdvanced();
}
@Bean
public ClientHandleUdpDistributeClientPermeateClientInitTypeAdvanced clientHandleUdpDistributeClientPermeateClientInitTypeAdvanced() {
return new ClientHandleUdpDistributeClientPermeateClientInitTypeAdvanced();
}
@Bean
public ClientHandleUdpDistributeClientPermeateClientTransferCloseTypeAdvanced clientHandleUdpDistributeClientPermeateClientTransferCloseTypeAdvanced() {
return new ClientHandleUdpDistributeClientPermeateClientTransferCloseTypeAdvanced();
}
@Bean
public ClientHandleUdpDistributeServicePermeateClientRealConnectTypeAdvanced clientHandleUdpDistributeServicePermeateClientRealConnectTypeAdvanced(
NettyClientProperties nettyClientProperties) {
return new ClientHandleUdpDistributeServicePermeateClientRealConnectTypeAdvanced(nettyClientProperties);
}
@Bean
public ClientHandleUdpDistributeClientPermeateServerTransferCloseTypeAdvanced clientHandleUdpDistributeClientPermeateServerTransferCloseTypeAdvanced() {
return new ClientHandleUdpDistributeClientPermeateServerTransferCloseTypeAdvanced();
}
}
/**
 * @param args
 * @throws Exception
 */
@Override
public void run(String... args) throws Exception {
String inetHost = nettyClientProperties.getInetHost();
int inetPort = nettyClientProperties.getInetPort();
String clientId = nettyClientProperties.getClientId();
NettyClientSocket nettyClientSocket = new NettyClientSocket(
inetHost, inetPort,
clientId, NormalUsedString.DEFAULT,
clientChangeEvent, handleChannelTypeAdvancedList);
Thread thread = new Thread(() -> {
try {
nettyClientSocket.newConnect2Server();
} catch (Exception e) {
throw new RuntimeException(e);
}
});
log.info("当前服务连接Netty客户端:{},Netty端口:{}", inetHost, inetPort);
NETTY_CLIENT_EXECUTOR.execute(thread);
}
@Configuration
static class HttpProxyConfiguration {
@Bean
public ClientHandleDistributeHttpClientProxyServerTransferTypeAdvanced clientHandleDistributeHttpClientProxyServerTypeAdvanced() {
return new ClientHandleDistributeHttpClientProxyServerTransferTypeAdvanced();
}
@Bean
public ClientHandleHttpClientProxyClientTypeAdvanced clientHandleHttpClientClientProxyClientTypeAdvanced() {
return new ClientHandleHttpClientProxyClientTypeAdvanced();
}
@Bean
public ClientHandleHttpClientProxyServerTypeAdvanced clientHandleHttpClientClientProxyServerTypeAdvanced() {
return new ClientHandleHttpClientProxyServerTypeAdvanced();
}
@Bean
public ClientHandleDistributeHttpClientProxyServerServerRouteTypeAdvanced clientHandleDistributeHttpClientProxyServerServerRouteTypeAdvanced() {
return new ClientHandleDistributeHttpClientProxyServerServerRouteTypeAdvanced();
}
@Bean
public ClientHandleDistributeHttpClientProxyServerClientRouteTypeAdvanced clientHandleDistributeHttpClientProxyServerClientRouteTypeAdvanced() {
return new ClientHandleDistributeHttpClientProxyServerClientRouteTypeAdvanced();
}
@Bean
public ClientHandleDistributeHttpClientProxyClientConnectionTransferSuccessfulAdvanced clientHandleDistributeHttpClientProxyClientConnectionTransferSuccessfulAdvanced() {
return new ClientHandleDistributeHttpClientProxyClientConnectionTransferSuccessfulAdvanced();
}
@Bean
public ClientHandleDistributeHttpClientProxyClientTransferRequestAdvanced clientHandleDistributeHttpClientProxyClientTransferRequestAdvanced() {
return new ClientHandleDistributeHttpClientProxyClientTransferRequestAdvanced();
}
@Bean
public ClientHandleHttpDistributeClientProxyClientTransferResponseTypeAdvanced clientHandleHttpDistributeClientProxyClientTransferResponseTypeAdvanced() {
return new ClientHandleHttpDistributeClientProxyClientTransferResponseTypeAdvanced();
}
@Bean
public ClientHandleHttpDistributeClientProxyClientTransferCLoseTypeAdvanced clientHandleHttpDistributeClientProxyClientTransferCLoseTypeAdvanced() {
return new ClientHandleHttpDistributeClientProxyClientTransferCLoseTypeAdvanced();
}
@Bean
public ClientHandleDistributeHttpServerProxyClientConnectionSuccessfulTypeAdvanced clientHandleDistributeHttpServerProxyClientConnectionSuccessfulTypeAdvanced() {
return new ClientHandleDistributeHttpServerProxyClientConnectionSuccessfulTypeAdvanced();
}
@Bean
public ClientHandleDistributeHttpServerProxyClientTransferRequestAdvanced clientHandleDistributeHttpServerProxyClientTransferRequestAdvanced() {
return new ClientHandleDistributeHttpServerProxyClientTransferRequestAdvanced();
}
}
}

View File

@@ -1,90 +0,0 @@
package org.framework.lazy.cloud.network.heartbeat.client.config;
import org.framework.lazy.cloud.network.heartbeat.client.netty.advanced.*;
import org.springframework.beans.factory.config.BeanDefinition;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Role;
import org.framework.lazy.cloud.network.heartbeat.client.netty.event.ClientChangeEvent;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.HandleChannelTypeAdvanced;
import java.util.List;
@Role(BeanDefinition.ROLE_INFRASTRUCTURE)
@ConditionalOnProperty(prefix = NettyClientProperties.PREFIX, name = "enabled", havingValue = "true", matchIfMissing = true)
public class HeartbeatClientConfiguration {
/**
* 服务端 处理客户端心跳
*
* @return ClientHandleChannelHeartbeatTypeAdvanced
*/
@Bean
public ClientHandleChannelHeartbeatTypeAdvanced clientChannelHeartbeatTypeAdvanced() {
return new ClientHandleChannelHeartbeatTypeAdvanced();
}
/**
* 处理 客户端代理的真实端口自动读写
*
* @return ClientHandleDistributeSingleClientRealAutoReadConnectTypeAdvanced
*/
@Bean
public ClientHandleDistributeSingleClientRealAutoReadConnectTypeAdvanced handleDistributeSingleClientRealAutoReadConnectTypeAdvanced() {
return new ClientHandleDistributeSingleClientRealAutoReadConnectTypeAdvanced();
}
/**
* 处理 接收服务端发送过来的聊天信息
*
* @return ClientHandleDistributeSingleClientMessageTypeAdvanced
*/
@Bean
public ClientHandleDistributeSingleClientMessageTypeAdvanced handleDistributeSingleClientMessageTypeAdvanced() {
return new ClientHandleDistributeSingleClientMessageTypeAdvanced();
}
@Bean
public ClientHandleDistributeSingleClientRealCloseVisitorTypeAdvanced handleDistributeSingleClientRealCloseVisitorTypeAdvanced() {
return new ClientHandleDistributeSingleClientRealCloseVisitorTypeAdvanced();
}
@Bean
public ClientReportHandleChannelTransferTypeAdvancedHandleDistribute handleChannelTransferTypeAdvancedHandleDistribute(NettyClientProperties nettyClientProperties) {
return new ClientReportHandleChannelTransferTypeAdvancedHandleDistribute(nettyClientProperties);
}
@Bean
public HandleDistributeConnectSuccessNotificationTypeAdvancedHandle handleDistributeConnectSuccessNotificationTypeAdvancedHandle(ClientChangeEvent clientChangeEvent) {
return new HandleDistributeConnectSuccessNotificationTypeAdvancedHandle(clientChangeEvent);
}
@Bean
public HandleClientChannelActiveAdvanced handleClientChannelActiveAdvanced(NettyClientProperties nettyClientProperties) {
return new HandleClientChannelActiveAdvanced(nettyClientProperties);
}
@Bean
public HandleDistributeDisconnectTypeAdvancedHandle handleDistributeDisconnectTypeAdvancedHandle(ClientChangeEvent clientChangeEvent) {
return new HandleDistributeDisconnectTypeAdvancedHandle(clientChangeEvent);
}
@Bean
public HandleDistributeStagingClosedTypeAdvanced handleDistributeStagingClosedTypeAdvanced() {
return new HandleDistributeStagingClosedTypeAdvanced();
}
@Bean
public HandleDistributeStagingOpenedTypeAdvanced handleDistributeStagingOpenedTypeAdvanced() {
return new HandleDistributeStagingOpenedTypeAdvanced();
}
@Bean
public ClientHandleDistributeSingleClientRealConnectTypeAdvanced clientHandleDistributeSingleClientRealConnectTypeAdvanced(NettyClientProperties nettyClientProperties,
List<HandleChannelTypeAdvanced> handleChannelTypeAdvancedList) {
return new ClientHandleDistributeSingleClientRealConnectTypeAdvanced(nettyClientProperties, handleChannelTypeAdvancedList);
}
}

View File

@@ -1,86 +0,0 @@
package org.framework.lazy.cloud.network.heartbeat.client.config;
import lombok.extern.slf4j.Slf4j;
import org.framework.lazy.cloud.network.heartbeat.client.infrastructure.entity.LazyNettyServerPropertiesDO;
import org.framework.lazy.cloud.network.heartbeat.client.application.LazyNettyServerPropertiesApplication;
import org.springframework.beans.factory.DisposableBean;
import org.springframework.beans.factory.config.BeanDefinition;
import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Role;
import org.wu.framework.lazy.orm.database.lambda.stream.lambda.LazyLambdaStream;
import org.wu.framework.lazy.orm.database.lambda.stream.wrapper.LazyWrappers;
import java.util.Objects;
/**
* 初始化配置
*/
@Role(BeanDefinition.ROLE_INFRASTRUCTURE)
@Slf4j
@Configuration
public class InitConfig implements CommandLineRunner, DisposableBean {
private final NettyClientProperties nettyClientProperties;
private final LazyLambdaStream lazyLambdaStream;
private final LazyNettyServerPropertiesApplication lazyNettyServerPropertiesApplication;
public InitConfig(NettyClientProperties nettyClientProperties, LazyLambdaStream lazyLambdaStream, LazyNettyServerPropertiesApplication lazyNettyServerPropertiesApplication) {
this.nettyClientProperties = nettyClientProperties;
this.lazyLambdaStream = lazyLambdaStream;
this.lazyNettyServerPropertiesApplication = lazyNettyServerPropertiesApplication;
}
@Override
public void run(String... args) throws Exception {
try {
// 存储配置到db
initDb2Config();
// 启动客户端连接
lazyNettyServerPropertiesApplication.starterAllClientSocket();
} catch (Exception e) {
e.printStackTrace();
}
}
/**
* 存储配置到db
*/
public void initDb2Config() {
String clientId = nettyClientProperties.getClientId();
String inetHost = nettyClientProperties.getInetHost();
int inetPort = nettyClientProperties.getInetPort();
if (Objects.isNull(clientId) ||
Objects.isNull(inetHost)) {
log.warn("配置信息为空,请通过页面添加配置信息:{}", nettyClientProperties);
return;
}
LazyNettyServerPropertiesDO lazyNettyServerPropertiesDO = new LazyNettyServerPropertiesDO();
lazyNettyServerPropertiesDO.setClientId(clientId);
lazyNettyServerPropertiesDO.setInetHost(inetHost);
lazyNettyServerPropertiesDO.setInetPort(inetPort);
lazyNettyServerPropertiesDO.setType(PropertiesType.CONFIG);
lazyNettyServerPropertiesDO.setIsDeleted(false);
// 根据服务端端口、port 唯一性验证
boolean exists = lazyLambdaStream.exists(LazyWrappers.<LazyNettyServerPropertiesDO>lambdaWrapper()
.eq(LazyNettyServerPropertiesDO::getInetHost, inetHost)
.eq(LazyNettyServerPropertiesDO::getInetPort, inetPort)
.eq(LazyNettyServerPropertiesDO::getClientId, clientId)
);
if (!exists) {
lazyLambdaStream.insert(lazyNettyServerPropertiesDO);
}
}
/**
* 程序关闭后执行
*/
@Override
public void destroy() {
lazyNettyServerPropertiesApplication.destroyClientSocket();
}
}

View File

@@ -1,6 +1,7 @@
package org.framework.lazy.cloud.network.heartbeat.client.config;
import lombok.Data;
import org.framework.lazy.cloud.network.heartbeat.common.enums.ProtocolType;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Configuration;
@@ -29,6 +30,22 @@ public class NettyClientProperties {
*/
private String clientId;
/**
* 协议类型
*/
private ProtocolType protocolType = ProtocolType.TCP;
/**
*
* 令牌key
*/
private String appKey;
/**
*
* 令牌密钥
*/
private String appSecret;
/**
* 是否开启 默认是
*/

View File

@@ -0,0 +1,104 @@
package org.framework.lazy.cloud.network.heartbeat.client.context;
import jakarta.annotation.PreDestroy;
import lombok.extern.slf4j.Slf4j;
import org.framework.lazy.cloud.network.heartbeat.client.application.LazyNettyServerPropertiesApplication;
import org.framework.lazy.cloud.network.heartbeat.client.config.NettyClientProperties;
import org.framework.lazy.cloud.network.heartbeat.client.config.PropertiesType;
import org.framework.lazy.cloud.network.heartbeat.client.infrastructure.entity.LazyNettyServerPropertiesDO;
import org.framework.lazy.cloud.network.heartbeat.common.enums.ProtocolType;
import org.springframework.beans.factory.DisposableBean;
import org.springframework.boot.context.event.ApplicationStartedEvent;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;
import org.wu.framework.lazy.orm.database.lambda.stream.lambda.LazyLambdaStream;
import org.wu.framework.lazy.orm.database.lambda.stream.wrapper.LazyWrappers;
import java.util.Objects;
@Slf4j
@Component
public class NettyClientSocketApplicationListener implements ApplicationListener<ApplicationStartedEvent>, DisposableBean {
private final NettyClientProperties nettyClientProperties;
private final LazyLambdaStream lazyLambdaStream;
private final LazyNettyServerPropertiesApplication lazyNettyServerPropertiesApplication;
public NettyClientSocketApplicationListener(NettyClientProperties nettyClientProperties, LazyLambdaStream lazyLambdaStream, LazyNettyServerPropertiesApplication lazyNettyServerPropertiesApplication) {
this.nettyClientProperties = nettyClientProperties;
this.lazyLambdaStream = lazyLambdaStream;
this.lazyNettyServerPropertiesApplication = lazyNettyServerPropertiesApplication;
}
/**
* 存储配置到db
*/
public void initDb2Config() {
try {
String clientId = nettyClientProperties.getClientId();
String inetHost = nettyClientProperties.getInetHost();
int inetPort = nettyClientProperties.getInetPort();
String appKey = nettyClientProperties.getAppKey();
String appSecret = nettyClientProperties.getAppSecret();
ProtocolType protocolType = nettyClientProperties.getProtocolType();
if (Objects.isNull(clientId) ||
Objects.isNull(inetHost)) {
log.warn("配置信息为空,请通过页面添加配置信息:{}", nettyClientProperties);
return;
}
LazyNettyServerPropertiesDO lazyNettyServerPropertiesDO = new LazyNettyServerPropertiesDO();
lazyNettyServerPropertiesDO.setClientId(clientId);
lazyNettyServerPropertiesDO.setInetHost(inetHost);
lazyNettyServerPropertiesDO.setInetPort(inetPort);
lazyNettyServerPropertiesDO.setType(PropertiesType.CONFIG);
lazyNettyServerPropertiesDO.setIsDeleted(false);
lazyNettyServerPropertiesDO.setAppKey(appKey);
lazyNettyServerPropertiesDO.setAppSecret(appSecret);
lazyNettyServerPropertiesDO.setProtocolType(protocolType);
// 根据服务端端口、port 唯一性验证
boolean exists = lazyLambdaStream.exists(LazyWrappers.<LazyNettyServerPropertiesDO>lambdaWrapper()
.eq(LazyNettyServerPropertiesDO::getInetHost, inetHost)
.eq(LazyNettyServerPropertiesDO::getInetPort, inetPort)
.eq(LazyNettyServerPropertiesDO::getClientId, clientId)
.eq(LazyNettyServerPropertiesDO::getProtocolType, protocolType)
);
if (!exists) {
lazyLambdaStream.insert(lazyNettyServerPropertiesDO);
}
}catch (Exception e){
e.printStackTrace();
}
}
/**
* Handle an application event.
*
* @param event the event to respond to
*/
@Override
public void onApplicationEvent(ApplicationStartedEvent event) {
try {
// 存储配置到db
initDb2Config();
// 启动客户端连接
lazyNettyServerPropertiesApplication.starterAllClientSocket();
} catch (Exception e) {
e.printStackTrace();
}
}
@PreDestroy
@Override
public void destroy() throws Exception {
lazyNettyServerPropertiesApplication.destroyClientSocket();
}
}

View File

@@ -1,14 +1,13 @@
package org.framework.lazy.cloud.network.heartbeat.client.domain.model.lazy.netty.server.properties;
import io.swagger.v3.oas.annotations.media.Schema;
import lombok.Data;
import lombok.experimental.Accessors;
import io.swagger.v3.oas.annotations.media.Schema;
import org.framework.lazy.cloud.network.heartbeat.common.enums.NettyClientStatus;
import org.framework.lazy.cloud.network.heartbeat.client.config.PropertiesType;
import org.framework.lazy.cloud.network.heartbeat.common.enums.NettyClientStatus;
import org.framework.lazy.cloud.network.heartbeat.common.enums.ProtocolType;
import java.lang.String;
import java.time.LocalDateTime;
import java.lang.Integer;
/**
* describe 服务端配置信息
*
@@ -66,6 +65,22 @@ public class LazyNettyServerProperties {
@Schema(description ="类型配置、DB",name ="type",example = "")
private PropertiesType type;
/**
* 协议类型
*/
@Schema(description = "协议类型", name = "protocol_type", example = "")
private ProtocolType protocolType;
/**
* 令牌key
*/
@Schema(description = "令牌key", name = "appKey", example = "")
private String appKey;
/**
* 令牌密钥
*/
@Schema(description = "令牌密钥", name = "appSecret", example = "")
private String appSecret;
/**
*
* 更新时间

View File

@@ -1,17 +1,17 @@
package org.framework.lazy.cloud.network.heartbeat.client.infrastructure.entity;
import io.swagger.v3.oas.annotations.media.Schema;
import lombok.Data;
import lombok.experimental.Accessors;
import org.framework.lazy.cloud.network.heartbeat.common.enums.NettyClientStatus;
import org.framework.lazy.cloud.network.heartbeat.client.config.PropertiesType;
import org.framework.lazy.cloud.network.heartbeat.common.enums.NettyClientStatus;
import org.framework.lazy.cloud.network.heartbeat.common.enums.ProtocolType;
import org.wu.framework.lazy.orm.core.stereotype.LazyTable;
import org.wu.framework.lazy.orm.core.stereotype.LazyTableField;
import org.wu.framework.lazy.orm.core.stereotype.*;
import io.swagger.v3.oas.annotations.media.Schema;
import org.wu.framework.lazy.orm.core.stereotype.LazyTableFieldId;
import org.wu.framework.lazy.orm.core.stereotype.LazyTableFieldUnique;
import java.lang.String;
import java.time.LocalDateTime;
import java.lang.Integer;
/**
* describe 服务端配置信息
*
@@ -26,6 +26,13 @@ import java.lang.Integer;
public class LazyNettyServerPropertiesDO {
/**
*
* 主键ID
*/
@Schema(description ="主键ID",name ="id",example = "")
@LazyTableFieldId(name = "id", comment = "主键ID")
private Long id;
/**
*
* 客户身份ID
@@ -75,6 +82,20 @@ public class LazyNettyServerPropertiesDO {
@LazyTableField(name="is_deleted",comment="是否删除")
private Boolean isDeleted;
/**
* 令牌key
*/
@Schema(description = "令牌key", name = "appKey", example = "")
@LazyTableField(name = "app_key", comment = "令牌key")
private String appKey;
/**
* 令牌密钥
*/
@Schema(description = "令牌密钥", name = "appSecret", example = "")
@LazyTableField(name = "app_secret", comment = "令牌密钥")
private String appSecret;
/**
*
* 类型配置、DB
@@ -83,6 +104,13 @@ public class LazyNettyServerPropertiesDO {
@LazyTableField(name="type",comment="类型配置、DB",columnType="varchar(255)")
private PropertiesType type;
/**
* 协议类型
*/
@Schema(description ="协议类型",name ="protocol_type",example = "")
@LazyTableField(name="protocol_type",comment="协议类型",columnType="varchar(255)")
private ProtocolType protocolType;
/**
*
* 更新时间

View File

@@ -1,30 +0,0 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.filter;
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.socket.SocketChannel;
import org.framework.lazy.cloud.network.heartbeat.client.netty.handler.NettyClientRealHandler;
public class NettyClientRealFilter extends ChannelInitializer<SocketChannel> {
/**
* This method will be called once the {@link Channel} was registered. After the method returns this instance
* will be removed from the {@link ChannelPipeline} of the {@link Channel}.
*
* @param ch the {@link Channel} which was registered.
* @throws Exception is thrown if an error occurs. In that case it will be handled by
* {@link #exceptionCaught(ChannelHandlerContext, Throwable)} which will by default connectionClose
* the {@link Channel}.
*/
@Override
protected void initChannel(SocketChannel ch) throws Exception {
ChannelPipeline pipeline = ch.pipeline();
pipeline.addLast(new NettyClientRealHandler());
// // 解码、编码
// pipeline.addLast(new NettyProxyMsgDecoder(Integer.MAX_VALUE, 0, 4, -4, 0));
// pipeline.addLast(new NettMsgEncoder());
// pipeline.addLast(new NettyProxyMsgDecoder(Integer.MAX_VALUE, 0, 4, -4, 0));
// pipeline.addLast(new NettyProxyMsgEncoder());
}
}

View File

@@ -0,0 +1,64 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate;
import lombok.Data;
import lombok.NoArgsConstructor;
import org.framework.lazy.cloud.network.heartbeat.client.config.NettyClientProperties;
import org.framework.lazy.cloud.network.heartbeat.common.InternalNetworkPermeate;
import org.framework.lazy.cloud.network.heartbeat.common.adapter.ChannelFlowAdapter;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.HandleChannelTypeAdvanced;
import java.util.List;
@NoArgsConstructor
@Data
public class NettyClientPermeateClientVisitor implements InternalNetworkPermeate {
/**
* 当前客户端ID
*/
private String fromClientId;
/**
* 目标客户端ID
*/
private String toClientId;
/**
* 目标地址
*/
private String targetIp;
/**
* 目标端口
*/
private Integer targetPort;
/**
* 访问端口
*/
private Integer visitorPort;
/**
* 流量适配器
*/
private ChannelFlowAdapter channelFlowAdapter;
/**
* 服务端地址信息
*/
private NettyClientProperties nettyClientProperties;
/**
* 通道处理器
*/
private List<HandleChannelTypeAdvanced> handleChannelTypeAdvancedList;
/**
* 访客ID
*/
private String visitorId;
/**
* 是否是ssl
*/
private boolean isSsl;
}

View File

@@ -0,0 +1,46 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate;
import lombok.Data;
import lombok.NoArgsConstructor;
import org.framework.lazy.cloud.network.heartbeat.client.config.NettyClientProperties;
import org.framework.lazy.cloud.network.heartbeat.common.InternalNetworkPermeate;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.HandleChannelTypeAdvanced;
import java.util.List;
@NoArgsConstructor
@Data
public class NettyClientPermeateServerVisitor implements InternalNetworkPermeate {
/**
* 目标地址
*/
private String targetIp;
/**
* 目标端口
*/
private Integer targetPort;
/**
* 访问端口
*/
private Integer visitorPort;
/**
* 服务端地址信息
*/
private NettyClientProperties nettyClientProperties;
/**
* 通道处理器
*/
private List<HandleChannelTypeAdvanced> handleChannelTypeAdvancedList;
/**
* 是否是ssl
*/
private boolean isSsl;
}

View File

@@ -0,0 +1,15 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate;
public interface NettyClientSocket {
/**
* 创建客户端链接服务端
* @throws InterruptedException 异常信息
*/
void newConnect2Server() throws InterruptedException;
/**
* 关闭链接
*/
void shutdown();
}
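
The NettyClientSocket interface above only fixes the connect/shutdown contract; the concrete socket classes live elsewhere in this changeset. As a rough orientation, a minimal, hypothetical implementation with plain Netty could look like the sketch below — the class name, fields, and the empty pipeline are assumptions, not the project's actual code.

```java
// Illustrative only: a minimal NettyClientSocket implementation sketch using plain Netty.
// Everything beyond the interface methods is an assumption.
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.NioSocketChannel;
import io.netty.channel.socket.SocketChannel;

public class SimpleNettyClientSocket implements NettyClientSocket {
    private final String inetHost;
    private final int inetPort;
    private EventLoopGroup workerGroup;

    public SimpleNettyClientSocket(String inetHost, int inetPort) {
        this.inetHost = inetHost;
        this.inetPort = inetPort;
    }

    @Override
    public void newConnect2Server() throws InterruptedException {
        workerGroup = new NioEventLoopGroup();
        Bootstrap bootstrap = new Bootstrap()
                .group(workerGroup)
                .channel(NioSocketChannel.class)
                .option(ChannelOption.SO_KEEPALIVE, true)
                .handler(new ChannelInitializer<SocketChannel>() {
                    @Override
                    protected void initChannel(SocketChannel ch) {
                        // a real implementation would add codec + heartbeat/transfer handlers here
                    }
                });
        // block until the connection is established; reconnect logic is omitted
        bootstrap.connect(inetHost, inetPort).sync();
    }

    @Override
    public void shutdown() {
        if (workerGroup != null) {
            workerGroup.shutdownGracefully();
        }
    }
}
```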

View File

@@ -1,4 +1,4 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.event;
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.event;
/**
* 客户端状态变更事件

View File

@@ -1,4 +1,4 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.event;
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.event;
import jakarta.annotation.Resource;

View File

@@ -1,17 +1,17 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.advanced;
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.advanced;
import io.netty.channel.Channel;
import org.framework.lazy.cloud.network.heartbeat.common.MessageType;
import org.framework.lazy.cloud.network.heartbeat.common.constant.TcpMessageType;
import org.framework.lazy.cloud.network.heartbeat.common.NettyProxyMsg;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.AbstractHandleChannelHeartbeatTypeAdvanced;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.permeate.tcp.AbstractTcpHandleChannelHeartbeatTypeAdvanced;
/**
* 服务端 处理客户端心跳
* TYPE_HEARTBEAT
* TCP_TYPE_HEARTBEAT
*/
public class ClientHandleChannelHeartbeatTypeAdvanced extends AbstractHandleChannelHeartbeatTypeAdvanced<NettyProxyMsg> {
public class ClientHandleTcpChannelHeartbeatTypeAdvanced extends AbstractTcpHandleChannelHeartbeatTypeAdvanced<NettyProxyMsg> {
/**
* 处理当前数据
@@ -22,7 +22,7 @@ public class ClientHandleChannelHeartbeatTypeAdvanced extends AbstractHandleChan
@Override
public void doHandler(Channel channel, NettyProxyMsg msg) {
NettyProxyMsg hb = new NettyProxyMsg();
hb.setType(MessageType.TYPE_HEARTBEAT);
hb.setType(TcpMessageType.TCP_TYPE_HEARTBEAT);
// channel.writeAndFlush(hb);
}

View File

@@ -0,0 +1,59 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.advanced;
import io.netty.buffer.ByteBuf;
import io.netty.channel.Channel;
import lombok.extern.slf4j.Slf4j;
import org.framework.lazy.cloud.network.heartbeat.client.config.NettyClientProperties;
import org.framework.lazy.cloud.network.heartbeat.common.NettyProxyMsg;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.permeate.tcp.client.AbstractHandleTcpDistributeServicePermeateClientTransferTypeAdvanced;
import org.framework.lazy.cloud.network.heartbeat.common.enums.TcpMessageTypeEnums;
import org.framework.lazy.cloud.network.heartbeat.common.utils.ChannelAttributeKeyUtils;
/**
* 服务端处理客户端数据传输
*
* @see TcpMessageTypeEnums#TCP_DISTRIBUTE_CLIENT_TRANSFER
*/
@Slf4j
public class ClientHandleTcpChannelTransferTypeAdvancedHandleDistributeTcpDistribute extends AbstractHandleTcpDistributeServicePermeateClientTransferTypeAdvanced<NettyProxyMsg> {
private final NettyClientProperties nettyClientProperties;
public ClientHandleTcpChannelTransferTypeAdvancedHandleDistributeTcpDistribute(NettyClientProperties nettyClientProperties) {
this.nettyClientProperties = nettyClientProperties;
}
/**
* 处理当前数据
*
* @param channel 当前通道
* @param nettyProxyMsg 通道数据
*/
@Override
public void doHandler(Channel channel, NettyProxyMsg nettyProxyMsg) {
log.debug("接收到服务端需要内网穿透的数据:{}" , nettyProxyMsg);
String clientId = nettyClientProperties.getClientId();
byte[] visitorPort = nettyProxyMsg.getVisitorPort();
byte[] clientTargetIp = nettyProxyMsg.getClientTargetIp();
byte[] clientTargetPort = nettyProxyMsg.getClientTargetPort();
byte[] visitorId = nettyProxyMsg.getVisitorId();
// 真实服务通道
// Channel realChannel = NettyRealIdContext.getReal(new String(visitorId));
Channel nextChannel = ChannelAttributeKeyUtils.getNextChannel(channel);
if (nextChannel == null) {
log.error("无法获取访客:{} 真实服务", new String(visitorId));
return;
}
// 把数据转到真实服务
ByteBuf buf = channel.config().getAllocator().buffer(nettyProxyMsg.getData().length);
buf.writeBytes(nettyProxyMsg.getData());
nextChannel.writeAndFlush(buf);
}
}
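
This handler, like most of the transfer handlers that follow, hops from the received channel to the bound real-service channel via ChannelAttributeKeyUtils.getNextChannel(channel). That utility is not part of this diff; the sketch below is only an assumed minimal version of such a next-channel binding built on Netty's AttributeKey, shown to make the data flow easier to follow.

```java
// Illustrative sketch only: an assumed minimal "next channel" binding,
// not the project's actual ChannelAttributeKeyUtils.
import io.netty.channel.Channel;
import io.netty.util.AttributeKey;

public final class NextChannelAttribute {
    private static final AttributeKey<Channel> NEXT_CHANNEL = AttributeKey.valueOf("NEXT_CHANNEL");

    private NextChannelAttribute() {
    }

    // bind the peer channel (e.g. the real local service) onto the transfer channel
    public static void buildNextChannel(Channel channel, Channel nextChannel) {
        channel.attr(NEXT_CHANNEL).set(nextChannel);
    }

    // read back the bound peer channel; may be null if no binding was made
    public static Channel getNextChannel(Channel channel) {
        return channel.attr(NEXT_CHANNEL).get();
    }
}
```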

View File

@@ -1,19 +1,19 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.advanced;
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.advanced;
import io.netty.channel.Channel;
import org.framework.lazy.cloud.network.heartbeat.client.config.NettyClientProperties;
import org.framework.lazy.cloud.network.heartbeat.common.ChannelContext;
import org.framework.lazy.cloud.network.heartbeat.common.NettyProxyMsg;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.client.AbstractHandleClientChannelActiveAdvanced;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.permeate.tcp.client.AbstractHandleTcpClientChannelActiveAdvanced;
import org.framework.lazy.cloud.network.heartbeat.common.utils.ChannelAttributeKeyUtils;
/**
* 客户端通道 is active
*/
public class HandleClientChannelActiveAdvanced extends AbstractHandleClientChannelActiveAdvanced<NettyProxyMsg> {
public class ClientHandleTcpClientChannelActiveAdvanced extends AbstractHandleTcpClientChannelActiveAdvanced<NettyProxyMsg> {
private final NettyClientProperties nettyClientProperties;
public HandleClientChannelActiveAdvanced(NettyClientProperties nettyClientProperties) {
public ClientHandleTcpClientChannelActiveAdvanced(NettyClientProperties nettyClientProperties) {
this.nettyClientProperties = nettyClientProperties;
}

View File

@@ -0,0 +1,43 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.advanced;
import io.netty.channel.Channel;
import lombok.extern.slf4j.Slf4j;
import org.framework.lazy.cloud.network.heartbeat.common.NettyProxyMsg;
import org.framework.lazy.cloud.network.heartbeat.common.NettyVisitorPortContext;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.permeate.tcp.client.AbstractHandleTcpDistributeClientPermeateClientCloseTypeAdvanced;
import org.framework.lazy.cloud.network.heartbeat.common.enums.TcpMessageTypeEnums;
import org.framework.lazy.cloud.network.heartbeat.common.socket.PermeateVisitorSocket;
/**
* 客户端渗透客户端init close 信息
*
* @see TcpMessageTypeEnums#TCP_DISTRIBUTE_CLIENT_PERMEATE_CLIENT_CLOSE
*/
@Slf4j
public class ClientHandleTcpDistributeClientPermeateClientCloseTypeAdvanced extends AbstractHandleTcpDistributeClientPermeateClientCloseTypeAdvanced<NettyProxyMsg> {
/**
* 处理当前数据
*
* @param channel 当前通道
* @param nettyProxyMsg 通道数据
*/
@Override
public void doHandler(Channel channel, NettyProxyMsg nettyProxyMsg) {
// 初始化 客户端渗透服务端socket
byte[] msgVisitorPort = nettyProxyMsg.getVisitorPort();
Integer visitorPort = Integer.parseInt(new String(msgVisitorPort));
PermeateVisitorSocket visitorSocket = NettyVisitorPortContext.getVisitorSocket(visitorPort);
// 关闭当前客户端渗透服务端访客通道
try {
visitorSocket.close();
} catch (Exception e) {
e.printStackTrace();
}
}
}
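
The close handler above resolves the visitor socket by its port through NettyVisitorPortContext, which is also outside this diff (the client-permeate-server close handler further down does the same). A plausible minimal registry, assuming nothing more than a concurrent port-to-socket map, could look like this:

```java
// Illustrative sketch only: an assumed minimal visitor-port registry,
// not the project's actual NettyVisitorPortContext.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public final class VisitorPortRegistry<S> {
    private final Map<Integer, S> sockets = new ConcurrentHashMap<>();

    // register the listening socket for a visitor port when it is opened
    public void pushVisitorSocket(Integer visitorPort, S socket) {
        sockets.put(visitorPort, socket);
    }

    // look up the socket so a close message can stop the right listener
    public S getVisitorSocket(Integer visitorPort) {
        return sockets.get(visitorPort);
    }

    public void removeVisitorSocket(Integer visitorPort) {
        sockets.remove(visitorPort);
    }
}
```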

View File

@@ -0,0 +1,71 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.advanced;
import io.netty.channel.Channel;
import lombok.extern.slf4j.Slf4j;
import org.framework.lazy.cloud.network.heartbeat.client.config.NettyClientProperties;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.socket.NettyTcpClientPermeateClientVisitorSocket;
import org.framework.lazy.cloud.network.heartbeat.common.NettyProxyMsg;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.HandleChannelTypeAdvanced;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.permeate.tcp.client.AbstractHandleTcpDistributeClientPermeateClientInitTypeAdvanced;
import org.framework.lazy.cloud.network.heartbeat.common.enums.TcpMessageTypeEnums;
import org.wu.framework.spring.utils.SpringContextHolder;
import java.util.ArrayList;
import java.util.List;
/**
* 客户端渗透客户端init信息
*
* @see TcpMessageTypeEnums#TCP_DISTRIBUTE_CLIENT_PERMEATE_CLIENT_INIT
*/
@Slf4j
public class ClientHandleTcpDistributeClientPermeateClientInitTypeAdvanced extends AbstractHandleTcpDistributeClientPermeateClientInitTypeAdvanced<NettyProxyMsg> {
/**
* 处理当前数据
*
* @param channel 当前通道
* @param nettyProxyMsg 通道数据
*/
@Override
public void doHandler(Channel channel, NettyProxyMsg nettyProxyMsg) {
// 初始化 客户端渗透服务端socket
byte[] fromClientIdBytes = nettyProxyMsg.getClientId();
byte[] visitorPortBytes = nettyProxyMsg.getVisitorPort();
byte[] clientTargetIpBytes = nettyProxyMsg.getClientTargetIp();
byte[] clientTargetPortBytes = nettyProxyMsg.getClientTargetPort();
byte[] toClientIdBytes = nettyProxyMsg.getData();
String fromClientId = new String(fromClientIdBytes);
String toClientId = new String(toClientIdBytes);
Integer visitorPort = Integer.parseInt(new String(visitorPortBytes));
String clientTargetIp = new String(clientTargetIpBytes);
Integer clientTargetPort = Integer.parseInt(new String(clientTargetPortBytes));
List<HandleChannelTypeAdvanced> handleChannelTypeAdvancedList = new ArrayList<>(SpringContextHolder.getApplicationContext().getBeansOfType(HandleChannelTypeAdvanced.class).values());
// ChannelFlowAdapter channelFlowAdapter = SpringContextHolder.getBean(ChannelFlowAdapter.class);
NettyClientProperties nettyClientProperties = SpringContextHolder.getBean(NettyClientProperties.class);
log.info("client permeate client from client_id:【{}】 to_client_id【{}】with visitor_port【{}】, clientTargetIp:【{}】clientTargetPort:【{}】",
fromClientId, toClientId, visitorPort, clientTargetIp, clientTargetPort);
NettyTcpClientPermeateClientVisitorSocket nettyTcpClientPermeateClientVisitorSocket =
NettyTcpClientPermeateClientVisitorSocket.NettyClientPermeateClientVisitorSocketBuilder.builder()
.builderClientId(fromClientId)
.builderClientTargetIp(clientTargetIp)
.builderClientTargetPort(clientTargetPort)
.builderVisitorPort(visitorPort)
.builderNettyClientProperties(nettyClientProperties)
//.builderChannelFlowAdapter(channelFlowAdapter)
.builderToClientId(toClientId)
.builderHandleChannelTypeAdvancedList(handleChannelTypeAdvancedList)
.build();
try {
nettyTcpClientPermeateClientVisitorSocket.start();
} catch (Exception e) {
e.printStackTrace();
}
}
}

View File

@@ -0,0 +1,35 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.advanced;
import io.netty.channel.Channel;
import lombok.extern.slf4j.Slf4j;
import org.framework.lazy.cloud.network.heartbeat.common.NettyProxyMsg;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.permeate.tcp.client.AbstractHandleTcpDistributeClientPermeateClientTransferCloseTypeAdvanced;
import org.framework.lazy.cloud.network.heartbeat.common.enums.TcpMessageTypeEnums;
import org.framework.lazy.cloud.network.heartbeat.common.utils.ChannelAttributeKeyUtils;
/**
* 下发客户端渗透客户端通信通道关闭
*
* @see TcpMessageTypeEnums#TCP_DISTRIBUTE_CLIENT_PERMEATE_CLIENT_TRANSFER_CLOSE
*/
@Slf4j
public class ClientHandleTcpDistributeClientPermeateClientTransferCloseTypeAdvanced extends AbstractHandleTcpDistributeClientPermeateClientTransferCloseTypeAdvanced<NettyProxyMsg> {
/**
* 处理当前数据
*
* @param channel 当前通道
* @param nettyProxyMsg 通道数据
*/
@Override
public void doHandler(Channel channel, NettyProxyMsg nettyProxyMsg) {
// 关闭客户端真实通道、访客通道
Channel realChannel = ChannelAttributeKeyUtils.getNextChannel(channel);
realChannel.close();// 真实通道关闭
channel.close(); // 数据传输通道关闭
}
}

View File

@@ -0,0 +1,43 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.advanced;
import io.netty.channel.Channel;
import lombok.extern.slf4j.Slf4j;
import org.framework.lazy.cloud.network.heartbeat.common.NettyProxyMsg;
import org.framework.lazy.cloud.network.heartbeat.common.NettyVisitorPortContext;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.permeate.tcp.client.AbstractHandleTcpDistributeClientPermeateServerCloseTypeAdvanced;
import org.framework.lazy.cloud.network.heartbeat.common.enums.TcpMessageTypeEnums;
import org.framework.lazy.cloud.network.heartbeat.common.socket.PermeateVisitorSocket;
/**
* 客户端渗透服务端init close 信息
*
* @see TcpMessageTypeEnums#TCP_DISTRIBUTE_CLIENT_PERMEATE_SERVER_CLOSE
*/
@Slf4j
public class ClientHandleTcpDistributeClientPermeateServerCloseTypeAdvanced extends AbstractHandleTcpDistributeClientPermeateServerCloseTypeAdvanced<NettyProxyMsg> {
/**
* 处理当前数据
*
* @param channel 当前通道
* @param nettyProxyMsg 通道数据
*/
@Override
public void doHandler(Channel channel, NettyProxyMsg nettyProxyMsg) {
// 初始化 客户端渗透服务端socket
byte[] msgVisitorPort = nettyProxyMsg.getVisitorPort();
Integer visitorPort = Integer.parseInt(new String(msgVisitorPort));
PermeateVisitorSocket visitorSocket = NettyVisitorPortContext.getVisitorSocket(visitorPort);
// 关闭当前客户端渗透服务端访客通道
try {
visitorSocket.close();
} catch (Exception e) {
e.printStackTrace();
}
}
}

View File

@@ -0,0 +1,63 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.advanced;
import io.netty.channel.Channel;
import lombok.extern.slf4j.Slf4j;
import org.framework.lazy.cloud.network.heartbeat.client.config.NettyClientProperties;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.socket.NettyTcpClientPermeateServerVisitorSocket;
import org.framework.lazy.cloud.network.heartbeat.common.NettyProxyMsg;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.HandleChannelTypeAdvanced;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.permeate.tcp.client.AbstractHandleTcpDistributeClientPermeateServerInitTypeAdvanced;
import org.framework.lazy.cloud.network.heartbeat.common.enums.TcpMessageTypeEnums;
import org.wu.framework.spring.utils.SpringContextHolder;
import java.util.ArrayList;
import java.util.List;
/**
* 客户端渗透服务端init信息
*
* @see TcpMessageTypeEnums#TCP_DISTRIBUTE_CLIENT_PERMEATE_SERVER_INIT
*/
@Slf4j
public class ClientHandleTcpDistributeClientPermeateServerInitTypeAdvanced extends AbstractHandleTcpDistributeClientPermeateServerInitTypeAdvanced<NettyProxyMsg> {
private final NettyClientProperties nettyClientProperties;
public ClientHandleTcpDistributeClientPermeateServerInitTypeAdvanced(NettyClientProperties nettyClientProperties) {
this.nettyClientProperties = nettyClientProperties;
}
/**
* 处理当前数据
*
* @param channel 当前通道
* @param nettyProxyMsg 通道数据
*/
@Override
public void doHandler(Channel channel, NettyProxyMsg nettyProxyMsg) {
// 初始化 客户端渗透服务端socket
byte[] clientIdBytes = nettyProxyMsg.getClientId();
byte[] visitorPort = nettyProxyMsg.getVisitorPort();
byte[] clientTargetIp = nettyProxyMsg.getClientTargetIp();
byte[] clientTargetPort = nettyProxyMsg.getClientTargetPort();
List<HandleChannelTypeAdvanced> handleChannelTypeAdvancedList = new ArrayList<>(SpringContextHolder.getApplicationContext().getBeansOfType(HandleChannelTypeAdvanced.class).values());
NettyTcpClientPermeateServerVisitorSocket nettyTcpClientPermeateServerVisitorSocket = NettyTcpClientPermeateServerVisitorSocket.NettyVisitorSocketBuilder.builder()
.builderClientId(new String(clientIdBytes))
.builderClientTargetIp(new String(clientTargetIp))
.builderClientTargetPort(Integer.parseInt(new String(clientTargetPort)))
.builderVisitorPort(Integer.parseInt(new String(visitorPort)))
.builderNettyClientProperties(nettyClientProperties)
.builderHandleChannelTypeAdvancedList(handleChannelTypeAdvancedList)
.build();
try {
nettyTcpClientPermeateServerVisitorSocket.start();
} catch (Exception e) {
log.error(e.getMessage(), e);
e.printStackTrace();
}
}
}

View File

@@ -0,0 +1,34 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.advanced;
import io.netty.channel.Channel;
import lombok.extern.slf4j.Slf4j;
import org.framework.lazy.cloud.network.heartbeat.common.NettyProxyMsg;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.permeate.tcp.client.AbstractHandleTcpDistributeClientPermeateServerTransferCloseTypeAdvanced;
import org.framework.lazy.cloud.network.heartbeat.common.enums.TcpMessageTypeEnums;
import org.framework.lazy.cloud.network.heartbeat.common.utils.ChannelAttributeKeyUtils;
/**
* 下发 客户端渗透服务端通信通道关闭
*
* @see TcpMessageTypeEnums#TCP_DISTRIBUTE_CLIENT_PERMEATE_SERVER_TRANSFER_CLOSE
*/
@Slf4j
public class ClientHandleTcpDistributeClientPermeateServerTransferCloseTypeAdvanced extends AbstractHandleTcpDistributeClientPermeateServerTransferCloseTypeAdvanced<NettyProxyMsg> {
/**
* 处理当前数据
*
* @param channel 当前通道
* @param nettyProxyMsg 通道数据
*/
@Override
public void doHandler(Channel channel, NettyProxyMsg nettyProxyMsg) {
// 关闭本地通信通道
Channel nextChannel = ChannelAttributeKeyUtils.getNextChannel(channel);
channel.close();
nextChannel.close();
}
}

View File

@@ -0,0 +1,52 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.advanced;
import io.netty.buffer.ByteBuf;
import io.netty.channel.Channel;
import lombok.extern.slf4j.Slf4j;
import org.framework.lazy.cloud.network.heartbeat.common.NettyProxyMsg;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.permeate.tcp.client.AbstractHandleTcpDistributeClientPermeateServerTransferTypeAdvanced;
import org.framework.lazy.cloud.network.heartbeat.common.enums.TcpMessageTypeEnums;
import org.framework.lazy.cloud.network.heartbeat.common.utils.ChannelAttributeKeyUtils;
/**
* 服务端处理客户端数据传输
*
* @see TcpMessageTypeEnums#TCP_DISTRIBUTE_CLIENT_TRANSFER
*/
@Slf4j
public class ClientHandleTcpDistributeClientPermeateServerTransferTypeAdvanced extends AbstractHandleTcpDistributeClientPermeateServerTransferTypeAdvanced<NettyProxyMsg> {
/**
* 处理当前数据
*
* @param channel 当前通道
* @param nettyProxyMsg 通道数据
*/
@Override
public void doHandler(Channel channel, NettyProxyMsg nettyProxyMsg) {
log.debug("客户端渗透服务端返回数据:{}" , new String(nettyProxyMsg.getData()));
byte[] visitorPort = nettyProxyMsg.getVisitorPort();
byte[] clientTargetIp = nettyProxyMsg.getClientTargetIp();
byte[] clientTargetPort = nettyProxyMsg.getClientTargetPort();
byte[] visitorId = nettyProxyMsg.getVisitorId();
// 真实服务通道
// Channel realChannel = NettyRealIdContext.getReal(new String(visitorId));
Channel nextChannel = ChannelAttributeKeyUtils.getNextChannel(channel);
if (nextChannel == null) {
log.error("无法获取访客:{} 真实服务", new String(visitorId));
return;
}
// 把数据转到真实服务
ByteBuf buf = channel.config().getAllocator().buffer(nettyProxyMsg.getData().length);
buf.writeBytes(nettyProxyMsg.getData());
nextChannel.writeAndFlush(buf);
}
}

View File

@@ -0,0 +1,61 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.advanced;
import io.netty.channel.Channel;
import lombok.extern.slf4j.Slf4j;
import org.framework.lazy.cloud.network.heartbeat.client.config.NettyClientProperties;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.socket.NettyTcpClientPermeateClientRealSocket;
import org.framework.lazy.cloud.network.heartbeat.common.NettyProxyMsg;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.HandleChannelTypeAdvanced;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.permeate.tcp.client.AbstractHandleTcpDistributeClientTransferClientPermeateChannelConnectionSuccessfulTypeAdvanced;
import org.framework.lazy.cloud.network.heartbeat.common.enums.TcpMessageTypeEnums;
import org.wu.framework.spring.utils.SpringContextHolder;
import java.util.ArrayList;
import java.util.List;
/**
* 客户端渗透客户端数据传输通道连接成功
*
*
* @see TcpMessageTypeEnums#TCP_DISTRIBUTE_CLIENT_TRANSFER_CLIENT_PERMEATE_CHANNEL_CONNECTION_SUCCESSFUL
*/
@Slf4j
public class ClientHandleTcpDistributeClientTransferClientPermeateChannelConnectionSuccessfulTypeAdvanced extends AbstractHandleTcpDistributeClientTransferClientPermeateChannelConnectionSuccessfulTypeAdvanced<NettyProxyMsg> {
/**
* 处理当前数据
*
* @param channel 当前通道
* @param nettyProxyMsg 通道数据
*/
@Override
public void doHandler(Channel channel, NettyProxyMsg nettyProxyMsg) {
// 创建connect 然后发送创建成功
byte[] msgClientId = nettyProxyMsg.getClientId();
byte[] msgClientTargetIp = nettyProxyMsg.getClientTargetIp();
byte[] msgClientTargetPort = nettyProxyMsg.getClientTargetPort();
byte[] msgVisitorId = nettyProxyMsg.getVisitorId();
byte[] msgVisitorPort = nettyProxyMsg.getVisitorPort();
String clientId=new String(msgClientId);
String clientTargetIp=new String(msgClientTargetIp);
Integer clientTargetPort=Integer.parseInt(new String(msgClientTargetPort));
String visitorId=new String(msgVisitorId);
Integer visitorPort=Integer.parseInt(new String(msgVisitorPort));
NettyClientProperties nettyClientProperties = SpringContextHolder.getBean(NettyClientProperties.class);
List<HandleChannelTypeAdvanced> handleChannelTypeAdvancedList = new ArrayList<>(SpringContextHolder.getApplicationContext().getBeansOfType(HandleChannelTypeAdvanced.class).values());
NettyTcpClientPermeateClientRealSocket.buildRealServer(
clientId,
clientTargetIp,
clientTargetPort,
visitorPort,
visitorId,
nettyClientProperties,
handleChannelTypeAdvancedList
);
}
}

View File

@@ -0,0 +1,44 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.advanced;
import io.netty.channel.Channel;
import io.netty.channel.ChannelOption;
import lombok.extern.slf4j.Slf4j;
import org.framework.lazy.cloud.network.heartbeat.common.NettyProxyMsg;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.permeate.tcp.client.AbstractHandleTcpDistributeClientTransferClientPermeateChannelInitSuccessfulTypeAdvanced;
import org.framework.lazy.cloud.network.heartbeat.common.enums.TcpMessageTypeEnums;
import org.framework.lazy.cloud.network.heartbeat.common.utils.ChannelAttributeKeyUtils;
/**
* 下发 客户端渗透客户端数据传输通道init 成功
*
*
* @see TcpMessageTypeEnums#TCP_DISTRIBUTE_CLIENT_TRANSFER_CLIENT_PERMEATE_CHANNEL_CONNECTION_SUCCESSFUL
*/
@Slf4j
public class ClientHandleTcpDistributeClientTransferClientPermeateChannelInitSuccessfulTypeAdvanced extends AbstractHandleTcpDistributeClientTransferClientPermeateChannelInitSuccessfulTypeAdvanced<NettyProxyMsg> {
/**
* 处理当前数据
*
* @param channel 当前通道
* @param nettyProxyMsg 通道数据
*/
@Override
public void doHandler(Channel channel, NettyProxyMsg nettyProxyMsg) {
// 连接成功 开启自动读取写
byte[] msgClientId = nettyProxyMsg.getClientId();
byte[] msgClientTargetIp = nettyProxyMsg.getClientTargetIp();
byte[] msgClientTargetPort = nettyProxyMsg.getClientTargetPort();
byte[] msgVisitorId = nettyProxyMsg.getVisitorId();
byte[] msgVisitorPort = nettyProxyMsg.getVisitorPort();
Channel nextChannel = ChannelAttributeKeyUtils.getNextChannel(channel);
nextChannel.config().setOption(ChannelOption.AUTO_READ, true);
}
}

View File

@@ -0,0 +1,41 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.advanced;
import io.netty.buffer.ByteBuf;
import io.netty.channel.Channel;
import lombok.extern.slf4j.Slf4j;
import org.framework.lazy.cloud.network.heartbeat.common.NettyProxyMsg;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.permeate.tcp.client.AbstractHandleTcpDistributeClientTransferClientRequestTypeAdvanced;
import org.framework.lazy.cloud.network.heartbeat.common.enums.TcpMessageTypeEnums;
import org.framework.lazy.cloud.network.heartbeat.common.utils.ChannelAttributeKeyUtils;
/**
* 下发客户端渗透客户端数据传输
*
*
* @see TcpMessageTypeEnums#TCP_DISTRIBUTE_CLIENT_PERMEATE_CLIENT_TRANSFER_REQUEST
*/
@Slf4j
public class ClientHandleTcpDistributeClientTransferClientRequestTypeAdvanced extends AbstractHandleTcpDistributeClientTransferClientRequestTypeAdvanced<NettyProxyMsg> {
/**
* 处理当前数据
*
* @param channel 当前通道
* @param nettyProxyMsg 通道数据
*/
@Override
public void doHandler(Channel channel, NettyProxyMsg nettyProxyMsg) {
Channel nextChannel = ChannelAttributeKeyUtils.getNextChannel(channel);
// 把数据转到真实服务
ByteBuf buf = channel.config().getAllocator().buffer(nettyProxyMsg.getData().length);
buf.writeBytes(nettyProxyMsg.getData());
log.debug("client permeate client send request to real socket byte:{} ",new String(nettyProxyMsg.getData()));
nextChannel.writeAndFlush(buf);
}
}

View File

@@ -0,0 +1,43 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.advanced;
import io.netty.channel.Channel;
import io.netty.channel.ChannelOption;
import lombok.extern.slf4j.Slf4j;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.handler.NettyTcpClientPermeateServerVisitorHandler;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.socket.NettyTcpClientPermeateServerVisitorTransferSocket;
import org.framework.lazy.cloud.network.heartbeat.common.NettyProxyMsg;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.permeate.tcp.client.AbstractHandleTcpDistributeClientTransferServerPermeateChannelConnectionSuccessfulTypeAdvanced;
import org.framework.lazy.cloud.network.heartbeat.common.enums.TcpMessageTypeEnums;
import org.framework.lazy.cloud.network.heartbeat.common.utils.ChannelAttributeKeyUtils;
/**
* 客户端渗透服务端数据传输通道连接成功
* @see NettyTcpClientPermeateServerVisitorTransferSocket
* @see NettyTcpClientPermeateServerVisitorHandler
*
* @see TcpMessageTypeEnums#TCP_DISTRIBUTE_CLIENT_TRANSFER_SERVER_PERMEATE_CHANNEL_CONNECTION_SUCCESSFUL
*/
@Slf4j
public class ClientHandleTcpDistributeClientTransferServerPermeateChannelConnectionSuccessfulTypeAdvanced extends AbstractHandleTcpDistributeClientTransferServerPermeateChannelConnectionSuccessfulTypeAdvanced<NettyProxyMsg> {
/**
* 处理当前数据
*
* @param transferChannel 当前通道
* @param nettyProxyMsg 通道数据
*/
@Override
public void doHandler(Channel transferChannel, NettyProxyMsg nettyProxyMsg) {
// 连接成功 开启自动读取写
byte[] msgVisitorId = nettyProxyMsg.getVisitorId();
String visitorId = new String(msgVisitorId);
Channel nextChannel = ChannelAttributeKeyUtils.getNextChannel(transferChannel);
nextChannel.config().setOption(ChannelOption.AUTO_READ, true);
}
}

View File

@@ -1,11 +1,11 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.advanced;
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.advanced;
import com.alibaba.fastjson.JSONObject;
import io.netty.channel.Channel;
import lombok.extern.slf4j.Slf4j;
import org.framework.lazy.cloud.network.heartbeat.client.netty.event.ClientChangeEvent;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.event.ClientChangeEvent;
import org.framework.lazy.cloud.network.heartbeat.common.NettyProxyMsg;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.client.AbstractHandleDistributeConnectSuccessNotificationTypeAdvancedHandle;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.permeate.tcp.client.AbstractHandleTcpDistributeConnectSuccessNotificationTypeAdvancedHandle;
import java.util.List;
@@ -13,12 +13,12 @@ import java.util.List;
* 客户端连接成功通知
*/
@Slf4j
public class HandleDistributeConnectSuccessNotificationTypeAdvancedHandle extends AbstractHandleDistributeConnectSuccessNotificationTypeAdvancedHandle<NettyProxyMsg> {
public class ClientHandleTcpDistributeConnectSuccessNotificationTypeAdvancedHandle extends AbstractHandleTcpDistributeConnectSuccessNotificationTypeAdvancedHandle<NettyProxyMsg> {
private final ClientChangeEvent clientChangeEvent;
public HandleDistributeConnectSuccessNotificationTypeAdvancedHandle(ClientChangeEvent clientChangeEvent) {
public ClientHandleTcpDistributeConnectSuccessNotificationTypeAdvancedHandle(ClientChangeEvent clientChangeEvent) {
this.clientChangeEvent = clientChangeEvent;
}
@@ -36,8 +36,8 @@ public class HandleDistributeConnectSuccessNotificationTypeAdvancedHandle extend
// 存储其他客户端状态
List<String> clientIdList = JSONObject.parseArray(new String(msg.getData()), String.class);
for (String tenantId : clientIdList) {
clientChangeEvent.clientOnLine(tenantId);
for (String clientId : clientIdList) {
clientChangeEvent.clientOnLine(clientId);
}
}

View File

@@ -1,11 +1,11 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.advanced;
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.advanced;
import io.netty.channel.Channel;
import lombok.extern.slf4j.Slf4j;
import org.framework.lazy.cloud.network.heartbeat.client.netty.event.ClientChangeEvent;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.event.ClientChangeEvent;
import org.framework.lazy.cloud.network.heartbeat.common.NettyProxyMsg;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.client.AbstractHandleDistributeDisconnectTypeAdvancedHandle;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.permeate.tcp.client.AbstractHandleTcpDistributeDisconnectTypeAdvancedHandle;
/**
@@ -13,12 +13,12 @@ import org.framework.lazy.cloud.network.heartbeat.common.advanced.client.Abstrac
* TYPE_DISCONNECT
*/
@Slf4j
public class HandleDistributeDisconnectTypeAdvancedHandle extends AbstractHandleDistributeDisconnectTypeAdvancedHandle<NettyProxyMsg> {
public class ClientHandleTcpDistributeDisconnectTypeAdvancedHandle extends AbstractHandleTcpDistributeDisconnectTypeAdvancedHandle<NettyProxyMsg> {
private final ClientChangeEvent clientChangeEvent;
public HandleDistributeDisconnectTypeAdvancedHandle(ClientChangeEvent clientChangeEvent) {
public ClientHandleTcpDistributeDisconnectTypeAdvancedHandle(ClientChangeEvent clientChangeEvent) {
this.clientChangeEvent = clientChangeEvent;
}

View File

@@ -0,0 +1,59 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.advanced;
import io.netty.channel.Channel;
import lombok.extern.slf4j.Slf4j;
import org.framework.lazy.cloud.network.heartbeat.client.config.NettyClientProperties;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.socket.NettyTcpServerPermeateClientRealSocket;
import org.framework.lazy.cloud.network.heartbeat.common.InternalNetworkPenetrationRealClient;
import org.framework.lazy.cloud.network.heartbeat.common.NettyProxyMsg;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.HandleChannelTypeAdvanced;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.permeate.tcp.client.AbstractHandleTcpDistributeServicePermeateClientRealConnectTypeAdvanced;
import org.wu.framework.spring.utils.SpringContextHolder;
import java.util.ArrayList;
import java.util.List;
/**
* The client creates the real proxy channel
*/
@Slf4j
public class ClientHandleTcpDistributeServicePermeateClientRealConnectTypeAdvanced extends AbstractHandleTcpDistributeServicePermeateClientRealConnectTypeAdvanced<NettyProxyMsg> {
private final NettyClientProperties nettyClientProperties;// server address information
public ClientHandleTcpDistributeServicePermeateClientRealConnectTypeAdvanced(NettyClientProperties nettyClientProperties) {
this.nettyClientProperties = nettyClientProperties;
}
/**
* Handle the current data
*
* @param channel the current channel
* @param msg     channel payload
*/
@Override
protected void doHandler(Channel channel, NettyProxyMsg msg) {
// create the real port listener
byte[] clientIdBytes = msg.getClientId();
byte[] visitorPort = msg.getVisitorPort();
byte[] clientTargetIp = msg.getClientTargetIp();
byte[] clientTargetPort = msg.getClientTargetPort();
byte[] visitorIdBytes = msg.getVisitorId();
InternalNetworkPenetrationRealClient internalNetworkPenetrationRealClient =
InternalNetworkPenetrationRealClient
.builder()
.clientId(new String(clientIdBytes))
.visitorPort(Integer.valueOf(new String(visitorPort)))
.clientTargetIp(new String(clientTargetIp))
.clientTargetPort(Integer.valueOf(new String(clientTargetPort)))
.visitorId(new String(visitorIdBytes))
.build();
List<HandleChannelTypeAdvanced> handleChannelTypeAdvancedList = new ArrayList<>(SpringContextHolder.getApplicationContext().getBeansOfType(HandleChannelTypeAdvanced.class).values());
// bind the real service port
NettyTcpServerPermeateClientRealSocket.buildRealServer(internalNetworkPenetrationRealClient, nettyClientProperties, handleChannelTypeAdvancedList);
}
}
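Every field of the connect message travels as a byte array and is turned back into a typed value before the real port listener is bound. A small self-contained sketch of that conversion, with hypothetical sample values:

```java
// Sketch only: byte[]-to-typed-value conversion as performed by the handler above.
import java.nio.charset.StandardCharsets;

public class MsgFieldDecodeSketch {
    public static void main(String[] args) {
        byte[] clientIdBytes = "client-a".getBytes(StandardCharsets.UTF_8);   // hypothetical values
        byte[] visitorPortBytes = "18080".getBytes(StandardCharsets.UTF_8);

        String clientId = new String(clientIdBytes, StandardCharsets.UTF_8);
        // ports travel as ASCII digits inside the message, so they are parsed, not cast
        int visitorPort = Integer.parseInt(new String(visitorPortBytes, StandardCharsets.UTF_8));

        System.out.println(clientId + " -> " + visitorPort);
    }
}
```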

View File

@@ -0,0 +1,41 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.advanced;
import io.netty.buffer.ByteBuf;
import io.netty.channel.Channel;
import lombok.extern.slf4j.Slf4j;
import org.framework.lazy.cloud.network.heartbeat.common.NettyProxyMsg;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.permeate.tcp.client.AbstractHandleTcpDistributeServicePermeateClientTransferClientResponseTypeAdvanced;
import org.framework.lazy.cloud.network.heartbeat.common.enums.TcpMessageTypeEnums;
import org.framework.lazy.cloud.network.heartbeat.common.utils.ChannelAttributeKeyUtils;
/**
* Distributes the client-permeate-client data transfer response
*
*
* @see TcpMessageTypeEnums#TCP_DISTRIBUTE_CLIENT_PERMEATE_CLIENT_TRANSFER_RESPONSE
*/
@Slf4j
public class ClientHandleTcpDistributeServicePermeateClientTransferClientResponseTypeAdvanced extends AbstractHandleTcpDistributeServicePermeateClientTransferClientResponseTypeAdvanced<NettyProxyMsg> {
/**
* Handle the current data
*
* @param channel       the current channel
* @param nettyProxyMsg channel payload
*/
@Override
public void doHandler(Channel channel, NettyProxyMsg nettyProxyMsg) {
Channel nextChannel = ChannelAttributeKeyUtils.getNextChannel(channel);
// forward the data to the real service
ByteBuf buf = channel.config().getAllocator().buffer(nettyProxyMsg.getData().length);
buf.writeBytes(nettyProxyMsg.getData());
nextChannel.writeAndFlush(buf);
}
}
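The response handler copies the payload into a freshly allocated ByteBuf before flushing it to the real-service channel. A runnable sketch of the same copy-and-forward step on an EmbeddedChannel; Unpooled.wrappedBuffer(data) would avoid the copy, but only if nothing else still owns the byte[] carried by NettyProxyMsg:

```java
// Sketch only: allocate, copy, flush — the forwarding step above, isolated.
import io.netty.buffer.ByteBuf;
import io.netty.channel.embedded.EmbeddedChannel;
import java.nio.charset.StandardCharsets;

public class ForwardToRealServiceSketch {
    public static void main(String[] args) {
        EmbeddedChannel nextChannel = new EmbeddedChannel(); // stands in for the real-service channel
        byte[] data = "payload".getBytes(StandardCharsets.UTF_8);

        ByteBuf buf = nextChannel.config().getAllocator().buffer(data.length);
        buf.writeBytes(data);
        nextChannel.writeAndFlush(buf);

        ByteBuf written = nextChannel.readOutbound();
        System.out.println(written.toString(StandardCharsets.UTF_8)); // prints "payload"
        written.release();
    }
}
```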

View File

@@ -1,15 +1,15 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.advanced;
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.advanced;
import io.netty.channel.Channel;
import lombok.extern.slf4j.Slf4j;
import org.framework.lazy.cloud.network.heartbeat.common.NettyProxyMsg;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.client.AbstractHandleDistributeSingleClientMessageTypeAdvanced;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.permeate.tcp.client.AbstractHandleTcpDistributeSingleClientMessageTypeAdvanced;
/**
* Receives chat messages sent by the server
*/
@Slf4j
public class ClientHandleDistributeSingleClientMessageTypeAdvanced extends AbstractHandleDistributeSingleClientMessageTypeAdvanced<NettyProxyMsg> {
public class ClientHandleTcpDistributeSingleClientMessageTypeAdvanced extends AbstractHandleTcpDistributeSingleClientMessageTypeAdvanced<NettyProxyMsg> {
/**
* Handle the current data
*

View File

@@ -1,14 +1,15 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.advanced;
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.advanced;
import io.netty.channel.Channel;
import io.netty.channel.ChannelOption;
import lombok.extern.slf4j.Slf4j;
import org.framework.lazy.cloud.network.heartbeat.common.NettyProxyMsg;
import org.framework.lazy.cloud.network.heartbeat.common.NettyRealIdContext;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.client.AbstractHandleDistributeSingleClientRealAutoReadConnectTypeAdvanced;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.permeate.tcp.client.AbstractHandleTcpDistributeSingleClientRealAutoReadConnectTypeAdvanced;
import org.framework.lazy.cloud.network.heartbeat.common.utils.ChannelAttributeKeyUtils;
@Slf4j
public class ClientHandleDistributeSingleClientRealAutoReadConnectTypeAdvanced extends AbstractHandleDistributeSingleClientRealAutoReadConnectTypeAdvanced<NettyProxyMsg> {
public class ClientHandleTcpDistributeSingleClientRealAutoReadConnectTypeAdvanced extends AbstractHandleTcpDistributeSingleClientRealAutoReadConnectTypeAdvanced<NettyProxyMsg> {
/**
* Handle the current data
*
@@ -21,8 +22,9 @@ public class ClientHandleDistributeSingleClientRealAutoReadConnectTypeAdvanced e
byte[] visitorId = nettyProxyMsg.getVisitorId();
// get the real proxy channel bound to the visitor
Channel realChannel = NettyRealIdContext.getReal(visitorId);
if (realChannel != null) {
realChannel.config().setOption(ChannelOption.AUTO_READ, true);
Channel nextChannel = ChannelAttributeKeyUtils.getNextChannel(channel);
if (nextChannel != null) {
nextChannel.config().setOption(ChannelOption.AUTO_READ, true);
}
}
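The rewritten handler resolves its peer through ChannelAttributeKeyUtils.getNextChannel instead of the NettyRealIdContext lookup. A hypothetical equivalent of that binding against Netty's public AttributeKey API (the key name is an assumption, not the project's):

```java
// Sketch only: storing and resolving a "next channel" via channel attributes.
import io.netty.channel.Channel;
import io.netty.channel.embedded.EmbeddedChannel;
import io.netty.util.AttributeKey;

public class NextChannelAttributeSketch {
    private static final AttributeKey<Channel> NEXT_CHANNEL = AttributeKey.valueOf("nextChannel");

    static void bindNextChannel(Channel channel, Channel next) {
        channel.attr(NEXT_CHANNEL).set(next);
    }

    static Channel getNextChannel(Channel channel) {
        return channel.attr(NEXT_CHANNEL).get();
    }

    public static void main(String[] args) {
        EmbeddedChannel transfer = new EmbeddedChannel();
        EmbeddedChannel real = new EmbeddedChannel();
        bindNextChannel(transfer, real);
        System.out.println(getNextChannel(transfer) == real); // true
    }
}
```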

View File

@@ -1,14 +1,14 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.advanced;
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.advanced;
import io.netty.channel.Channel;
import lombok.extern.slf4j.Slf4j;
import org.framework.lazy.cloud.network.heartbeat.common.NettyCommunicationIdContext;
import org.framework.lazy.cloud.network.heartbeat.common.NettyProxyMsg;
import org.framework.lazy.cloud.network.heartbeat.common.NettyRealIdContext;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.client.AbstractHandleDistributeSingleClientRealCloseVisitorTypeAdvanced;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.permeate.tcp.client.AbstractHandleTcpDistributeSingleClientRealCloseVisitorTypeAdvanced;
@Slf4j
public class ClientHandleDistributeSingleClientRealCloseVisitorTypeAdvanced extends AbstractHandleDistributeSingleClientRealCloseVisitorTypeAdvanced<NettyProxyMsg> {
public class ClientHandleTcpDistributeSingleClientRealCloseVisitorTypeAdvanced extends AbstractHandleTcpDistributeSingleClientRealCloseVisitorTypeAdvanced<NettyProxyMsg> {
/**
* Handle the current data
*

View File

@@ -1,15 +1,15 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.advanced;
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.advanced;
import io.netty.channel.Channel;
import lombok.extern.slf4j.Slf4j;
import org.framework.lazy.cloud.network.heartbeat.common.NettyProxyMsg;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.client.AbstractHandleDistributeStagingClosedTypeAdvanced;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.permeate.tcp.client.AbstractHandleTcpDistributeStagingClosedTypeAdvanced;
/**
* Handles the staging-closed message distributed by the server
*/
@Slf4j
public class HandleDistributeStagingClosedTypeAdvanced extends AbstractHandleDistributeStagingClosedTypeAdvanced<NettyProxyMsg> {
public class ClientHandleTcpDistributeStagingClosedTypeAdvanced extends AbstractHandleTcpDistributeStagingClosedTypeAdvanced<NettyProxyMsg> {
/**

View File

@@ -1,18 +1,18 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.advanced;
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.advanced;
import io.netty.channel.Channel;
import lombok.extern.slf4j.Slf4j;
import org.framework.lazy.cloud.network.heartbeat.common.NettyProxyMsg;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.client.AbstractHandleDistributeStagingOpenedTypeAdvanced;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.permeate.tcp.client.AbstractHandleTcpDistributeStagingOpenedTypeAdvanced;
/**
* Handles the staging-opened message distributed by the server
*/
@Slf4j
public class HandleDistributeStagingOpenedTypeAdvanced extends AbstractHandleDistributeStagingOpenedTypeAdvanced<NettyProxyMsg> {
public class ClientHandleTcpDistributeStagingOpenedTypeAdvanced extends AbstractHandleTcpDistributeStagingOpenedTypeAdvanced<NettyProxyMsg> {
public HandleDistributeStagingOpenedTypeAdvanced() {
public ClientHandleTcpDistributeStagingOpenedTypeAdvanced() {
}

View File

@@ -1,30 +1,30 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.filter;
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.filter;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.string.StringDecoder;
import io.netty.handler.codec.string.StringEncoder;
import io.netty.handler.timeout.IdleStateHandler;
import org.framework.lazy.cloud.network.heartbeat.client.netty.handler.NettyClientHandler;
import org.framework.lazy.cloud.network.heartbeat.client.netty.socket.NettyClientSocket;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.handler.NettyTcpClientHandler;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.socket.NettyTcpClientSocket;
import org.framework.lazy.cloud.network.heartbeat.common.adapter.ChannelTypeAdapter;
import org.framework.lazy.cloud.network.heartbeat.common.decoder.NettyProxyMsgDecoder;
import org.framework.lazy.cloud.network.heartbeat.common.encoder.NettyProxyMsgEncoder;
import org.framework.lazy.cloud.network.heartbeat.common.filter.DebugChannelInitializer;
public class NettyClientFilter extends ChannelInitializer<SocketChannel> {
public class NettyTcpClientFilter extends DebugChannelInitializer<SocketChannel> {
private final ChannelTypeAdapter channelTypeAdapter;
private final NettyClientSocket nettyClientSocket;
private final NettyTcpClientSocket nettyTcpClientSocket;
public NettyClientFilter(ChannelTypeAdapter channelTypeAdapter, NettyClientSocket nettyClientSocket) {
public NettyTcpClientFilter(ChannelTypeAdapter channelTypeAdapter, NettyTcpClientSocket nettyTcpClientSocket) {
this.channelTypeAdapter = channelTypeAdapter;
this.nettyClientSocket = nettyClientSocket;
this.nettyTcpClientSocket = nettyTcpClientSocket;
}
@Override
protected void initChannel(SocketChannel ch) throws Exception {
protected void initChannel0(SocketChannel ch) throws Exception {
ChannelPipeline pipeline = ch.pipeline();
/* decoding and encoding must match the server side */
@@ -40,6 +40,6 @@ public class NettyClientFilter extends ChannelInitializer<SocketChannel> {
pipeline.addLast(new IdleStateHandler(0, 4, 0));
pipeline.addLast("decoder", new StringDecoder());
pipeline.addLast("encoder", new StringEncoder());
pipeline.addLast("doHandler", new NettyClientHandler(channelTypeAdapter, nettyClientSocket)); //客户端的逻辑
pipeline.addLast("doHandler", new NettyTcpClientHandler(channelTypeAdapter, nettyTcpClientSocket)); //客户端的逻辑
}
}
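The renamed filter keeps the same codec/idle/handler order while switching to DebugChannelInitializer. A compiling sketch of that pipeline with stock Netty handlers in place of the project codecs and business handler:

```java
// Sketch only: the NettyTcpClientFilter pipeline order, rebuilt with stock handlers.
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.string.StringDecoder;
import io.netty.handler.codec.string.StringEncoder;
import io.netty.handler.timeout.IdleStateHandler;

public class MinimalClientPipelineSketch extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        ChannelPipeline pipeline = ch.pipeline();
        pipeline.addLast(new IdleStateHandler(0, 4, 0));              // write-idle after 4s drives the heartbeat
        pipeline.addLast("decoder", new StringDecoder());
        pipeline.addLast("encoder", new StringEncoder());
        pipeline.addLast("doHandler", new ChannelInboundHandlerAdapter()); // NettyTcpClientHandler in the real filter
    }
}
```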

View File

@@ -0,0 +1,27 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.filter;
import io.netty.channel.Channel;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.socket.SocketChannel;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.handler.NettyTcpClientPermeateClientRealHandler;
import org.framework.lazy.cloud.network.heartbeat.common.decoder.TransferDecoder;
import org.framework.lazy.cloud.network.heartbeat.common.encoder.TransferEncoder;
import org.framework.lazy.cloud.network.heartbeat.common.filter.DebugChannelInitializer;
public class NettyTcpClientPermeateClientRealFilter extends DebugChannelInitializer<SocketChannel> {
/**
* This method will be called once the {@link Channel} was registered. After the method returns this instance
* will be removed from the {@link ChannelPipeline} of the {@link Channel}.
*
* @param ch the {@link Channel} which was registered.
*/
@Override
protected void initChannel0(SocketChannel ch) {
ChannelPipeline pipeline = ch.pipeline();
// decoder / encoder
pipeline.addLast(new TransferDecoder(Integer.MAX_VALUE, 1024 * 1024*10));
pipeline.addLast(new TransferEncoder());
pipeline.addLast(new NettyTcpClientPermeateClientRealHandler());
}
}

View File

@@ -0,0 +1,46 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.filter;
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.timeout.IdleStateHandler;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.handler.NettyTcpClientPermeateClientTransferHandler;
import org.framework.lazy.cloud.network.heartbeat.common.adapter.ChannelTypeAdapter;
import org.framework.lazy.cloud.network.heartbeat.common.decoder.NettyProxyMsgDecoder;
import org.framework.lazy.cloud.network.heartbeat.common.encoder.NettyProxyMsgEncoder;
import org.framework.lazy.cloud.network.heartbeat.common.filter.DebugChannelInitializer;
/**
* netty client-permeate transfer channel
*/
public class NettyTcpClientPermeateClientTransferFilter extends DebugChannelInitializer<SocketChannel> {
private final ChannelTypeAdapter channelTypeAdapter;
public NettyTcpClientPermeateClientTransferFilter(ChannelTypeAdapter channelTypeAdapter) {
this.channelTypeAdapter = channelTypeAdapter;
}
/**
* This method will be called once the {@link Channel} was registered. After the method returns this instance
* will be removed from the {@link ChannelPipeline} of the {@link Channel}.
*
* @param ch the {@link Channel} which was registered.
* @throws Exception is thrown if an error occurs. In that case it will be handled by
* {@link #exceptionCaught(ChannelHandlerContext, Throwable)} which will by default connectionClose
* the {@link Channel}.
*/
@Override
protected void initChannel0(SocketChannel ch) throws Exception {
ChannelPipeline pipeline = ch.pipeline();
// // decoder / encoder
// pipeline.addLast(new NettyProxyMsgDecoder(Integer.MAX_VALUE, 0, 4, -4, 0));
// pipeline.addLast(new NettMsgEncoder());
pipeline.addLast(new IdleStateHandler(0, 4, 0));
pipeline.addLast(new NettyProxyMsgDecoder(Integer.MAX_VALUE, 0, 4, -4, 0));
pipeline.addLast(new NettyProxyMsgEncoder());
pipeline.addLast(new NettyTcpClientPermeateClientTransferHandler(channelTypeAdapter));
}
}
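The decoder is configured as NettyProxyMsgDecoder(Integer.MAX_VALUE, 0, 4, -4, 0). Assuming it follows the convention of Netty's LengthFieldBasedFrameDecoder, those arguments describe a 4-byte length prefix at offset 0 whose recorded value includes the prefix itself, with nothing stripped. A sketch spelling the parameters out:

```java
// Sketch only: assumes NettyProxyMsgDecoder uses LengthFieldBasedFrameDecoder semantics.
import io.netty.handler.codec.LengthFieldBasedFrameDecoder;

public class ProxyFrameDecoderSketch extends LengthFieldBasedFrameDecoder {
    public ProxyFrameDecoderSketch() {
        super(Integer.MAX_VALUE, // maxFrameLength: effectively unbounded
              0,                 // lengthFieldOffset: the length prefix starts at byte 0
              4,                 // lengthFieldLength: 4-byte length prefix
              -4,                // lengthAdjustment: the recorded length already covers the prefix
              0);                // initialBytesToStrip: keep the prefix for the downstream decoder
    }
}
```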

View File

@@ -0,0 +1,46 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.filter;
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.timeout.IdleStateHandler;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.handler.NettyTcpClientPermeateClientTransferRealHandler;
import org.framework.lazy.cloud.network.heartbeat.common.adapter.ChannelTypeAdapter;
import org.framework.lazy.cloud.network.heartbeat.common.decoder.NettyProxyMsgDecoder;
import org.framework.lazy.cloud.network.heartbeat.common.encoder.NettyProxyMsgEncoder;
import org.framework.lazy.cloud.network.heartbeat.common.filter.DebugChannelInitializer;
/**
* netty filter for the client's visitor connection to the real server
*/
public class NettyTcpClientPermeateClientTransferRealFilter extends DebugChannelInitializer<SocketChannel> {
private final ChannelTypeAdapter channelTypeAdapter;
public NettyTcpClientPermeateClientTransferRealFilter(ChannelTypeAdapter channelTypeAdapter) {
this.channelTypeAdapter = channelTypeAdapter;
}
/**
* This method will be called once the {@link Channel} was registered. After the method returns this instance
* will be removed from the {@link ChannelPipeline} of the {@link Channel}.
*
* @param ch the {@link Channel} which was registered.
* @throws Exception is thrown if an error occurs. In that case it will be handled by
* {@link #exceptionCaught(ChannelHandlerContext, Throwable)} which will by default connectionClose
* the {@link Channel}.
*/
@Override
protected void initChannel0(SocketChannel ch) throws Exception {
ChannelPipeline pipeline = ch.pipeline();
// // decoder / encoder
// pipeline.addLast(new NettyProxyMsgDecoder(Integer.MAX_VALUE, 0, 4, -4, 0));
// pipeline.addLast(new NettMsgEncoder());
pipeline.addLast(new IdleStateHandler(0, 4, 0));
pipeline.addLast(new NettyProxyMsgDecoder(Integer.MAX_VALUE, 0, 4, -4, 0));
pipeline.addLast(new NettyProxyMsgEncoder());
pipeline.addLast(new NettyTcpClientPermeateClientTransferRealHandler(channelTypeAdapter));
}
}

View File

@@ -0,0 +1,36 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.filter;
import io.netty.channel.Channel;
import io.netty.channel.ChannelDuplexHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.socket.SocketChannel;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.NettyClientPermeateClientVisitor;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.handler.NettyTcpClientPermeateClientVisitorHandler;
import org.framework.lazy.cloud.network.heartbeat.common.filter.DebugChannelInitializer;
public class NettyTcpClientPermeateClientVisitorFilter extends DebugChannelInitializer<SocketChannel> {
private final NettyClientPermeateClientVisitor nettyClientPermeateClientVisitor;
public NettyTcpClientPermeateClientVisitorFilter(NettyClientPermeateClientVisitor nettyClientPermeateClientVisitor) {
this.nettyClientPermeateClientVisitor = nettyClientPermeateClientVisitor;
}
/**
* This method will be called once the {@link Channel} was registered. After the method returns this instance
* will be removed from the {@link ChannelPipeline} of the {@link Channel}.
*
* @param ch the {@link Channel} which was registered.
* @throws Exception is thrown if an error occurs. In that case it will be handled by
* {@link #exceptionCaught(ChannelHandlerContext, Throwable)} which will by default connectionClose
* the {@link Channel}.
*/
@Override
protected void initChannel0(SocketChannel ch) throws Exception {
ChannelPipeline pipeline = ch.pipeline();
pipeline.addLast(new ChannelDuplexHandler());
pipeline.addLast(new NettyTcpClientPermeateClientVisitorHandler(nettyClientPermeateClientVisitor));
}
}

View File

@@ -0,0 +1,45 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.filter;
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.timeout.IdleStateHandler;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.handler.NettyTcpClientPermeateServerTransferHandler;
import org.framework.lazy.cloud.network.heartbeat.common.adapter.ChannelTypeAdapter;
import org.framework.lazy.cloud.network.heartbeat.common.decoder.NettyProxyMsgDecoder;
import org.framework.lazy.cloud.network.heartbeat.common.encoder.NettyProxyMsgEncoder;
import org.framework.lazy.cloud.network.heartbeat.common.filter.DebugChannelInitializer;
/**
* netty client-permeate transfer channel
*/
public class NettyTcpClientPermeateServerTransferFilter extends DebugChannelInitializer<SocketChannel> {
private final ChannelTypeAdapter channelTypeAdapter;
public NettyTcpClientPermeateServerTransferFilter(ChannelTypeAdapter channelTypeAdapter) {
this.channelTypeAdapter = channelTypeAdapter;
}
/**
* This method will be called once the {@link Channel} was registered. After the method returns this instance
* will be removed from the {@link ChannelPipeline} of the {@link Channel}.
*
* @param ch the {@link Channel} which was registered.
* @throws Exception is thrown if an error occurs. In that case it will be handled by
* {@link #exceptionCaught(ChannelHandlerContext, Throwable)} which will by default connectionClose
* the {@link Channel}.
*/
@Override
protected void initChannel0(SocketChannel ch) throws Exception {
ChannelPipeline pipeline = ch.pipeline();
// // decoder / encoder
// pipeline.addLast(new NettyProxyMsgDecoder(Integer.MAX_VALUE, 0, 4, -4, 0));
// pipeline.addLast(new NettMsgEncoder());
pipeline.addLast(new IdleStateHandler(0, 4, 0));
pipeline.addLast(new NettyProxyMsgDecoder(Integer.MAX_VALUE, 0, 4, -4, 0));
pipeline.addLast(new NettyProxyMsgEncoder());
pipeline.addLast(new NettyTcpClientPermeateServerTransferHandler(channelTypeAdapter));
}
}

View File

@@ -0,0 +1,36 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.filter;
import io.netty.channel.Channel;
import io.netty.channel.ChannelDuplexHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.socket.SocketChannel;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.NettyClientPermeateServerVisitor;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.handler.NettyTcpClientPermeateServerVisitorHandler;
import org.framework.lazy.cloud.network.heartbeat.common.filter.DebugChannelInitializer;
public class NettyTcpClientPermeateServerVisitorFilter extends DebugChannelInitializer<SocketChannel> {
private final NettyClientPermeateServerVisitor nettyClientPermeateServerVisitor;
public NettyTcpClientPermeateServerVisitorFilter(NettyClientPermeateServerVisitor nettyClientPermeateServerVisitor) {
this.nettyClientPermeateServerVisitor = nettyClientPermeateServerVisitor;
}
/**
* This method will be called once the {@link Channel} was registered. After the method returns this instance
* will be removed from the {@link ChannelPipeline} of the {@link Channel}.
*
* @param ch the {@link Channel} which was registered.
* @throws Exception is thrown if an error occurs. In that case it will be handled by
* {@link #exceptionCaught(ChannelHandlerContext, Throwable)} which will by default connectionClose
* the {@link Channel}.
*/
@Override
protected void initChannel0(SocketChannel ch) throws Exception {
ChannelPipeline pipeline = ch.pipeline();
pipeline.addLast(new ChannelDuplexHandler());
pipeline.addLast(new NettyTcpClientPermeateServerVisitorHandler(nettyClientPermeateServerVisitor));
}
}

View File

@@ -0,0 +1,27 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.filter;
import io.netty.channel.Channel;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.socket.SocketChannel;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.handler.NettyTcpServerPermeateClientRealHandler;
import org.framework.lazy.cloud.network.heartbeat.common.decoder.TransferDecoder;
import org.framework.lazy.cloud.network.heartbeat.common.encoder.TransferEncoder;
import org.framework.lazy.cloud.network.heartbeat.common.filter.DebugChannelInitializer;
public class NettyTcpServerPermeateClientRealFilter extends DebugChannelInitializer<SocketChannel> {
/**
* This method will be called once the {@link Channel} was registered. After the method returns this instance
* will be removed from the {@link ChannelPipeline} of the {@link Channel}.
*
* @param ch the {@link Channel} which was registered.
*/
@Override
protected void initChannel0(SocketChannel ch) {
ChannelPipeline pipeline = ch.pipeline();
// decoder / encoder
pipeline.addLast(new TransferDecoder(Integer.MAX_VALUE, 1024 * 1024*10));
pipeline.addLast(new TransferEncoder());
pipeline.addLast(new NettyTcpServerPermeateClientRealHandler());
}
}

View File

@@ -1,22 +1,22 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.filter;
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.filter;
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.socket.SocketChannel;
import org.framework.lazy.cloud.network.heartbeat.client.netty.handler.NettyClientVisitorRealHandler;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.handler.NettyTcpServerPermeateClientTransferHandler;
import org.framework.lazy.cloud.network.heartbeat.common.adapter.ChannelTypeAdapter;
import org.framework.lazy.cloud.network.heartbeat.common.decoder.NettyProxyMsgDecoder;
import org.framework.lazy.cloud.network.heartbeat.common.encoder.NettyProxyMsgEncoder;
import org.framework.lazy.cloud.network.heartbeat.common.filter.DebugChannelInitializer;
/**
* netty filter for the client's visitor connection to the real server
*/
public class NettyClientVisitorRealFilter extends ChannelInitializer<SocketChannel> {
public class NettyTcpServerPermeateClientTransferFilter extends DebugChannelInitializer<SocketChannel> {
private final ChannelTypeAdapter channelTypeAdapter;
public NettyClientVisitorRealFilter(ChannelTypeAdapter channelTypeAdapter) {
public NettyTcpServerPermeateClientTransferFilter(ChannelTypeAdapter channelTypeAdapter) {
this.channelTypeAdapter = channelTypeAdapter;
}
@@ -30,13 +30,13 @@ public class NettyClientVisitorRealFilter extends ChannelInitializer<SocketChann
* the {@link Channel}.
*/
@Override
protected void initChannel(SocketChannel ch) throws Exception {
protected void initChannel0(SocketChannel ch) throws Exception {
ChannelPipeline pipeline = ch.pipeline();
// // decoder / encoder
// pipeline.addLast(new NettyProxyMsgDecoder(Integer.MAX_VALUE, 0, 4, -4, 0));
// pipeline.addLast(new NettMsgEncoder());
pipeline.addLast(new NettyProxyMsgDecoder(Integer.MAX_VALUE, 0, 4, -4, 0));
pipeline.addLast(new NettyProxyMsgEncoder());
pipeline.addLast(new NettyClientVisitorRealHandler(channelTypeAdapter));
pipeline.addLast(new NettyTcpServerPermeateClientTransferHandler(channelTypeAdapter));
}
}

View File

@@ -1,4 +1,4 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.handler;
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.handler;
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
@@ -7,8 +7,8 @@ import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.handler.timeout.IdleState;
import io.netty.handler.timeout.IdleStateEvent;
import lombok.extern.slf4j.Slf4j;
import org.framework.lazy.cloud.network.heartbeat.client.netty.socket.NettyClientSocket;
import org.framework.lazy.cloud.network.heartbeat.common.MessageType;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.socket.NettyTcpClientSocket;
import org.framework.lazy.cloud.network.heartbeat.common.constant.TcpMessageType;
import org.framework.lazy.cloud.network.heartbeat.common.NettyProxyMsg;
import org.framework.lazy.cloud.network.heartbeat.common.adapter.ChannelTypeAdapter;
@@ -23,15 +23,15 @@ import java.util.concurrent.TimeUnit;
* @date 2023/09/13 10:29
*/
@Slf4j
public class NettyClientHandler extends SimpleChannelInboundHandler<NettyProxyMsg> {
public class NettyTcpClientHandler extends SimpleChannelInboundHandler<NettyProxyMsg> {
private final ChannelTypeAdapter channelTypeAdapter;
private final NettyClientSocket nettyClientSocket;
private final NettyTcpClientSocket nettyTcpClientSocket;
public NettyClientHandler(ChannelTypeAdapter channelTypeAdapter, NettyClientSocket nettyClientSocket) {
public NettyTcpClientHandler(ChannelTypeAdapter channelTypeAdapter, NettyTcpClientSocket nettyTcpClientSocket) {
this.channelTypeAdapter = channelTypeAdapter;
this.nettyClientSocket = nettyClientSocket;
this.nettyTcpClientSocket = nettyTcpClientSocket;
}
/**
@@ -58,11 +58,11 @@ public class NettyClientHandler extends SimpleChannelInboundHandler<NettyProxyMs
// when the connection is established
log.info("When establishing a connection {}", new Date());
ctx.fireChannelActive();
String clientId = nettyClientSocket.getClientId();
String clientId = nettyTcpClientSocket.getClientId();
// handle successful client connection
Channel channel = ctx.channel();
NettyProxyMsg nettyMsg = new NettyProxyMsg();
nettyMsg.setType(MessageType.CLIENT_CHANNEL_ACTIVE);
nettyMsg.setType(TcpMessageType.TCP_CLIENT_CHANNEL_ACTIVE);
nettyMsg.setClientId(clientId);
channelTypeAdapter.handler(channel, nettyMsg);
@@ -79,7 +79,7 @@ public class NettyClientHandler extends SimpleChannelInboundHandler<NettyProxyMs
final EventLoop eventLoop = ctx.channel().eventLoop();
eventLoop.schedule(() -> {
try {
nettyClientSocket.newConnect2Server();
nettyTcpClientSocket.newConnect2Server();
} catch (InterruptedException e) {
throw new RuntimeException(e);
}
@@ -95,18 +95,18 @@ public class NettyClientHandler extends SimpleChannelInboundHandler<NettyProxyMs
public void userEventTriggered(ChannelHandlerContext ctx, Object obj) throws Exception {
if (obj instanceof IdleStateEvent event) {
if (IdleState.WRITER_IDLE.equals(event.state())) { // if the write channel is idle, send a heartbeat
String clientId = nettyClientSocket.getClientId();
String clientId = nettyTcpClientSocket.getClientId();
NettyProxyMsg nettyMsg = new NettyProxyMsg();
nettyMsg.setType(MessageType.TYPE_HEARTBEAT);
nettyMsg.setType(TcpMessageType.TCP_TYPE_HEARTBEAT);
nettyMsg.setData(clientId.getBytes(StandardCharsets.UTF_8));
nettyMsg.setClientId(clientId.getBytes(StandardCharsets.UTF_8));
ctx.writeAndFlush(nettyMsg);// send the heartbeat
} else if (event.state() == IdleState.WRITER_IDLE) { // if write-idle is detected, close the connection
// offline staging notification
String clientId = nettyClientSocket.getClientId();
String clientId = nettyTcpClientSocket.getClientId();
Channel channel = ctx.channel();
NettyProxyMsg nettyMsg = new NettyProxyMsg();
nettyMsg.setType(MessageType.DISTRIBUTE_CLIENT_DISCONNECTION_NOTIFICATION);
nettyMsg.setType(TcpMessageType.TCP_DISTRIBUTE_CLIENT_DISCONNECTION_NOTIFICATION);
nettyMsg.setClientId(clientId.getBytes(StandardCharsets.UTF_8));
channelTypeAdapter.handler(channel, nettyMsg);
ctx.close();
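The handler combines a write-idle heartbeat with a reconnect scheduled on the channel's own event loop. A reduced sketch of that skeleton with plain Netty types; the reconnect callback, the String heartbeat (which assumes the StringEncoder from the filter above) and the 5-second delay are illustrative assumptions:

```java
// Sketch only: write-idle heartbeat plus scheduled reconnect, as in NettyTcpClientHandler.
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.handler.timeout.IdleState;
import io.netty.handler.timeout.IdleStateEvent;
import java.util.concurrent.TimeUnit;

public class HeartbeatReconnectSketch extends ChannelInboundHandlerAdapter {
    private final Runnable reconnect;

    public HeartbeatReconnectSketch(Runnable reconnect) {
        this.reconnect = reconnect;
    }

    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
        if (evt instanceof IdleStateEvent event && IdleState.WRITER_IDLE.equals(event.state())) {
            ctx.writeAndFlush("PING"); // a NettyProxyMsg with TCP_TYPE_HEARTBEAT in the real handler
        } else {
            super.userEventTriggered(ctx, evt);
        }
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) {
        // reconnect from the same event loop instead of blocking the I/O thread
        ctx.channel().eventLoop().schedule(reconnect, 5, TimeUnit.SECONDS);
    }
}
```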

View File

@@ -0,0 +1,81 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.handler;
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import lombok.extern.slf4j.Slf4j;
import org.framework.lazy.cloud.network.heartbeat.common.constant.TcpMessageType;
import org.framework.lazy.cloud.network.heartbeat.common.NettyByteBuf;
import org.framework.lazy.cloud.network.heartbeat.common.NettyProxyMsg;
import org.framework.lazy.cloud.network.heartbeat.common.utils.ChannelAttributeKeyUtils;
/**
* Data responses returned from the real server on the client side
*/
@Slf4j
public class NettyTcpClientPermeateClientRealHandler extends SimpleChannelInboundHandler<NettyByteBuf> {
@Override
public void channelRead0(ChannelHandlerContext ctx,NettyByteBuf nettyByteBuf) {
byte[] bytes = nettyByteBuf.getData();
log.debug("bytes.length:{}",bytes.length);
log.debug("接收客户端真实服务数据:{}", new String(bytes));
String visitorId = ChannelAttributeKeyUtils.getVisitorId(ctx.channel());
Integer visitorPort = ChannelAttributeKeyUtils.getVisitorPort(ctx.channel());
String clientId = ChannelAttributeKeyUtils.getClientId(ctx.channel());
// visitor transfer channel: report that the server-side proxying completed
Channel nextChannel = ChannelAttributeKeyUtils.getNextChannel(ctx.channel());
NettyProxyMsg returnMessage = new NettyProxyMsg();
returnMessage.setType(TcpMessageType.TCP_REPORT_CLIENT_TRANSFER_CLIENT_RESPONSE);
returnMessage.setVisitorId(visitorId);
returnMessage.setClientId(clientId);
returnMessage.setVisitorPort(visitorPort);
returnMessage.setData(bytes);
nextChannel.writeAndFlush(returnMessage);
}
@Override
public void channelActive(ChannelHandlerContext ctx) throws Exception {
super.channelActive(ctx);
}
@Override
public void channelInactive(ChannelHandlerContext ctx) throws Exception {
String visitorId = ChannelAttributeKeyUtils.getVisitorId(ctx.channel());
// the client's real transfer channel
Channel nextChannel = ChannelAttributeKeyUtils.getNextChannel(ctx.channel());
if (nextChannel != null) {
// report that this client's visitor channel should be closed
NettyProxyMsg closeVisitorMsg = new NettyProxyMsg();
closeVisitorMsg.setType(TcpMessageType.TCP_REPORT_CLIENT_PERMEATE_CLIENT_TRANSFER_CLOSE);
closeVisitorMsg.setVisitorId(visitorId);
nextChannel.writeAndFlush(closeVisitorMsg);
}
super.channelInactive(ctx);
}
@Override
public void channelWritabilityChanged(ChannelHandlerContext ctx) throws Exception {
// fetch the visitor's transfer channel
if (ctx.channel().isWritable()) {
log.debug("Channel is writable again");
// resume operations that were paused earlier, e.g. writing data
} else {
log.debug("Channel is not writable");
// pause writes until the channel becomes writable again
}
log.info("channelWritabilityChanged!");
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
super.exceptionCaught(ctx, cause);
}
}

View File

@@ -0,0 +1,70 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.handler;
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import lombok.extern.slf4j.Slf4j;
import org.framework.lazy.cloud.network.heartbeat.common.constant.TcpMessageType;
import org.framework.lazy.cloud.network.heartbeat.common.NettyProxyMsg;
import org.framework.lazy.cloud.network.heartbeat.common.adapter.ChannelTypeAdapter;
import org.framework.lazy.cloud.network.heartbeat.common.utils.ChannelAttributeKeyUtils;
/**
* Client visitor transfer channel handler
*/
@Slf4j
public class NettyTcpClientPermeateClientTransferHandler extends SimpleChannelInboundHandler<NettyProxyMsg> {
private final ChannelTypeAdapter channelTypeAdapter;
public NettyTcpClientPermeateClientTransferHandler(ChannelTypeAdapter channelTypeAdapter) {
this.channelTypeAdapter = channelTypeAdapter;
}
@Override
public void channelActive(ChannelHandlerContext ctx) throws Exception {
super.channelActive(ctx);
}
@Override
public void channelRead0(ChannelHandlerContext ctx, NettyProxyMsg nettyProxyMsg) throws Exception {
Channel channel = ctx.channel();
channelTypeAdapter.handler(channel, nettyProxyMsg);
}
@Override
public void channelInactive(ChannelHandlerContext ctx) throws Exception {
String clientId = ChannelAttributeKeyUtils.getClientId(ctx.channel());
String visitorId = ChannelAttributeKeyUtils.getVisitorId(ctx.channel());
// close the visitor
Channel nextChannel = ChannelAttributeKeyUtils.getNextChannel(ctx.channel());
if (nextChannel != null) {
// report that this client's visitor channel should be closed
NettyProxyMsg closeVisitorMsg = new NettyProxyMsg();
closeVisitorMsg.setType(TcpMessageType.TCP_REPORT_CLIENT_PERMEATE_CLIENT_TRANSFER_CLOSE);
closeVisitorMsg.setVisitorId(visitorId);
nextChannel.writeAndFlush(closeVisitorMsg);
}
super.channelInactive(ctx);
}
@Override
public void channelWritabilityChanged(ChannelHandlerContext ctx) throws Exception {
if (ctx.channel().isWritable()) {
log.debug("Channel is writable again");
// resume operations that were paused earlier, e.g. writing data
} else {
log.debug("Channel is not writable");
// pause writes until the channel becomes writable again
}
log.info("channelWritabilityChanged!");
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
super.exceptionCaught(ctx, cause);
}
}

View File

@@ -0,0 +1,73 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.handler;
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import lombok.extern.slf4j.Slf4j;
import org.framework.lazy.cloud.network.heartbeat.common.constant.TcpMessageType;
import org.framework.lazy.cloud.network.heartbeat.common.NettyProxyMsg;
import org.framework.lazy.cloud.network.heartbeat.common.adapter.ChannelTypeAdapter;
import org.framework.lazy.cloud.network.heartbeat.common.utils.ChannelAttributeKeyUtils;
/**
* Client visitor transfer channel handler
*/
@Slf4j
public class NettyTcpClientPermeateClientTransferRealHandler extends SimpleChannelInboundHandler<NettyProxyMsg> {
private final ChannelTypeAdapter channelTypeAdapter;
public NettyTcpClientPermeateClientTransferRealHandler(ChannelTypeAdapter channelTypeAdapter) {
this.channelTypeAdapter = channelTypeAdapter;
}
@Override
public void channelActive(ChannelHandlerContext ctx) throws Exception {
super.channelActive(ctx);
}
@Override
public void channelRead0(ChannelHandlerContext ctx, NettyProxyMsg nettyProxyMsg) throws Exception {
Channel channel = ctx.channel();
channelTypeAdapter.handler(channel, nettyProxyMsg);
}
@Override
public void channelInactive(ChannelHandlerContext ctx) throws Exception {
String clientId = ChannelAttributeKeyUtils.getClientId(ctx.channel());
String visitorId = ChannelAttributeKeyUtils.getVisitorId(ctx.channel());
Channel nextChannel = ChannelAttributeKeyUtils.getNextChannel(ctx.channel());
log.warn("close client permeate client transfer real clientId:{} visitorId:{}", clientId, visitorId);
// close the visitor
if (nextChannel != null) {
// report that this client's visitor channel should be closed
NettyProxyMsg closeVisitorMsg = new NettyProxyMsg();
closeVisitorMsg.setType(TcpMessageType.TCP_REPORT_CLIENT_PERMEATE_CLIENT_TRANSFER_CLOSE);
closeVisitorMsg.setVisitorId(visitorId);
nextChannel.writeAndFlush(closeVisitorMsg);
}
super.channelInactive(ctx);
}
@Override
public void channelWritabilityChanged(ChannelHandlerContext ctx) throws Exception {
// handle the client's local real channel
if (ctx.channel().isWritable()) {
log.debug("Channel is writable again");
// resume operations that were paused earlier, e.g. writing data
} else {
log.debug("Channel is not writable");
// pause writes until the channel becomes writable again
}
log.info("channelWritabilityChanged!");
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
super.exceptionCaught(ctx, cause);
}
}

View File

@@ -0,0 +1,133 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.handler;
import io.netty.buffer.ByteBuf;
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelOption;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.util.internal.StringUtil;
import lombok.extern.slf4j.Slf4j;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.NettyClientPermeateClientVisitor;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.advanced.ClientHandleTcpDistributeClientTransferServerPermeateChannelConnectionSuccessfulTypeAdvanced;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.socket.NettyTcpClientPermeateClientVisitorTransferSocket;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.socket.NettyTcpClientPermeateServerVisitorTransferSocket;
import org.framework.lazy.cloud.network.heartbeat.common.constant.TcpMessageType;
import org.framework.lazy.cloud.network.heartbeat.common.NettyCommunicationIdContext;
import org.framework.lazy.cloud.network.heartbeat.common.NettyProxyMsg;
import org.framework.lazy.cloud.network.heartbeat.common.NettyRealIdContext;
import org.framework.lazy.cloud.network.heartbeat.common.utils.ChannelAttributeKeyUtils;
import java.util.UUID;
@Slf4j
public class NettyTcpClientPermeateClientVisitorHandler extends SimpleChannelInboundHandler<ByteBuf> {
private final NettyClientPermeateClientVisitor nettyClientPermeateClientVisitor;
// private final NettyChannelPool nettyChannelPool = new DefaultNettyChannelPool(10);
public NettyTcpClientPermeateClientVisitorHandler(NettyClientPermeateClientVisitor nettyClientPermeateClientVisitor) {
this.nettyClientPermeateClientVisitor = nettyClientPermeateClientVisitor;
}
/**
* @param ctx
* @throws Exception
* @see NettyTcpClientPermeateServerVisitorTransferSocket
* @see ClientHandleTcpDistributeClientTransferServerPermeateChannelConnectionSuccessfulTypeAdvanced
*/
@Override
public void channelActive(ChannelHandlerContext ctx) throws Exception {
// a visitor connected to the proxy server
Channel visitorChannel = ctx.channel();
// do not read visitor data yet
visitorChannel.config().setOption(ChannelOption.AUTO_READ, false);
// generate the visitor ID
String visitorId = UUID.randomUUID().toString();
// bind the visitor's real channel
NettyRealIdContext.pushReal(visitorChannel, visitorId);
// bind the visitor ID to the current channel
ChannelAttributeKeyUtils.buildVisitorId(visitorChannel, visitorId);
// check whether a usable channel exists; if not, create a new one
// Channel transferChannel = nettyChannelPool.availableChannel(visitorId);
// create the visitor-to-client transfer channel
NettyTcpClientPermeateClientVisitorTransferSocket.buildTransferServer(nettyClientPermeateClientVisitor,visitorChannel);
log.info("client-permeate-client visitor [{}] port connected successfully", visitorId);
super.channelActive(ctx);
}
@Override
public void channelRead0(ChannelHandlerContext ctx, ByteBuf buf) {
// the visitor channel
Channel visitorChannel = ctx.channel();
String visitorId = ChannelAttributeKeyUtils.getVisitorId(visitorChannel);
Channel nextChannel = ChannelAttributeKeyUtils.getNextChannel(visitorChannel);
byte[] bytes = new byte[buf.readableBytes()];
buf.readBytes(bytes);
// fetch the client channel, then push the data down
log.debug("[client permeate client] visitor port received data: {}", new String(bytes));
// use the visitor's transfer channel
Integer visitorPort = nettyClientPermeateClientVisitor.getVisitorPort();
String clientId = nettyClientPermeateClientVisitor.getNettyClientProperties().getClientId();
NettyProxyMsg nettyProxyMsg = new NettyProxyMsg();
nettyProxyMsg.setType(TcpMessageType.TCP_REPORT_CLIENT_PERMEATE_CLIENT_TRANSFER_REQUEST);
nettyProxyMsg.setVisitorId(visitorId);
nettyProxyMsg.setClientId(clientId);
nettyProxyMsg.setVisitorPort(visitorPort);
nettyProxyMsg.setData(bytes);
nextChannel.writeAndFlush(nettyProxyMsg);
log.debug("【客户端渗透客户端】访客端口成功发送数据了");
}
@Override
public void channelInactive(ChannelHandlerContext ctx) throws Exception {
Channel channel = ctx.channel();
String visitorId = ChannelAttributeKeyUtils.getVisitorId(channel);
String clientId = ChannelAttributeKeyUtils.getClientId(channel);
if (StringUtil.isNullOrEmpty(visitorId)) {
super.channelInactive(ctx);
return;
}
Channel nextChannel = ChannelAttributeKeyUtils.getNextChannel(channel);
// the transfer channel's auto read/write is opened, then the transfer channel is closed
if (nextChannel != null && nextChannel.isActive()) {
// nextChannel.config().setOption(ChannelOption.AUTO_READ, true);
// tell the client to close the visitor channel and the real channel
NettyProxyMsg myMsg = new NettyProxyMsg();
myMsg.setType(TcpMessageType.TCP_REPORT_CLIENT_PERMEATE_CLIENT_TRANSFER_CLOSE);
nextChannel.writeAndFlush(myMsg);
}
// close the visitor transfer channel and the visitor real channel
NettyRealIdContext.clear(visitorId);
NettyCommunicationIdContext.clear(visitorId);
log.warn("[client permeate client] visitor [{}] port disconnected", visitorId);
super.channelInactive(ctx);
}
@Override
public void channelWritabilityChanged(ChannelHandlerContext ctx) throws Exception {
if (ctx.channel().isWritable()) {
log.debug("Channel is writable again");
// resume operations that were paused earlier, e.g. writing data
} else {
log.debug("Channel is not writable");
// pause writes until the channel becomes writable again
}
log.info("channelWritabilityChanged!");
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
cause.printStackTrace();
log.error("exceptionCaught");
}
}
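The visitor handler pauses reads until the transfer channel is ready, tags the channel with a generated visitor id, and forwards every inbound buffer once reading resumes. A simplified sketch of that flow without the project socket builders; the raw byte forward stands in for the NettyProxyMsg wrapping:

```java
// Sketch only: pause reads, tag with a visitor id, forward inbound bytes to a transfer channel.
import io.netty.buffer.ByteBuf;
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelOption;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.util.AttributeKey;
import java.util.UUID;

public class VisitorForwardSketch extends SimpleChannelInboundHandler<ByteBuf> {
    private static final AttributeKey<String> VISITOR_ID = AttributeKey.valueOf("visitorId");
    private final Channel transferChannel; // an already-connected channel towards the peer side

    public VisitorForwardSketch(Channel transferChannel) {
        this.transferChannel = transferChannel;
    }

    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        // hold visitor traffic until the far side confirms the proxy path is ready
        ctx.channel().config().setOption(ChannelOption.AUTO_READ, false);
        ctx.channel().attr(VISITOR_ID).set(UUID.randomUUID().toString());
        // ... when the confirmation arrives, AUTO_READ is switched back on elsewhere
    }

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, ByteBuf buf) {
        byte[] bytes = new byte[buf.readableBytes()];
        buf.readBytes(bytes);
        // the real handler wraps the bytes into a NettyProxyMsg carrying visitorId/clientId/visitorPort
        transferChannel.writeAndFlush(ctx.alloc().buffer(bytes.length).writeBytes(bytes));
    }
}
```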

View File

@@ -0,0 +1,91 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.handler;
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.handler.timeout.IdleState;
import io.netty.handler.timeout.IdleStateEvent;
import lombok.extern.slf4j.Slf4j;
import org.framework.lazy.cloud.network.heartbeat.common.constant.TcpMessageType;
import org.framework.lazy.cloud.network.heartbeat.common.NettyProxyMsg;
import org.framework.lazy.cloud.network.heartbeat.common.adapter.ChannelTypeAdapter;
import org.framework.lazy.cloud.network.heartbeat.common.utils.ChannelAttributeKeyUtils;
/**
* Client visitor transfer channel handler
*/
@Slf4j
public class NettyTcpClientPermeateServerTransferHandler extends SimpleChannelInboundHandler<NettyProxyMsg> {
private final ChannelTypeAdapter channelTypeAdapter;
public NettyTcpClientPermeateServerTransferHandler(ChannelTypeAdapter channelTypeAdapter) {
this.channelTypeAdapter = channelTypeAdapter;
}
@Override
public void channelActive(ChannelHandlerContext ctx) throws Exception {
super.channelActive(ctx);
}
@Override
public void channelRead0(ChannelHandlerContext ctx, NettyProxyMsg nettyProxyMsg) throws Exception {
Channel channel = ctx.channel();
channelTypeAdapter.handler(channel, nettyProxyMsg);
}
@Override
public void channelInactive(ChannelHandlerContext ctx) throws Exception {
String clientId = ChannelAttributeKeyUtils.getClientId(ctx.channel());
String visitorId = ChannelAttributeKeyUtils.getVisitorId(ctx.channel());
// close the visitor
Channel nextChannel = ChannelAttributeKeyUtils.getNextChannel(ctx.channel());
if (nextChannel != null) {
// report closing the server-to-client real channel
NettyProxyMsg closeVisitorMsg = new NettyProxyMsg();
closeVisitorMsg.setType(TcpMessageType.TCP_TCP_REPORT_CLIENT_PERMEATE_SERVER_TRANSFER_CLOSE);
closeVisitorMsg.setVisitorId(visitorId);
nextChannel.writeAndFlush(closeVisitorMsg);
}
super.channelInactive(ctx);
}
@Override
public void channelWritabilityChanged(ChannelHandlerContext ctx) throws Exception {
if (ctx.channel().isWritable()) {
log.debug("Channel is writable again");
// resume operations that were paused earlier, e.g. writing data
} else {
log.debug("Channel is not writable");
// pause writes until the channel becomes writable again
}
log.info("channelWritabilityChanged!");
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
super.exceptionCaught(ctx, cause);
}
/**
* Heartbeat handling: a heartbeat request is sent every 4 seconds.
*/
@Override
public void userEventTriggered(ChannelHandlerContext ctx, Object obj) throws Exception {
if (obj instanceof IdleStateEvent event) {
if (IdleState.WRITER_IDLE.equals(event.state())) { // if the write channel is idle, send a heartbeat
NettyProxyMsg nettyMsg = new NettyProxyMsg();
nettyMsg.setType(TcpMessageType.TCP_TYPE_HEARTBEAT);
ctx.writeAndFlush(nettyMsg);// send the heartbeat
} else if (event.state() == IdleState.WRITER_IDLE) { // if write-idle is detected, close the connection
// offline / staging notification
ctx.close();
}
} else {
super.userEventTriggered(ctx, obj);
}
}
}

View File

@@ -0,0 +1,147 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.handler;
import io.netty.buffer.ByteBuf;
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelOption;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.util.internal.StringUtil;
import lombok.extern.slf4j.Slf4j;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.NettyClientPermeateServerVisitor;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.advanced.ClientHandleTcpDistributeClientTransferServerPermeateChannelConnectionSuccessfulTypeAdvanced;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.socket.NettyTcpClientPermeateServerVisitorTransferSocket;
import org.framework.lazy.cloud.network.heartbeat.common.constant.TcpMessageType;
import org.framework.lazy.cloud.network.heartbeat.common.NettyProxyMsg;
import org.framework.lazy.cloud.network.heartbeat.common.utils.ChannelAttributeKeyUtils;
import java.util.UUID;
@Slf4j
public class NettyTcpClientPermeateServerVisitorHandler extends SimpleChannelInboundHandler<ByteBuf> {
private final NettyClientPermeateServerVisitor nettyClientPermeateServerVisitor;
// private final NettyChannelPool nettyChannelPool = new DefaultNettyChannelPool(10);
public NettyTcpClientPermeateServerVisitorHandler(NettyClientPermeateServerVisitor nettyClientPermeateServerVisitor) {
this.nettyClientPermeateServerVisitor = nettyClientPermeateServerVisitor;
}
/**
* @param ctx
* @throws Exception
* @see NettyTcpClientPermeateServerVisitorTransferSocket
* @see ClientHandleTcpDistributeClientTransferServerPermeateChannelConnectionSuccessfulTypeAdvanced
*/
@Override
public void channelActive(ChannelHandlerContext ctx) throws Exception {
// a visitor connected to the proxy server
Channel visitorChannel = ctx.channel();
// do not read visitor data yet
visitorChannel.config().setOption(ChannelOption.AUTO_READ, false);
// generate the visitor ID
String visitorId = UUID.randomUUID().toString();
Integer visitorPort = nettyClientPermeateServerVisitor.getVisitorPort();
log.info("this channel with visitor port:{} use visitorId:{}", visitorPort, visitorId);
ChannelAttributeKeyUtils.buildVisitorId(visitorChannel, visitorId);
// check whether a usable channel exists; if not, create a new one
// Channel transferChannel = nettyChannelPool.availableChannel(visitorId);
// create the visitor-to-server transfer channel
NettyTcpClientPermeateServerVisitorTransferSocket.buildTransferServer(nettyClientPermeateServerVisitor,visitorChannel);
log.debug("[client permeate server] visitor port connected successfully, visitorId:{}", visitorId);
super.channelActive(ctx);
}
@Override
public void channelRead0(ChannelHandlerContext ctx, ByteBuf buf) {
// the visitor channel
Channel visitorChannel = ctx.channel();
String visitorId = ChannelAttributeKeyUtils.getVisitorId(visitorChannel);
Channel nextChannel = ChannelAttributeKeyUtils.getNextChannel(visitorChannel);
byte[] bytes = new byte[buf.readableBytes()];
buf.readBytes(bytes);
// fetch the client channel, then push the data down
log.debug("[client permeate server] visitor port received data: {}", new String(bytes));
// use the visitor's transfer channel
Integer visitorPort = nettyClientPermeateServerVisitor.getVisitorPort();
String clientId = nettyClientPermeateServerVisitor.getNettyClientProperties().getClientId();
NettyProxyMsg nettyProxyMsg = new NettyProxyMsg();
nettyProxyMsg.setType(TcpMessageType.TCP_REPORT_CLIENT_PERMEATE_SERVER_TRANSFER);
nettyProxyMsg.setVisitorId(visitorId);
nettyProxyMsg.setClientId(clientId);
nettyProxyMsg.setVisitorPort(visitorPort);
nettyProxyMsg.setData(bytes);
nextChannel.writeAndFlush(nettyProxyMsg);
// handle the visitor traffic
log.debug("[client permeate server] visitor port sent data successfully, visitorId:{}", visitorId);
}
@Override
public void channelInactive(ChannelHandlerContext ctx) throws Exception {
Channel channel = ctx.channel();
String visitorId = ChannelAttributeKeyUtils.getVisitorId(channel);
log.info("channel inactive:{}", visitorId);
if (StringUtil.isNullOrEmpty(visitorId)) {
super.channelInactive(ctx);
return;
}
Channel nextChannel = ChannelAttributeKeyUtils.getNextChannel(channel);
// the transfer channel's auto read/write is opened, then the transfer channel is closed
if (nextChannel != null && nextChannel.isActive()) {
// tell the server to close the visitor channel and the real channel
NettyProxyMsg myMsg = new NettyProxyMsg();
myMsg.setType(TcpMessageType.TCP_TCP_REPORT_CLIENT_PERMEATE_SERVER_TRANSFER_CLOSE);
myMsg.setVisitorId(visitorId);
nextChannel.writeAndFlush(myMsg);
// the transfer channel
nextChannel.close();
log.debug("closed the visitor channel and the real channel with visitorId:{}", visitorId);
} else {
log.debug("channel inactive:{}", nextChannel);
}
// close the visitor channel
channel.close();
log.warn("[client permeate server] visitor port disconnected, visitorId:{}", visitorId);
super.channelInactive(ctx);
}
@Override
public void channelWritabilityChanged(ChannelHandlerContext ctx) throws Exception {
if (ctx.channel().isWritable()) {
log.info("Channel is writable again");
// resume operations that were paused earlier, e.g. writing data
} else {
log.info("Channel is not writable");
// pause writes until the channel becomes writable again
}
log.info("channelWritabilityChanged!");
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
log.error("exceptionCaught");
Channel channel = ctx.channel();
String clientId = ChannelAttributeKeyUtils.getClientId(channel);
String visitorId = ChannelAttributeKeyUtils.getVisitorId(channel);
// use the transfer channel to distribute a close-visitor message
Channel nextChannel = ChannelAttributeKeyUtils.getNextChannel(ctx.channel());
if (nextChannel != null) {
// distribute the close-visitor message
NettyProxyMsg closeRealClient = new NettyProxyMsg();
closeRealClient.setType(TcpMessageType.TCP_TCP_REPORT_CLIENT_PERMEATE_SERVER_TRANSFER_CLOSE);
closeRealClient.setClientId(clientId);
closeRealClient.setVisitorId(visitorId);
nextChannel.writeAndFlush(closeRealClient);
}
ctx.close();
}
}

View File

@@ -1,46 +1,46 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.handler;
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.handler;
import io.netty.buffer.ByteBuf;
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelOption;
import io.netty.channel.SimpleChannelInboundHandler;
import lombok.extern.slf4j.Slf4j;
import org.framework.lazy.cloud.network.heartbeat.common.MessageType;
import org.framework.lazy.cloud.network.heartbeat.common.NettyCommunicationIdContext;
import org.framework.lazy.cloud.network.heartbeat.common.constant.TcpMessageType;
import org.framework.lazy.cloud.network.heartbeat.common.NettyByteBuf;
import org.framework.lazy.cloud.network.heartbeat.common.NettyProxyMsg;
import org.framework.lazy.cloud.network.heartbeat.common.utils.ChannelAttributeKeyUtils;
import org.wu.framework.core.utils.ObjectUtils;
/**
* Data responses returned from the real server on the client side
*/
@Slf4j
public class NettyClientRealHandler extends SimpleChannelInboundHandler<ByteBuf> {
public class NettyTcpServerPermeateClientRealHandler extends SimpleChannelInboundHandler<NettyByteBuf> {
@Override
public void channelRead0(ChannelHandlerContext ctx, ByteBuf buf) throws Exception {
public void channelRead0(ChannelHandlerContext ctx,NettyByteBuf nettyByteBuf) {
// the client sent real data to the proxy
byte[] bytes = new byte[buf.readableBytes()];
buf.readBytes(bytes);
byte[] bytes = nettyByteBuf.getData();
log.debug("bytes.length:{}",bytes.length);
log.debug("接收客户端真实服务数据:{}", new String(bytes));
String visitorId = ChannelAttributeKeyUtils.getVisitorId(ctx.channel());
Integer visitorPort = ChannelAttributeKeyUtils.getVisitorPort(ctx.channel());
String clientId = ChannelAttributeKeyUtils.getClientId(ctx.channel());
// visitor transfer channel: report that the server-side proxying completed
Channel visitorChannel = NettyCommunicationIdContext.getVisitor(visitorId);
Channel visitor = ChannelAttributeKeyUtils.getNextChannel(ctx.channel());
NettyProxyMsg returnMessage = new NettyProxyMsg();
returnMessage.setType(MessageType.REPORT_CLIENT_TRANSFER);
returnMessage.setType(TcpMessageType.TCP_REPORT_CLIENT_TRANSFER);
returnMessage.setVisitorId(visitorId);
returnMessage.setClientId(clientId);
returnMessage.setVisitorPort(visitorPort);
returnMessage.setData(bytes);
visitorChannel.writeAndFlush(returnMessage);
visitor.writeAndFlush(returnMessage);
}
@Override
public void channelActive(ChannelHandlerContext ctx) throws Exception {
super.channelActive(ctx);
@@ -51,11 +51,11 @@ public class NettyClientRealHandler extends SimpleChannelInboundHandler<ByteBuf>
String clientId = ChannelAttributeKeyUtils.getClientId(ctx.channel());
String visitorId = ChannelAttributeKeyUtils.getVisitorId(ctx.channel());
// the client's real transfer channel
Channel visitor = NettyCommunicationIdContext.getVisitor(visitorId);
Channel visitor = ChannelAttributeKeyUtils.getNextChannel(ctx.channel());
if (visitor != null) {
// report that this client's visitor channel should be closed
NettyProxyMsg closeVisitorMsg = new NettyProxyMsg();
closeVisitorMsg.setType(MessageType.REPORT_SINGLE_CLIENT_CLOSE_VISITOR);
closeVisitorMsg.setType(TcpMessageType.TCP_REPORT_SERVICE_PERMEATE_CLIENT_CLIENT_CLOSE_VISITOR);
closeVisitorMsg.setVisitorId(visitorId);
visitor.writeAndFlush(closeVisitorMsg);
}
@@ -65,6 +65,7 @@ public class NettyClientRealHandler extends SimpleChannelInboundHandler<ByteBuf>
@Override
public void channelWritabilityChanged(ChannelHandlerContext ctx) throws Exception {
// String vid = ctx.channel().attr(Constant.VID).get();
// if (StringUtil.isNullOrEmpty(vid)) {
// super.channelWritabilityChanged(ctx);
@@ -75,7 +76,19 @@ public class NettyClientRealHandler extends SimpleChannelInboundHandler<ByteBuf>
// proxyChannel.config().setOption(ChannelOption.AUTO_READ, ctx.channel().isWritable());
// }
super.channelWritabilityChanged(ctx);
// fetch the visitor's transfer channel
String visitorId = ChannelAttributeKeyUtils.getVisitorId(ctx.channel());
if(ObjectUtils.isEmpty(visitorId)) {
super.channelWritabilityChanged(ctx);
return;
}
Channel visitorCommunicationChannel = ChannelAttributeKeyUtils.getNextChannel(ctx.channel());
if (visitorCommunicationChannel != null) {
log.debug("visitorId:{} transfer AUTO_READ:{} ",visitorId,ctx.channel().isWritable());
visitorCommunicationChannel.config().setOption(ChannelOption.AUTO_READ, ctx.channel().isWritable());
}
}
@Override

View File

@@ -0,0 +1,77 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.handler;
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelOption;
import io.netty.channel.SimpleChannelInboundHandler;
import lombok.extern.slf4j.Slf4j;
import org.framework.lazy.cloud.network.heartbeat.common.*;
import org.framework.lazy.cloud.network.heartbeat.common.adapter.ChannelTypeAdapter;
import org.framework.lazy.cloud.network.heartbeat.common.constant.TcpMessageType;
import org.framework.lazy.cloud.network.heartbeat.common.utils.ChannelAttributeKeyUtils;
import org.wu.framework.core.utils.ObjectUtils;
/**
* Client visitor transfer channel handler
*/
@Slf4j
public class NettyTcpServerPermeateClientTransferHandler extends SimpleChannelInboundHandler<NettyProxyMsg> {
private final ChannelTypeAdapter channelTypeAdapter;
public NettyTcpServerPermeateClientTransferHandler(ChannelTypeAdapter channelTypeAdapter) {
this.channelTypeAdapter = channelTypeAdapter;
}
@Override
public void channelActive(ChannelHandlerContext ctx) throws Exception {
super.channelActive(ctx);
}
@Override
public void channelRead0(ChannelHandlerContext ctx, NettyProxyMsg nettyProxyMsg) throws Exception {
Channel channel = ctx.channel();
channelTypeAdapter.handler(channel, nettyProxyMsg);
}
@Override
public void channelInactive(ChannelHandlerContext ctx) throws Exception {
String clientId = ChannelAttributeKeyUtils.getClientId(ctx.channel());
String visitorId = ChannelAttributeKeyUtils.getVisitorId(ctx.channel());
// Close the visitor
Channel nextChannel = ChannelAttributeKeyUtils.getNextChannel(ctx.channel());
if (nextChannel != null) {
// Report closing this client's visitor channel
NettyProxyMsg closeVisitorMsg = new NettyProxyMsg();
closeVisitorMsg.setType(TcpMessageType.TCP_REPORT_SERVICE_PERMEATE_CLIENT_CLIENT_CLOSE_VISITOR);
closeVisitorMsg.setVisitorId(visitorId);
nextChannel.writeAndFlush(closeVisitorMsg);
}
super.channelInactive(ctx);
}
@Override
public void channelWritabilityChanged(ChannelHandlerContext ctx) throws Exception {
// Handle the client's local real channel
String visitorId = ChannelAttributeKeyUtils.getVisitorId(ctx.channel());
if(ObjectUtils.isEmpty(visitorId)) {
super.channelWritabilityChanged(ctx);
return;
}
// Channel realChannel = NettyRealIdContext.getReal(visitorId);
Channel realChannel = ChannelAttributeKeyUtils.getNextChannel(ctx.channel());
if (realChannel != null) {
log.debug("visitorId:{} transfer AUTO_READ:{} ",visitorId,ctx.channel().isWritable());
realChannel.config().setOption(ChannelOption.AUTO_READ, ctx.channel().isWritable());
}
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
super.exceptionCaught(ctx, cause);
}
}
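
The channelWritabilityChanged override above is the backpressure link of the tunnel: when this transfer channel's outbound buffer passes its high water mark, AUTO_READ is switched off on the paired channel so it stops pulling data until the buffer drains. A minimal standalone sketch of that pattern, assuming a hypothetical PEER attribute rather than this project's ChannelAttributeKeyUtils:

```java
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelOption;
import io.netty.util.AttributeKey;

/**
 * Backpressure sketch: mirror this channel's writability onto a paired channel by
 * toggling AUTO_READ, so the peer stops reading while our write buffer is full.
 * The PEER attribute is hypothetical and stands in for the project's next-channel binding.
 */
public class BackpressureRelayHandler extends ChannelInboundHandlerAdapter {

    public static final AttributeKey<Channel> PEER = AttributeKey.valueOf("peerChannel");

    @Override
    public void channelWritabilityChanged(ChannelHandlerContext ctx) throws Exception {
        Channel peer = ctx.channel().attr(PEER).get();
        if (peer != null) {
            // resume reading on the peer only when this channel can accept more writes
            peer.config().setOption(ChannelOption.AUTO_READ, ctx.channel().isWritable());
        }
        super.channelWritabilityChanged(ctx);
    }
}
```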

View File

@@ -1,4 +1,4 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.handler;
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.handler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
@@ -7,7 +7,7 @@ import io.netty.handler.timeout.IdleStateEvent;
import java.util.Date;
public class HeartBeatClientHandler extends ChannelInboundHandlerAdapter {
public class TcpHeartBeatClientHandler extends ChannelInboundHandlerAdapter {
private final int lossConnectCount = 0;
@Override

View File

@@ -0,0 +1,175 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.socket;
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.*;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioSocketChannel;
import lombok.extern.slf4j.Slf4j;
import org.framework.lazy.cloud.network.heartbeat.client.config.NettyClientProperties;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.filter.NettyTcpClientPermeateClientRealFilter;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.filter.NettyTcpClientPermeateClientTransferRealFilter;
import org.framework.lazy.cloud.network.heartbeat.common.*;
import org.framework.lazy.cloud.network.heartbeat.common.adapter.ChannelTypeAdapter;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.HandleChannelTypeAdvanced;
import org.framework.lazy.cloud.network.heartbeat.common.constant.TcpMessageType;
import org.framework.lazy.cloud.network.heartbeat.common.utils.ChannelAttributeKeyUtils;
import java.util.List;
import java.util.concurrent.TimeUnit;
/**
* Client connects to the real service
*/
@Slf4j
public class NettyTcpClientPermeateClientRealSocket {
static EventLoopGroup eventLoopGroup = new NioEventLoopGroup();
public static void buildRealServer(String clientId,
String clientTargetIp,
Integer clientTargetPort,
Integer visitorPort,
String visitorId,
NettyClientProperties nettyClientProperties,
List<HandleChannelTypeAdvanced> handleChannelTypeAdvancedList) {
try {
Bootstrap bootstrap = new Bootstrap();
bootstrap.group(eventLoopGroup).channel(NioSocketChannel.class)
// Receive buffer: 2 MB
.option(ChannelOption.SO_RCVBUF, 2048 * 1024)
// Send buffer: 1 MB
.option(ChannelOption.SO_SNDBUF, 1024 * 1024)
// .option(ChannelOption.TCP_NODELAY, false)
.option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 1000 * 60)// connect timeout: 60 seconds
// .option(ChannelOption.SO_BACKLOG, 128)// length of the server accept queue, default 128
// .option(ChannelOption.RCVBUF_ALLOCATOR, new NettyRecvByteBufAllocator(1024 * 1024))// allocator for the Channel's receive buffers, default AdaptiveRecvByteBufAllocator.DEFAULT
.option(ChannelOption.WRITE_BUFFER_WATER_MARK, new WriteBufferWaterMark(1024 * 1024, 1024 * 1024 * 2))
.handler(new NettyTcpClientPermeateClientRealFilter())
;
bootstrap.connect(clientTargetIp, clientTargetPort).addListener((ChannelFutureListener) future -> {
if (future.isSuccess()) {
// Connected to the real service successfully; keep AUTO_READ false until the visitor connection succeeds, then set it to true
Channel realChannel = future.channel();
realChannel.config().setOption(ChannelOption.AUTO_READ, false);
log.info("访客通过 客户端:【{}】,visitorId:{},绑定本地服务,IP:{},端口:{} 新建通道成功", clientId,visitorId, clientTargetIp, clientTargetPort);
// Client real channel
NettyRealIdContext.pushReal(realChannel, visitorId);
// Bind the visitor ID to the real channel's attributes
ChannelAttributeKeyUtils.buildVisitorId(realChannel, visitorId);
ChannelAttributeKeyUtils.buildClientId(realChannel, clientId);
ChannelAttributeKeyUtils.buildVisitorPort(realChannel, visitorPort);
// Connect to the server, then bind the channels
// Create a new channel to handle the transfer
newVisitorConnect2Server(clientId,
clientTargetIp,
clientTargetPort,
visitorPort,
visitorId,realChannel, nettyClientProperties, handleChannelTypeAdvancedList);
} else {
log.error("客户:【{}】,无法连接当前网络内的目标IP【{}】,目标端口:【{}】", clientId, clientTargetIp, clientTargetPort);
}
});
} catch (Exception e) {
e.printStackTrace();
}
}
/**
* Create the visitor connection to the server
*
* @param nettyClientProperties server connection properties
* @param handleChannelTypeAdvancedList channel-type handlers
* @throws InterruptedException on interruption
*/
protected static void newVisitorConnect2Server(String clientId,
String clientTargetIp,
Integer clientTargetPort,
Integer visitorPort,
String visitorId,
Channel realChannel,
NettyClientProperties nettyClientProperties,
List<HandleChannelTypeAdvanced> handleChannelTypeAdvancedList) throws InterruptedException {
Bootstrap bootstrap = new Bootstrap();
bootstrap.group(eventLoopGroup)
.channel(NioSocketChannel.class)
.option(ChannelOption.SO_KEEPALIVE, true)
// Receive buffer: 2 MB
.option(ChannelOption.SO_RCVBUF, 2048 * 1024)
// Send buffer: 1 MB
.option(ChannelOption.SO_SNDBUF, 1024 * 1024)
// .option(ChannelOption.TCP_NODELAY, false)
.option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 1000 * 60)// connect timeout: 60 seconds
// .option(ChannelOption.SO_BACKLOG, 256)// length of the server accept queue, default 128
// .option(ChannelOption.RCVBUF_ALLOCATOR, new NettyRecvByteBufAllocator(1024 * 1024))// allocator for the Channel's receive buffers, default AdaptiveRecvByteBufAllocator.DEFAULT
.option(ChannelOption.WRITE_BUFFER_WATER_MARK, new WriteBufferWaterMark(1024 * 1024, 1024 * 1024 * 2))
.handler(new NettyTcpClientPermeateClientTransferRealFilter(new ChannelTypeAdapter(handleChannelTypeAdvancedList)))
;
String inetHost = nettyClientProperties.getInetHost();
int inetPort = nettyClientProperties.getInetPort();
// local client id
// String clientId = nettyClientProperties.getClientId();
// The client creates a new visitor channel to the server IP:{} and port:{}
log.info("client creates a new visitor channel to connect to server IP: {}, connecting to server port: {} with visitorId:{} & clientId:{}", inetHost, inetPort,visitorId,clientId);
ChannelFuture future = bootstrap.connect(inetHost, inetPort);
future.addListener((ChannelFutureListener) futureListener -> {
Channel transferChannel = futureListener.channel();
if (futureListener.isSuccess()) {
realChannel.config().setOption(ChannelOption.AUTO_READ, true);
// Notify the server that the visitor connection succeeded
NettyProxyMsg nettyProxyMsg = new NettyProxyMsg();
nettyProxyMsg.setVisitorId(visitorId);
nettyProxyMsg.setClientId(clientId);
nettyProxyMsg.setClientTargetIp(clientTargetIp);
nettyProxyMsg.setClientTargetPort(clientTargetPort);
nettyProxyMsg.setVisitorPort(visitorPort);
nettyProxyMsg.setType(TcpMessageType.TCP_REPORT_CLIENT_PERMEATE_CLIENT_TRANSFER_CHANNEL_INIT_SUCCESSFUL);
transferChannel.writeAndFlush(nettyProxyMsg);
ChannelAttributeKeyUtils.buildNextChannel(transferChannel, realChannel);
ChannelAttributeKeyUtils.buildNextChannel(realChannel, transferChannel);
// Bind the client's real communication channel
ChannelAttributeKeyUtils.buildVisitorId(transferChannel, visitorId);
ChannelAttributeKeyUtils.buildClientId(transferChannel, clientId);
ChannelAttributeKeyUtils.buildVisitorPort(transferChannel, visitorPort);
} else {
log.info("无法连接到服务端....");
eventLoopGroup.schedule(() -> {
try {
newVisitorConnect2Server(clientId,
clientTargetIp,
clientTargetPort,
visitorPort,
visitorId,realChannel, nettyClientProperties, handleChannelTypeAdvancedList);
} catch (Exception e) {
e.printStackTrace();
}
}, 2, TimeUnit.SECONDS);
}
});
}
}
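
buildRealServer and newVisitorConnect2Server above set up the two halves of the relay: a connection to the local target service (kept at AUTO_READ=false) and a transfer connection back to the server; once the transfer side is up, the two channels are linked in both directions and the real channel is allowed to read. A hedged sketch of that pairing, using an illustrative attribute instead of the project's ChannelAttributeKeyUtils.buildNextChannel:

```java
import io.netty.channel.Channel;
import io.netty.channel.ChannelOption;
import io.netty.util.AttributeKey;

/**
 * Illustrative pairing of a "real" (local service) channel with a "transfer" (server) channel.
 * The NEXT attribute is a stand-in for the project's next-channel binding.
 */
final class ChannelPairing {

    private static final AttributeKey<Channel> NEXT = AttributeKey.valueOf("nextChannel");

    private ChannelPairing() {
    }

    /** Link both channels to each other, then open the read gate on the real channel. */
    static void link(Channel realChannel, Channel transferChannel) {
        realChannel.attr(NEXT).set(transferChannel);
        transferChannel.attr(NEXT).set(realChannel);
        // bytes read from the local service can now be relayed through the transfer channel
        realChannel.config().setOption(ChannelOption.AUTO_READ, true);
    }

    /** Resolve the peer that data read from the given channel should be forwarded to. */
    static Channel peerOf(Channel channel) {
        return channel.attr(NEXT).get();
    }
}
```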

View File

@@ -0,0 +1,288 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.socket;
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.*;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import lombok.Getter;
import lombok.extern.slf4j.Slf4j;
import org.framework.lazy.cloud.network.heartbeat.client.config.NettyClientProperties;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.NettyClientPermeateClientVisitor;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.filter.NettyTcpClientPermeateClientVisitorFilter;
import org.framework.lazy.cloud.network.heartbeat.common.NettyClientVisitorContext;
import org.framework.lazy.cloud.network.heartbeat.common.NettyVisitorPortContext;
import org.framework.lazy.cloud.network.heartbeat.common.adapter.ChannelFlowAdapter;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.HandleChannelTypeAdvanced;
import org.framework.lazy.cloud.network.heartbeat.common.factory.EventLoopGroupFactory;
import org.framework.lazy.cloud.network.heartbeat.common.socket.PermeateVisitorSocket;
import org.framework.lazy.cloud.network.heartbeat.common.utils.ChannelAttributeKeyUtils;
import java.util.List;
/**
* Intranet penetration: client-to-client visitor channel
*
* @see NettyVisitorPortContext
* @see NettyClientVisitorContext
*/
@Slf4j
public class NettyTcpClientPermeateClientVisitorSocket implements PermeateVisitorSocket {
private final NettyTcpClientPermeateClientVisitorFilter nettyTcpClientPermeateClientVisitorFilter;
@Getter
private final String clientId;
@Getter
private final int visitorPort;
public NettyTcpClientPermeateClientVisitorSocket(NettyTcpClientPermeateClientVisitorFilter nettyTcpClientPermeateClientVisitorFilter, String clientId, int visitorPort) {
this.nettyTcpClientPermeateClientVisitorFilter = nettyTcpClientPermeateClientVisitorFilter;
this.clientId = clientId;
this.visitorPort = visitorPort;
}
/**
* Start the local visitor port used by the penetration tunnel
*
*/
@Override
public void start() {
Channel visitor = NettyVisitorPortContext.getVisitorChannel(visitorPort);
if (visitor == null) {
ServerBootstrap bootstrap = new ServerBootstrap();
EventLoopGroup bossGroup = EventLoopGroupFactory.createBossGroup();
EventLoopGroup workerGroup = EventLoopGroupFactory.createWorkerGroup();
bootstrap
.group(bossGroup, workerGroup)
.channel(NioServerSocketChannel.class)
// Receive buffer: 2 MB
.childOption(ChannelOption.SO_RCVBUF, 2048 * 1024)
// Send buffer: 1 MB
.childOption(ChannelOption.SO_SNDBUF, 1024 * 1024)
.childOption(ChannelOption.SO_KEEPALIVE, true)
// .childOption(ChannelOption.TCP_NODELAY, false)
.childOption(ChannelOption.CONNECT_TIMEOUT_MILLIS, 1000 * 60)// connect timeout: 60 seconds
// .childOption(ChannelOption.RCVBUF_ALLOCATOR, new NettyRecvByteBufAllocator(1024 * 1024))// allocator for the Channel's receive buffers, default AdaptiveRecvByteBufAllocator.DEFAULT
.childOption(ChannelOption.WRITE_BUFFER_WATER_MARK, new WriteBufferWaterMark(1024 * 1024, 1024 * 1024 * 2))
.childHandler(nettyTcpClientPermeateClientVisitorFilter);
try {
bootstrap.bind(visitorPort).sync().addListener((ChannelFutureListener) future -> {
if (future.isSuccess()) {
Channel channel = future.channel();
ChannelAttributeKeyUtils.buildVisitorPort(channel,visitorPort);
// This listener runs asynchronously
log.info("客户端:[{}]访客端口:[{}] 开启", clientId, visitorPort);
NettyVisitorPortContext.pushVisitorChannel(visitorPort, channel);
} else {
log.error("客户端:[{}]访客端口:[{}]绑定失败", clientId, visitorPort);
}
});
NettyVisitorPortContext.pushVisitorSocket(visitorPort, this);
} catch (InterruptedException e) {
throw new RuntimeException(e);
}
} else {
log.warn("客户端渗透服务端:[{}]访客端口:[{}] 重复启动", clientId, visitorPort);
}
}
@Override
public void close() {
Channel visitor = NettyVisitorPortContext.getVisitorChannel(visitorPort);
if (visitor != null) {
// close channel
visitor.close();
// remove visitor
NettyVisitorPortContext.removeVisitorChannel(visitorPort);
// remove this visitor socket
NettyVisitorPortContext.removeVisitorSocket(visitorPort);
log.warn("关闭客户端 :【{}】 访客户端口:【{}】", clientId, visitorPort);
} else {
log.warn("关闭访客端口失败 未找到客户端通道 客户端 :【{}】 访客户端口:【{}】", clientId, visitorPort);
}
}
public static final class NettyClientPermeateClientVisitorSocketBuilder {
/**
* Client ID
*/
private String clientId;
/**
* Client target address
*/
private String clientTargetIp;
/**
* Client target port
*/
private Integer clientTargetPort;
/**
* Visitor port
*/
private Integer visitorPort;
/**
* Visitor ID
*/
private String visitorId;
/**
* Target client ID
*/
private String toClientId;
/**
* Flow adapter
*/
private ChannelFlowAdapter channelFlowAdapter;
/**
* Server address information
*/
private NettyClientProperties nettyClientProperties;
/**
* Handlers
*/
private List<HandleChannelTypeAdvanced> handleChannelTypeAdvancedList;
public static NettyClientPermeateClientVisitorSocketBuilder builder() {
return new NettyClientPermeateClientVisitorSocketBuilder();
}
/**
* Set the client ID
*
* @param clientId client ID
* @return this builder
*/
public NettyClientPermeateClientVisitorSocketBuilder builderClientId(String clientId) {
this.clientId = clientId;
return this;
}
/**
* Bind the client target IP
*
* @param clientTargetIp client target IP
* @return this builder
*/
public NettyClientPermeateClientVisitorSocketBuilder builderClientTargetIp(String clientTargetIp) {
this.clientTargetIp = clientTargetIp;
return this;
}
/**
* Bind the client target port
*
* @param clientTargetPort client target port
* @return this builder
*/
public NettyClientPermeateClientVisitorSocketBuilder builderClientTargetPort(Integer clientTargetPort) {
this.clientTargetPort = clientTargetPort;
return this;
}
/**
* Bind the visitor port
*
* @param visitorPort visitor port
* @return this builder
*/
public NettyClientPermeateClientVisitorSocketBuilder builderVisitorPort(Integer visitorPort) {
this.visitorPort = visitorPort;
return this;
}
/**
* Bind the flow adapter
*
* @param channelFlowAdapter flow adapter
* @return this builder
*/
public NettyClientPermeateClientVisitorSocketBuilder builderChannelFlowAdapter(ChannelFlowAdapter channelFlowAdapter) {
this.channelFlowAdapter = channelFlowAdapter;
return this;
}
/**
* Bind the channel-type handlers
*
* @param handleChannelTypeAdvancedList channel-type handlers
* @return this builder
*/
public NettyClientPermeateClientVisitorSocketBuilder builderHandleChannelTypeAdvancedList(List<HandleChannelTypeAdvanced> handleChannelTypeAdvancedList) {
this.handleChannelTypeAdvancedList = handleChannelTypeAdvancedList;
return this;
}
/**
* Server address information
*
* @param nettyClientProperties server connection properties
* @return this builder
*/
public NettyClientPermeateClientVisitorSocketBuilder builderNettyClientProperties(NettyClientProperties nettyClientProperties) {
this.nettyClientProperties = nettyClientProperties;
return this;
}
/**
* Bind the visitor ID
*
* @param visitorId visitor ID
* @return this builder
*/
public NettyClientPermeateClientVisitorSocketBuilder builderVisitorId(String visitorId) {
this.visitorId = visitorId;
return this;
}
/**
* Target client ID
*
* @param toClientId target client ID
* @return this builder
*/
public NettyClientPermeateClientVisitorSocketBuilder builderToClientId(String toClientId) {
this.toClientId = toClientId;
return this;
}
public NettyTcpClientPermeateClientVisitorSocket build() {
if (clientTargetIp == null) {
throw new IllegalArgumentException("clientTargetIp must not be null");
}
if (clientTargetPort == null) {
throw new IllegalArgumentException("clientTargetPort must not be null");
}
if (visitorPort == null) {
throw new IllegalArgumentException("visitorPort must not be null");
}
NettyClientPermeateClientVisitor nettyClientPermeateClientVisitor = new NettyClientPermeateClientVisitor();
nettyClientPermeateClientVisitor.setFromClientId(nettyClientProperties.getClientId());
nettyClientPermeateClientVisitor.setToClientId(toClientId);
nettyClientPermeateClientVisitor.setTargetIp(clientTargetIp);
nettyClientPermeateClientVisitor.setTargetPort(clientTargetPort);
nettyClientPermeateClientVisitor.setVisitorPort(visitorPort);
nettyClientPermeateClientVisitor.setNettyClientProperties(nettyClientProperties);
nettyClientPermeateClientVisitor.setChannelFlowAdapter(channelFlowAdapter);
nettyClientPermeateClientVisitor.setHandleChannelTypeAdvancedList(handleChannelTypeAdvancedList);
NettyTcpClientPermeateClientVisitorFilter visitorFilter = new NettyTcpClientPermeateClientVisitorFilter(nettyClientPermeateClientVisitor);
return new NettyTcpClientPermeateClientVisitorSocket(visitorFilter, clientId, visitorPort);
}
}
}
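
A minimal usage sketch of the builder above for opening a client-to-client tunnel; the IDs, addresses and ports are placeholders, and the properties and handler list are assumed to come from the application context:

```java
// Usage sketch only; all concrete values below are placeholders.
void openClientToClientTunnel(NettyClientProperties nettyClientProperties,
                              List<HandleChannelTypeAdvanced> handleChannelTypeAdvancedList) {
    NettyTcpClientPermeateClientVisitorSocket visitorSocket =
            NettyTcpClientPermeateClientVisitorSocket.NettyClientPermeateClientVisitorSocketBuilder.builder()
                    .builderClientId("client-a")            // local client that exposes the visitor port
                    .builderToClientId("client-b")          // client whose network is being reached
                    .builderClientTargetIp("192.168.1.10")  // target host inside client-b's network
                    .builderClientTargetPort(3306)          // target port inside client-b's network
                    .builderVisitorPort(13306)              // local port visitors connect to
                    .builderNettyClientProperties(nettyClientProperties)
                    .builderHandleChannelTypeAdvancedList(handleChannelTypeAdvancedList)
                    .build();
    visitorSocket.start(); // binds the visitor port, or logs a warning if it is already running
}
```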

View File

@@ -0,0 +1,101 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.socket;
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.*;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioSocketChannel;
import lombok.extern.slf4j.Slf4j;
import org.framework.lazy.cloud.network.heartbeat.client.config.NettyClientProperties;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.NettyClientPermeateClientVisitor;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.filter.NettyTcpClientPermeateClientTransferFilter;
import org.framework.lazy.cloud.network.heartbeat.common.constant.TcpMessageType;
import org.framework.lazy.cloud.network.heartbeat.common.NettyProxyMsg;
import org.framework.lazy.cloud.network.heartbeat.common.adapter.ChannelTypeAdapter;
import org.framework.lazy.cloud.network.heartbeat.common.utils.ChannelAttributeKeyUtils;
import java.util.concurrent.TimeUnit;
/**
* Client-to-client penetration transfer channel (connects through the server)
*/
@Slf4j
public class NettyTcpClientPermeateClientVisitorTransferSocket {
static EventLoopGroup eventLoopGroup = new NioEventLoopGroup();
/**
* Connect the communication channel to the server
*/
public static void buildTransferServer(NettyClientPermeateClientVisitor nettyClientPermeateClientVisitor, Channel visitorChannel) {
Bootstrap bootstrap = new Bootstrap();
bootstrap.group(eventLoopGroup)
.channel(NioSocketChannel.class)
.option(ChannelOption.SO_KEEPALIVE, true)
// Receive buffer: 2 MB
.option(ChannelOption.SO_RCVBUF, 2048 * 1024)
// Send buffer: 1 MB
.option(ChannelOption.SO_SNDBUF, 1024 * 1024)
// .option(ChannelOption.TCP_NODELAY, false)
.option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 1000 * 60)// connect timeout: 60 seconds
// .option(ChannelOption.SO_BACKLOG, 256)// length of the server accept queue, default 128
// .option(ChannelOption.RCVBUF_ALLOCATOR, new NettyRecvByteBufAllocator(1024 * 1024))// allocator for the Channel's receive buffers, default AdaptiveRecvByteBufAllocator.DEFAULT
.option(ChannelOption.WRITE_BUFFER_WATER_MARK, new WriteBufferWaterMark(1024 * 1024, 1024 * 1024 * 2))
.handler(new NettyTcpClientPermeateClientTransferFilter(new ChannelTypeAdapter(nettyClientPermeateClientVisitor.getHandleChannelTypeAdvancedList())))
;
NettyClientProperties nettyClientProperties = nettyClientPermeateClientVisitor.getNettyClientProperties();
String inetHost = nettyClientProperties.getInetHost();
int inetPort = nettyClientProperties.getInetPort();
// local client id
String clientId = nettyClientProperties.getClientId();
String targetIp = nettyClientPermeateClientVisitor.getTargetIp();
Integer targetPort = nettyClientPermeateClientVisitor.getTargetPort();
String visitorId = ChannelAttributeKeyUtils.getVisitorId(visitorChannel);
Integer visitorPort = nettyClientPermeateClientVisitor.getVisitorPort();
String toClientId = nettyClientPermeateClientVisitor.getToClientId();
// The client creates a new visitor channel to the server IP:{} and port:{}
log.info("Client creates a new visitor channel to connect to server IP: {}, connecting to server port: {} with clientId:【{}】 toClientId:【{}】 & visitorId:【{}】", inetHost, inetPort, clientId, toClientId, visitorId);
ChannelFuture future = bootstrap.connect(inetHost, inetPort);
// client ID in use: {}
future.addListener((ChannelFutureListener) futureListener -> {
Channel transferChannel = futureListener.channel();
if (futureListener.isSuccess()) {
NettyProxyMsg nettyProxyMsg = new NettyProxyMsg();
nettyProxyMsg.setType(TcpMessageType.TCP_REPORT_CLIENT_TRANSFER_CLIENT_PERMEATE_CHANNEL_CONNECTION_SUCCESSFUL);
// other clientId
nettyProxyMsg.setClientId(toClientId);
nettyProxyMsg.setVisitorPort(visitorPort);
nettyProxyMsg.setClientTargetIp(targetIp);
nettyProxyMsg.setClientTargetPort(targetPort);
nettyProxyMsg.setVisitorId(visitorId);
transferChannel.writeAndFlush(nettyProxyMsg);
// Bind the client's real communication channel
ChannelAttributeKeyUtils.buildVisitorId(transferChannel, visitorId);
ChannelAttributeKeyUtils.buildClientId(transferChannel, clientId);
// Auto-read once the transfer channel is open
ChannelAttributeKeyUtils.buildNextChannel(visitorChannel, transferChannel);
ChannelAttributeKeyUtils.buildNextChannel(transferChannel, visitorChannel);
} else {
log.info("无法连接到服务端....");
eventLoopGroup.schedule(() -> {
try {
buildTransferServer(nettyClientPermeateClientVisitor, visitorChannel);
} catch (Exception e) {
e.printStackTrace();
}
}, 2, TimeUnit.SECONDS);
}
});
}
}
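
Both transfer sockets above retry a failed connect every 2 seconds via eventLoopGroup.schedule, with no upper bound. Below is a small sketch of the same connect-and-retry pattern with an attempt limit added; the helper is hypothetical (not part of the project) and assumes the caller passes in a fully configured Bootstrap:

```java
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.EventLoopGroup;

import java.util.concurrent.TimeUnit;

/**
 * Sketch of the connect-and-retry pattern used above, with an attempt limit added.
 * The Bootstrap is assumed to be fully configured (group, channel, handler) by the caller.
 */
final class RetryingConnector {

    private RetryingConnector() {
    }

    static void connect(Bootstrap bootstrap, EventLoopGroup group,
                        String host, int port, int attemptsLeft) {
        bootstrap.connect(host, port).addListener((ChannelFutureListener) future -> {
            if (future.isSuccess()) {
                return; // connected; the channel pipeline takes over from here
            }
            if (attemptsLeft <= 1) {
                future.channel().close();
                return; // give up instead of retrying forever
            }
            // retry after 2 seconds, the same interval the transfer sockets above use
            group.schedule(() -> connect(bootstrap, group, host, port, attemptsLeft - 1),
                    2, TimeUnit.SECONDS);
        });
    }
}
```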

View File

@@ -0,0 +1,232 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.socket;
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.*;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import lombok.Getter;
import lombok.extern.slf4j.Slf4j;
import org.framework.lazy.cloud.network.heartbeat.client.config.NettyClientProperties;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.NettyClientPermeateServerVisitor;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.filter.NettyTcpClientPermeateServerVisitorFilter;
import org.framework.lazy.cloud.network.heartbeat.common.NettyClientVisitorContext;
import org.framework.lazy.cloud.network.heartbeat.common.NettyVisitorPortContext;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.HandleChannelTypeAdvanced;
import org.framework.lazy.cloud.network.heartbeat.common.factory.EventLoopGroupFactory;
import org.framework.lazy.cloud.network.heartbeat.common.socket.PermeateVisitorSocket;
import org.framework.lazy.cloud.network.heartbeat.common.utils.ChannelAttributeKeyUtils;
import java.util.List;
/**
* Intranet penetration: client-to-server visitor channel
*
* @see NettyVisitorPortContext
* @see NettyClientVisitorContext
*/
@Slf4j
public class NettyTcpClientPermeateServerVisitorSocket implements PermeateVisitorSocket {
private final NettyTcpClientPermeateServerVisitorFilter nettyTcpClientPermeateServerVisitorFilter;
@Getter
private final String clientId;
@Getter
private final int visitorPort;
public NettyTcpClientPermeateServerVisitorSocket(NettyTcpClientPermeateServerVisitorFilter nettyTcpClientPermeateServerVisitorFilter, String clientId, int visitorPort) {
this.nettyTcpClientPermeateServerVisitorFilter = nettyTcpClientPermeateServerVisitorFilter;
this.clientId = clientId;
this.visitorPort = visitorPort;
}
/**
* Start the client's local port that penetrates to the server port
*
*/
@Override
public void start() {
PermeateVisitorSocket visitor = NettyVisitorPortContext.getVisitorSocket(visitorPort);
if (visitor == null) {
ServerBootstrap bootstrap = new ServerBootstrap();
EventLoopGroup bossGroup = EventLoopGroupFactory.createBossGroup();
EventLoopGroup workerGroup = EventLoopGroupFactory.createWorkerGroup();
bootstrap
.group(bossGroup, workerGroup)
.channel(NioServerSocketChannel.class)
// Receive buffer: 2 MB
.childOption(ChannelOption.SO_RCVBUF, 2048 * 1024)
// Send buffer: 1 MB
.childOption(ChannelOption.SO_SNDBUF, 1024 * 1024)
.childOption(ChannelOption.SO_KEEPALIVE, true)
// .childOption(ChannelOption.TCP_NODELAY, false)
.childOption(ChannelOption.CONNECT_TIMEOUT_MILLIS, 1000 * 60)// connect timeout: 60 seconds
// .childOption(ChannelOption.RCVBUF_ALLOCATOR, new NettyRecvByteBufAllocator(1024 * 1024))// allocator for the Channel's receive buffers, default AdaptiveRecvByteBufAllocator.DEFAULT
.childOption(ChannelOption.WRITE_BUFFER_WATER_MARK, new WriteBufferWaterMark(1024 * 1024, 1024 * 1024 * 2))
.childHandler(nettyTcpClientPermeateServerVisitorFilter);
try {
bootstrap.bind(visitorPort).sync().addListener((ChannelFutureListener) future -> {
if (future.isSuccess()) {
// This listener runs asynchronously
log.info("客户端:[{}]访客端口:[{}] 开启", clientId, visitorPort);
Channel channel = future.channel();
ChannelAttributeKeyUtils.buildVisitorPort(channel,visitorPort);
} else {
log.error("客户端:[{}]访客端口:[{}]绑定失败", clientId, visitorPort);
}
});
NettyVisitorPortContext.pushVisitorSocket(visitorPort, this);
} catch (InterruptedException e) {
throw new RuntimeException(e);
}
} else {
log.warn("客户端渗透服务端:[{}]访客端口:[{}] 重复启动", clientId, visitorPort);
}
}
@Override
public void close() {
PermeateVisitorSocket permeateVisitorSocket = NettyVisitorPortContext.getVisitorSocket(visitorPort);
if (permeateVisitorSocket != null) {
log.warn("关闭客户端 :【{}】 访客户端口:【{}】", clientId, visitorPort);
} else {
log.warn("关闭访客端口失败 未找到客户端通道 客户端 :【{}】 访客户端口:【{}】", clientId, visitorPort);
}
}
public static final class NettyVisitorSocketBuilder {
/**
* Client ID
*/
private String clientId;
/**
* Client target address
*/
private String clientTargetIp;
/**
* Client target port
*/
private Integer clientTargetPort;
/**
* Visitor port
*/
private Integer visitorPort;
/**
* Server address information
*/
private NettyClientProperties nettyClientProperties;
/**
* Handlers
*/
private List<HandleChannelTypeAdvanced> handleChannelTypeAdvancedList;
public static NettyVisitorSocketBuilder builder() {
return new NettyVisitorSocketBuilder();
}
/**
* Set the client ID
*
* @param clientId client ID
* @return this builder
*/
public NettyVisitorSocketBuilder builderClientId(String clientId) {
this.clientId = clientId;
return this;
}
/**
* Bind the client target IP
*
* @param clientTargetIp client target IP
* @return this builder
*/
public NettyVisitorSocketBuilder builderClientTargetIp(String clientTargetIp) {
this.clientTargetIp = clientTargetIp;
return this;
}
/**
* Bind the client target port
*
* @param clientTargetPort client target port
* @return this builder
*/
public NettyVisitorSocketBuilder builderClientTargetPort(Integer clientTargetPort) {
this.clientTargetPort = clientTargetPort;
return this;
}
/**
* Bind the visitor port
*
* @param visitorPort visitor port
* @return this builder
*/
public NettyVisitorSocketBuilder builderVisitorPort(Integer visitorPort) {
this.visitorPort = visitorPort;
return this;
}
/**
* Bind the channel-type handlers
*
* @param handleChannelTypeAdvancedList channel-type handlers
* @return this builder
*/
public NettyVisitorSocketBuilder builderHandleChannelTypeAdvancedList(List<HandleChannelTypeAdvanced> handleChannelTypeAdvancedList) {
this.handleChannelTypeAdvancedList = handleChannelTypeAdvancedList;
return this;
}
/**
* Server address information
*
* @param nettyClientProperties server connection properties
* @return this builder
*/
public NettyVisitorSocketBuilder builderNettyClientProperties(NettyClientProperties nettyClientProperties) {
this.nettyClientProperties = nettyClientProperties;
return this;
}
public NettyTcpClientPermeateServerVisitorSocket build() {
if (clientTargetIp == null) {
throw new IllegalArgumentException("clientTargetIp must not be null");
}
if (clientTargetPort == null) {
throw new IllegalArgumentException("clientTargetPort must not be null");
}
if (visitorPort == null) {
throw new IllegalArgumentException("visitorPort must not be null");
}
NettyClientPermeateServerVisitor nettyClientPermeateServerVisitor = new NettyClientPermeateServerVisitor();
nettyClientPermeateServerVisitor.setTargetIp(clientTargetIp);
nettyClientPermeateServerVisitor.setTargetPort(clientTargetPort);
nettyClientPermeateServerVisitor.setVisitorPort(visitorPort);
nettyClientPermeateServerVisitor.setNettyClientProperties(nettyClientProperties);
nettyClientPermeateServerVisitor.setHandleChannelTypeAdvancedList(handleChannelTypeAdvancedList);
NettyTcpClientPermeateServerVisitorFilter nettyTcpClientPermeateServerVisitorFilter = new NettyTcpClientPermeateServerVisitorFilter(nettyClientPermeateServerVisitor);
return new NettyTcpClientPermeateServerVisitorSocket(nettyTcpClientPermeateServerVisitorFilter, clientId, visitorPort);
}
}
}

View File

@@ -0,0 +1,100 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.socket;
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.*;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioSocketChannel;
import lombok.extern.slf4j.Slf4j;
import org.framework.lazy.cloud.network.heartbeat.client.config.NettyClientProperties;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.NettyClientPermeateServerVisitor;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.filter.NettyTcpClientPermeateServerTransferFilter;
import org.framework.lazy.cloud.network.heartbeat.common.constant.TcpMessageType;
import org.framework.lazy.cloud.network.heartbeat.common.NettyProxyMsg;
import org.framework.lazy.cloud.network.heartbeat.common.adapter.ChannelTypeAdapter;
import org.framework.lazy.cloud.network.heartbeat.common.utils.ChannelAttributeKeyUtils;
import java.util.concurrent.TimeUnit;
/**
* Client-to-server penetration transfer channel
*/
@Slf4j
public class NettyTcpClientPermeateServerVisitorTransferSocket {
static EventLoopGroup eventLoopGroup = new NioEventLoopGroup();
/**
* Connect the communication channel to the server
* <p>
* nettyClientPermeateServerVisitor
*/
public static void buildTransferServer(NettyClientPermeateServerVisitor nettyClientPermeateServerVisitor, Channel visitorChannel) {
Bootstrap bootstrap = new Bootstrap();
bootstrap.group(eventLoopGroup)
.channel(NioSocketChannel.class)
.option(ChannelOption.SO_KEEPALIVE, true)
// Receive buffer: 2 MB
.option(ChannelOption.SO_RCVBUF, 2048 * 1024)
// Send buffer: 1 MB
.option(ChannelOption.SO_SNDBUF, 1024 * 1024)
// .option(ChannelOption.TCP_NODELAY, false)
.option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 1000 * 60)// connect timeout: 60 seconds
// .option(ChannelOption.SO_BACKLOG, 256)// length of the server accept queue, default 128
// .option(ChannelOption.RCVBUF_ALLOCATOR, new NettyRecvByteBufAllocator(1024 * 1024))// allocator for the Channel's receive buffers, default AdaptiveRecvByteBufAllocator.DEFAULT
.option(ChannelOption.WRITE_BUFFER_WATER_MARK, new WriteBufferWaterMark(1024 * 1024, 1024 * 1024 * 2))
.handler(new NettyTcpClientPermeateServerTransferFilter(new ChannelTypeAdapter(nettyClientPermeateServerVisitor.getHandleChannelTypeAdvancedList())))
;
NettyClientProperties nettyClientProperties = nettyClientPermeateServerVisitor.getNettyClientProperties();
String inetHost = nettyClientProperties.getInetHost();
int inetPort = nettyClientProperties.getInetPort();
// local client id
String clientId = nettyClientProperties.getClientId();
String targetIp = nettyClientPermeateServerVisitor.getTargetIp();
Integer targetPort = nettyClientPermeateServerVisitor.getTargetPort();
String visitorId = ChannelAttributeKeyUtils.getVisitorId(visitorChannel);
Integer visitorPort = nettyClientPermeateServerVisitor.getVisitorPort();
// The client creates a new visitor channel to the server IP:{} and port:{}
log.debug("Client creates a new visitor channel to connect to server IP: {}, connecting to server port: {}", inetHost, inetPort);
ChannelFuture future = bootstrap.connect(inetHost, inetPort);
// client ID in use: {}
log.info("Client ID used: {}", clientId);
future.addListener((ChannelFutureListener) futureListener -> {
Channel transferChannel = futureListener.channel();
if (futureListener.isSuccess()) {
NettyProxyMsg myMsg = new NettyProxyMsg();
myMsg.setType(TcpMessageType.TCP_REPORT_CLIENT_TRANSFER_SERVER_PERMEATE_CHANNEL_CONNECTION_SUCCESSFUL);
myMsg.setClientId(clientId);
myMsg.setVisitorPort(visitorPort);
myMsg.setClientTargetIp(targetIp);
myMsg.setClientTargetPort(targetPort);
myMsg.setVisitorId(visitorId);
ChannelAttributeKeyUtils.buildVisitorId(transferChannel, visitorId);
ChannelAttributeKeyUtils.buildClientId(transferChannel, clientId);
// Auto-read once the transfer channel is open
ChannelAttributeKeyUtils.buildNextChannel(visitorChannel, transferChannel);
ChannelAttributeKeyUtils.buildNextChannel(transferChannel, visitorChannel);
transferChannel.writeAndFlush(myMsg);
} else {
log.warn("客户端渗透服务端通信通道中断....");
eventLoopGroup.schedule(() -> {
try {
buildTransferServer(nettyClientPermeateServerVisitor, visitorChannel);
} catch (Exception e) {
e.printStackTrace();
}
}, 2, TimeUnit.SECONDS);
}
});
}
}

View File

@@ -0,0 +1,150 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.socket;
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.*;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioSocketChannel;
import lombok.Getter;
import lombok.extern.slf4j.Slf4j;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.NettyClientSocket;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.event.ClientChangeEvent;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.filter.NettyTcpClientFilter;
import org.framework.lazy.cloud.network.heartbeat.common.constant.TcpMessageType;
import org.framework.lazy.cloud.network.heartbeat.common.NettyProxyMsg;
import org.framework.lazy.cloud.network.heartbeat.common.NettyServerContext;
import org.framework.lazy.cloud.network.heartbeat.common.adapter.ChannelTypeAdapter;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.HandleChannelTypeAdvanced;
import org.framework.lazy.cloud.network.heartbeat.common.utils.ChannelAttributeKeyUtils;
import java.net.InetAddress;
import java.util.List;
import java.util.concurrent.TimeUnit;
/**
* Client connects to the server
*/
@Slf4j
public class NettyTcpClientSocket implements NettyClientSocket {
private final EventLoopGroup eventLoopGroup = new NioEventLoopGroup();
/**
* Server host
*/
private final String inetHost;
/**
* Server port
*/
private final int inetPort;
/**
* Current client ID
*/
@Getter
private final String clientId;
/**
* ID of the server this client connects to
*/
private final String serverId;
private final String appKey;
private final String appSecret;
/**
* Client status change event
*/
@Getter
private final ClientChangeEvent clientChangeEvent;
private final List<HandleChannelTypeAdvanced> handleChannelTypeAdvancedList; // handles the message types sent by the server
public NettyTcpClientSocket(String inetHost,
int inetPort,
String clientId,
String serverId,
String appKey,
String appSecret,
ClientChangeEvent clientChangeEvent,
List<HandleChannelTypeAdvanced> handleChannelTypeAdvancedList) {
this.inetHost = inetHost;
this.inetPort = inetPort;
this.clientId = clientId;
this.serverId = serverId;
this.appKey = appKey;
this.appSecret = appSecret;
this.clientChangeEvent = clientChangeEvent;
this.handleChannelTypeAdvancedList = handleChannelTypeAdvancedList;
}
protected void newTcpConnect2Server(String inetHost, int inetPort, String clientId, String serverId, ClientChangeEvent clientChangeEvent) throws InterruptedException {
Bootstrap bootstrap = new Bootstrap();
bootstrap.group(eventLoopGroup)
.channel(NioSocketChannel.class)
.option(ChannelOption.SO_RCVBUF, 2048 * 1024)
// Send buffer: 1 MB
.option(ChannelOption.SO_SNDBUF, 1024 * 1024)
.option(ChannelOption.SO_KEEPALIVE, true)
// .childOption(ChannelOption.TCP_NODELAY, false)
.option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 1000 * 60)// connect timeout: 60 seconds
// .childOption(ChannelOption.RCVBUF_ALLOCATOR, new NettyRecvByteBufAllocator(1024 * 1024))// allocator for the Channel's receive buffers, default AdaptiveRecvByteBufAllocator.DEFAULT
.option(ChannelOption.WRITE_BUFFER_WATER_MARK, new WriteBufferWaterMark(1024 * 1024, 1024 * 1024 * 2))
.handler(new NettyTcpClientFilter(new ChannelTypeAdapter(handleChannelTypeAdvancedList), this))
;
log.info("use clientId:{} connect to server IP:{},server port :{}", clientId, inetHost, inetPort);
ChannelFuture future = bootstrap.connect(inetHost, inetPort);
// Channel from the client to the server
Channel serviceChannel = future.channel();
future.addListener((ChannelFutureListener) futureListener -> {
if (futureListener.isSuccess()) {
log.info("clientId:{},connect to server IP:{},server port :{} isSuccess ", clientId, inetHost, inetPort);
// Tell the server that this connection belongs to a client
NettyProxyMsg nettyMsg = new NettyProxyMsg();
nettyMsg.setType(TcpMessageType.TCP_REPORT_CLIENT_CONNECT_SUCCESS);
nettyMsg.setClientId(clientId);
String hostAddress = InetAddress.getLocalHost().getHostAddress();
nettyMsg.setOriginalIpString(hostAddress);
nettyMsg.setData((clientId).getBytes());
nettyMsg.setAppKeyString(appKey);
nettyMsg.setAppSecretString(appSecret);
ChannelAttributeKeyUtils.buildClientId(serviceChannel, clientId);
serviceChannel.writeAndFlush(nettyMsg);
NettyServerContext.pushServerEndpointChannel(serverId, clientId, serviceChannel);
// Online: the client registered with the server successfully
clientChangeEvent.clientOnLine(inetHost, inetPort,serverId, clientId);
} else {
log.warn("Reconnect every 2 seconds....");
// Offline
NettyServerContext.removeServerEndpointChannels(serverId, clientId);
clientChangeEvent.clientOffLine(inetHost, inetPort,serverId, clientId);
eventLoopGroup.schedule(() -> {
try {
newTcpConnect2Server(inetHost, inetPort, clientId, serverId, clientChangeEvent);
} catch (InterruptedException e) {
e.printStackTrace();
}
}, 2, TimeUnit.SECONDS);
}
});
}
/**
* Close the connection
*/
public void shutdown() {
if ((eventLoopGroup != null) && (!eventLoopGroup.isShutdown())) {
eventLoopGroup.shutdownGracefully();
}
}
/**
* Create the client connection to the server
*
* @throws InterruptedException on interruption
*/
@Override
public void newConnect2Server() throws InterruptedException {
newTcpConnect2Server(inetHost, inetPort, clientId, serverId, clientChangeEvent);
}
}
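
A minimal sketch of wiring up the client socket above; the host, port, IDs and credentials are placeholders, and the event callback plus handler list are assumed to be provided by the application:

```java
// Usage sketch only; host, port, IDs and credentials are placeholders.
void startHeartbeatClient(ClientChangeEvent clientChangeEvent,
                          List<HandleChannelTypeAdvanced> handleChannelTypeAdvancedList) throws InterruptedException {
    NettyTcpClientSocket clientSocket = new NettyTcpClientSocket(
            "server.example.com",  // server host
            7001,                  // server port
            "client-a",            // this client's id
            "server-1",            // server id to register with
            "app-key",             // credentials reported on connect
            "app-secret",
            clientChangeEvent,
            handleChannelTypeAdvancedList);
    clientSocket.newConnect2Server(); // connects and reports TCP_REPORT_CLIENT_CONNECT_SUCCESS on success
}
```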

View File

@@ -1,4 +1,4 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.socket;
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.socket;
import io.netty.bootstrap.Bootstrap;
@@ -6,13 +6,13 @@ import io.netty.channel.*;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioSocketChannel;
import lombok.extern.slf4j.Slf4j;
import org.framework.lazy.cloud.network.heartbeat.client.netty.filter.NettyClientRealFilter;
import org.framework.lazy.cloud.network.heartbeat.client.config.NettyClientProperties;
import org.framework.lazy.cloud.network.heartbeat.client.netty.filter.NettyClientVisitorRealFilter;
import org.framework.lazy.cloud.network.heartbeat.common.*;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.filter.NettyTcpServerPermeateClientRealFilter;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.tcp.filter.NettyTcpServerPermeateClientTransferFilter;
import org.framework.lazy.cloud.network.heartbeat.common.*;
import org.framework.lazy.cloud.network.heartbeat.common.adapter.ChannelTypeAdapter;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.HandleChannelTypeAdvanced;
import org.framework.lazy.cloud.network.heartbeat.common.constant.TcpMessageType;
import org.framework.lazy.cloud.network.heartbeat.common.utils.ChannelAttributeKeyUtils;
import java.util.List;
@@ -22,7 +22,7 @@ import java.util.concurrent.TimeUnit;
* Client connects to the real service
*/
@Slf4j
public class NettyClientRealSocket {
public class NettyTcpServerPermeateClientRealSocket {
static EventLoopGroup eventLoopGroup = new NioEventLoopGroup();
/**
@@ -54,7 +54,20 @@ public class NettyClientRealSocket {
String visitorId = internalNetworkPenetrationRealClient.getVisitorId();
Bootstrap bootstrap = new Bootstrap();
bootstrap.group(eventLoopGroup).channel(NioSocketChannel.class)
.handler(new NettyClientRealFilter());
// Receive buffer: 2 MB
.option(ChannelOption.SO_RCVBUF, 2048 * 1024)
// Send buffer: 1 MB
.option(ChannelOption.SO_SNDBUF, 1024 * 1024)
// .option(ChannelOption.TCP_NODELAY, false)
.option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 1000 * 60)// connect timeout: 60 seconds
// .option(ChannelOption.SO_BACKLOG, 128)// length of the server accept queue, default 128
// .option(ChannelOption.RCVBUF_ALLOCATOR, new NettyRecvByteBufAllocator(1024 * 1024))// allocator for the Channel's receive buffers, default AdaptiveRecvByteBufAllocator.DEFAULT
.option(ChannelOption.WRITE_BUFFER_WATER_MARK, new WriteBufferWaterMark(1024 * 1024, 1024 * 1024 * 2))
.handler(new NettyTcpServerPermeateClientRealFilter())
;
bootstrap.connect(clientTargetIp, clientTargetPort).addListener((ChannelFutureListener) future -> {
if (future.isSuccess()) {
// Connected to the real service successfully; keep AUTO_READ false until the visitor connection succeeds, then set it to true
@@ -63,7 +76,7 @@ public class NettyClientRealSocket {
log.info("访客通过 客户端:【{}】,绑定本地服务,IP:{},端口:{} 新建通道成功", clientId, clientTargetIp, clientTargetPort);
// Client real channel
NettyRealIdContext.pushReal(realChannel, visitorId);
// NettyRealIdContext.pushReal(realChannel, visitorId);
// Bind the visitor ID to the real channel's attributes
ChannelAttributeKeyUtils.buildVisitorId(realChannel, visitorId);
ChannelAttributeKeyUtils.buildClientId(realChannel, clientId);
@@ -72,7 +85,7 @@ public class NettyClientRealSocket {
// Create a new channel to handle the transfer
newVisitorConnect2Server(internalNetworkPenetrationRealClient, nettyClientProperties, handleChannelTypeAdvancedList);
newVisitorConnect2Server(internalNetworkPenetrationRealClient, nettyClientProperties, handleChannelTypeAdvancedList,realChannel);
// Whether to wait until the corresponding visitor channel on the server can already auto read/write
// realChannel.config().setOption(ChannelOption.AUTO_READ, true);
@@ -120,11 +133,24 @@ public class NettyClientRealSocket {
*/
protected static void newVisitorConnect2Server(InternalNetworkPenetrationRealClient internalNetworkPenetrationRealClient,
NettyClientProperties nettyClientProperties,
List<HandleChannelTypeAdvanced> handleChannelTypeAdvancedList) throws InterruptedException {
List<HandleChannelTypeAdvanced> handleChannelTypeAdvancedList,
Channel realChannel) throws InterruptedException {
Bootstrap bootstrap = new Bootstrap();
bootstrap.group(eventLoopGroup)
.channel(NioSocketChannel.class)
.handler(new NettyClientVisitorRealFilter(new ChannelTypeAdapter(handleChannelTypeAdvancedList)))
.option(ChannelOption.SO_KEEPALIVE, true)
// Receive buffer: 2 MB
.option(ChannelOption.SO_RCVBUF, 2048 * 1024)
// Send buffer: 1 MB
.option(ChannelOption.SO_SNDBUF, 1024 * 1024)
// .option(ChannelOption.TCP_NODELAY, false)
.option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 1000 * 60)// connect timeout: 60 seconds
// .option(ChannelOption.SO_BACKLOG, 256)// length of the server accept queue, default 128
// .option(ChannelOption.RCVBUF_ALLOCATOR, new NettyRecvByteBufAllocator(1024 * 1024))// allocator for the Channel's receive buffers, default AdaptiveRecvByteBufAllocator.DEFAULT
.option(ChannelOption.WRITE_BUFFER_WATER_MARK, new WriteBufferWaterMark(1024 * 1024, 1024 * 1024 * 2))
.handler(new NettyTcpServerPermeateClientTransferFilter(new ChannelTypeAdapter(handleChannelTypeAdvancedList)))
;
String inetHost = nettyClientProperties.getInetHost();
@@ -147,33 +173,38 @@ public class NettyClientRealSocket {
// client ID in use: {}
log.info("Client ID used: {}" , visitorClientId);
future.addListener((ChannelFutureListener) futureListener -> {
Channel channel = futureListener.channel();
Channel transferChannel = futureListener.channel();
if (futureListener.isSuccess()) {
NettyProxyMsg myMsg = new NettyProxyMsg();
myMsg.setType(MessageType.REPORT_SINGLE_CLIENT_REAL_CONNECT);
myMsg.setType(TcpMessageType.TCP_REPORT_SINGLE_CLIENT_REAL_CONNECT);
myMsg.setClientId(visitorClientId);
myMsg.setVisitorPort(visitorPort);
myMsg.setClientTargetIp(clientTargetIp);
myMsg.setClientTargetPort(clientTargetPort);
myMsg.setVisitorId(visitorId);
channel.writeAndFlush(myMsg);
transferChannel.writeAndFlush(myMsg);
// Bind the client's real communication channel
NettyCommunicationIdContext.pushVisitor(channel, visitorId);
ChannelAttributeKeyUtils.buildVisitorId(channel, visitorId);
ChannelAttributeKeyUtils.buildClientId(channel, visitorClientId);
NettyCommunicationIdContext.pushVisitor(transferChannel, visitorId);
ChannelAttributeKeyUtils.buildVisitorId(transferChannel, visitorId);
ChannelAttributeKeyUtils.buildClientId(transferChannel, visitorClientId);
// Enable auto read/write on the client's real channel
Channel visitor = NettyRealIdContext.getReal(visitorId);
visitor.config().setOption(ChannelOption.AUTO_READ, true);
realChannel.config().setOption(ChannelOption.AUTO_READ, true);
ChannelAttributeKeyUtils.buildNextChannel(realChannel, transferChannel);
ChannelAttributeKeyUtils.buildNextChannel(transferChannel, realChannel);
} else {
log.info("每隔2s重连....");
// Offline
channel.eventLoop().schedule(() -> {
eventLoopGroup.schedule(() -> {
try {
newVisitorConnect2Server(internalNetworkPenetrationRealClient, nettyClientProperties, handleChannelTypeAdvancedList);
newVisitorConnect2Server(internalNetworkPenetrationRealClient, nettyClientProperties, handleChannelTypeAdvancedList,realChannel);
} catch (InterruptedException e) {
e.printStackTrace();
}

View File

@@ -0,0 +1,29 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.udp.advanced;
import io.netty.channel.Channel;
import org.framework.lazy.cloud.network.heartbeat.common.constant.UdpMessageType;
import org.framework.lazy.cloud.network.heartbeat.common.NettyProxyMsg;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.permeate.udp.AbstractUdpHandleChannelHeartbeatTypeAdvanced;
/**
* Handle the client heartbeat message
* UDP_TYPE_HEARTBEAT
*/
public class ClientHandleUdpChannelHeartbeatTypeAdvanced extends AbstractUdpHandleChannelHeartbeatTypeAdvanced<NettyProxyMsg> {
/**
* Handle the current message
*
* @param channel current channel
* @param msg channel data
*/
@Override
public void doHandler(Channel channel, NettyProxyMsg msg) {
NettyProxyMsg hb = new NettyProxyMsg();
hb.setType(UdpMessageType.UDP_TYPE_HEARTBEAT);
// channel.writeAndFlush(hb);
}
}

View File

@@ -1,4 +1,4 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.advanced;
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.udp.advanced;
import io.netty.buffer.ByteBuf;
@@ -6,22 +6,22 @@ import io.netty.channel.Channel;
import lombok.extern.slf4j.Slf4j;
import org.framework.lazy.cloud.network.heartbeat.client.config.NettyClientProperties;
import org.framework.lazy.cloud.network.heartbeat.common.NettyProxyMsg;
import org.framework.lazy.cloud.network.heartbeat.common.NettyRealIdContext;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.client.AbstractHandleDistributeChannelTransferTypeAdvanced;
import org.framework.lazy.cloud.network.heartbeat.common.enums.MessageTypeEnums;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.permeate.udp.client.AbstractHandleUdpDistributeServicePermeateClientTransferTypeAdvanced;
import org.framework.lazy.cloud.network.heartbeat.common.enums.UdpMessageTypeEnums;
import org.framework.lazy.cloud.network.heartbeat.common.utils.ChannelAttributeKeyUtils;
/**
* Handle client data transfer distributed by the server
*
* @see MessageTypeEnums#DISTRIBUTE_CLIENT_TRANSFER
* @see UdpMessageTypeEnums#UDP_DISTRIBUTE_CLIENT_TRANSFER
*/
@Slf4j
public class ClientReportHandleChannelTransferTypeAdvancedHandleDistribute extends AbstractHandleDistributeChannelTransferTypeAdvanced<NettyProxyMsg> {
public class ClientHandleUdpChannelTransferTypeAdvancedHandleDistribute extends AbstractHandleUdpDistributeServicePermeateClientTransferTypeAdvanced<NettyProxyMsg> {
private final NettyClientProperties nettyClientProperties;
public ClientReportHandleChannelTransferTypeAdvancedHandleDistribute(NettyClientProperties nettyClientProperties) {
public ClientHandleUdpChannelTransferTypeAdvancedHandleDistribute(NettyClientProperties nettyClientProperties) {
this.nettyClientProperties = nettyClientProperties;
}
@@ -40,8 +40,9 @@ public class ClientReportHandleChannelTransferTypeAdvancedHandleDistribute exten
byte[] clientTargetPort = nettyProxyMsg.getClientTargetPort();
byte[] visitorId = nettyProxyMsg.getVisitorId();
// Real service channel
Channel realChannel = NettyRealIdContext.getReal(new String(visitorId));
if (realChannel == null) {
// Channel realChannel = NettyRealIdContext.getReal(new String(visitorId));
Channel nextChannel = ChannelAttributeKeyUtils.getNextChannel(channel);
if (nextChannel == null) {
log.error("无法获取访客:{} 真实服务", new String(visitorId));
return;
}
@@ -51,7 +52,7 @@ public class ClientReportHandleChannelTransferTypeAdvancedHandleDistribute exten
ByteBuf buf = channel.config().getAllocator().buffer(nettyProxyMsg.getData().length);
buf.writeBytes(nettyProxyMsg.getData());
realChannel.writeAndFlush(buf);
nextChannel.writeAndFlush(buf);
}

View File

@@ -0,0 +1,34 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.udp.advanced;
import io.netty.channel.Channel;
import org.framework.lazy.cloud.network.heartbeat.client.config.NettyClientProperties;
import org.framework.lazy.cloud.network.heartbeat.common.ChannelContext;
import org.framework.lazy.cloud.network.heartbeat.common.NettyProxyMsg;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.permeate.udp.client.AbstractHandleUdpClientChannelActiveAdvanced;
import org.framework.lazy.cloud.network.heartbeat.common.utils.ChannelAttributeKeyUtils;
/**
* Client channel is active
*/
public class ClientHandleUdpClientChannelActiveAdvanced extends AbstractHandleUdpClientChannelActiveAdvanced<NettyProxyMsg> {
private final NettyClientProperties nettyClientProperties;
public ClientHandleUdpClientChannelActiveAdvanced(NettyClientProperties nettyClientProperties) {
this.nettyClientProperties = nettyClientProperties;
}
/**
* Handle the current message
*
* @param channel current channel
* @param nettyProxyMsg channel data
*/
@Override
protected void doHandler(Channel channel, NettyProxyMsg nettyProxyMsg) {
// Cache the current channel
byte[] clientIdByte = nettyProxyMsg.getClientId();
String clientId = new String(clientIdByte);
ChannelContext.push(channel, clientId);
ChannelAttributeKeyUtils.buildClientId(channel, clientId);
}
}

View File

@@ -0,0 +1,43 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.udp.advanced;
import io.netty.channel.Channel;
import lombok.extern.slf4j.Slf4j;
import org.framework.lazy.cloud.network.heartbeat.common.NettyProxyMsg;
import org.framework.lazy.cloud.network.heartbeat.common.NettyVisitorPortContext;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.permeate.udp.client.AbstractHandleUdpDistributeClientPermeateClientCloseTypeAdvanced;
import org.framework.lazy.cloud.network.heartbeat.common.enums.UdpMessageTypeEnums;
import org.framework.lazy.cloud.network.heartbeat.common.socket.PermeateVisitorSocket;
/**
* Client-to-client penetration close message
*
* @see UdpMessageTypeEnums#UDP_DISTRIBUTE_CLIENT_PERMEATE_CLIENT_CLOSE
*/
@Slf4j
public class ClientHandleUdpDistributeClientPermeateClientCloseTypeAdvanced extends AbstractHandleUdpDistributeClientPermeateClientCloseTypeAdvanced<NettyProxyMsg> {
/**
* Handle the current message
*
* @param channel current channel
* @param nettyProxyMsg channel data
*/
@Override
public void doHandler(Channel channel, NettyProxyMsg nettyProxyMsg) {
// Look up the penetration visitor socket for this port
byte[] msgVisitorPort = nettyProxyMsg.getVisitorPort();
Integer visitorPort = Integer.parseInt(new String(msgVisitorPort));
PermeateVisitorSocket visitorSocket = NettyVisitorPortContext.getVisitorSocket(visitorPort);
// Close the current penetration visitor channel
try {
visitorSocket.close();
} catch (Exception e) {
e.printStackTrace();
}
}
}

View File

@@ -0,0 +1,71 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.udp.advanced;
import io.netty.channel.Channel;
import lombok.extern.slf4j.Slf4j;
import org.framework.lazy.cloud.network.heartbeat.client.config.NettyClientProperties;
import org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.udp.socket.NettyUdpClientPermeateClientVisitorSocket;
import org.framework.lazy.cloud.network.heartbeat.common.NettyProxyMsg;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.HandleChannelTypeAdvanced;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.permeate.udp.client.AbstractHandleUdpDistributeClientPermeateClientInitTypeAdvanced;
import org.framework.lazy.cloud.network.heartbeat.common.enums.TcpMessageTypeEnums;
import org.wu.framework.spring.utils.SpringContextHolder;
import java.util.ArrayList;
import java.util.List;
/**
* Client-to-client penetration init message
*
* @see TcpMessageTypeEnums#TCP_DISTRIBUTE_CLIENT_PERMEATE_CLIENT_INIT
*/
@Slf4j
public class ClientHandleUdpDistributeClientPermeateClientInitTypeAdvanced extends AbstractHandleUdpDistributeClientPermeateClientInitTypeAdvanced<NettyProxyMsg> {
/**
* Handle the current message
*
* @param channel current channel
* @param nettyProxyMsg channel data
*/
@Override
public void doHandler(Channel channel, NettyProxyMsg nettyProxyMsg) {
// Initialize the client-to-client penetration visitor socket
byte[] fromClientIdBytes = nettyProxyMsg.getClientId();
byte[] visitorPortBytes = nettyProxyMsg.getVisitorPort();
byte[] clientTargetIpBytes = nettyProxyMsg.getClientTargetIp();
byte[] clientTargetPortBytes = nettyProxyMsg.getClientTargetPort();
byte[] toClientIdBytes = nettyProxyMsg.getData();
String fromClientId = new String(fromClientIdBytes);
String toClientId = new String(toClientIdBytes);
Integer visitorPort = Integer.parseInt(new String(visitorPortBytes));
String clientTargetIp = new String(clientTargetIpBytes);
Integer clientTargetPort = Integer.parseInt(new String(clientTargetPortBytes));
List<HandleChannelTypeAdvanced> handleChannelTypeAdvancedList = new ArrayList<>(SpringContextHolder.getApplicationContext().getBeansOfType(HandleChannelTypeAdvanced.class).values());
// ChannelFlowAdapter channelFlowAdapter = SpringContextHolder.getBean(ChannelFlowAdapter.class);
NettyClientProperties nettyClientProperties = SpringContextHolder.getBean(NettyClientProperties.class);
log.info("client permeate client from client_id:【{}】 to_client_id【{}】with visitor_port【{}】, clientTargetIp:【{}】clientTargetPort:【{}】",
fromClientId, toClientId, visitorPort, clientTargetIp, clientTargetPort);
NettyUdpClientPermeateClientVisitorSocket nettyTcpClientPermeateClientVisitorSocket =
NettyUdpClientPermeateClientVisitorSocket.NettyClientPermeateClientVisitorSocketBuilder.builder()
.builderClientId(fromClientId)
.builderClientTargetIp(clientTargetIp)
.builderClientTargetPort(clientTargetPort)
.builderVisitorPort(visitorPort)
.builderNettyClientProperties(nettyClientProperties)
// .builderChannelFlowAdapter(channelFlowAdapter)
.builderToClientId(toClientId)
.builderHandleChannelTypeAdvancedList(handleChannelTypeAdvancedList)
.build();
try {
nettyTcpClientPermeateClientVisitorSocket.start();
} catch (Exception e) {
e.printStackTrace();
}
}
}

View File

@@ -0,0 +1,35 @@
package org.framework.lazy.cloud.network.heartbeat.client.netty.permeate.udp.advanced;
import io.netty.channel.Channel;
import lombok.extern.slf4j.Slf4j;
import org.framework.lazy.cloud.network.heartbeat.common.NettyProxyMsg;
import org.framework.lazy.cloud.network.heartbeat.common.advanced.permeate.udp.client.AbstractHandleUdpDistributeClientPermeateClientTransferCloseTypeAdvanced;
import org.framework.lazy.cloud.network.heartbeat.common.enums.UdpMessageTypeEnums;
import org.framework.lazy.cloud.network.heartbeat.common.utils.ChannelAttributeKeyUtils;
/**
* Distribute closing of the client-to-client penetration transfer channel
*
* @see UdpMessageTypeEnums#UDP_DISTRIBUTE_CLIENT_PERMEATE_CLIENT_TRANSFER_CLOSE
*/
@Slf4j
public class ClientHandleUdpDistributeClientPermeateClientTransferCloseTypeAdvanced extends AbstractHandleUdpDistributeClientPermeateClientTransferCloseTypeAdvanced<NettyProxyMsg> {
/**
* Handle the current message
*
* @param channel current channel
* @param nettyProxyMsg channel data
*/
@Override
public void doHandler(Channel channel, NettyProxyMsg nettyProxyMsg) {
// Close the client's real channel and the visitor channel
Channel realChannel = ChannelAttributeKeyUtils.getNextChannel(channel);
realChannel.close();// close the real channel
channel.close(); // close the data transfer channel
}
}

Some files were not shown because too many files have changed in this diff.