# Overview

## kube-sidecar-injector

sidecar-injector: the sidecar injection service uses the Kubernetes MutatingWebhookConfiguration mechanism to modify manifests as they are admitted into k8s, simplifying k8s manifests. Note that it is only a convenience: everything works without it, the configuration just becomes more verbose, and permission-related manifests can no longer be managed centrally.

## kube-fake-ssl

Fake SSL certificate module. It can only run inside k8s and uses k8s secrets as its storage. It generates a self-signed CA root certificate and issues domain certificates signed by that CA.
The module backs the fkssl component in openresty and can impersonate any domain. If the business pod trusts the CA certificate, all https traffic leaving the pod can be monitored.

## openresty:1.21.4.1-az-[xx]

This is a gateway built on openresty and customized for the kratos framework. It adds subrequest, logging, and stream-proxy components, plus authz and fkssl components developed in lua.

Parameters:

```conf
NGX_MASTER_PROC;   default: on; off: single-process mode
NGX_WORKER_CONNS;  default: 4096; a single-instance service process does not need many workers, 2x4096 is enough
NGX_WORKER_COUNT;  default: 2; number of worker processes
KS_WATCHDOG;       default: off; watchdog mode, e.g. KS_WATCHDOG=inlog,authz
KS_PROXYDOG;       default: off; watchdog proxy, e.g. KS_PROXYDOG=pxy_p,pxy_h,pxy_a,pxy_i
NGX_SVC_ADDR;      default: 127.0.0.1; business service address
NGX_RESOLVRE;      default: empty; DNS server address, by default the system resolves via /etc/resolv.conf
NGX_INLOG_PORT;    default: 12001; plain API access-log port
NGX_AUTHZ_PORT;    default: 12006; API authorization-log port
NGX_PXY_P_PORT;    default: 12011; system proxy port, path mode, http://127.0.0.1:12011/[http|https|internal-port].[domain]/path
NGX_PXY_H_PORT;    default: 12012; system proxy port, http proxy; supports https, but https cannot be logged
NGX_PXY_I_PORT;    default: 12013; system proxy port, iptables proxy, requires fake certificates
NGX_PXY_A_PORT;    default: 12014; system proxy port, all_proxy, http and https, requires fake certificates
NGX_INLOG_EXTRA;   default: empty; extra parameters for the log service
NGX_AUTHZ_EXTRA;   default: empty; extra parameters for the authz service, usually pointing at CAS, sometimes at KIN
NGX_IAM_AUTHZ;     default: http://end-iam-cas-svc/authz?$args; address used for authorization
LOG_PROXY_HANDLER; default: /etc/nginx/az/log_by_sock_def.lua; does not record user details
LOG_AUTHZ_HANDLER; default: /etc/nginx/az/log_by_sock_usr.lua; records user details
LUA_NGX_SSL_CACHE; default: empty; SSL cache; automatically set to 10m when pxy_i or pxy_a is forced on; must be set when using pxy_i or pxy_a
NGX_HTTP_CONF;     default: empty; custom http-block configuration
NGX_STREAM_CONF;   default: empty; custom stream-block configuration
LUA_SYSLOG_HOST;   default: 127.0.0.1; log server address
LUA_SYSLOG_PORT;   default: 5144; log server port
LUA_SYSLOG_TYPE;   default: disable; log transport, enable with udp or tcp
LUA_FAKESSL_URI;   default: empty; e.g. http://kube-fake-ssl/api/ssl/v1/cert?token=$(SSL_TOKEN)&key=${SSL_KEY}&profile=&kind=1&domain=%s
LUA_PROXY_LAN_M;   default: .default.svc.cluster.local; by default looked up via /etc/resolv.conf
LUA_NGX_ENV_DEF;   default: env ...
LUA_PXY_FIX_HOSTS; default: /etc/nginx/az/proxy_k8s_hosts.lua; fixes LAN proxy hosts; only supports service-name or service-name.namespace.svc
```

## Deployment manifest (with sidecar)

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tst-iam-kin-2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tst-iam-kin-app
  template:
    metadata:
      labels:
        app: tst-iam-kin-app
        ksidecar/inject: enable # enable the injection service
      annotations:
        ksidecar/configmap: >- # the explanatory comments below must be removed when deploying
          dev-kwdog#authz, # inject API authorization; authz: API auth, authx: login auth, inlog: log only, no auth
          dev-kwdog#iam, # inject the iam service, providing /api/iam/* for assigning login tokens; it can also be configured in the ingress instead of being injected into kwdog
          dev-kwdog#proxya, # inject a forward proxy to monitor the pod's outbound https traffic
          kube-ksidecar/ca-tools#getter.dev, # inject the fake CA root certificate, fetched from the kube-fake-ssl service
          kube-ksidecar/ca-tools#java11 # make java trust the CA certificate; on alpine use java.sh; for golang applications you can inject debian, alpine, centos...
    spec:
      # nodeName: w06.k8s.local
      imagePullSecrets:
        - name: local-registry
      containers:
        - image: dcr.dev.sims-cn.com/plus/j1kas:v1.0.287
          #imagePullPolicy: Always
          imagePullPolicy: IfNotPresent
          name: kin
          livenessProbe:
            httpGet:
              path: /healthz
              port: 80
              scheme: HTTP
            initialDelaySeconds: 10
          readinessProbe:
            httpGet:
              path: /healthz
              port: 80
              scheme: HTTP
            initialDelaySeconds: 30
          envFrom:
            - configMapRef:
                name: iam-sso-go
            - configMapRef:
                name: iam-client-ja
```
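For reference, the injection service hooks into the API server through a MutatingWebhookConfiguration, as described above. The sketch below is a minimal illustration of such a registration, not this project's actual manifest: the webhook name, service name, namespace, path, and caBundle placeholder are all assumptions; only the `ksidecar/inject=enable` namespace label comes from this document.

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: kube-sidecar-injector
webhooks:
  - name: inject.ksidecar.example.com     # hypothetical webhook name
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Ignore                 # pods still start if the injector is down
    namespaceSelector:
      matchLabels:
        ksidecar/inject: enable           # only labeled namespaces are considered
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    clientConfig:
      service:
        name: kube-sidecar-injector       # hypothetical service in front of the injector
        namespace: kube-ksidecar          # hypothetical namespace
        path: /mutate
      caBundle: <base64-encoded-ca>       # CA that signed the injector's serving certificate
```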
## Deployment manifest (without sidecar)

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tst-iam-kin-2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tst-iam-kin-app
  template:
    metadata:
      labels:
        app: tst-iam-kin-app
    spec:
      imagePullSecrets:
        - name: local-registry
      # fetch the fake certificate from the kube-fake-ssl service and place it in the shared volume
      initContainers:
        - name: update-ca
          image: suisrc/openresty:1.21.4.1-hp-2
          command: ["/bin/sh"]
          args: ["-c", "curl http://10.103.93.57/api/ssl/v1/ca/txt?key=dev > /cas/local-fake.crt"]
          volumeMounts:
            - name: ca-volume
              mountPath: /cas/
      containers:
        - image: dcr.dev.sims-cn.com/plus/j1kas:v1.0.287
          imagePullPolicy: IfNotPresent
          name: kin
          ...
          # configure the local proxy; NOTE: java applications must be started via the start.sh script so that java picks up the proxy settings
          env:
            - name: HTTP_PROXY
              value: 127.0.0.1:12014
            - name: HTTPS_PROXY
              value: 127.0.0.1:12014
          lifecycle:
            postStart:
              # make java trust the fake CA root certificate
              exec:
                command:
                  - /bin/bash
                  - '-c'
                  - >-
                    echo -e "y" | keytool -importcert -alias local-fake-ca
                    -file /cas/local-fake.crt
                    -keystore $JAVA_HOME/lib/security/cacerts -storepass changeit
          volumeMounts:
            - name: ca-volume
              mountPath: /cas/
        - name: kwdog
          image: suisrc/openresty:1.21.4.1-az-23
          env:
            # start the watchdog in authz mode
            - name: KS_WATCHDOG
              value: authz
            # server that receives the logs produced by the watchdog
            - name: LUA_SYSLOG_HOST
              value: '10.110.23.152'
            - name: LUA_SYSLOG_TYPE
              value: udp
            # extended iam service configuration
            - name: NGX_AUTHZ_EXTRA
              value: |-
                location = /api/iam/v1/a/odic/authc {
                    set $proxy_tags "in,traffic,cas";
                    include /etc/nginx/az/authz_by_logger.conf;
                    proxy_pass http://end-iam-cas-svc/authc?$args;
                }
                location ^~ /api/iam/v1/a/ {
                    set $proxy_tags "in,traffic,kin";
                    include /etc/nginx/az/authz_by_logger.conf;
                    proxy_pass http://end-iam-kin-svc;
                }
            - name: KS_PROXYDOG
              value: pxy_p,pxy_a
            - name: LUA_FAKESSL_URI
              value: 'http://10.103.93.57/api/ssl/v1/cert?token=$(DEV_TOKEN)&key=dev&profile=&kind=1&domain=%s'
          # watchdog entry point
          ports:
            - name: authz
              containerPort: 12006
              protocol: TCP
      volumes:
        - name: ca-volume
          emptyDir:
            medium: Memory
            sizeLimit: 1Mi
```

## Deployment outside the group development environment

Use the watchdog service to authenticate logins to the application. This approach suits applications deployed outside the platform cluster environment.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tst-iam-kin-2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tst-iam-kin-app
  template:
    metadata:
      labels:
        app: tst-iam-kin-app
    spec:
      imagePullSecrets:
        - name: local-registry
      containers:
        - image: dcr.dev.sims-cn.com/plus/j1kas:v1.0.287
          imagePullPolicy: IfNotPresent
          name: kin
        - name: kwdog
          image: suisrc/openresty:1.21.4.1-az-23
          env:
            - name: KS_WATCHDOG
              value: authx
            # authentication address in the group development environment
            - name: NGX_IAM_AUTHZ
              value: https://sso.dev1.sims-cn.com/api/iam/v1/a/odic/authx?$args
            - name: NGX_AUTHZ_EXTRA
              value: |-
                location = /api/iam/v1/a/odic/authc {
                    proxy_pass https://sso.dev1.sims-cn.com;
                }
                location ^~ /api/iam/v1/a/ {
                    proxy_pass https://sso.dev1.sims-cn.com;
                }
          ports:
            - name: authz
              containerPort: 12006
              protocol: TCP
```
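The "without sidecar" manifest above routes the business container's egress through the watchdog by pointing HTTP_PROXY/HTTPS_PROXY at the pxy_a port. A quick way to smoke-test that path from inside the business container is a pair of curl calls. This is only a sketch: httpbin.org stands in for any external endpoint, and /cas/local-fake.crt is the CA file written by the initContainer above.

```shell
# plain http goes straight through the forward proxy on 12014
http_proxy=127.0.0.1:12014 curl -s http://httpbin.org/get
# https is redirected to the fake-SSL listener, so the fake CA must be trusted
https_proxy=127.0.0.1:12014 curl -s --cacert /cas/local-fake.crt https://httpbin.org/get
```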
## Business application + KWDOG combined example

Below is one complete combination of watchdog configuration for an application. Note: it is only one of many possible combinations; with different parameters the generated content will differ.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tst-iam-kin-kwdog
data:
  authz.conf: |-
    # watchdog traffic monitoring
    # ###########################################################################
    # log inbound requests and authorize them
    server {
        listen 12006;
        resolver 10.96.0.10 valid=120s;
        access_log off;
        server_name _;

        #location = /healthz {
        #    return 200 '{"success":true,"data":$msec}';
        #    # dedicated proxy: health checks should be answered by the backend service, do not intercept them
        #    # proxy_pass http://127.0.0.1;
        #}
        # more_clear_headers 'X-Request-*';

        location = /api/iam/v1/a/odic/authc {
            set $proxy_tags "in,traffic,cas";
            include /etc/nginx/az/authz_by_logger.conf;
            proxy_pass http://end-iam-cas-svc/authc?$args;
        }
        location ^~ /api/iam/v1/a/ {
            set $proxy_tags "in,traffic,kin";
            include /etc/nginx/az/authz_by_logger.conf;
            proxy_pass http://end-iam-kin-svc;
        }
        # internal endpoint consulted when authorizing access
        location /authz {
            internal;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Request-Origin-Host $http_host;
            proxy_set_header X-Request-Origin-Path $request_uri;
            proxy_set_header X-Request-Origin-Method $request_method;
            proxy_method GET;
            # in special cases, force a user identity for debugging / log records
            proxy_set_header X-Debug-Force-User "961212";
            proxy_pass http://end-iam-cas-svc;
        }
        #set $lua_skip_pre_path "/api/iam/"
        #set $lua_auth_uri_path "/authz"
        access_by_lua_file /etc/nginx/az/authz_by_access.lua;
        body_filter_by_lua_file /etc/nginx/az/log_by_body.lua;
        log_by_lua_file /etc/nginx/az/log_by_sock_usr.lua;
        # bound to a single dedicated service
        location / {
            set $proxy_tags "in,traffic,authz"; # inbound traffic, authz
            proxy_pass http://127.0.0.1;
        }
    }
  pxy_a.conf: |
    # watchdog traffic monitoring
    # ###########################################################################
    # log egress; forward proxy via http_proxy, https_proxy, all_proxy
    # forward proxy, no server_name needed
    # proxies http and https; with fake certificates everything can be logged
    server {
        listen 12014;
        resolver 10.96.0.10 valid=120s;
        access_log off;
        server_name _;

        proxy_connect;
        proxy_connect_allow all;
        proxy_connect_connect_timeout 10s;
        proxy_connect_bind $remote_addr transparent;
        proxy_connect_address "127.0.0.1:12043";

        location / {
            set $proxy_tags "out,traffic,path_a3"; # outbound traffic, path match
            body_filter_by_lua_file /etc/nginx/az/log_by_body.lua;
            log_by_lua_file /etc/nginx/az/log_by_sock_def.lua;
            set $proxy_http_host $http_host;
            rewrite_by_lua_file /etc/nginx/az/proxy_k8s_hosts.lua;
            proxy_pass $scheme://$proxy_http_host;
        }
    }
    # decrypt https traffic using the fake certificates
    server {
        listen 127.0.0.1:12043 ssl;
        resolver 10.96.0.10 valid=120s;
        access_log off;
        ssl_certificate /etc/nginx/az/localhost-crt.pem;
        ssl_certificate_key /etc/nginx/az/localhost-key.pem;
        ssl_certificate_by_lua_file /etc/nginx/az/ssl_by_fake_1.lua;
        location / {
            set $proxy_tags "out,traffic,path_a3"; # outbound traffic, path match
            body_filter_by_lua_file /etc/nginx/az/log_by_body.lua;
            log_by_lua_file /etc/nginx/az/log_by_sock_def.lua;
            set $proxy_http_host $http_host;
            rewrite_by_lua_file /etc/nginx/az/proxy_k8s_hosts.lua;
            proxy_pass $scheme://$proxy_http_host;
        }
    }
  pxy_p.conf: |
    # watchdog traffic monitoring
    # ###########################################################################
    # log egress; proxy by path
    # http://127.0.0.1:[port]/[kind].[domain]/[path]
    # kind = "http|https|internal[-port]"
    # forward proxy, no server_name needed
    server {
        listen 12011;
        resolver 10.96.0.10 valid=120s;
        access_log off;
        server_name _;
        location = /healthz {
            return 200 '{"success":true,"data":$msec}';
        }
        location ~ ^/(?<kind>\w+)(-(?<port>\d+))?\.(?<proxy_host>[\w-\.]+)(?<path>.*)$ {
            set $proxy_tags "out,traffic,path_m3"; # outbound traffic, path match
            body_filter_by_lua_file /etc/nginx/az/log_by_body.lua;
            log_by_lua_file /etc/nginx/az/log_by_sock_def.lua;
            set $proxy_http_host $proxy_host;
            rewrite_by_lua_file /etc/nginx/az/proxy_k8s_hosts.lua;
            include /etc/nginx/az/proxy_by_pass.conf;
        }
        location / {
            return 404 '{"success":false,"errorCode":"NOT-FOUND","errorMessage":"host=[$http_host] path=[$request_uri] status=404"}';
        }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tst-iam-kin-2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tst-iam-kin-app
  template:
    metadata:
      labels:
        app: tst-iam-kin-app
    spec:
      imagePullSecrets:
        - name: local-registry
      initContainers:
        - name: update-ca
          image: suisrc/openresty:1.21.4.1-hp-2
          command: ["/bin/sh"]
          args: ["-c", "curl http://10.103.93.57/api/ssl/v1/ca/txt?key=dev > /cas/local-fake.crt"]
          volumeMounts:
            - name: ca-volume
              mountPath: /cas/
      containers:
        - image: dcr.dev.sims-cn.com/plus/j1kas:v1.0.287
          imagePullPolicy: IfNotPresent
          name: kin
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /healthz
              port: 80
              scheme: HTTP
            initialDelaySeconds: 10
          readinessProbe:
            httpGet:
              path: /healthz
              port: 80
              scheme: HTTP
            initialDelaySeconds: 30
          envFrom:
            - configMapRef:
                name: iam-sso-go
            - configMapRef:
                name: iam-client-ja
          env:
            - name: HTTP_PROXY
              value: 127.0.0.1:12014
            - name: HTTPS_PROXY
              value: 127.0.0.1:12014
          lifecycle:
            postStart:
              exec:
                command:
                  - /bin/bash
                  - '-c'
                  - >-
                    echo -e "y" | keytool -importcert -alias local-fake-ca
                    -file /cas/local-fake.crt
                    -keystore $JAVA_HOME/lib/security/cacerts -storepass changeit
          volumeMounts:
            - name: ca-volume
              mountPath: /cas/
        - name: kwdog
          image: suisrc/openresty:1.21.4.1-az-23
          env:
            - name: LUA_SYSLOG_HOST
              value: '10.110.23.152'
            - name: LUA_SYSLOG_TYPE
              value: udp
            - name: LUA_FAKESSL_URI
              value: 'http://kube-fake-ssl/api/ssl/v1/cert?token=$(DEV_TOKEN)&key=$(DEV_KEY)&profile=&kind=1&domain=%s'
          ports:
            - name: authz
              containerPort: 12006
              protocol: TCP
          volumeMounts:
            - mountPath: /etc/nginx/conf.d/
              name: kwdog-volume
      volumes:
        - name: ca-volume
          emptyDir:
            medium: Memory
            sizeLimit: 1Mi
        - name: kwdog-volume
          configMap:
            name: tst-iam-kin-kwdog
```
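The pxy_p.conf above exposes the path-mode proxy on port 12011, encoding the target scheme, optional port, and domain into the first path segment. A short usage sketch follows; example.com and my-svc are placeholders, and the `internal` kind is assumed, based on the pattern comment, to address an in-cluster service over plain http on the given port.

```shell
# http://127.0.0.1:12011/[kind][-port].[domain]/[path], kind = http|https|internal[-port]
curl 'http://127.0.0.1:12011/https.example.com/api/v1/ping'   # -> https://example.com/api/v1/ping
curl 'http://127.0.0.1:12011/internal-8080.my-svc/healthz'    # -> http://my-svc:8080/healthz (assumed)
```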
## java, Dockerfile entrypoint

When a java application starts, it cannot use the system HTTP_PROXY proxy settings directly. The script below converts the HTTP_PROXY / HTTPS_PROXY variables into -Dhttp.proxyHost / -Dhttp.proxyPort JVM options for the java launch.

```bash
#!/bin/bash
# start.sh -- remember to make it executable: chmod +x start.sh

## if the HTTP_PROXY environment variable is set
if [ -n "$HTTP_PROXY" ]; then
  JAVA_OPTS_PROXY="$JAVA_OPTS_PROXY -Dhttp.proxyHost=`echo $HTTP_PROXY | awk -F ':' '{print $1}'` -Dhttp.proxyPort=`echo $HTTP_PROXY | awk -F ':' '{print $2}'`"
fi
## if the HTTPS_PROXY environment variable is set
if [ -n "$HTTPS_PROXY" ]; then
  JAVA_OPTS_PROXY="$JAVA_OPTS_PROXY -Dhttps.proxyHost=`echo $HTTPS_PROXY | awk -F ':' '{print $1}'` -Dhttps.proxyPort=`echo $HTTPS_PROXY | awk -F ':' '{print $2}'`"
fi
## if JAVA_OPTS is not set, fall back to a default
if [ -z "$JAVA_OPTS" ]; then
  JAVA_OPTS="-Xms256M -Xmx8G"
fi
## run the pre-start script if one is configured
if [ -n "$PRE_START_SCRIPT" ]; then
  /bin/bash -c "$PRE_START_SCRIPT"
fi
## start the service
echo "start command: java $JAVA_OPTS_PROXY $JAVA_OPTS $JAVA_OPS_EXT -jar app.jar --spring.profiles.active=prod"
java $JAVA_OPTS_PROXY $JAVA_OPTS $JAVA_OPS_EXT -jar app.jar --spring.profiles.active=prod
```

## Notes: k8s namespace operations

For the injection service to act on a pod, the namespace must carry the "ksidecar/inject=enable" label, and the pod must carry the same "ksidecar/inject=enable" label.

```shell
# show namespace labels
kubectl get ns dev-fmes --show-labels
# add the namespace label
kubectl label ns dev-fmes ksidecar/inject=enable
# remove the namespace label
kubectl label ns dev-fmes ksidecar/inject-
```
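After labeling the namespace, a quick way to confirm that injection actually happened is to list the container names of a freshly created pod; an injected pod should include the watchdog container. The label value and container name below follow the examples in this document.

```shell
# list each pod with its container names; an injected pod should include kwdog
kubectl -n dev-fmes get pods -l app=tst-iam-kin-app \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.containers[*].name}{"\n"}{end}'
```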