Sharing and isolating namespaces in a Kubernetes cluster
Different teams can work in the same Kubernetes cluster: namespaces and contexts partition the cluster so that each team works in its own environment without interfering with the others.
Namespace sharing and isolation suits organizations with many people and a complex team structure; in ordinary cases, labels alone are enough for a company's needs.
1. Create the namespaces
namespace-development.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: development
namespace-production.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production
Create the namespaces from the YAML files:
[root@kubernetes k8s]# kubectl create -f namespace-development.yaml
namespace/development created
[root@kubernetes k8s]# kubectl create -f namespace-production.yaml
namespace/production created
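Equivalently, namespaces can be created in one step without a YAML file; kubectl supports this directly:

kubectl create namespace development
kubectl create namespace production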
List the namespaces to confirm that development and production were created:
[root@kubernetes k8s]# kubectl get namespace
NAME STATUS AGE
default Active 180d
development Active 97s
kube-public Active 180d
kube-system Active 180d
production Active 73s
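To inspect a single namespace in more detail (including any resource quotas or limit ranges attached to it):

kubectl describe namespace development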
2. Define the runtime environments (contexts)
Use kubectl config set-context to define contexts bound to the namespaces created above.
Back up /etc/kubernetes/admin.conf before running these commands.
set-cluster sets the parameters for accessing the cluster; the command below targets the kubernetes cluster and points at the kube-apiserver address:
[root@kubernetes k8s]# kubectl config set-cluster kubernetes --server=https://192.168.73.152:6443
Cluster "kubernetes" set.
[root@kubernetes k8s]# kubectl config set-context ctx-dev --namespace=development --cluster=kubernetes --user=dev
Context "ctx-dev" created.
[root@kubernetes k8s]# kubectl config set-context ctx-prod --namespace=production --cluster=kubernetes --user=prod
Context "ctx-prod" created.
Contents of the original file:
[root@kubernetes ~]# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.73.152:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
Use kubectl config view to check the contexts that are now defined (contents of the new file):
[root@kubernetes k8s]# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.73.152:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: development
    user: dev
  name: ctx-dev
- context:
    cluster: kubernetes
    namespace: production
    user: prod
  name: ctx-prod
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
The kubectl config commands write a file named config under the ${HOME}/.kube directory; its contents are exactly what kubectl config view displays, so the contexts can also be set up by editing that file by hand.
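Before switching environments, it is worth checking which context is currently active; kubectl provides:

kubectl config current-context
kubectl config get-contexts   # lists all contexts, the active one marked with *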
3. Make each team work in its own namespace
Use kubectl config use-context <context_name> to select the current environment.
Set the current environment to "ctx-dev":
[root@kubernetes k8s]# kubectl config use-context ctx-dev
Switched to context "ctx-dev".
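Switching contexts is sticky for the whole kubeconfig; a single command can also target a namespace or context explicitly without switching, for example:

kubectl get pods --namespace=development
kubectl get pods --context=ctx-dev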
Create pods in the current namespace:
[root@kubernetes k8s]# cat redis-slave-conrtoller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-slave
  labels:
    name: redis-slave
spec:
  replicas: 2
  selector:
    name: redis-slave
  template:
    metadata:
      labels:
        name: redis-slave
    spec:
      containers:
      - name: slave
        image: kubeguide/guestbook-redis-slave
        ports:
        - containerPort: 6379
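ReplicationController is a legacy controller kept for compatibility; on a current cluster the same workload would normally be written as a Deployment. A minimal equivalent sketch (same image and labels as above):

# Deployment equivalent of the ReplicationController above
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
  labels:
    name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      name: redis-slave
  template:
    metadata:
      labels:
        name: redis-slave
    spec:
      containers:
      - name: slave
        image: kubeguide/guestbook-redis-slave
        ports:
        - containerPort: 6379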
Creating the pods fails with a permissions error: because the dev user in the context has no credentials, the request falls back to the anonymous user, which has no rights in the namespace and must be granted access:
[root@kubernetes k8s]# kubectl create -f redis-slave-conrtoller.yaml
Error from server (Forbidden): error when creating "redis-slave-conrtoller.yaml": replicationcontrollers is forbidden: User "system:anonymous" cannot create resource "replicationcontrollers" in API group "" in the namespace "development"
Inspect the cluster's admin.conf file to review the configuration parameters:
[root@kubernetes k8s]# cat /etc/kubernetes/admin.conf
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <base64 certificate data omitted>
    server: https://192.168.73.152:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: development
    user: dev
  name: ctx-dev
- context:
    cluster: kubernetes
    namespace: production
    user: prod
  name: ctx-prod
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: <base64 certificate data omitted>
    client-key-data: <base64 private key omitted>
Switch back to the default context:
[root@kubernetes k8s]# kubectl config use-context kubernetes-admin@kubernetes
Switched to context "kubernetes-admin@kubernetes".
[root@kubernetes k8s]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubernetes Ready master 182d v1.13.3
kubernetes-node1 Ready <none> 182d v1.13.3
kubernetes-node2 Ready <none> 182d v1.13.3
Bind the cluster-admin role to the user from the error message. (Note: granting cluster-admin to system:anonymous hands full cluster access to unauthenticated requests; this is acceptable only in a throwaway lab environment.)
[root@kubernetes k8s]# kubectl create clusterrolebinding system:anonymous --clusterrole=cluster-admin --user=system:anonymous
clusterrolebinding.rbac.authorization.k8s.io/system:anonymous created
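A more tightly scoped alternative (a sketch of what could be done instead, not what was run here) is to bind the built-in edit ClusterRole to each user only within its own namespace:

kubectl create rolebinding dev-edit --clusterrole=edit --user=dev --namespace=development
kubectl create rolebinding prod-edit --clusterrole=edit --user=prod --namespace=production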
Switch back to ctx-dev:
[root@kubernetes k8s]# kubectl config use-context ctx-dev
Create the pods:
[root@kubernetes k8s]# kubectl create -f redis-slave-conrtoller.yaml
replicationcontroller/redis-slave created
List the pods; creation succeeds now (the permissions error was fixed by the role binding above):
[root@kubernetes k8s]# kubectl get pod
NAME READY STATUS RESTARTS AGE
redis-slave-pndnq 0/1 ContainerCreating 0 12s
redis-slave-r69b2 0/1 ContainerCreating 0 12s
After waiting a while, one of the pods is running:
[root@kubernetes k8s]# kubectl get pod
NAME READY STATUS RESTARTS AGE
redis-slave-pndnq 1/1 Running 0 6m52s
redis-slave-r69b2 0/1 ImagePullBackOff 0 6m52s
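ImagePullBackOff usually means the node cannot pull the image (registry unreachable, rate limiting, or a typo in the image name). The pull events for the stuck pod can be inspected with:

kubectl describe pod redis-slave-r69b2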
Switch to the other environment, ctx-prod, to check whether it is isolated from ctx-dev:
[root@kubernetes k8s]# kubectl config use-context ctx-prod
Switched to context "ctx-prod".
[root@kubernetes k8s]# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health": "true"}
Listing the pods under ctx-prod shows none, which confirms that ctx-dev and ctx-prod are isolated from each other; the services running inside each environment do not interfere with one another.
[root@kubernetes k8s]# kubectl get pod
No resources found.
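The development pods still exist; they are simply in another namespace. A cluster-wide listing confirms this:

kubectl get pods --all-namespaces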
Create pods in ctx-prod as well:
[root@kubernetes k8s]# kubectl create -f redis-slave-conrtoller.yaml
replicationcontroller/redis-slave created
[root@kubernetes k8s]# kubectl get pods
NAME READY STATUS RESTARTS AGE
redis-slave-hdhdd 0/1 ContainerCreating 0 6s
redis-slave-llt4s 1/1 Running 0 6s
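To tear the demo down afterwards, deleting each namespace also deletes everything inside it, and the overly broad cluster role binding should be removed as well (run only once the resources are no longer needed):

kubectl delete namespace development
kubectl delete namespace production
kubectl delete clusterrolebinding system:anonymous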