Installing and Configuring KVM Virtual Machines

1. Hardware environment check

grep -E --color 'vmx|svm' /proc/cpuinfo

Any output means the CPU supports hardware virtualization.
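For a quick pass/fail check, you can count the matching flags instead (zero means no Intel VT-x/AMD-V support):

grep -c -E 'vmx|svm' /proc/cpuinfo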

2. Package installation

yum install qemu-kvm qemu-img libvirt-python python-virtinst libvirt-client virt-viewer bridge-utils

-> Omitting a few of these components is also fine. Alternatively, install the virtualization package groups:

yum groupinstall -y Virtualization "Virtualization Client" "Virtualization Platform" "Virtualization Tools"

To use bridged networking, install the bridge-utils package:

yum -y install bridge-utils

3. Disable the IPv6 firewall and turn off SELinux

chkconfig ip6tables off
setenforce 0    # temporary, until reboot
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

4. Check the kernel modules and start the daemon

Check the KVM modules: lsmod | grep kvm
Start the libvirt daemon: service libvirtd restart
modprobe kvm
modprobe kvm-intel
modprobe -l | grep kvm

5. Switch the NICs to bridged mode

First copy the original em1 and em2 NIC config files to br1 and br2, then change em1's config file to:

DEVICE="em1"
BOOTPROTO="static"
HWADDR="30:85:A9:9F:67:74"
NM_CONTROLLED="no"
ONBOOT="yes"
TYPE="Ethernet"
UUID="34096e10-ff72-4142-b7b3-e290d200b68a"
BRIDGE="br1"
Alternatively, one command can create a bridge: virsh iface-bridge em3 br3
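For reference, the bridge side of the pair looks like the sketch below (ifcfg-br1; the address values here are placeholders, substitute your own):

DEVICE="br1"
TYPE="Bridge"
ONBOOT="yes"
BOOTPROTO="static"
NM_CONTROLLED="no"
IPADDR="192.168.1.10"
NETMASK="255.255.255.0"
GATEWAY="192.168.1.1"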

Contents of the installation script create_kvm.sh:

virt-install \
--name Test \
--ram 1536 \
--vcpus=1 \
--disk path=/data/img/kvm_Test.img,size=50 \
--network bridge=br0 \
--cdrom=/opt/iso/CentOS-6.8-x86_64-minimal.iso \
--accelerate \
--vnclisten=0.0.0.0 \
--vncport=5911 \
--vnc

Start the VM:

virsh start Test

Check VM status:

virsh list --all

Convert raw to qcow2 format:

qemu-img convert -f raw -O qcow2 /data/img/kvm-Test.img /data/img/kvm-Test.qcow2
Verify:
qemu-img info /data/img/kvm-Test.qcow2

Edit the VM's configuration:

virsh edit Test

Clone a virtual server:

virt-clone -o Test -n Test1 -f /data/img/kvm-Test1.qcow2

Directories holding the files a running VM generates:

/var/run/libvirt/qemu/   (runtime files)
/etc/libvirt/qemu/       (configuration files)
virsh shutdown Test

If that errors out, suspend the VM first:

virsh suspend Test
cp -av /data/img/kvm-Test1.qcow2  /data/img/kvm-Test2.qcow2
virsh dumpxml Test1 > /etc/libvirt/qemu/Test2.xml
virsh define /etc/libvirt/qemu/Test2.xml

The clone exists, but it cannot be started yet; edit the config file first and change the VNC port (the XML dumped from Test1 also needs a unique name, UUID, disk path, and MAC before it can be defined):

virsh edit Test2

Note: the VNC port must not clash with any other virtual server's and must be within the usable range.
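One quick way to see which VNC ports the other domains already claim (a small sketch) is to grep the domain XML files:

grep "graphics type='vnc'" /etc/libvirt/qemu/*.xml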
Save the config and try to start it:

virsh start Test2

Connect via VNC Viewer.
Running ifconfig inside the guest shows no NIC information, so regenerate the NIC config:

more /etc/udev/rules.d/70-persistent-net.rules >>/etc/sysconfig/network-scripts/ifcfg-eth0

Then fix the MAC address and the corresponding IP, comment out the irrelevant lines, and save.

service network restart

Restart the libvirt service:

service libvirtd restart

Try starting VM Test2 again (omitted). Finally, install acpid in the guest so virsh shutdown can deliver ACPI power events:

yum install -y acpid
service acpid start
chkconfig acpid on

KVM Base Images and Incremental Images

Base and incremental images for KVM virtual machines

1. Overview

Goal: build one base image (node.img) containing the environment every VM needs, then create incremental (overlay) images on top of it, one per VM. All changes a VM makes are recorded in its incremental image; the base image never changes.
Benefit: saves disk space and makes VM duplication fast.
Environment:
Base image file: node.img    VM ID: node
Incremental image file: node4.img    VM ID: node4
Requirement: using base image node.img as the backing file, create image node4.img and from it a VM node4; node4's changes will be stored in node4.img.

2. Create the incremental image file

[root@target kvm_node]#qemu-img create -b node.img -f qcow2 node4.img
[root@target kvm_node]# qemu-img info node4.img 
image: node4.img
file format: qcow2
virtual size: 20G (21495808000 bytes)
disk size: 33M
cluster_size: 65536
backing file: node.img (actual path: node.img)

Note: this experiment only covers qcow2 images; whether it works with raw images has not been tested.

3. Create the XML config for VM node4

[root@target kvm_node]# cp /etc/libvirt/qemu/node.xml /etc/libvirt/qemu/node4.xml
[root@target kvm_node]# vim /etc/libvirt/qemu/node4.xml 
<domain type='kvm'>
  <name>node4</name>                                  # node4's VM name; must be changed or it conflicts with the base VM
  <uuid>4b7e91eb-6521-c2c6-cc64-c1ba72707fe4</uuid>   # node4's UUID; must be changed or it conflicts with the base VM
  <memory>524288</memory>
  <currentMemory>524288</currentMemory>
  <vcpu cpuset='0-1'>2</vcpu>
  <os>
    <type arch='x86_64' machine='rhel5.4.0'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='localtime'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/virhost/kvm_node/node4.img'/>    # changed from the original /virhost/kvm_node/node.img to node4.img
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <interface type='bridge'>
      <mac address='54:52:00:69:d5:f4'/>             # change the NIC MAC to avoid conflicts
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <interface type='bridge'>
      <mac address='54:52:00:69:d5:e4'/>            # change the NIC MAC to avoid conflicts
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <input type='mouse' bus='ps2'/>
    <graphics type='vnc' port='5904' autoport='no' listen='0.0.0.0' passwd='xiaobai'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='cirrus' vram='9216' heads='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </memballoon>
  </devices>
</domain>

4. Define VM node4 from the XML config

[root@target kvm_node]#virsh define /etc/libvirt/qemu/node4.xml
[root@target kvm_node]#virsh start node4  

5. Test

[root@target kvm_node]# du -h node.img

6.3G node.img

[root@target kvm_node]# du -h node4.img

33M node4.img

[root@node4 ~]# dd if=/dev/zero of=test bs=1M count=200   # create a 200 MB file inside VM node4
200+0 records in
200+0 records out
209715200 bytes (210 MB) copied, 1.00361 seconds, 209 MB/s
[root@target kvm_node]# du -h node.img                    # the base image node.img is unchanged

6.3G node.img

[root@target kvm_node]# du -h node4.img                   # the incremental image node4.img grew by about 200 MB
234M    node4.img

Common virsh Commands for Managing KVM VMs

Managing KVM virtual machines

Common VM management commands

  • List all VMs
virsh list --all
  • Show VM information
virsh dominfo kvm-1
  • Show VM memory and CPU usage
yum install virt-top -y
virt-top
  • Show VM partition information
virt-df kvm-1
  • Shut down a VM (shutdown)
virsh shutdown kvm-1
  • Start a VM
virsh start kvm-1
  • Make VM kvm-1 start with the host
virsh autostart kvm-1
  • Disable VM autostart
virsh autostart --disable kvm-1
  • Delete a VM definition
virsh undefine kvm-1
  • Log in to a VM through the console
virsh console kvm-1

Adding a disk to a VM

Attach a disk (LVM volume) or USB device to a VM:

virsh attach-disk kvm-1 /dev/sdb vdb --driver qemu --mode shareable
  • Detach it when finished:
virsh detach-disk kvm-1 vdb

Create an LVM volume, attach it, and mount it

[root@sh-kvm-1 ~]# lvcreate -n kvm-1-data -L 50G vg_shkvm1
[root@sh-kvm-1 ~]# virsh attach-disk kvm-1 /dev/vg_shkvm1/kvm-1-data vdb --driver qemu --mode shareable
Disk attached successfully
# log in to kvm-1 and check whether the LVM volume is visible
[root@sh-kvm-1 ~]# virsh console kvm-1 # enter kvm-1's username and password
[root@sh-kvm-1-1 ~]# fdisk -l # list the attached disks

Disk /dev/vda: 21.5 GB, 21474836480 bytes
16 heads, 63 sectors/track, 41610 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00058197

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *           3        1018      512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/vda2            1018       41611    20458496   8e  Linux LVM
Partition 2 does not end on cylinder boundary.

Disk /dev/mapper/VolGroup-lv_root: 18.8 GB, 18798870528 bytes
255 heads, 63 sectors/track, 2285 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/mapper/VolGroup-lv_swap: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/vdb: 53.7 GB, 53687091200 bytes  # the newly added disk
16 heads, 63 sectors/track, 104025 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
  • Partition and format the new vdb, then add it to the LVM volume group
# partition the new disk
[root@sh-kvm-1-1 ~]# fdisk /dev/vdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xf04b6807.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): m  # show help
Command action
   a   toggle a bootable flag
   b   edit bsd disklabel
   c   toggle the dos compatibility flag
   d   delete a partition
   l   list known partition types
   m   print this menu
   n   add a new partition
   o   create a new empty DOS partition table
   p   print the partition table
   q   quit without saving changes
   s   create a new empty Sun disklabel
   t   change a partition's system id
   u   change display/entry units
   v   verify the partition table
   w   write table to disk and exit
   x   extra functionality (experts only)
Command (m for help): n  # add a new partition
Command action
   e   extended
   p   primary partition (1-4)
p  # choose a primary partition
Partition number (1-4):
Value out of range.
Partition number (1-4): 1
First cylinder (1-104025, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-104025, default 104025):
Using default value 104025

Command (m for help): t  # change the partition type
Selected partition 1
Hex code (type L to list codes): 8e  # set to Linux LVM
Changed system type of partition 1 to 8e (Linux LVM)

Command (m for help): w  # write the changes
[root@sh-kvm-1-1 ~]# mkfs.ext4 /dev/vdb1  # format the partition
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
3276800 inodes, 13107142 blocks
655357 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
400 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
    4096000, 7962624, 11239424

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 28 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@sh-kvm-1-1 ~]# pvcreate /dev/vdb1   # create the PV
[root@sh-kvm-1-1 ~]# vgextend VolGroup /dev/vdb1  # extend the LVM VG
[root@sh-kvm-1-1 ~]# vgs
  VG       #PV #LV #SN Attr   VSize  VFree
  VolGroup   2   2   0 wz--n- 69.50g 50.00g
# as shown above, the new PV has been added to the volume group

Changing VM parameters

Change an existing VM's memory, CPU, and so on from the command line

  • Change memory
# 1. Check the VM's current memory
[root@sh-kvm-1 ~]# virsh dominfo kvm-1 | grep memory
Max memory:     4194304 KiB
Used memory:    4194304 KiB

# 2. Dynamically set memory to 512 MB (shrinking it)
virsh setmem kvm-1 524288
# note: the unit must be KiB

# 3. Check the change
# virsh dominfo kvm-1 | grep memory
Max memory: 4194304 KiB
Used memory: 524288 KiB

# 4. To increase memory:
virsh shutdown kvm-1
virsh edit kvm-1  # change memory directly
virsh create /etc/libvirt/qemu/kvm-1.xml
# then repeat steps 1-3 to verify the new memory
  • Change CPU count

The config file must be edited, so stop the VM first:

virsh shutdown kvm-1
virsh edit kvm-1
# e.g. change <vcpu>4</vcpu> to <vcpu>2</vcpu>
virsh create /etc/libvirt/qemu/kvm-1.xml
  • Add disk capacity
1. Create a 10-GB non-sparse file:
# dd if=/dev/zero of=/vm-images/vm1-add.img bs=1M count=10240
2. Shut down the VM:
 # virsh shutdown vm1
3. Add an extra 'disk' entry in the VM's XML file in /etc/libvirt/qemu. You can copy & paste
the entry for your main storage device and just change the target and address tags. For example:
 # virsh edit vm1

 <disk type='file' device='disk'>
   <driver name='qemu' type='raw'/>
   <source file='/vm-images/vm1.img'/>
   <target dev='vda' bus='virtio'/>
   <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
 </disk>

 Add:

 <disk type='file' device='disk'>
   <driver name='qemu' type='raw'/>
   <source file='/vm-images/vm1-add.img'/>
   <target dev='vdb' bus='virtio'/>
   <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
 </disk>

 # attaching the disk with virsh attach-disk, as in the section above, is the recommended approach

Deleting a VM

  • Step 1: stop the VM
virsh shutdown kvm-1
  • Step 2: force it off
virsh destroy kvm-1
  • Step 3: remove its definition
virsh undefine kvm-1
  • Step 4: remove its storage
rm /dev/vg_shkvm1/kvm-1  # deleting the disk is not recommended

CentOS 7 nginx + keepalived Master/Backup Example

nginx1 ip:192.168.12.4 #MASTER
nginx2 ip:192.168.12.10 #BACKUP
nginx_vip :192.168.12.100

For the underlying theory see:
http://www.keepalived.org/documentation.html
The OS is CentOS 7.

1. Configure the yum repos

curl -L http://mirrors.aliyun.com/repo/Centos-7.repo > /etc/yum.repos.d/CentOS-Base.repo
curl -L http://mirrors.aliyun.com/repo/epel-7.repo > /etc/yum.repos.d/epel.repo

yum -y install keepalived nginx

2. Basic server setup

Disable the firewall: iptables -F; service iptables save
Disable SELinux: setenforce 0; sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux

3. Configure nginx so the two test servers are distinguishable

192.168.12.4:
echo 192.168.12.4 > /usr/share/nginx/html/index.html

192.168.12.10:
echo 192.168.12.10 > /usr/share/nginx/html/index.html

4. Configure keepalived.conf

[root@nginx1 keepalived]# cat keepalived.conf 
! Configuration File for keepalived
global_defs {
    notification_email {
        [email protected]
    }
    notification_email_from [email protected]
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}

vrrp_script check_nginx {
    script "sh /etc/keepalived/check_nginx.sh"
    interval 2
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    mcast_src_ip 192.168.12.4
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.12.100
    }
    track_script {
        check_nginx
    }
}

[root@nginx2 keepalived]# cat keepalived.conf 
! Configuration File for keepalived
global_defs {
    notification_email {
        [email protected]
    }
    notification_email_from [email protected]
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}

vrrp_script check_nginx {
    script "sh /etc/keepalived/check_nginx.sh"
    interval 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    mcast_src_ip 192.168.12.10
    virtual_router_id 51
    priority 99
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.12.100
    }
    track_script {
        check_nginx
    }
}

The check_nginx.sh script:

#!/bin/bash

# if nginx has died, try to restart it; if it still is not running,
# stop keepalived so the VIP fails over to the backup
A=`pgrep nginx|wc -l`
if [ $A -eq 0 ];then
    /bin/systemctl start nginx.service
    if [ `pgrep nginx|wc -l` -eq 0 ];then
        /bin/systemctl stop keepalived.service
    fi
fi
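The script can be exercised by hand before keepalived drives it (a quick sketch): stop nginx, run the script, and confirm nginx comes back.

systemctl stop nginx.service
sh /etc/keepalived/check_nginx.sh
pgrep nginx | wc -l    # non-zero again if the restart succeeded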

5. Start

Run the start commands on both servers:

service nginx start
service keepalived start

6. Verify

[root@nginx1 keepalived]# ip addr
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host 
valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:85:83:e8 brd ff:ff:ff:ff:ff:ff
inet 192.168.12.4/24 brd 192.168.12.255 scope global eth0
valid_lft forever preferred_lft forever
inet 192.168.12.100/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe85:83e8/64 scope link 
valid_lft forever preferred_lft forever
[root@nginx2 keepalived]# ip addr
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host 
valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:62:6a:50 brd ff:ff:ff:ff:ff:ff
inet 192.168.12.10/24 brd 192.168.12.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe62:6a50/64 scope link 
valid_lft forever preferred_lft forever

A close look shows the 192.168.12.4 server has an extra IP, 192.168.12.100, which is the VIP.
The master's log also contains this line:

Jul 12 10:30:57 localhost Keepalived_vrrp[2510]: VRRP_Instance(VI_1) Entering MASTER STATE

The backup's log:

Jul 11 00:01:00 localhost Keepalived_vrrp[111937]: VRRP_Instance(VI_1) Entering BACKUP STATE

So the master/backup pair is up and working.

7. Simulate a failure of the master nginx

Kubernetes (k8s) Pod Autoscaling Explained

Kubernetes HPA (Horizontal Pod Autoscaling) scales the number of Pods in a service horizontally. With simple configuration, the cluster uses monitoring metrics (CPU utilization, etc.) to automatically grow or shrink the Pod count; when business load rises, the system seamlessly adds containers, improving stability. This article explains HPA's core design and how to use it with Heapster.

1. HPA Overview

HPA is designed as a controller in Kubernetes and can be created simply with the kubectl autoscale command. The HPA controller polls every 30 seconds by default, queries the resource utilization of the specified resource (Deployment, RC), compares it against the target set at creation time, and scales accordingly.

  • After you create an HPA, it fetches from Heapster or a user-defined REST client the average per-Pod utilization or raw value (depending on the target type) of the specified resource, compares that with the target defined in the HPA, computes the required scale, and acts on it.

HPA does not work when the Pods have no resource request set.

  • Currently, HPA can obtain data from two sources:

  • Heapster (stable; CPU utilization only; must be installed manually when using Tencent Cloud Container Service).

  • Custom monitoring (alpha; not recommended for production).

  • When pulling data from custom monitoring, only absolute values can be targeted, not utilization percentages.

  • Only Replication Controllers, Deployments, and Replica Sets can currently be scaled.

2. Autoscaling Algorithm

  • The HPA controller adjusts the replica count so CPU utilization moves toward the target, not to exact equality. The official design also accounts for decisions taking time to show effect: for example, while a new pod is being created under heavy CPU load, system CPU usage may keep climbing. So after each decision, no further scaling decision is made for a cooldown window: 3 minutes for scale-up, 5 minutes for scale-down.

  • The HPA controller has a tolerance concept that allows some instability in usage, 0.1 by default, again for system stability. For example, with a policy of scaling when CPU exceeds 50%, scaling only triggers once utilization goes above 55% or below 45%; HPA tries to keep Pod utilization inside that band.

  • The formula for how many Pods each scale event targets is:

        desiredPods = ceil(currentUtilization / targetUtilization * currentPods)
  • Each scale-up will not exceed twice the current replica count (a worked example follows below).
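A quick worked example of the formula and the cap: with 3 Pods averaging 20% CPU against a 10% target,

    desiredPods = ceil(20 / 10 * 3) = 6

and since 6 does not exceed twice the current count (2 * 3 = 6), the resource scales to 6 replicas in one step.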

3. HPA YAML Explained

Below is a standard Heapster-based HPA YAML with the key fields annotated:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  creationTimestamp: 2017-06-29T08:04:08Z
  name: nginxtest
  namespace: default
  resourceVersion: "951016361"
  selfLink: /apis/autoscaling/v1/namespaces/default/horizontalpodautoscalers/nginxtest
  uid: 86febb63-5ca1-11e7-aaef-5254004e79a3
spec:
  maxReplicas: 5 # maximum number of replicas
  minReplicas: 1 # minimum number of replicas
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment # kind of the resource to scale
    name: nginxtest  # name of the resource to scale
  targetCPUUtilizationPercentage: 50 # CPU utilization that triggers scaling
status:
  currentCPUUtilizationPercentage: 48 # current CPU utilization of the resource's pods
  currentReplicas: 1 # current replica count
  desiredReplicas: 2 # desired replica count
  lastScaleTime: 2017-07-03T06:32:19Z

4. How to Use It

  • As covered above, the HPA controller has two data sources: Heapster and custom monitoring. Since custom monitoring is still alpha, this article focuses on the Heapster-based approach on Tencent Cloud Container Service.

  • Tencent Cloud Container Service does not install Heapster by default, so it must be installed manually to use HPA.

  • This method requires kubectl access to the cluster; for now, the apiserver address, account, and certificates can be requested via a support ticket, and a productized solution is being designed.

4.1 Create Heapster

Create the Heapster ServiceAccount:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: heapster
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile

Create the Heapster Deployment:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster-v1.4.0-beta.0
  namespace: kube-system
  labels:
    k8s-app: heapster
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    version: v1.4.0-beta.0
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: heapster
      version: v1.4.0-beta.0
  template:
    metadata:
      labels:
        k8s-app: heapster
        version: v1.4.0-beta.0
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      containers:
        - image: ccr.ccs.tencentyun.com/library/heapster-amd64:v1.4.0-beta.0
          name: heapster
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8082
              scheme: HTTP
            initialDelaySeconds: 180
            timeoutSeconds: 5
          command:
            - /heapster
            - --source=kubernetes.summary_api:''
        - image: ccr.ccs.tencentyun.com/library/addon-resizer:1.7
          name: heapster-nanny
          resources:
            limits:
              cpu: 50m
              memory: 90Mi
            requests:
              cpu: 50m
              memory: 90Mi
          command:
            - /pod_nanny
            - --cpu=80m
            - --extra-cpu=0.5m
            - --memory=140Mi
            - --extra-memory=4Mi
            - --threshold=5
            - --deployment=heapster-v1.4.0-beta.0
            - --container=heapster
            - --poll-period=300000
            - --estimator=exponential
      serviceAccountName: heapster

Create the Heapster Service:

kind: Service
apiVersion: v1
metadata: 
  name: heapster
  namespace: kube-system
  labels: 
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Heapster"
spec: 
  ports: 
    - port: 80
      targetPort: 8082
  selector: 
    k8s-app: heapster

Save the files above and create them with kubectl create -f FileName.yaml; once created, check them with kubectl get:

$ kubectl get deployment heapster-v1.4.0-beta.0 -n=kube-system
NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
heapster-v1.4.0-beta.0   1         1         1            1           1m
$ kubectl get svc heapster -n=kube-system
NAME       CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
heapster   172.16.255.119   <none>        80/TCP    4d

4.2 Create a Service

Create a service for testing; it can be created from the console, with the instance count set to 1.

4.3 Create the HPA

Now create an HPA with the following command:

$ kubectl autoscale deployment nginxtest --cpu-percent=10 --min=1 --max=10
deployment "nginxtest" autoscaled
$ kubectl get hpa                                                         
NAME        REFERENCE              TARGET    CURRENT   MINPODS   MAXPODS   AGE
nginxtest   Deployment/nginxtest   10%       0%        1         10        13s

This creates an HPA bound to the resource nginxtest, with a minimum of 1 Pod replica and a maximum of 10. HPA will grow or shrink the Pod count based on the configured CPU utilization (10%); the threshold is set low here because this is just a test.

4.4 Test

4.4.1 Increase Load

Create a busybox Pod and hit the service above in a loop.

$ kubectl run -i --tty load-generator --image=busybox /bin/sh
If you don't see a command prompt, try pressing enter.
/ # while true; do wget -q -O- http://172.16.255.60:4000; done

The output below shows the HPA has started working.

$ kubectl get hpa
NAME        REFERENCE              TARGET    CURRENT   MINPODS   MAXPODS   AGE
nginxtest   Deployment/nginxtest   10%       29%        1         10        27m

Checking the nginxtest resource, the replica count has grown from 1 to 3.

$ kubectl get deployment nginxtest
NAME        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginxtest   3         3         3            3           4d

Checking the HPA again, utilization is holding around 10% thanks to the added replicas.

$ kubectl get hpa
NAME        REFERENCE              TARGET    CURRENT   MINPODS   MAXPODS   AGE
nginxtest   Deployment/nginxtest   10%       9%        1         10        35m

4.4.2 Decrease Load

Kill the busybox Pod and wait a while.

$ kubectl get hpa     
NAME        REFERENCE              TARGET    CURRENT   MINPODS   MAXPODS   AGE
nginxtest   Deployment/nginxtest   10%       0%        1         10        48m
$ kubectl get deployment nginxtest
NAME        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginxtest   1         1         1            1           4d

The replica count has dropped from 3 back to 1.

5. Summary

This article covered HPA's principles and usage. It autoscales a service's container count and is a real boost to service stability, but the current stable version supports only the CPU utilization metric, which is a significant limitation. We will keep following the community's custom-metrics HPA work and publish documentation once the feature stabilizes.

Running a StatefulSet-Style Stateful Instance on Kubernetes (k8s)

Goals

  Create a PV in your environment
  Create a MySQL Deployment
  Expose MySQL to other pods in the cluster under a DNS name

Before You Begin

  You need a Kubernetes cluster and a kubectl command-line tool that can connect to it. If you have no cluster, you can create one with Minikube.
  We will create a PV (PersistentVolume) for data storage. See here for the supported PV types; this guide uses GCEPersistentDisk for the demonstration, but any PV type will work. GCEPersistentDisk only works on Google Compute Engine (GCE).

Create a Disk in Your Environment

  On Google Compute Engine, run:

gcloud compute disks create --size=20GB mysql-disk

  Then create a PV pointing at the mysql-disk just created. Below is a PV config file pointing at the GCE disk above:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: mysql-disk
    fsType: ext4

  Note that the line pdName: mysql-disk matches the name of the disk created in the GCE environment above. To create a PV in a different environment, see Persistent Volumes for details.
  Create the PV:

kubectl create -f https://k8s.io/docs/tasks/run-application/gce-volume.yaml

Deploy MySQL

  You can run a stateful application as a Kubernetes Deployment and use a PVC (PersistentVolumeClaim) to bind to the existing PV. For example, the YAML below describes a Deployment that runs MySQL and uses the PVC. It defines a volume mounted at /var/lib/mysql and creates a PVC requesting a 20G volume.
  Note: the password is defined in the YAML config, which is insecure. See Kubernetes Secrets for a safer approach.

apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mysql
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
          # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim

  1. Deploy the contents of the YAML file:

kubectl create -f https://k8s.io/docs/tasks/run-application/mysql-deployment.yaml

  2. Show the Deployment's details:

kubectl describe deployment mysql

 Name:                 mysql
 Namespace:            default
 CreationTimestamp:    Tue, 01 Nov 2016 11:18:45 -0700
 Labels:               app=mysql
 Selector:             app=mysql
 Replicas:             1 updated | 1 total | 0 available | 1 unavailable
 StrategyType:         Recreate
 MinReadySeconds:      0
 OldReplicaSets:       <none>
 NewReplicaSet:        mysql-63082529 (1/1 replicas created)
 Events:
   FirstSeen    LastSeen    Count    From                SubobjectPath    Type        Reason            Message
   ---------    --------    -----    ----                -------------    --------    ------            -------
   33s          33s         1        {deployment-controller }             Normal      ScalingReplicaSet Scaled up replica set mysql-63082529 to 1

  3. Show the pod the Deployment created:

kubectl get pods -l app=mysql

 NAME                   READY     STATUS    RESTARTS   AGE
 mysql-63082529-z3ki   1/1       Running   0          m

  4. Inspect the PV:

 kubectl describe pv mysql-pv

 Name:            mysql-pv
 Labels:          <none>
 Status:          Bound
 Claim:           default/mysql-pv-claim
 Reclaim Policy:  Retain
 Access Modes:    RWO
 Capacity:        20Gi
 Message:    
 Source:
     Type:        GCEPersistentDisk (a Persistent Disk resource in Google Compute Engine)
     PDName:      mysql-disk
     FSType:      ext4
     Partition:   0
     ReadOnly:    false
 No events.

  5. Inspect the PVC:

 kubectl describe pvc mysql-pv-claim

 Name:         mysql-pv-claim
 Namespace:    default
 Status:       Bound
 Volume:       mysql-pv
 Labels:       <none>
 Capacity:     20Gi
 Access Modes: RWO
 No events.

Access the MySQL Instance

  The YAML above created a service that lets other Pods in the cluster reach the database. The clusterIP: None option makes the service's DNS name resolve directly to the Pod IP. This is the best approach when the service has exactly one Pod and you never intend to scale it.
  Run a MySQL client to connect to the server:

kubectl run -it --rm --image=mysql:5.6 mysql-client -- mysql -h <pod-ip> -ppassword

  The command above creates a new Pod in the cluster, running a mysql client that connects to the MySQL server through the service. If it connects, the stateful MySQL database is up and running.

Waiting for pod default/mysql-client-274442439-zyp6i to be running, status is Pending, pod ready: false
If you don't see a command prompt, try pressing enter.

mysql>
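Because the service is headless (clusterIP: None), its DNS name resolves straight to the Pod's IP. A quick way to confirm the record from inside the cluster (a sketch, assuming cluster DNS is enabled, in the same kubectl run style used above):

kubectl run -it --rm --image=busybox dns-test -- nslookup mysql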

Updating

  To update the image or any other part of the Deployment, use kubectl apply as usual. Things to keep in mind with stateful applications:

  Don't scale the app. This setup is for single-instance apps only; the PV here maps to a single Pod. For clustered stateful applications, see the StatefulSet documentation.
  Use strategy: type: Recreate in the Deployment's YAML config. It tells Kubernetes not to use rolling updates. Rolling updates cannot work here, because multiple Pods cannot run at the same time. The Recreate strategy deletes the old Pod before creating a new one with the updated config.

Deleting the Deployment

  Delete the deployed objects by name:

kubectl delete deployment,svc mysql
kubectl delete pvc mysql-pv-claim
kubectl delete pv mysql-pv

  If you used a GCE disk, also delete the disk itself:

gcloud compute disks delete mysql-disk

Expanding the Root Filesystem with LVM on RedHat 6.5

I. Add physical space

(Screenshots omitted: adding new disk space in the hypervisor's system management UI.)

II. Create a new partition in Linux

1. First check the disk with fdisk -l; if the disk has unallocated space, it can be partitioned.

[root@master Desktop]# fdisk -l

Disk /dev/sda: 64.4 GB, 64424509440 bytes
255 heads, 63 sectors/track, 7832 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0004bbc1

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          64      512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              64        2611    20458496   8e  Linux LVM

Disk /dev/mapper/vg_hadoop-lv_root: 18.8 GB, 18798870528 bytes
255 heads, 63 sectors/track, 2285 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/mapper/vg_hadoop-lv_swap: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

2. Partition /dev/sda

fdisk /dev/sda

Command (m for help): m   # enter m to view the help

Command (m for help): n   # enter n to create a new partition
[root@master ~]# fdisk /dev/sda

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): m
Command action
   a   toggle a bootable flag
   b   edit bsd disklabel
   c   toggle the dos compatibility flag
   d   delete a partition
   l   list known partition types
   m   print this menu
   n   add a new partition
   o   create a new empty DOS partition table
   p   print the partition table
   q   quit without saving changes
   s   create a new empty Sun disklabel
   t   change a partition's system id
   u   change display/entry units
   v   verify the partition table
   w   write table to disk and exit
   x   extra functionality (experts only)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)

3. Create an extended partition

There are primary and extended partitions; logical partitions are created inside the extended partition. Note the (1-4) in the prompt: at most four primary partitions (the extended partition counts as one) can exist. Here we create the extended partition.

Enter: e   # create an extended partition

Partition number (1-4): 3   # because sda1 and sda2 already exist

First cylinder (2611-7832, default 2611) and Last cylinder, +cylinders or +size{K,M,G} (2611-7832, default 7832): just press Enter to accept the defaults

Command (m for help): n   # full transcript of the steps above, then p to view the result:
Command action
   e   extended
   p   primary partition (1-4)
e
Partition number (1-4): 3
First cylinder (2611-7832, default 2611): 
Using default value 2611
Last cylinder, +cylinders or +size{K,M,G} (2611-7832, default 7832): 
Using default value 7832

Command (m for help): p

Disk /dev/sda: 64.4 GB, 64424509440 bytes
255 heads, 63 sectors/track, 7832 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0004bbc1

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          64      512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              64        2611    20458496   8e  Linux LVM
/dev/sda3            2611        7832    41939020    5  Extended

4. With the extended partition in place, create a logical partition inside it

Command (m for help): n

Enter: l   # create a logical partition

Command (m for help): n   # full transcript, then p to view the result:
Command action
   l   logical (5 or over)
   p   primary partition (1-4)
l
First cylinder (2611-7832, default 2611): 
Using default value 2611
Last cylinder, +cylinders or +size{K,M,G} (2611-7832, default 7832): 
Using default value 7832

Command (m for help): p

Disk /dev/sda: 64.4 GB, 64424509440 bytes
255 heads, 63 sectors/track, 7832 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0004bbc1

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          64      512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              64        2611    20458496   8e  Linux LVM
/dev/sda3            2611        7832    41939020    5  Extended
/dev/sda5            2611        7832    41938988+  83  Linux

5. The listing above shows an extended partition and a logical partition have been created, but nothing takes effect until the table is written out.

Command (m for help): w   # write and exit

Run reboot for the new table to take effect (or see the partprobe note after the transcript below).
Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
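As the kernel message itself suggests, a reboot can usually be avoided by asking the kernel to re-read the partition table:

partprobe /dev/sda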

III. Grow the root filesystem by adding the new partition to LVM

1. Create the physical volume

fdisk -l
[root@master local]# fdisk -l

Disk /dev/sda: 64.4 GB, 64424509440 bytes
255 heads, 63 sectors/track, 7832 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0004bbc1

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          64      512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              64        2611    20458496   8e  Linux LVM
/dev/sda3            2611        7832    41939020    5  Extended
/dev/sda5            2611        7832    41938988+  83  Linux

Disk /dev/mapper/vg_hadoop-lv_root: 18.8 GB, 18798870528 bytes
255 heads, 63 sectors/track, 2285 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/mapper/vg_hadoop-lv_swap: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

pvcreate /dev/sda5

[root@master local]# pvcreate /dev/sda5
  Physical volume "/dev/sda5" successfully created

2. Inspect the new physical volume

pvdisplay /dev/sda5
[root@master local]# pvdisplay /dev/sda5
  "/dev/sda5" is a new physical volume of "40.00 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sda5
  VG Name               
  PV Size               40.00 GiB
  Allocatable           NO
  PE Size               0   
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               qX00lY-nkpd-4txl-HFwM-6NuT-wMqu-yEFehV

3. Extend the volume group

vgdisplay
[root@master local]# vgdisplay
  --- Volume group ---
  VG Name               vg_hadoop
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               19.51 GiB
  PE Size               4.00 MiB
  Total PE              4994
  Alloc PE / Size       4994 / 19.51 GiB
  Free  PE / Size       0 / 0   
  VG UUID               iQqDwB-Ft3T-aFfh-7nwK-alS3-LSMo-Uid9nz

vgextend vg_hadoop /dev/sda5

[root@master local]# vgextend vg_hadoop /dev/sda5
  Volume group "vg_hadoop" successfully extended

4. Inspect the extended volume group

vgdisplay
[root@master local]# vgdisplay
  --- Volume group ---
  VG Name               vg_hadoop
  System ID             
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               59.50 GiB
  PE Size               4.00 MiB
  Total PE              15232
  Alloc PE / Size       4994 / 19.51 GiB
  Free  PE / Size       10238 / 39.99 GiB
  VG UUID               iQqDwB-Ft3T-aFfh-7nwK-alS3-LSMo-Uid9nz

5. Extend the logical volume

df -h
[root@master local]# df -h
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/vg_hadoop-lv_root   18G   12G  5.1G  70% /
tmpfs                          1.9G  224K  1.9G   1% /dev/shm
/dev/sda1                      485M   40M  421M   9% /boot
/dev/sr0                       3.6G  3.6G     0 100% /media/RHEL_6.5 x86_64 Disc 1

lvextend -L +38G /dev/mapper/vg_hadoop-lv_root

[root@master local]# lvextend -L +38G /dev/mapper/vg_hadoop-lv_root 
  Extending logical volume lv_root to 55.51 GiB
  Logical volume lv_root successfully resized

6. Inspect the extended logical volume

lvdisplay /dev/vg_hadoop/lv_root
[root@master local]# lvdisplay /dev/vg_hadoop/lv_root
  --- Logical volume ---
  LV Path                /dev/vg_hadoop/lv_root
  LV Name                lv_root
  VG Name                vg_hadoop
  LV UUID                wv0vJ6-c5Dd-Su9k-7dSV-P3KE-CF88-ElqYFA
  LV Write Access        read/write
  LV Creation host, time hadoop, 2017-07-05 18:56:16 +0800
  LV Status              available
  # open                 1
  LV Size                55.51 GiB
  Current LE             14210
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

7. Grow the filesystem

resize2fs /dev/vg_hadoop/lv_root
[root@master local]# resize2fs /dev/vg_hadoop/lv_root
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/vg_hadoop/lv_root is mounted on /; on-line resizing required
old desc_blocks = 2, new_desc_blocks = 4
Performing an on-line resize of /dev/vg_hadoop/lv_root to 14551040 (4k) blocks.
The filesystem on /dev/vg_hadoop/lv_root is now 14551040 blocks long.

8. Done

df -h
[root@master local]# df -h
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/vg_hadoop-lv_root   55G   12G   41G  22% /
tmpfs                          1.9G  224K  1.9G   1% /dev/shm
/dev/sda1                      485M   40M  421M   9% /boot
/dev/sr0                       3.6G  3.6G     0 100% /media/RHEL_6.5 x86_64 Disc 1

LVS Load Balancing Architecture and Principles

High availability / clustering

Linux Virtual Server
IPVS: built into the Linux kernel
ipvsadm: the management utility

Load balancers

1. Hardware:

 F5 BIG-IP
 Citrix NetScaler
 A10

2. Software

 Layer 4: the fourth layer, on top of TCP
           LVS: operates only on IPs and ports, inside the kernel.
 Layer 7: http, ajp, https (application layer)
           nginx
           haproxy
           httpd

Scheduling methods (static and dynamic)

Four static methods:

     rr: round-robin
     wrr: weighted round-robin
     dh: destination hashing
     sh: source hashing

Dynamic scheduling methods:

     lc: least connections
               active*256+inactive
               the smallest score wins
     wlc: weighted least connections
               (active*256+inactive)/weight
     sed: shortest expected delay
               (active+1)*256/weight
     nq: never queue
     LBLC: locality-based least connections
               a dynamic form of DH
     LBLCR: locality-based least connections with replication
     default method: wlc (a worked comparison follows below)
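A worked comparison of wlc and sed: suppose server A has weight 1 and server B has weight 3, both idle (active=0, inactive=0). Under wlc both score (0*256+0)/weight = 0 and either may be chosen; under sed A scores (0+1)*256/1 = 256 while B scores (0+1)*256/3, about 85, so the first connection goes to B, the more capable server.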

Types:

     NAT: network address translation
     DR: direct routing
     TUN: tunneling

  NAT:
               cluster nodes and the director must be on the same IP network;
               RIPs are usually private addresses, used only for traffic between cluster nodes;
               the director sits between clients and real servers and handles all traffic in and out;
               real servers must point their gateway at the DIP;
               port mapping is supported;
               real servers can run any OS;
               in larger deployments the director easily becomes the system bottleneck;
               VIP: the virtual server address
               DIP: the director's forwarding address
               RIP: the real (backend) server address
               CIP: the client IP address
 DR:
              cluster nodes and the director must be on the same physical network;
              the backend (real) servers can use public addresses, easing remote management and monitoring;
              the director handles only inbound requests; responses go from the real server straight to the client;
              port mapping is not supported;


What a "hidden" VIP means:

it is not advertised (no ARP broadcasts for it)

it does not answer ARP requests addressed to it

Node1
VIP: 192.168.1.251
DIP: 192.168.239.3
the cluster server (director)

Node2
RIP: 192.168.1.248
VIP: 192.168.239.10
RS, Apache

Node3
RIP: 192.168.1.249
VIP: 192.168.239.10
RS, Apache

TUN:

               cluster nodes can span the Internet;
               RIPs must be public addresses;
               the director handles only inbound requests; responses go from the real server straight to the client;
               real servers' gateways must not point at the director;
               only OSes with tunneling support can be real servers;
               port mapping is not supported;

The ipvsadm command:

  Managing cluster services
               add: -A -t|u|f service-address [-s scheduler]
                         -t: a TCP cluster service
                         -u: a UDP cluster service
                                  service-address:     IP:PORT
                         -f: FWM, a firewall mark
                                  service-address: Mark Number
               modify: -E
               delete: -D -t|u|f service-address

               #ipvsadm -A -t 172.16.100.1:80 -s rr

  Managing the RSes of a cluster service
               add: -a -t|u|f service-address -r server-address [-g|i|m] [-w weight]
                           -t|u|f service-address: a previously defined cluster service
                           -r server-address: the address of an RS; in the NAT model, IP:PORT enables port mapping;
                           [-g|i|m]: the LVS type
                                  -g: DR
                                  -i: TUN
                                  -m: NAT
                         [-w weight]: the server weight
               modify: -e
               delete: -d -t|u|f service-address -r server-address

               #ipvsadm -a -t 172.16.100.1:80 -r 192.168.10.8 -g
               #ipvsadm -a -t 172.16.100.1:80 -r 192.168.10.9 -g
     Viewing
               -L|l
                         -n: show hosts and ports numerically
                         --stats: statistics
                         --rate: rates
                         --timeout: show the tcp, tcpfin and udp session timeouts
                         -c: show current ipvs connections

     Delete all cluster services
               -C: flush the ipvs rules
     Save the rules
               -S
               #ipvsadm -S > /path/to/somefile
     Reload previously saved rules:
               -R
               # ipvsadm -R < /path/from/somefile

DR mode

VIP: the virtual server IP
DIP: the director's IP
kernel parameters:
arp_ignore: defines how to respond to received ARP requests;
0: respond as long as the address is configured anywhere on the local machine;
1: respond only if the target address is configured on the interface the request arrived on;

               arp_announce: defines how eagerly local addresses are announced in ARP;
                        0: announce any address on any local interface;
                        1: try to announce only addresses that match the target network;
                        2: announce only addresses that match the network of the local interface;
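To make these kernel parameters survive a reboot, the same values can be set in /etc/sysctl.conf instead of echoing into /proc (a sketch; apply with sysctl -p):

net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_announce = 2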

-------------------------------- DR mode example --------------------------------

LVS DR mode cluster steps

1. Pick a host to act as the DR (director / virtual server) and install ipvsadm:

   a) yum install ipvsadm

2. Configure two IP addresses on the DR:

       a) DIP: 192.168.1.134, configured statically
       b) VIP: 192.168.1.200, via ifconfig eth0:1 192.168.1.200/24 (to remove the binding: ifconfig eth0:1 down)

3. Pick several machines as RSes (Apache or Tomcat):

   a) two of them, statically configured as 192.168.1.137
                                            192.168.1.138
   b) adjust how replies announce their source IP by setting kernel parameters:
                     i.  echo 1 >/proc/sys/net/ipv4/conf/eth0/arp_ignore
                     ii. echo 1 >/proc/sys/net/ipv4/conf/all/arp_ignore
                     iii.echo 2 >/proc/sys/net/ipv4/conf/eth0/arp_announce
                     iv. echo 2 >/proc/sys/net/ipv4/conf/all/arp_announce
       c)  on both RSes, configure the VIP 192.168.1.200 as a loopback alias:
                      i. ifconfig lo:0 192.168.1.200 netmask 255.255.255.255 broadcast 192.168.1.200
       d)  on both RSes, add a host route:
                       i. route add -host 192.168.1.200 dev lo:0

4. On the DR, add a route as well: route add -host 192.168.1.200 dev eth0:1

5. Check on each RS that the web service works.

6. On the DR, add the cluster service with ipvsadm:

       a)  ipvsadm -C
       b)  ipvsadm -A -t 192.168.1.200:80 -s wlc
       c)  ipvsadm -a -t 192.168.1.200:80 -r 192.168.1.137 -g -w 1
       d)  ipvsadm -a -t 192.168.1.200:80 -r 192.168.1.138 -g -w 1

Real-server setup script:

#!/bin/bash
# description: start/stop the LVS real-server settings
VIP=125.38.38.64
. /etc/rc.d/init.d/functions
case "$1" in
start)
    echo "start LVS real server"
    echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
    # bring up the VIP on a loopback alias so this RS accepts it without ARPing for it
    /sbin/ifconfig lo:0 $VIP broadcast $VIP netmask 255.255.255.255 up
    ;;
stop)
    /sbin/ifconfig lo:0 down
    echo "stop LVS real server"
    echo "0" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "0" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "0" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "0" >/proc/sys/net/ipv4/conf/all/arp_announce
    ;;
*)
    echo "Usage: $0 {start|stop}"
    exit 1
esac

9 Common iptables Configuration Examples

The iptables command configures Linux packet filtering rules and is commonly used for firewalls and NAT. At first glance iptables configuration looks complicated, but once you grasp the patterns, getting a given job done is not hard. The examples below walk through iptables usage in detail.

1. Flush existing rules

When setting new iptables rules, first make sure the old ones are cleared:

iptables -F
(or iptables --flush)

2. Set chain policies

For the filter table the default chain policy is ACCEPT; change it with:

iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT DROP

These commands drop all incoming, forwarded, and outgoing packets, a fairly strict stance. Since both input and output default to DROP, subsequent rules must be configured for INPUT and OUTPUT separately. If outgoing traffic from this host is trusted, the third rule can be skipped.

3. Block a specific IP

When some IP keeps flooding the server, drop its packets with:

BLOCK_THIS_IP="x.x.x.x"
iptables -A INPUT -i eth0 -p tcp -s "$BLOCK_THIS_IP" -j DROP

This drops TCP packets sent from IP x.x.x.x to the eth0 interface.

4. Allow specific services

iptables can lock down everyday services, for example allowing SSH to this host only from a given subnet and network interface:

iptables -A INPUT -i eth0 -p tcp -s 192.168.100.0/24 --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -o eth0 -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT

To allow SSH from this host out to other machines, connections originate locally, so the reverse rules are needed too:

iptables -A OUTPUT -o eth0 -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A INPUT -i eth0 -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT

Similarly, TCP-based services like HTTP/HTTPS (80/443), POP3 (110), rsync (873), and MySQL (3306) can be configured following the pattern above, as the example below shows.
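For instance, inbound HTTP under the same pattern would look like this (a sketch mirroring the SSH rules above, without the source-subnet restriction):

iptables -A INPUT -i eth0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -o eth0 -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT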

For the UDP-based DNS service, open the port with:

iptables -A OUTPUT -p udp -o eth0 --dport 53 -j ACCEPT
iptables -A INPUT -p udp -i eth0 --sport 53 -j ACCEPT

5. Interface forwarding

For a server acting as a firewall or gateway, one interface connects to the public network and packets from the other interfaces are forwarded to it so the internal network can reach the outside. Assuming eth0 is internal and eth1 public:

iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT

6. Port forwarding

iptables can also forward between ports:

iptables -t nat -A PREROUTING -p tcp -d 192.168.102.37 --dport 422 -j DNAT --to 192.168.102.37:22

This forwards packets arriving on port 422 to port 22, so SSH also works over port 422. Port 422 itself still needs rules permitting connection establishment, just like in section 4 above.

7. DoS mitigation

With the limit extension module, iptables rules can mitigate DoS attacks:

iptables -A INPUT -p tcp --dport 80 -m limit --limit 25/minute --limit-burst 100 -j ACCEPT

--limit 25/minute caps the rate at 25 connections per minute

--limit-burst 100 means the per-minute limit kicks in once an initial burst of 100 connections has been used up (see the companion rule below)
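Note the rule above only ACCEPTs traffic within the limit; for the excess to actually be discarded, the chain policy must be DROP (as in section 2) or a companion rule must follow, for example:

iptables -A INPUT -p tcp --dport 80 -j DROP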

8. Balance web traffic

A server can act as a front end and spread traffic with iptables, configured as follows:

iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -m state --state NEW -m nth --counter 0 --every 3 --packet 0 -j DNAT --to-destination 192.168.1.101:80 

iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -m state --state NEW -m nth --counter 0 --every 3 --packet 1 -j DNAT --to-destination 192.168.1.102:80 

iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -m state --state NEW -m nth --counter 0 --every 3 --packet 2 -j DNAT --to-destination 192.168.1.103:80

These rules use the nth extension module to spread port-80 traffic across three servers; note each rule takes a different --packet value (0, 1, 2) so every third new connection goes to each backend.
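The nth match comes from the old patch-o-matic extensions and is absent from stock kernels; on modern systems the statistic module provides the equivalent. A sketch of the same three-way split (the first rule takes every 3rd new connection, the second every 2nd of the remainder, and the last rule takes the rest):

iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -m state --state NEW -m statistic --mode nth --every 3 --packet 0 -j DNAT --to-destination 192.168.1.101:80
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -m state --state NEW -m statistic --mode nth --every 2 --packet 0 -j DNAT --to-destination 192.168.1.102:80
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -m state --state NEW -j DNAT --to-destination 192.168.1.103:80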

9. Log dropped packets

With the LOG target and the syslog service, packet activity for a given protocol and port can be recorded. Taking dropped packets as the example:

First define a custom chain:

iptables -N LOGGING

Then route all incoming packets into the LOGGING chain:

iptables -A INPUT -j LOGGING

Set the log prefix and level:

iptables -A LOGGING -m limit --limit 2/min -j LOG --log-prefix "IPTables Packet Dropped: " --log-level 7

Finally hand the packets to DROP so they are discarded:

iptables -A LOGGING -j DROP

Additionally, syslog.conf can be configured to direct the iptables log output, as sketched below.
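With --log-level 7 (debug) as above, one way to steer those kernel messages into their own file is a selector like the following in syslog.conf/rsyslog.conf (an assumption that a classic syslog daemon is in use; restart it after editing):

kern.=debug    /var/log/iptables.log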

Fixing the k8s flannel Install Error "Failed to create SubnetManager: error retrieving pod spec"

This ate a whole morning of debugging; the k8s cluster was rebuilt a dozen times and docker restarted several times. (One failure turned out to be no free disk space; restarting docker and clearing its storage fixed it.)

While installing the containerized components with kubeadm, the flannel component refused to start, failing with:

Failed to create SubnetManager: error retrieving pod spec for 'kube-system/kube-flannel-ds-xxx': the server does not allow access to the requested resource.

The fix was found at this URL:

http://blog.csdn.net/ximenghappy/article/details/70157361

Tomorrow: DNS and joining nodes...

================================================

Kubernetes offers five network add-ons to choose from as needed. I use the Flannel network. Versions 1.5.5 and 1.6.1 differ here: 1.6.1 added RBAC, so the following two commands are both required:

kubectl create -f https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel-rbac.yml

clusterrole "flannel" configured
clusterrolebinding "flannel" configured

kubectl create -f  https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created

  References for this fix:
https://github.com/kubernetes/kubernetes/issues/44029
https://github.com/kubernetes/kubeadm/issues/212#issuecomment-290908868

Contents of the kube-flannel-rbac.yaml file:

# Create the clusterrole and clusterrolebinding:
# $ kubectl create -f kube-flannel-rbac.yml
# Create the pod using the same namespace used by the flannel serviceaccount:
# $ kubectl create --namespace kube-system -f kube-flannel.yml
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system

Contents of kube-flannel.yaml:

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "type": "flannel",
      "delegate": {
        "isDefaultGateway": true
      }
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel-amd64:v0.7.1
        command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr" ]
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      - name: install-cni
        image: quay.io/coreos/flannel-amd64:v0.7.1
        command: [ "/bin/sh", "-c", "set -e -x; cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conf; while true; do sleep 3600; done" ]
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
