etcd Cluster Failure Recovery

1. Installing etcd

rpm -ivh etcd-3.2.15-1.el7.x86_64.rpm
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
export ETCDCTL_API=3
systemctl status etcd

The /etc/hosts entries are as follows:

192.168.0.100 etcd01
192.168.0.101 etcd02
192.168.0.102 etcd03

2. etcd configuration

The etcd02 configuration is shown below; see the Kubernetes 1.9 cluster setup guide for the full details.

# egrep -v "^$|^#" /etc/etcd/etcd.conf 
ETCD_DATA_DIR="/var/lib/etcd/"
ETCD_LISTEN_PEER_URLS="https://192.168.0.101:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.0.101:2379,http://127.0.0.1:2379"
ETCD_NAME="etcd02"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.0.101:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.0.101:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.0.100:2380,etcd02=https://192.168.0.101:2380,etcd03=https://192.168.0.102:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="existing"
ETCD_CERT_FILE="/etc/kubernetes/ssl/etcd.pem"
ETCD_KEY_FILE="/etc/kubernetes/ssl/etcd-key.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/kubernetes/ssl/ca.pem"
ETCD_AUTO_TLS="true"
ETCD_PEER_CERT_FILE="/etc/kubernetes/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/kubernetes/ssl/etcd-key.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/etc/kubernetes/ssl/ca.pem"
ETCD_PEER_AUTO_TLS="true"

3. The failure

With a three-node cluster, after the machines were powered off abruptly, etcd02 failed to start, reporting:

etcd: advertise client URLs = https://192.168.0.101:2379
etcd: read wal error (wal: crc mismatch) and cannot be repaired
systemd: etcd.service: main process exited, code=exited, status=1/FAILURE

The WAL CRC check failed. A quick search turned up nothing useful, so the plan was to remove this member from the cluster and then restore it.
Remove it from a healthy etcd node:

# etcdctl member list
1ce6d6d01109192, started, etcd03, https://192.168.0.102:2380, https://192.168.0.102:2379
9b534175b46ea789, started, etcd01, https://192.168.0.100:2380, https://192.168.0.100:2379
ac2f188e97f50eb7, started, etcd02, https://192.168.0.101:2380, https://192.168.0.101:2379
# etcdctl member remove ac2f188e97f50eb7
Member ac2f188e97f50eb7 removed from cluster 194cd14a48430083

Then start the etcd service on etcd02 again:

# systemctl start etcd

It failed with:

etcd: error validating peerURLs {ClusterID:194cd14a48430083 Members:[&{ID:1ce6d6d01109192 RaftAttributes:{PeerURLs:[https://192.168.0.102:2380]} Attributes:{Name:etcd03 ClientURLs:[https://192.168.0.102:2379]}} &{ID:9b534175b46ea789 RaftAttributes:{PeerURLs:[https://192.168.0.100:2380]} Attributes:{Name:etcd01 ClientURLs:[https://192.168.0.100:2379]}}] RemovedMemberIDs:[]}: member count is unequal

Followed by:

etcd: the member has been permanently removed from the cluster the data-dir used by this member must be removed

4. Restoring etcd data

Try restoring the data on the etcd02 node first:

# mv /var/lib/etcd/member /var/lib/member
# rm -rf /var/lib/etcd/*
# etcdctl snapshot restore /var/lib/member/snap/db --skip-hash-check=true
2018-06-22 11:28:35.622666 I | mvcc: restore compact to 10177401
2018-06-22 11:28:35.659626 I | etcdserver/membership: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32
# systemctl start etcd

The service started and, being a single member, elected itself leader; it still failed to join the cluster. So take a backup from a healthy node and restore from that instead:

# etcdctl snapshot save etcdback.db
# etcdctl member add etcd02 http://192.168.0.101:2380
Error: member name not provided.

Check the other two etcd members currently in the cluster:

# curl -k --key /etc/kubernetes/ssl/etcd-key.pem --cert /etc/kubernetes/ssl/etcd.pem https://192.168.0.100:2380/members
[{"id":130161754177048978,"peerURLs":["https://192.168.0.102:2380"],"name":"etcd03","clientURLs":["https://192.168.0.102:2379"]},{"id":11192361472739944329,"peerURLs":["https://192.168.0.100:2380"],"name":"etcd01","clientURLs":["https://192.168.0.100:2379"]}]

Per the reference documentation:

etcdctl member add etcd_name --peer-urls="https://peerURLs"

Add the member again:

# etcdctl member add etcd02 --peer-urls="https://192.168.0.101:2380"
Member 41c2a7b938a5e387 added to cluster 194cd14a48430083

ETCD_NAME="etcd02"
ETCD_INITIAL_CLUSTER="etcd03=https://192.168.0.102:2380,etcd02=https://192.168.0.101:2380,etcd01=https://192.168.0.100:2380"
ETCD_INITIAL_CLUSTER_STATE="existing"

Check the etcd member status:

# etcdctl member list
1ce6d6d01109192, started, etcd03, https://192.168.0.102:2380, https://192.168.0.102:2379
9b534175b46ea789, started, etcd01, https://192.168.0.100:2380, https://192.168.0.100:2379
ad17c3da831c84c7, unstarted, , https://192.168.0.101:2380,

Error:

etcd: request cluster ID mismatch (got 194cd14a48430083 want cdf818194e3a8c32)

It turns out the order of steps was wrong: the member should be added to the etcd cluster first, and only then should the etcd service be started. We had started the etcd service first, so it was running as a standalone single-member etcd with its own cluster ID.

Joining the etcd node to the cluster

On the failed etcd host:

# systemctl stop etcd

On a healthy etcd host:

# etcdctl member remove ad17c3da831c84c7
# etcdctl member add etcd02 --peer-urls="https://192.168.0.101:2380"
Member 41c2a7b938a5e387 added to cluster 194cd14a48430083

ETCD_NAME="etcd02"
ETCD_INITIAL_CLUSTER="etcd03=https://192.168.0.102:2380,etcd02=https://192.168.0.101:2380,etcd01=https://192.168.0.100:2380"
ETCD_INITIAL_CLUSTER_STATE="existing"
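
The intervening steps on etcd02 are not shown in the article; presumably the stale data directory is wiped and the service is started with the section 2 configuration (ETCD_INITIAL_CLUSTER_STATE="existing") kept in place, roughly:

# on etcd02 -- assumed steps, not spelled out in the original
rm -rf /var/lib/etcd/*
# keep ETCD_INITIAL_CLUSTER_STATE="existing" in /etc/etcd/etcd.conf
systemctl start etcd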

On the (formerly) failed etcd host, check the etcd status now:

# etcdctl endpoint health
127.0.0.1:2379 is healthy: successfully committed proposal: took = 24.52485ms
# etcdctl member list
1ce6d6d01109192, started, etcd03, https://192.168.0.102:2380, https://192.168.0.102:2379
41c2a7b938a5e387, started, etcd02, https://192.168.0.101:2380, https://192.168.0.101:2379
9b534175b46ea789, started, etcd01, https://192.168.0.100:2380, https://192.168.0.100:2379

At this point the etcd failure has been repaired.

5. Common etcd commands

Check status:

# export ETCDCTL_API=3

# etcdctl endpoint status --write-out=table
+----------------+------------------+---------+---------+-----------+-----------+------------+
|    ENDPOINT    |        ID        | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX |
+----------------+------------------+---------+---------+-----------+-----------+------------+
| 127.0.0.1:2379 | 41c2a7b938a5e387 |  3.2.15 |   15 MB |      true |       317 |   11051403 |
+----------------+------------------+---------+---------+-----------+-----------+------------+

Backup and restore:

etcdctl snapshot save etcdback.db
etcdctl snapshot status etcdback.db --write-out=table
etcdctl snapshot restore etcdback.db --skip-hash-check=true
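
For a full disaster-recovery restore, where every member is recreated from the same snapshot, etcdctl snapshot restore also takes flags describing the new cluster; a sketch for the etcd02 member, reusing the names and URLs from this article:

etcdctl snapshot restore etcdback.db \
  --name etcd02 \
  --initial-cluster etcd01=https://192.168.0.100:2380,etcd02=https://192.168.0.101:2380,etcd03=https://192.168.0.102:2380 \
  --initial-cluster-token etcd-cluster \
  --initial-advertise-peer-urls https://192.168.0.101:2380 \
  --data-dir /var/lib/etcd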

etcd monitoring:

# curl -L http://localhost:2379/metrics
# HELP etcd_debugging_mvcc_keys_total Total number of keys.
# TYPE etcd_debugging_mvcc_keys_total gauge
etcd_debugging_mvcc_keys_total 776
# HELP etcd_debugging_mvcc_pending_events_total Total number of pending events to be sent.
# TYPE etcd_debugging_mvcc_pending_events_total gauge
etcd_debugging_mvcc_pending_events_total 0
# HELP etcd_debugging_mvcc_put_total Total number of puts seen by this member.
# TYPE etcd_debugging_mvcc_put_total counter
etcd_debugging_mvcc_put_total 9.548201e+06
# HELP etcd_debugging_mvcc_range_total Total number of ranges seen by this member.
# TYPE etcd_debugging_mvcc_range_total counter
etcd_debugging_mvcc_range_total 2.1052143e+07
# HELP etcd_debugging_mvcc_slow_watcher_total Total number of unsynced slow watchers.
# TYPE etcd_debugging_mvcc_slow_watcher_total gauge
etcd_debugging_mvcc_slow_watcher_total 0
# HELP etcd_debugging_mvcc_txn_total Total number of txns seen by this member.
# TYPE etcd_debugging_mvcc_txn_total counter
etcd_debugging_mvcc_txn_total 0
# HELP etcd_debugging_mvcc_watch_stream_total Total number of watch streams.
# TYPE etcd_debugging_mvcc_watch_stream_total gauge
etcd_debugging_mvcc_watch_stream_total 125
# HELP etcd_debugging_mvcc_watcher_total Total number of watchers.
# TYPE etcd_debugging_mvcc_watcher_total gauge
etcd_debugging_mvcc_watcher_total 125
# HELP etcd_debugging_server_lease_expired_total The total number of expired leases.
# TYPE etcd_debugging_server_lease_expired_total counter
etcd_debugging_server_lease_expired_total 3649

These metrics are a good fit for Prometheus monitoring:

global:
  scrape_interval: 10s
scrape_configs:
  - job_name: etcd
    static_configs:
    - targets: ['192.168.0.100:2379','192.168.0.101:2379','192.168.0.102:2379']
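
Since the client URLs in this cluster are HTTPS with client-cert auth, a Prometheus scrape job would likely also need TLS settings; a sketch reusing the certificate paths from section 2 (assuming Prometheus can read those files):

scrape_configs:
  - job_name: etcd
    scheme: https
    tls_config:
      ca_file: /etc/kubernetes/ssl/ca.pem
      cert_file: /etc/kubernetes/ssl/etcd.pem
      key_file: /etc/kubernetes/ssl/etcd-key.pem
    static_configs:
    - targets: ['192.168.0.100:2379','192.168.0.101:2379','192.168.0.102:2379']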

An illustrated guide to the Raft algorithm: http://thesecretlivesofdata.com/raft/

Reading Kubernetes data from etcd:

# export ETCDCTL_API=3
# etcdctl get /registry/namespaces/default --prefix -w json|python -m json.tool
{
    "count": 1,
    "header": {
        "cluster_id": 1823062066148343939,
        "member_id": 11192361472739944329,
        "raft_term": 317,
        "revision": 10880816
    },
    "kvs": [
        {
            "create_revision": 6,
            "key": "L3JlZ2lzdHJ5L25hbWVzcGFjZXMvZGVmYXVsdA==",
            "mod_revision": 6,
            "value": "azhzAAoPCgJ2MRIJTmFtZXNwYWNlEl8KRQoHZGVmYXVsdBIAGgAiACokOTVlNzdjMWEtM2Q1Ny0xMWU4LTk5YzItMDA1MDU2YmU3NWEzMgA4AEIICK7qttYFEAB6ABIMCgprdWJlcm5ldGVzGggKBkFjdGl2ZRoAIgA=",
            "version": 1
        }
    ]
}
Decode the key contents:
# echo L3JlZ2lzdHJ5L25hbWVzcGFjZXMvZGVmYXVsdA== |base64 -d
/registry/namespaces/default
#!/bin/bash
# Get kubernetes keys from etcd
export ETCDCTL_API=3
keys=`etcdctl get /registry --prefix -w json|python -m json.tool|grep key|cut -d ":" -f2|tr -d '"'|tr -d ","`
for x in $keys;do
  echo $x|base64 -d|sort
done

The script above lists the keys of every Kubernetes object stored in etcd.
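
If your etcdctl supports it, the --keys-only flag of etcdctl get is a simpler alternative to the loop above, avoiding the base64 round trip (a sketch, not from the original article):

export ETCDCTL_API=3
etcdctl get /registry --prefix --keys-only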

[Web acceleration] Lua + Redis, round two

I previously published an article on using OpenResty to speed up page loads, but the code in it was not well optimized. This time I cleaned up the nginx configuration and the Lua code; the earlier article is here if you are interested:
https://www.cnblogs.com/w1570631036/p/8449373.html

In the name of learning I keep installing things on my server (Logstash, Kafka, ...), which drives up network load and CPU usage, and Tomcat is a memory hog on top of that. Just refreshing the site already felt glacially slow. A while ago I added several layers of caching without much improvement, so I decided to simply render the dynamic pages to static HTML, store them in Redis, and (almost) turn Tomcat off. It did not change things dramatically, but at least typing commands in Xshell is no longer laggy. So here is an alternative way to speed up a site: storing static pages in Redis.

I. Overall flow

1. A request comes in and is routed by OpenResty's nginx to a Lua script.
2. The script checks whether Redis already holds a static page for this URI. If it does, the page is returned directly; otherwise the request falls back to Tomcat and the response body is saved into Redis.

II. Nginx configuration

OpenResty ships with nginx, so it only needs to be configured. The goal is to intercept every request ending in .html; requests with other suffixes, such as .do, fall straight through to Tomcat.
For brevity only part of the nginx configuration is pasted here; for the full file see: mynginxconfig.ngx

server {
    listen       80;
    # listen       443 ssl;   # ssl
    server_name  www.wenzhihuai.com;
    # intercept every request ending in .html and hand it to the Lua script
    location  ~ .*\.(html)$ {
        ...
        charset utf8;
        proxy_pass_request_headers off ;
        # disable Lua code caching -- for debugging only
        lua_code_cache off;
        content_by_lua_file /opt/lua/hello.lua;
    }
    # nginx matches locations in order; anything not caught above falls through to Tomcat
    location / {
        default_type    text/html;
        root   html;
        index  index.html index.htm;
        ...
        # websocket
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass http://backend;
    }

III. The Lua script

To keep key handling simple: testing showed that URIs containing characters such as ? . html = & can all be used directly as Redis keys, so there is no real need to worry about invalid key names; the request URI itself is used as the key. The flow is as follows:

-- the key is simply the request URI
local key = request_uri
-- look the key up in Redis
local resp, err = red:get(key)
if resp == ngx.null then
    -- cache miss
    ngx.req.set_header("Accept", "text/html,application/xhtml+xml,application/xml;")
    ngx.req.set_header("Accept-Encoding", "")
    -- important: strip these request headers (mentioned in the earlier article). If they are
    -- kept, the upstream returns a gzip-compressed body, which then gets compressed again on
    -- the way out, so the user ends up seeing gzip garbage.
    local targetURL = string.gsub(uri, "html", "do")
    -- replace html with do: *.do requests are not intercepted and go straight to Tomcat
    local respp = ngx.location.capture(targetURL, { method = ngx.HTTP_GET, args = uri_args })
    -- fall back to Tomcat for the real content
    red:set(key, respp.body)
    -- store the URI (key) and the response body in Redis
    red:expire(key, 600)
    -- lua-resty-redis set() cannot attach a TTL, so the expiry is set separately
    ngx.print(respp.body)
    -- send the response body to the user
    return
end
ngx.print(resp)

IV. Testing

As a test, visit http://www.wenzhihuai.com/jaowejoifjefoijoifaew.html. This URI does not exist on my site, so the request lands on the generic error page; afterwards the corresponding record shows up in Redis (screenshot omitted).
The address has been cached in Redis successfully, and clicking around confirms that every page visited ends up cached there. In principle, if no expiry is set, the whole site can be statically cached in Redis and Tomcat could even be switched off, but that only works for pages that never change; for a business application it is, well, questionable.

Postscript:
One open question: my code never explicitly closes the Redis connection from Lua. I am not sure whether that matters, or whether every incoming request really opens a new Redis connection; the TCP three-way handshake alone would cost quite a bit. I have not figured out how to tune this yet.

The full code:

local redis = require "resty.redis"
local red = redis:new()
local request_uri = ngx.var.request_uri
local ngx_log = ngx.log
local ngx_ERR = ngx.ERR

local function close_redis(red)
    if not red then
        return
    end
    local pool_max_idle_time = 10000
    local pool_size = 100
    red:set("pool_size", pool_size)
    local ok, err = red:set_keepalive(pool_max_idle_time, pool_size)
    if not ok then
        ngx_log(ngx_ERR, "set redis keepalive error : ", err)
    end
end

local uri = ngx.var.uri

red:set_timeout(1000)
red:connect("119.23.46.71", 6340)
red:auth("root")
local uri_args = ngx.req.get_uri_args()

local key = request_uri
local resp, err = red:get(key)

if resp == ngx.null then
    ngx.req.set_header("Accept", "text/html,application/xhtml+xml,application/xml;")
    ngx.req.set_header("Accept-Encoding", "")
    local targetURL = string.gsub(uri, "html", "do")
    local respp = ngx.location.capture(targetURL, { method = ngx.HTTP_GET, args = uri_args })
    red:set(key, respp.body)
    red:expire(key, 600)
    ngx.print(respp.body)
    return
end
ngx.print(resp)
close_redis(red)

Fixing the etcd error: mvcc: database space exceeded

  • How the problem showed up: labeling a node in a Kubernetes cluster returned an error
[root@master1]# kubectl label node  30.4.228.20 env=prod
Error from server: etcdserver: mvcc: database space exceeded
  • Environment
etcd cluster: 30.4.228.19, 30.4.228.20, 30.4.228.22 (TLS/secure communication enabled)

Root cause

  • The etcd servers were started without an auto-compaction setting.
  • etcd does not compact automatically by default; compaction has to be enabled via a startup flag or run manually. On clusters with frequent writes it should be enabled, otherwise space and memory are wasted and errors follow. The default etcd v3 backend quota is 2 GB; once the boltdb file grows past that limit without compaction, etcd returns "Error: etcdserver: mvcc: database space exceeded" and no more data can be written. A sketch of the relevant startup settings follows below.
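
A sketch of enabling automatic compaction and a larger quota at startup (flag names as in etcd v3.2; the usual upper-case ETCD_* environment variables in /etc/etcd/etcd.conf map to the same options):

# as command-line flags
etcd --auto-compaction-retention=1 --quota-backend-bytes=8589934592 ...
# or as environment variables in /etc/etcd/etcd.conf
ETCD_AUTO_COMPACTION_RETENTION="1"
ETCD_QUOTA_BACKEND_BYTES="8589934592"

In v3.2 the retention value is interpreted as hours of history to keep.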

Remediation

1. Get the current revision number:

[root@etcd1]# rev=$(/usr/local/bin/etcdctl --cacert=/etc/kubernetes/ssl/ca.pem \
--cert=/etc/etcd/ssl/etcd.pem \
--key=/etc/etcd/ssl/etcd-key.pem \
--endpoints="https://127.0.0.1:2379" \
endpoint status --write-out="json" \
| egrep -o '"revision":[0-9]*' \
| egrep -o '[0-9].*')

[root@etcd1]# echo $rev 

2. Compact old revisions and defragment:

[root@etcd1]# /usr/local/bin/etcdctl --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints="https://30.4.228.20:2379" compact $rev

[root@etcd1]# /usr/local/bin/etcdctl --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints="https://30.4.228.20:2379" defrag 

[root@etcd1]# /usr/local/bin/etcdctl --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints="https://30.4.228.20:2379" alarm list 

[root@etcd1]# /usr/local/bin/etcdctl --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints="https://30.4.228.19:2379" compact $rev

[root@etcd1]# /usr/local/bin/etcdctl --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints="https://30.4.228.19:2379" defrag 

[root@etcd1]# /usr/local/bin/etcdctl --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints="https://30.4.228.22:2379" compact $rev

[root@etcd1]# /usr/local/bin/etcdctl --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints="https://30.4.228.22:2379" defrag 

[root@etcd1]# /usr/local/bin/etcdctl --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints="https://30.4.228.22:2379" alarm list 

[root@etcd1]# /usr/local/bin/etcdctl --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints="https://30.4.228.22:2379" alarm disarm

3. Check alarms:

[root@etcd1]# /usr/local/bin/etcdctl --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints="https://30.4.228.22:2379" alarm list

[root@etcd1]#  /usr/local/bin/etcdctl --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints="https://30.4.228.19:2379" alarm list 

[root@etcd1]# /usr/local/bin/etcdctl --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints="https://30.4.228.20:2379" alarm list

Related etcd commands

1. Set the etcd quota:

# set a 16 MB quota
etcd --quota-backend-bytes=$((16*1024*1024))

2. Trigger quota exhaustion (for demonstration):

while [ 1 ]; do dd if=/dev/urandom bs=1024 count=1024 | ETCDCTL_API=3 etcdctl put key || break; done
...
Error: rpc error: code = 8 desc = etcdserver: mvcc: database space exceeded

3. Confirm the database size exceeds the quota:

ETCDCTL_API=3 etcdctl --write-out=table endpoint status
+----------------+------------------+-----------+---------+-----------+-----------+------------+
|    ENDPOINT    |        ID        |  VERSION  | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX |
+----------------+------------------+-----------+---------+-----------+-----------+------------+
| 127.0.0.1:2379 | bf9071f4639c75cc | 2.3.0+git | 18 MB   | true      |         2 |       3332 |
+----------------+------------------+-----------+---------+-----------+-----------+------------+

4. Check alarms:

ETCDCTL_API=3 etcdctl alarm list

5. Compaction and defragmentation, step by step:

1) Get the current etcd data revision

rev=$(ETCDCTL_API=3 etcdctl --endpoints=:2379 endpoint status --write-out="json" | egrep -o '"revision":[0-9]*' | egrep -o '[0-9]*')

2) Compact old revisions

ETCDCTL_API=3 etcdctl compact $rev

3) Defragment

ETCDCTL_API=3 etcdctl defrag

4) Disarm the alarm

ETCDCTL_API=3 etcdctl alarm disarm

5) Take a snapshot and inspect it

ETCDCTL_API=3 etcdctl snapshot save backup.db
ETCDCTL_API=3 etcdctl snapshot status backup.db

Installing Redis from source on Linux

Official Redis downloads: https://redis.io/download

Install directory: /usr/local/bin/
Config file: /etc/redis/redis.conf
Port: 6379
Server binary: /usr/local/bin/redis-server
Client binary: /usr/local/bin/redis-cli
Persistence data directory: /var/lib/redis
PID file: /var/run/redis.pid
Log file: /var/log/redis.log

1. Installing Redis on CentOS

Using CentOS 7.4 as the example.

1.1. Build and install from source

$ mkdir ~/soft
$ cd ~/soft
$ wget -c http://download.redis.io/releases/redis-4.0.10.tar.gz
$ tar xzf redis-4.0.10.tar.gz
$ cd redis-4.0.10
$ make
$ sudo make install

1.2. Register the service and enable it at boot

$ cd ~/soft/redis-4.0.10/utils
$ sudo ./install_server.sh
Welcome to the redis service installer
This script will help you easily set up a running redis server
Please select the redis port for this instance: [6379] 
Selecting default: 6379
Please select the redis config file name [/etc/redis/6379.conf] /etc/redis/redis.conf
Please select the redis log file name [/var/log/redis_6379.log] /var/log/redis.log
Please select the data directory for this instance [/var/lib/redis/6379] /var/lib/redis
Please select the redis executable path [] /usr/local/bin/redis-server
Selected config:
Port : 6379
Config file : /etc/redis/redis.conf
Log file : /var/log/redis.log
Data dir : /var/lib/redis
Executable : /usr/local/bin/redis-server
Cli Executable : /usr/local/bin/redis-cli
Is this ok? Then press ENTER to go on or Ctrl-C to abort.
Copied /tmp/6379.conf => /etc/init.d/redis_6379
Installing service...
Successfully added to chkconfig!
Successfully added to runlevels 345!
Starting Redis server...
Installation successful!
$ 

1.3. Create the redis user and group and set directory permissions

$ sudo useradd -r -U -M redis
$ sudo chown redis:redis /var/lib/redis
$ sudo chmod 770 /var/lib/redis

1.4. Rename the service and edit the config file

Rename the service from redis_6379 to redis; stop it first:

$ sudo /etc/init.d/redis_6379 stop
$ sudo mv /etc/init.d/redis_6379 /etc/init.d/redis
$ sudo vim /etc/init.d/redis

Inside the file, replace every redis_6379 with redis (in vim, press ESC for command mode, run the substitution, then save and quit):

:%s/redis_6379/redis/g
:wq

Edit the config file /etc/redis/redis.conf:

$ sudo vim /etc/redis/redis.conf
. . . 
daemonize yes
. . . 
supervised systemd
. . . 
pidfile /var/run/redis.pid
. . . 
logfile /var/log/redis.log
. . . 
dir /var/lib/redis

Again replace any remaining redis_6379 in this file with redis (same vim substitution as before):

:%s/redis_6379/redis/g
:wq

1.5. Managing the Redis service

$ sudo systemctl daemon-reload
$ sudo systemctl enable redis
$ sudo systemctl status redis
● redis.service - LSB: start and stop redis
   Loaded: loaded (/etc/rc.d/init.d/redis; bad; vendor preset: disabled)
   Active: inactive (dead) since 五 2018-06-22 14:48:36 CST; 7s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 21946 ExecStop=/etc/rc.d/init.d/redis stop (code=exited, status=0/SUCCESS)
  Process: 21909 ExecStart=/etc/rc.d/init.d/redis start (code=exited, status=0/SUCCESS)
6月 22 14:24:50 izm5eat3t2va6ch6mdbpbtz systemd[1]: Starting LSB: start and ...
6月 22 14:24:50 izm5eat3t2va6ch6mdbpbtz redis[21909]: Starting Redis server...
6月 22 14:24:50 izm5eat3t2va6ch6mdbpbtz systemd[1]: Started LSB: start and s...
6月 22 14:48:36 izm5eat3t2va6ch6mdbpbtz systemd[1]: Stopping LSB: start and ...
6月 22 14:48:36 izm5eat3t2va6ch6mdbpbtz redis[21946]: /var/run/redis.pid doe...
6月 22 14:48:36 izm5eat3t2va6ch6mdbpbtz systemd[1]: Stopped LSB: start and s...
Hint: Some lines were ellipsized, use -l to show in full.
$ sudo systemctl start redis
● redis.service - LSB: start and stop redis
   Loaded: loaded (/etc/rc.d/init.d/redis; bad; vendor preset: disabled)
   Active: active (exited) since 五 2018-06-22 14:24:50 CST; 5s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 21909 ExecStart=/etc/rc.d/init.d/redis start (code=exited, status=0/SUCCESS)
6月 22 14:24:50 izm5eat3t2va6ch6mdbpbtz systemd[1]: Starting LSB: start and ...
6月 22 14:24:50 izm5eat3t2va6ch6mdbpbtz redis[21909]: Starting Redis server...
6月 22 14:24:50 izm5eat3t2va6ch6mdbpbtz systemd[1]: Started LSB: start and s...
Hint: Some lines were ellipsized, use -l to show in full.

Start the service: sudo systemctl start redis
Stop the service: sudo systemctl stop redis
Restart the service: sudo systemctl restart redis
Check the listening process: sudo lsof -i:6379
Kill the process: sudo kill -9 pid
Enter the redis shell: redis-cli (a quick check follows below)
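
A quick sanity check once the service is running (just a standard redis-cli command, nothing specific to this setup):

$ redis-cli ping
PONG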

2. Installing Redis on Ubuntu

Using Ubuntu 16.04 as the example.

The build-from-source steps are essentially the same as on CentOS.

Flask performance tuning

Overview

The platform we are currently running turned out to be fairly slow, so I needed a way to tune its performance.

Tools

Siege is an HTTP load-testing and benchmarking tool. It is designed to let web developers measure their code under stress and see how it holds up under Internet load. Siege supports basic authentication, cookies, and the HTTP, HTTPS and FTP protocols, and lets you hit a server with a configurable number of simulated clients, putting it "under siege".

In short, Siege is a multi-threaded HTTP load-testing tool. The official site lists 3.1.4 as the latest release and seems not to have been updated in a while; the Siege I installed on my Mac is already 4.0.4. On macOS it can be installed directly with brew:

brew install siege

siege
SIEGE 4.0.4
Usage: siege [options]
       siege [options] URL
       siege -g URL
Options:
  -V, --version             VERSION, prints the version number.
  -h, --help                HELP, prints this section.
  -C, --config              CONFIGURATION, show the current config.
  -v, --verbose             VERBOSE, prints notification to screen.
  -q, --quiet               QUIET turns verbose off and suppresses output.
  -g, --get                 GET, pull down HTTP headers and display the
                            transaction. Great for application debugging.
  -p, --print               PRINT, like GET only it prints the entire page.
  -c, --concurrent=NUM      CONCURRENT users, default is 10
  -r, --reps=NUM            REPS, number of times to run the test.
  -t, --time=NUMm           TIMED testing where "m" is modifier S, M, or H
                            ex: --time=1H, one hour test.
  -d, --delay=NUM           Time DELAY, random delay before each requst
  -b, --benchmark           BENCHMARK: no delays between requests.
  -i, --internet            INTERNET user simulation, hits URLs randomly.
  -f, --file=FILE           FILE, select a specific URLS FILE.
  -R, --rc=FILE             RC, specify an siegerc file
  -l, --log[=FILE]          LOG to FILE. If FILE is not specified, the
                            default is used: PREFIX/var/siege.log
  -m, --mark="text"         MARK, mark the log file with a string.
                            between .001 and NUM. (NOT COUNTED IN STATS)
  -H, --header="text"       Add a header to request (can be many)
  -A, --user-agent="text"   Sets User-Agent in request
  -T, --content-type="text" Sets Content-Type in request
      --no-parser           NO PARSER, turn off the HTML page parser
      --no-follow           NO FOLLOW, do not follow HTTP redirects

Copyright (C) 2017 by Jeffrey Fulmer, et al.
This is free software; see the source for copying conditions.
There is NO warranty; not even for MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE.

A few commonly used invocations:

# GET request
siege -c 1000 -r 100 -b url
# POST request
siege -c 1000 -r 100 -b 'url POST {"accountId":"123","platform":"ios"}'
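
When hitting many different URLs, siege can also read them from a file with -f / --file (one URL per line, as shown in the help output above); a minimal sketch with a hypothetical urls.txt:

# urls.txt contains one URL per line, e.g.:
#   http://127.0.0.1:5000/
#   http://127.0.0.1:5000/hello/libai
siege -c 100 -r 10 -b -f urls.txt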

Testing

Test code

Here is the file tree (via tree):

➜  flask tree
.
├── hello1.py
├── hello1.pyc
├── hello2.py
├── hello2.pyc
├── hello3.py
└── templates
    └── hello.html

Below is a Flask app that uses no template and simply returns a string.

# file hello1.py
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello, World!'

app.run(debug=False, threaded=True, host="127.0.0.1", port=5000)

And here is a Flask app that renders a template file.

# file hello2.py
from flask import Flask,render_template

app = Flask(__name__)

@app.route('/hello/')
@app.route('/hello/<name>')
def hello(name=None):
    return render_template('hello.html', name=name)

app.run(debug=False, threaded=True, host="127.0.0.1", port=5000)

The hello.html template:

<!doctype html>
<title>Hello from Flask</title>
{% if name %}
  <h1>Hello {{ name }}!</h1>
{% else %}
  <h1>Hello, World!</h1>
{% endif %}

Running Flask directly

First, the results for hello1.py:

# 100 concurrent users
siege -c 100 -r 10 -b http://127.0.0.1:5000

Transactions:               1000 hits
Availability:             100.00 %
Elapsed time:               1.17 secs
Data transferred:           0.01 MB
Response time:              0.11 secs
Transaction rate:         854.70 trans/sec
Throughput:             0.01 MB/sec
Concurrency:               92.12
Successful transactions:        1000
Failed transactions:               0
Longest transaction:            0.14
Shortest transaction:           0.01

# 200 concurrent users
# siege -c 200 -r 10 -b http://127.0.0.1:5000

Transactions:               1789 hits
Availability:              89.45 %
Elapsed time:               2.26 secs
Data transferred:           0.02 MB
Response time:              0.17 secs
Transaction rate:         791.59 trans/sec
Throughput:             0.01 MB/sec
Concurrency:              134.37
Successful transactions:        1789
Failed transactions:             211
Longest transaction:            2.09
Shortest transaction:           0.00

# 1000 concurrent users
siege -c 1000 -r 10 -b http://127.0.0.1:5000

Transactions:              10000 hits
Availability:             100.00 %
Elapsed time:              16.29 secs
Data transferred:           0.12 MB
Response time:              0.00 secs
Transaction rate:         613.87 trans/sec
Throughput:             0.01 MB/sec
Concurrency:                2.13
Successful transactions:       10000
Failed transactions:               0
Longest transaction:            0.08
Shortest transaction:           0.00

I am not sure why availability dips at 200 concurrent users, but the overall trend is clear: the transaction rate keeps falling, and at 1000 concurrent users it is down to about 613 trans/sec.

Now the second app:

# 100 concurrent users
siege -c 100 -r 10 -b http://127.0.0.1:5000/hello/libai

Transactions:               1000 hits
Availability:             100.00 %
Elapsed time:               1.26 secs
Data transferred:           0.07 MB
Response time:              0.12 secs
Transaction rate:         793.65 trans/sec
Throughput:             0.06 MB/sec
Concurrency:               93.97
Successful transactions:        1000
Failed transactions:               0
Longest transaction:            0.14
Shortest transaction:           0.04

# 200 concurrent users
siege -c 200 -r 10 -b http://127.0.0.1:5000/hello/libai
Transactions:               1837 hits
Availability:              91.85 %
Elapsed time:               2.52 secs
Data transferred:           0.13 MB
Response time:              0.18 secs
Transaction rate:         728.97 trans/sec
Throughput:             0.05 MB/sec
Concurrency:              134.77
Successful transactions:        1837
Failed transactions:             163
Longest transaction:            2.18
Shortest transaction:           0.00

# 1000 concurrent users
siege -c 1000 -r 10 -b http://127.0.0.1:5000/hello/libai
Transactions:              10000 hits
Availability:             100.00 %
Elapsed time:              17.22 secs
Data transferred:           0.70 MB
Response time:              0.01 secs
Transaction rate:         580.72 trans/sec
Throughput:             0.04 MB/sec
Concurrency:                7.51
Successful transactions:       10000
Failed transactions:               0
Longest transaction:            0.09
Shortest transaction:           0.00

Other deployment options

Next, let's benchmark the deployment options recommended by the official Flask documentation.

    While lightweight and easy to use, Flask's built-in server is not suitable for production and does not scale well. This section covers some of the correct ways to run Flask in production.

    If you want to deploy your Flask application to a WSGI server not listed here, look up its documentation on how to use it with WSGI; just remember that a Flask application object is, in essence, a WSGI application.
Below, a few of the officially recommended options are benchmarked.

Gunicorn

Gunicorn 'Green Unicorn' is a WSGI HTTP server for UNIX, a pre-fork worker model ported from Ruby's Unicorn project. It supports both eventlet and greenlet. Running a Flask application on Gunicorn is very simple:

gunicorn myproject:app

Of course, to use Gunicorn we first need to pip install gunicorn. To start hello1.py under Gunicorn, the line

app.run(debug=False, threaded=True, host="127.0.0.1", port=5000)

has to be removed. Then run:

# -w sets the number of worker processes, -b binds the address and port
gunicorn hello1:app -w 4 -b 127.0.0.1:4000

By default Gunicorn uses a synchronous, blocking worker model (-k sync), which may not hold up well under heavy concurrency. It also supports better-suited worker types such as gevent or meinheld, so we can swap the blocking model for gevent:

# -w: number of workers, -b: bind address and port, -k: use the gevent worker class
gunicorn hello1:app -w 4 -b 127.0.0.1:4000  -k gevent

Below I test four setups at 1000 concurrent users with 10 repetitions each: 1 worker and 4 workers, with and without the gevent worker class.

Before testing, make sure to raise the open-files ulimit, otherwise you will hit Too many open files errors; I raised it to 65535:

ulimit -n 65535

gunicorn hello1:app -w 1 -b 127.0.0.1:4000
siege -c 1000 -r 10 -b http://127.0.0.1:4000
Transactions:              10000 hits
Availability:             100.00 %
Elapsed time:              15.21 secs
Data transferred:           0.12 MB
Response time:              0.00 secs
Transaction rate:         657.46 trans/sec
Throughput:             0.01 MB/sec
Concurrency:                0.85
Successful transactions:       10000
Failed transactions:               0
Longest transaction:            0.01
Shortest transaction:           0.00

As you can see, a single Gunicorn worker is only slightly better than running Flask directly.

gunicorn hello1:app -w 4 -b 127.0.0.1:4000
siege -c 1000 -r 10 -b http://127.0.0.1:4000

Transactions:              10000 hits
Availability:             100.00 %
Elapsed time:              15.19 secs
Data transferred:           0.12 MB
Response time:              0.00 secs
Transaction rate:         658.33 trans/sec
Throughput:             0.01 MB/sec
Concurrency:                0.75
Successful transactions:       10000
Failed transactions:               0
Longest transaction:            0.01
Shortest transaction:           0.00

# with gevent -- remember to pip install gevent
gunicorn hello1:app -w 1 -b 127.0.0.1:4000  -k gevent
Transactions:              10000 hits
Availability:             100.00 %
Elapsed time:              15.20 secs
Data transferred:           0.12 MB
Response time:              0.00 secs
Transaction rate:         657.89 trans/sec
Throughput:             0.01 MB/sec
Concurrency:                1.33
Successful transactions:       10000
Failed transactions:               0
Longest transaction:            0.02
Shortest transaction:           0.00

gunicorn hello1:app -w 4 -b 127.0.0.1:4000  -k gevent

Transactions:              10000 hits
Availability:             100.00 %
Elapsed time:              15.51 secs
Data transferred:           0.12 MB
Response time:              0.00 secs
Transaction rate:         644.75 trans/sec
Throughput:             0.01 MB/sec
Concurrency:                1.06
Successful transactions:       10000
Failed transactions:               0
Longest transaction:            0.28
Shortest transaction:           0.00

At 1000 concurrent users the benefit of Gunicorn with gevent is not obvious, but rerun the test at 100 or 200 concurrent users:

gunicorn hello1:app -w 1 -b 127.0.0.1:4000  -k gevent
siege -c 200 -r 10 -b http://127.0.0.1:4000
Transactions:               1991 hits
Availability:              99.55 %
Elapsed time:               1.62 secs
Data transferred:           0.02 MB
Response time:              0.14 secs
Transaction rate:        1229.01 trans/sec
Throughput:             0.02 MB/sec
Concurrency:              167.71
Successful transactions:        1991
Failed transactions:               9
Longest transaction:            0.34
Shortest transaction:           0.00

gunicorn hello1:app -w 4 -b 127.0.0.1:4000  -k gevent
siege -c 200 -r 10 -b http://127.0.0.1:4000
Transactions:               2000 hits
Availability:             100.00 %
Elapsed time:               0.71 secs
Data transferred:           0.02 MB
Response time:              0.04 secs
Transaction rate:        2816.90 trans/sec
Throughput:             0.03 MB/sec
Concurrency:              122.51
Successful transactions:        2000
Failed transactions:               0
Longest transaction:            0.17
Shortest transaction:           0.00

With 4 workers and gevent, the rate reaches about 2816 trans/sec.

Now test hello2.py at 200 concurrent users:

gunicorn hello2:app -w 1 -b 127.0.0.1:4000  -k gevent
siege -c 200 -r 10 -b http://127.0.0.1:4000/hello/2
Transactions:               1998 hits
Availability:              99.90 %
Elapsed time:               1.72 secs
Data transferred:           0.13 MB
Response time:              0.14 secs
Transaction rate:        1161.63 trans/sec
Throughput:             0.08 MB/sec
Concurrency:              168.12
Successful transactions:        1998
Failed transactions:               2
Longest transaction:            0.35
Shortest transaction:           0.00

gunicorn hello2:app -w 4 -b 127.0.0.1:4000  -k gevent
siege -c 200 -r 10 -b http://127.0.0.1:4000/hello/2
Transactions:               2000 hits
Availability:             100.00 %
Elapsed time:               0.71 secs
Data transferred:           0.13 MB
Response time:              0.05 secs
Transaction rate:        2816.90 trans/sec
Throughput:             0.19 MB/sec
Concurrency:              128.59
Successful transactions:        2000
Failed transactions:               0
Longest transaction:            0.14
Shortest transaction:           0.0

The results are about the same as hello1.py, also reaching 2800+ trans/sec, so performance improved roughly 4x.

uWSGI

The official site is uWSGI; see the link for installation instructions. On macOS it can be installed with brew install uwsgi. Once installed, run the following in the project directory:

uwsgi --http 127.0.0.1:4000 --module hello1:app

(I ran out of time here, so the uWSGI-only benchmark stops at this point.)

uWSGI and Nginx

Install uWSGI with pip install uwsgi.

Write a config file, uwsgi.ini, for uWSGI:

[uwsgi]
# run a master process
master = true
# path of the python virtualenv (the directory created by virtualenv)
home = venv
# the WSGI entry file
wsgi-file = manage.py
# the app object created in the entry file
callable = app
# bind address and port
socket = 0.0.0.0:5000
# number of processes
processes = 4
# threads per process
threads = 2
# allowed buffer size
buffer-size = 32768
# protocol to speak. Careful: when starting uwsgi on its own this line is required, otherwise
# the service starts but the browser cannot reach it; when proxying through nginx it must be
# removed, otherwise nginx cannot talk to the uwsgi service.
protocol=http

The uWSGI entry file is manage.py, where hello1 is the hello1.py from above with its app.run(debug=False, threaded=True, host="127.0.0.1", port=5000) line commented out.

from flask_script import Manager
from hello1 import app

manager = Manager(app)

if __name__ == '__main__':
    manager.run()

Then start the app with uwsgi uwsgi.ini and visit 127.0.0.1:5000 to see hello world. Next it needs to sit behind nginx. After installing nginx, find its configuration: with an apt or yum install the main config is /etc/nginx/nginx.conf. To avoid touching the global config, I edit /etc/nginx/sites-available/default instead, which is included from /etc/nginx/nginx.conf, so changes there take effect too. The configuration:

# nginx per-IP rate limiting; see references 6 and 7 for details
limit_req_zone $binary_remote_addr zone=allips:100m rate=50r/s;  

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    # nginx per-IP rate limiting; see references 6 and 7 for details
    limit_req   zone=allips  burst=20  nodelay; 
    root /var/www/html;
    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;
    server_name _;
    # serve static files from nginx directly; it is much faster than the app container
    location /themes/  {
        alias       /home/dc/CTFd_M/CTFd/themes/;
    }
    # uwsgi settings
    location / {
        include uwsgi_params;
        uwsgi_pass 127.0.0.1:5000; 
        # path of the python virtualenv
        uwsgi_param UWSGI_PYHOME /home/dc/CTFd_M/venv; 
        # project directory
        uwsgi_param UWSGI_CHDIR /home/dc/CTFd_M; 
        # entry point
        uwsgi_param UWSGI_SCRIPT manage:app; 
        # timeout
        uwsgi_read_timeout 100;
    }
}

Then start nginx and the site is reachable at 127.0.0.1. Because of some configuration problem I could not get this setup working on my own machine, so the comparison below was run in a fresh VM (Ubuntu Server 16.04, 2 cores, 2 GB RAM). Also, the page being tested is no longer the hello1.py toy app but a complete application platform; the Throughput column shows it pushing 20+ MB/sec.

# Both tests below are run from the physical host against the VM (Ubuntu Server 16.04)
# served by uwsgi directly
siege -c 200 -r 10 -b http://192.168.2.151:5000/index.html
Transactions:              56681 hits
Availability:              99.90 %
Elapsed time:             163.48 secs
Data transferred:        3385.71 MB
Response time:              0.52 secs
Transaction rate:         346.72 trans/sec
Throughput:            20.71 MB/sec
Concurrency:              180.97
Successful transactions:       56681
Failed transactions:              59
Longest transaction:           32.23
Shortest transaction:           0.05

# with uwsgi behind nginx (nginx serving the static files)
siege -c 200 -r 10 -b http://192.168.2.151/index.html

Transactions:              53708 hits
Availability:              99.73 %
Elapsed time:             122.13 secs
Data transferred:        3195.15 MB
Response time:              0.29 secs
Transaction rate:         439.76 trans/sec
Throughput:            26.16 MB/sec
Concurrency:              127.83
Successful transactions:       53708
Failed transactions:             148
Longest transaction:          103.07
Shortest transaction:           0.00

So uWSGI plus nginx gives a modest improvement, from about 346 trans/sec to about 439 trans/sec.

Ubuntu firewall: ufw

UbuntuHelp:UFW: http://wiki.ubuntu.org.cn/UbuntuHelp:UFW

Ufw user guide: http://wiki.ubuntu.org.cn/Ufw使用指南

ubuntu ufw firewall:

http://wap.dongnanshan.com/fn.php?s=ubuntu%20ufw%E9%98%B2%E7%81%AB%E5%A2%99

UFW essentials: common firewall rules and commands: https://www.howtoing.com/ufw-essentials-common-firewall-rules-and-commands/

Linux firewalls: https://blog.csdn.net/freeking101/article/details/70239637

Since version 2.4, the Linux kernel has shipped with an excellent firewall facility that can split, filter and forward network traffic with fine-grained control, enabling things like firewalls and NAT.

These firewall rules are usually managed with the well-known iptables, which is flexible and very powerful, but as a side effect its configuration is fairly complex. Ubuntu, which prides itself on ease of use, ships a much simpler front end in its releases: ufw.

ufw, the Uncomplicated Firewall, is a front end to iptables intended to simplify firewall configuration; the genuinely intricate parts still have to be done in iptables. iptables is a solid and flexible tool, but beginners can find it hard to configure correctly. If you want to start securing your network and are not sure which tool to use, UFW may be the right choice.

Configuring the firewall on Ubuntu with UFW (a simpler take on iptables)

UFW stands for Uncomplicated Firewall; it is Ubuntu's tool for configuring the iptables firewall, i.e. a front-end application for iptables. It provides a very friendly command syntax for creating IPv4 and IPv6 rules. UFW itself is command-line only; for those who prefer a GUI there is a front end for it called "Gufw".
Because driving iptables directly on Ubuntu is relatively complex, UFW simplifies a lot of the work; the same applies on Debian.
The UFW command-line usage is identical on desktop and server editions.

1. Installation

The firewall is installed by default on Ubuntu; if it was removed by accident, reinstall it with:

sudo apt-get install ufw

Check the firewall status:

sudo ufw status
sudo ufw status numbered  # list rules with numbers

Firewall version:

sudo ufw version

2. Enable, disable and reset UFW

sudo ufw enable        # enable the firewall and start it at boot
sudo ufw default deny  # block all external access to this host; outbound access still works
sudo ufw reset         # reset the firewall to its installed defaults

3. Allowing and denying

Usage: sudo ufw allow|deny [service]

Advanced rules:

Besides allowing or blocking by port, UFW also lets you allow/block by IP address, by subnet, and by combinations of address, subnet and port.

ufw deny from {ip-address-here} to any port {port-number-here}
sudo ufw deny from 202.54.1.5 to any port 80    # drop or reject requests from 202.54.1.5 to port 80

Block a specific IP, port and protocol:

sudo ufw deny proto {tcp|udp} from {ip-address-here} to any port {port-number-here}


For example, to block IP address 202.54.1.1 from reaching TCP port 22 (SSH):

sudo ufw deny proto tcp from 202.54.1.1 to any port 22
sudo ufw status numbered



Blocking a subnet with UFW uses similar syntax:

sudo ufw deny proto tcp from sub/net to any port 22
sudo ufw deny proto tcp from 202.54.1.0/24 to any port 22


Example: for an ordinary user, the following is usually enough:

sudo apt-get install ufw
sudo ufw enable
sudo ufw default deny


These three commands already give a reasonably safe baseline; if you need to expose specific services, open them with sudo ufw allow.

sudo ufw deny from 192.168.1.5 to any # drop or reject all packets from 192.168.1.5
sudo ufw allow from 123.45.67.89 # allow connections from a single IP address
sudo ufw allow from 123.45.67.89/24 # allow connections from a specific subnet


Allow a specific IP/port combination:

sudo ufw allow from 123.45.67.89 to any port 22 proto tcp


proto tcp can be dropped or changed to proto udp as needed, and every allow in these examples can equally be a deny. More examples:

sudo ufw allow www        # same as: sudo ufw allow 80/tcp
sudo ufw allow ftp        # same as: sudo ufw allow 21/tcp
sudo ufw allow 22/tcp     # allow any external IP to reach 22/tcp (ssh)
sudo ufw allow 53         # allow external access to port 53 (tcp/udp)
sudo ufw allow 80         # allow external access to port 80; equivalent to sudo ufw allow http
sudo ufw delete allow 80  # remove the rule allowing external access to port 80
sudo ufw allow from 192.168.1.1 # allow this IP to reach every local port
sudo ufw allow from 111.111.111.111 to any port 22 # allow a specific IP to a specific port
sudo ufw allow proto udp from 192.168.0.1 port 53 to 192.168.0.2 port 53
sudo ufw deny smtp        # block external access to the smtp service
sudo ufw allow smtp       # allow any external IP to reach 25/tcp (smtp)
sudo ufw delete allow smtp # delete the rule created above
sudo ufw deny http        # reject http connections
sudo ufw deny from 111.111.111.111 # reject connections from a specific IP

Allow a range of ports:

sudo ufw allow 1000:2000/tcp
sudo ufw allow 2001:3000/udp

# deny all TCP traffic from 10.0.0.0/8 to port 22 on 192.168.0.1
sudo ufw deny proto tcp from 10.0.0.0/8 to 192.168.0.1 port 22

# allow all RFC1918 networks (LAN/WLAN) to reach this host (/8, /16 and /12 are network classes):
sudo ufw allow from 10.0.0.0/8
sudo ufw allow from 172.16.0.0/12
sudo ufw allow from 192.168.0.0/16

This is already a fairly safe setup; if you have special needs, open the corresponding services with sudo ufw allow.

Set the default policy (deny all incoming connections, allow all outgoing ones):

sudo ufw default deny incoming
sudo ufw default allow outgoing

Allow SSH connections (important! otherwise you will lock yourself out of a cloud server…):

# the following two commands are equivalent
sudo ufw allow ssh
sudo ufw allow 22
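
A typical first-time sequence that avoids locking yourself out is to set the defaults and allow SSH before enabling the firewall:

sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp
sudo ufw enable
sudo ufw status verbose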

4. Deleting rules

There are two ways to delete a firewall rule. The most direct one is:

sudo ufw delete allow ssh

That is, put the rule to delete after the delete keyword. It also works like this:

sudo ufw delete allow 80/tcp

sudo ufw delete allow 1000:2000/tcp

If the rule is long and complicated this gets awkward. There is an easier way, in two steps:

sudo ufw status numbered    # list all firewall rules as a numbered list
# replace [number] with the rule's number from that list
sudo ufw delete 1           # delete a rule by number: sudo ufw delete [number]
sudo ufw delete 4           # delete rule number 4

5. Turning the firewall on and off (the default state is 'disabled'):

sudo ufw enable|disable 

6. Editing the UFW configuration files

Simple rules can be added on the command line, but you may need more advanced or specific ones. Before applying the rules entered in the terminal, UFW runs a file called before.rules, which allows things like the loopback interface, ping and DHCP. To add or change those rules, edit /etc/ufw/before.rules; the before6.rules file in the same directory does the same for IPv6.

There are also after.rules and after6.rules files for rules that should be applied after the ones entered on the command line.
One more configuration file lives at /etc/default/ufw. There you can disable or enable IPv6, set default policies, and let UFW manage the built-in firewall chains.

Enabling IPv6 support

If your server has IPv6 enabled, configure UFW to support it by editing the UFW config file:

sudo nano /etc/default/ufw

Set IPV6 to yes in that file:

IPV6=yes

Save, quit the editor, and restart UFW:

sudo ufw disable
sudo ufw enable

UFW now handles both IPv4 and IPv6.

Detailed reference

Square brackets denote optional parts. Root privileges may be required; if a command does not run, prefix it with sudo. Quoted placeholders below are not to be copied literally; replace them as needed.
ufw [--dry-run] enable|disable|reload
    enable, disable or reload the firewall (optionally as a dry run)
ufw [--dry-run] default allow|deny|reject [incoming|outgoing]
    set the default policy for incoming or outgoing traffic to allow, deny or reject
    Note: reject tells the sender the traffic was refused; deny silently drops it, so the sender cannot tell whether it was blocked or the host simply does not exist.
ufw [--dry-run] logging on|off|LEVEL
    turn logging on or off, or set the log level
ufw [--dry-run] reset
    reset the firewall
ufw [--dry-run] status [verbose|numbered]
    show the status (verbose, or with numbered rules)
ufw [--dry-run] show REPORT
    show a report of the given type
ufw [--dry-run] [delete] [insert NUM] allow|deny|reject|limit  [in|out][log|log-all] PORT[/protocol]
    add (or delete, or insert before rule NUM) an allow/deny/reject/limit rule for a port, optionally restricted to in or out, logging new connections (log) or all packets (log-all)
ufw  [--dry-run]  [delete] [insert NUM] allow|deny|reject|limit [in|out on INTERFACE] [log|log-all] [proto protocol] [from ADDRESS [port PORT]] [to ADDRESS [port PORT]]
    the full rule form: direction, interface, logging, protocol, source address/port and destination address/port
ufw [--dry-run] delete NUM
    delete rule number NUM
ufw [--dry-run] app list|info|default|update
    application profiles: list, show info, set default, or update

Options
--version
    print the version number
-h, --help
    show the help text
--dry-run
    do not actually apply anything; only show what would change
enable
    activate the firewall and start it on boot
disable
    deactivate the firewall and do not start it on boot
reload
    reload the firewall
default allow|deny|reject DIRECTION
    DIRECTION is incoming or outgoing. If you change the default policy, some existing rules may need manual adjustment; see the rule examples below.
logging on|off|LEVEL
    toggle logging. Log entries go through syslog. There are several levels; the default is low. See the logging section.
reset [--force]
    disable the firewall and reset it to its freshly installed state; with --force the confirmation prompt is skipped
status
    show the firewall state and the configured rules. status verbose shows more detail. 'anywhere' means the same as 'any' or '0.0.0.0/0'.
show REPORT
    show runtime information; see the report types
limit RULE
    currently IPv4 only; IPv6 is not yet supported

Rule examples
    * Rules can be written in a short or a full form. The short form only specifies a port and/or protocol to allow or block, and applies to incoming traffic by default. For example:
ufw allow 53
    allows other hosts to reach port 53 on this machine, both tcp and udp.
    * To pin the protocol, append "/protocol" to the port:
ufw allow 25/tcp
    allows other hosts to reach port 25 over tcp.
    * UFW also reads /etc/services and knows service names and their ports/protocols, so a service name can be used directly:
ufw allow smtp
    * UFW filters in both directions. Use in or out to pick one; the default is in. For example:
ufw allow in http
ufw reject out smtp
ufw deny out to 192.168.1.1
    blocks traffic sent to 192.168.1.1.
    * Full rules can also specify source, destination and ports; the syntax is modeled on OpenBSD PF. For example:
ufw deny proto tcp to any port 80
    blocks tcp traffic to port 80 on this host
ufw deny proto tcp from 10.0.0.0/8 to 192.168.0.1 port 25
    This will deny all traffic from the RFC1918 Class A network to tcp port 25 with the address 192.168.0.1.
    * ufw also handles IPv6, provided IPv6 has been enabled in /etc/default/ufw beforehand. For example:
ufw deny proto tcp from 2001:db8::/32 to any port 25
    blocks addresses in 2001:db8::/32 from reaching port 25 on this host.
    * ufw can list several ports in one rule. Ports are separated by commas, ranges use a colon, and no spaces are allowed; a rule can hold at most 15 entries, where a range such as 8080:8090 counts as two. For example, to accept tcp connections on 80, 443 and 8080-8090:
ufw allow proto tcp from any to any port 80,443,8080:8090
    this rule uses 4 of the 15 entries.
    * ufw can rate-limit connections to protect against brute-force login attacks: if the same IP makes 6 or more connection attempts within 30 seconds, ufw denies further connections from it.
ufw limit ssh/tcp
    * Sometimes you want the other side to know the connection was refused rather than silently dropped; use reject instead of deny:
ufw reject auth
    * By default ufw rules apply to every network interface (eth0, eth1, wlan0, virtual interfaces, ...). A rule can be restricted to one interface; note that the device name must be used, not an alias (use ifconfig to list your interfaces):
ufw allow in on eth0 to any port 80 proto tcp
    * To delete a rule, prefix it with delete. For example, to delete the rule created by:
ufw deny 80/tcp
    use:
ufw delete deny 80/tcp
    Rules can also be deleted by number, e.g. rule 3:
ufw delete 3
    Note that with IPv6 enabled, a rule such as ufw allow 22/tcp exists for both IPv4 and IPv6; deleting by number may remove only one of them.
    * To show the rule numbers:
ufw status numbered
    * Logging: log records all new connections matching a rule, log-all records all matching packets. For example, to allow and log new ssh (22/tcp) connections:
ufw allow log 22/tcp
    see the logging section for more.
Special case: allowing the RFC1918 private networks to reach this host:
ufw allow from 10.0.0.0/8
ufw allow from 172.16.0.0/12
ufw allow from 192.168.0.0/16
    the last rule covers roughly 192.168.0.0 through 192.168.255.255.
Remote management
    (not covered here)
Application profiles
* ufw reads application profiles from /etc/ufw/applications.d. List them with:
ufw app list
 * Rules can then be added by application name, for example:
ufw allow <profile name>
ufw allow CUPS
ufw allow from 192.168.0.0/16 to any app <profile name>
    Note that the ports are already part of the named profile; do not list them again.
* To see what a profile contains:
ufw app info <profile name>
    The name must be one from the list; use all to show every profile.
* If you edit or add profiles, refresh the firewall with:
ufw app update <profile name>
    With all, the whole list is refreshed.
* To refresh the list and add rules at the same time:
ufw app update --add-new <profile name>
    Note: the behaviour of update --add-new is controlled by:
ufw app default skip|allow|deny
    The default is skip, i.e. nothing is applied automatically.
Warning: setting application rules to default allow is a significant risk. Think twice!
Logging
ufw supports several log levels. The default is low; you can set it yourself:
ufw logging on|off|low|medium|high|full
    * off: logging disabled
    * low: logs packets that violate the default policy (rate limited) and packets matching rules that request logging
    * medium: like low, plus packets matching the default policy (including allowed ones), invalid packets, and all new connections; rate limited
    * high: like medium but without that rate limit, plus all packets (with a rate limit)
    * full: like high with the rate limit removed
At medium and above, the log grows very quickly and can fill your disk in a short time, especially on servers.

Setting a proxy server for wget and curl

1. wget with a proxy

eg:

wget -Y on -e "http_proxy=http://10.0.0.172:9201" "www.wo.com.cn"

This fetches www.wo.com.cn through the proxy server at 10.0.0.172:9201.

Options

  • -Y  whether to use the proxy

  • -e  execute a wgetrc-style command (here, setting http_proxy)

2. curl with a proxy

eg:

curl -x 10.0.0.172:80 www.wo.com.cn

This fetches www.wo.com.cn through the proxy server at 10.0.0.172:80.

Options

  • -x  set the proxy as host[:port]; the default port is 1080
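
Both tools also honour the standard proxy environment variables, which is often more convenient than per-command options; a sketch using the same proxy addresses as above:

export http_proxy=http://10.0.0.172:9201
export https_proxy=http://10.0.0.172:9201
wget "http://www.wo.com.cn"
curl "http://www.wo.com.cn"
# curl also accepts the long form of -x:
curl --proxy http://10.0.0.172:80 http://www.wo.com.cn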

Scanning Linux for vulnerabilities with lynis

lynis is an open-source, host-based security auditing tool for Unix/Linux platforms.

Installing lynis

On Arch Linux it can be installed directly with pacman:

sudo pacman -S lynis --noconfirm
resolving dependencies...
looking for conflicting packages...

Packages (1) lynis-2.6.4-1

Total Installed Size:  1.35 MiB
Net Upgrade Size:      0.00 MiB

:: Proceed with installation? [Y/n] 
(0/1) checking keys in keyring                     [----------------------]   0%
(1/1) checking keys in keyring                     [######################] 100%
(0/1) checking package integrity                   [----------------------]   0%
(1/1) checking package integrity                   [######################] 100%
(0/1) loading package files                        [----------------------]   0%
(1/1) loading package files                        [######################] 100%
(0/1) checking for file conflicts                  [----------------------]   0%
(1/1) checking for file conflicts                  [######################] 100%
(0/1) checking available disk space                [----------------------]   0%
(1/1) checking available disk space                [######################] 100%
:: Processing package changes...
(1/1) reinstalling lynis                           [----------------------]   0%
(1/1) reinstalling lynis                           [######################] 100%
:: Running post-transaction hooks...
(1/2) Reloading system manager configuration...
(2/2) Arming ConditionNeedsUpdate...

Scanning a host with lynis

First, run lynis without any arguments; this prints the supported commands and options:

[lujun9972@T520 linux和它的小伙伴]$ lynis

[ Lynis 2.6.4 ]

################################################################################
  Lynis comes with ABSOLUTELY NO WARRANTY. This is free software, and you are
  welcome to redistribute it under the terms of the GNU General Public License.
  See the LICENSE file for details about using this software.

  2007-2018, CISOfy - https://cisofy.com/lynis/
  Enterprise support available (compliance, plugins, interface and tools)
################################################################################


[+] Initializing program
------------------------------------


  Usage: lynis command [options]


  Command:

    audit
        audit system                  : Perform local security scan
        audit system remote <host>    : Remote security scan
        audit dockerfile <file>       : Analyze Dockerfile

    show
        show                          : Show all commands
        show version                  : Show Lynis version
        show help                     : Show help

    update
        update info                   : Show update details


  Options:

    --no-log                          : Don't create a log file
    --pentest                         : Non-privileged scan (useful for pentest)
    --profile <profile>               : Scan the system with the given profile file
    --quick (-Q)                      : Quick mode, don't wait for user input

    Layout options
    --no-colors                       : Don't use colors in output
    --quiet (-q)                      : No output
    --reverse-colors                  : Optimize color display for light backgrounds

    Misc options
    --debug                           : Debug logging to screen
    --view-manpage (--man)            : View man page
    --verbose                         : Show more details on screen
    --version (-V)                    : Display version number and quit

    Enterprise options
    --plugindir <path>                : Define path of available plugins
    --upload                          : Upload data to central node

    More options available. Run '/usr/bin/lynis show options', or use the man page.

  No command provided. Exiting..

As the output shows, scanning a host is as simple as passing audit system. During an audit, lynis runs a wide range of tests and writes the test results, debug information and hardening suggestions to stdout. We can skip past the scan output and jump straight to the suggestions at the end:

sudo lynis audit system |sed '1,/Results/d'

lynis organizes its tests into groups, which can be listed with show groups:

lynis show groups
accounting
authentication
banners
boot_services
containers
crypto
databases
dns
file_integrity
file_permissions
filesystems
firewalls
hardening
homedirs
insecure_services
kernel
kernel_hardening
ldap
logging
mac_frameworks
mail_messaging
malware
memory_processes
nameservices
networking
php
ports_packages
printers_spools
scheduling
shells
snmp
squid
ssh
storage
storage_nfs
system_integrity
time
tooling
usb
virtualization
webservers

To scan only certain groups, pass them with --tests-from-group.

For example, to scan only the shells and networking groups:

sudo lynis --tests-from-group "shells networking" --no-colors
[ Lynis 2.6.4 ]

################################################################################
  Lynis comes with ABSOLUTELY NO WARRANTY. This is free software, and you are
  welcome to redistribute it under the terms of the GNU General Public License.
  See the LICENSE file for details about using this software.

  2007-2018, CISOfy - https://cisofy.com/lynis/
  Enterprise support available (compliance, plugins, interface and tools)
################################################################################


[+] Initializing program
------------------------------------
  - Detecting OS...                                          [ DONE ]
  - Checking profiles...                                     [ DONE ]
  - Detecting language and localization                      [ zh ]
    Notice: no language file found for 'zh' (tried: /usr/share/lynis/db/languages/zh)

  ---------------------------------------------------
  Program version:           2.6.4
  Operating system:          Linux
  Operating system name:     Arch Linux
  Operating system version:  Rolling release
  Kernel version:            4.16.13
  Hardware platform:         x86_64
  Hostname:                  T520
  ---------------------------------------------------
  Profiles:                  /etc/lynis/default.prf
  Log file:                  /var/log/lynis.log
  Report file:               /var/log/lynis-report.dat
  Report version:            1.0
  Plugin directory:          /usr/share/lynis/plugins
  ---------------------------------------------------
  Auditor:                   [Not Specified]
  Language:                  zh
  Test category:             all
  Test group:                shells networking
  ---------------------------------------------------
  - Program update status...                                 [ NO UPDATE ]

[+] System Tools
------------------------------------
  - Scanning available tools...
  - Checking system binaries...

[+] Plugins (phase 1)
------------------------------------
Note: plugins have more extensive tests and may take several minutes to complete

  - Plugins enabled                                          [ NONE ]

[+] Shells
------------------------------------
  - Checking shells from /etc/shells
    Result: found 5 shells (valid shells: 5).
    - Session timeout settings/tools                         [ NONE ]
  - Checking default umask values
    - Checking default umask in /etc/bash.bashrc             [ NONE ]
    - Checking default umask in /etc/profile                 [ WEAK ]

[+] Networking
------------------------------------
  - Checking IPv6 configuration                              [ ENABLED ]
      Configuration method                                   [ AUTO ]
      IPv6 only                                              [ NO ]
  - Checking configured nameservers
    - Testing nameservers
      Nameserver: 202.96.134.33                              [ SKIPPED ]
      Nameserver: 202.96.128.86                              [ SKIPPED ]
    - Minimal of 2 responsive nameservers                    [ SKIPPED ]
  - Getting listening ports (TCP/UDP)                        [ DONE ]
      * Found 11 ports
  - Checking status DHCP client                              [ RUNNING ]
  - Checking for ARP monitoring software                     [ NOT FOUND ]

[+] Custom Tests
------------------------------------
  - Running custom tests...                                  [ NONE ]

[+] Plugins (phase 2)
------------------------------------

================================================================================

  -[ Lynis 2.6.4 Results ]-

  Great, no warnings

  Suggestions (1):
  ----------------------------
  * Consider running ARP monitoring software (arpwatch,arpon) [NETW-3032] 
      https://cisofy.com/controls/NETW-3032/

  Follow-up:
  ----------------------------
  - Show details of a test (lynis show details TEST-ID)
  - Check the logfile for all details (less /var/log/lynis.log)
  - Read security controls texts (https://cisofy.com)
  - Use --upload to upload data to central system (Lynis Enterprise users)

================================================================================

  Lynis security scan details:

  Hardening index : 33 [######              ]
  Tests performed : 13
  Plugins enabled : 0

  Components:
  - Firewall               [X]
  - Malware scanner        [X]

  Lynis Modules:
  - Compliance Status      [?]
  - Security Audit         [V]
  - Vulnerability Scan     [V]

  Files:
  - Test and debug information      : /var/log/lynis.log
  - Report data                     : /var/log/lynis-report.dat

================================================================================

  Lynis 2.6.4

  Auditing, system hardening, and compliance for UNIX-based systems
  (Linux, macOS, BSD, and others)

  2007-2018, CISOfy - https://cisofy.com/lynis/
  Enterprise support available (compliance, plugins, interface and tools)

================================================================================

  [TIP]: Enhance Lynis audits by adding your settings to custom.prf (see /etc/lynis/default.prf for all settings)

Viewing test details

When going through the audit results, show details gives a detailed explanation of a particular warning or suggestion. The command has the form:

lynis show details ${test_id}

For example, the audit above produced this suggestion:

* Consider running ARP monitoring software (arpwatch,arpon) [NETW-3032] 

We can run:

sudo lynis show details NETW-3032 
2018-06-08 18:18:01 Performing test ID NETW-3032 (Checking for ARP monitoring software)
2018-06-08 18:18:01 IsRunning: process 'arpwatch' not found
2018-06-08 18:18:01 IsRunning: process 'arpon' not found
2018-06-08 18:18:01 Suggestion: Consider running ARP monitoring software (arpwatch,arpon) [test:NETW-3032] [details:-] [solution:-]
2018-06-08 18:18:01 Checking permissions of /usr/share/lynis/include/tests_printers_spools
2018-06-08 18:18:01 File permissions are OK
2018-06-08 18:18:01 ===---------------------------------------------------------------===

Viewing the log file

After an audit, lynis writes detailed information to /var/log/lynis.log:

sudo tail /var/log/lynis.log
2018-06-08 17:59:46 ================================================================================
2018-06-08 17:59:46 Lynis 2.6.4
2018-06-08 17:59:46 2007-2018, CISOfy - https://cisofy.com/lynis/
2018-06-08 17:59:46 Enterprise support available (compliance, plugins, interface and tools)
2018-06-08 17:59:46 Program ended successfully
2018-06-08 17:59:46 ================================================================================
2018-06-08 17:59:46 PID file removed (/var/run/lynis.pid)
2018-06-08 17:59:46 Temporary files:  /tmp/lynis.sGxCR0hSPz
2018-06-08 17:59:46 Action: removing temporary file /tmp/lynis.sGxCR0hSPz
2018-06-08 17:59:46 Lynis ended successfully.

The report data is saved to /var/log/lynis-report.dat:

sudo tail /var/log/lynis-report.dat

Note that every audit overwrites the previous log file.

Checking for updates

An auditing tool should be kept up to date to get the latest tests and advice; update info checks for updates:

lynis update info --no-colors
 == Lynis ==

  Version            : 2.6.4
  Status             : Up-to-date
  Release date       : 2018-05-02
  Update location    : https://cisofy.com/lynis/


2007-2018, CISOfy - https://cisofy.com/lynis/

Customizing the lynis audit policy

lynis keeps its configuration as .prf files in the /etc/lynis directory. It ships with a default profile named default.prf.

There is no need to edit that default profile directly: just add a new custom.prf file containing your own settings.

The meaning of each setting is documented in the comments of default.prf, so it is not repeated here.
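
As a sketch of what such a custom.prf could contain (option names vary between Lynis versions, so check them against the comments in your default.prf first):

# /etc/lynis/custom.prf -- local overrides (hypothetical example)
# skip a test whose risk we have accepted
skip-test=NETW-3032
# return a non-zero exit code when warnings are found
error-on-warnings=yes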

For more information about lynis, see the official documentation: https://cisofy.com/documentation/lynis/

Running multiple Flask apps (URL routing and dispatch)

from flask import Flask
from werkzeug.wsgi import DispatcherMiddleware
from werkzeug.serving import run_simple

app01 = Flask('app01')
app02 = Flask('app02')

@app01.route('/login')
def login():
    return 'app01.login'

@app02.route('/index')
def index():
    return 'app02.index'


dm = DispatcherMiddleware(app01,{
    '/app02':        app02,
})

if __name__ == '__main__':
    run_simple('localhost', 5000,dm)

We can create multiple apps and run them side by side: app01's routes are reached directly (e.g. /login), while app02's routes live under the /app02 prefix (e.g. /app02/index).
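
With the script running, the dispatching can be checked with something like the following (paths taken from the code above):

curl http://localhost:5000/login        # handled by app01, returns "app01.login"
curl http://localhost:5000/app02/index  # handled by app02, returns "app02.index"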