
Fixing pycurl installation errors with pip

 

Error 1: Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-install-u_NBeS/pycurl/

Fix:

pip install --upgrade

pip install pycurl==7.43.0

 

Error 2: an error at runtime:

 

Setting up a selenium + chrome crawler environment

In the directory /etc/yum.repos.d/, create a new file google-chrome.repo

with the following content:
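The repo file content did not survive formatting; a typical google-chrome.repo looks like this (gpgcheck disabled to match the --nogpgcheck install below):

```ini
[google-chrome]
name=google-chrome
baseurl=http://dl.google.com/linux/chrome/rpm/stable/x86_64
enabled=1
gpgcheck=0
```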

Then install:

yum -y install google-chrome-stable --nogpgcheck

After installation, locate the binary:

which google-chrome-stable

which google-chrome

Check the version:

google-chrome --version

For more options:

google-chrome --help  # lists all options

However, using selenium + chrome raised an error:

Two extra flags had to be added:

The full set of flags used was:

['--headless', '--disable-gpu', '--no-sandbox', '--disable-extensions', '--disable-dev-shm-usage']

Even after all this, Chrome still would not open when logging in through VNC; this post has a fix:

https://blog.csdn.net/tiandaochouqin99/article/details/79643248

Killing a stuck Tomcat process

Sometimes Tomcat will not stop cleanly; the following changes are needed:

In catalina.sh, after the line PRGDIR=`dirname "$PRG"`, add:

if [ -z "$CATALINA_PID" ]; then
CATALINA_PID=$PRGDIR/CATALINA_PID
cat $CATALINA_PID
fi

In shutdown.sh, append -force after stop on the last line.

Create the file that stores the PID in the bin directory:

touch CATALINA_PID
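What `shutdown.sh stop -force` then does can be sketched with a stand-in process (a sleep in place of the real Tomcat JVM; the /tmp path is for the demo only):

```shell
# Stand-in for Tomcat: a background sleep whose PID is recorded,
# then force-killed via the PID file, as shutdown.sh -force does
# with $CATALINA_PID.
PIDFILE=/tmp/catalina_pid_demo
sleep 300 &
echo $! > "$PIDFILE"
kill -9 "$(cat "$PIDFILE")"
wait 2>/dev/null || true   # reap the killed child
```

With CATALINA_PID set, the -force path boils down to `kill -9` on the PID read from that file, which is why the stuck process finally dies.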

ELK enterprise log platform: detailed deployment + monitoring plugins

1. Operating system tuning:

Add to /etc/sysctl.conf:

vm.max_map_count=262144

Add to /etc/security/limits.conf:

*                soft   nofile          65536

*                hard   nofile          65536

*                soft   nproc           16384

*                hard   nproc           32768

Modify /etc/security/limits.d/90-nproc.conf as follows:

*          soft    nproc     2048

root       soft    nproc     unlimited

Add the hostnames to /etc/hosts:

10.1.14.39  test-20160224.novalocal

10.1.14.40  test-20160224-1.novalocal

10.1.14.41  test-20160224-2.novalocal

2. Download elasticsearch-5.5.3.tar.gz:

tar -zxvf elasticsearch-5.5.3.tar.gz

mv elasticsearch-5.5.3 /home/htdocs/

Elasticsearch must run as a non-root user, so create a webadmin user.

chown -R webadmin.webadmin /home/htdocs/elasticsearch-5.5.3

Edit the configuration file (config/elasticsearch.yml):

# ======================== Elasticsearch Configuration =========================

cluster.name: es-cluster

node.name: test-20160224.novalocal

#node.master: true

#node.data: true

path.data: /home/datas/es

path.logs: /home/logs/es

network.host: 10.1.14.39

http.port: 9200

transport.tcp.port: 9300

transport.tcp.compress: true

discovery.zen.ping.unicast.hosts: ["10.1.14.39:9300","10.1.14.40:9300","10.1.14.41:9300"]

discovery.zen.minimum_master_nodes: 1

#gateway.recover_after_nodes: 3

#action.destructive_requires_name: true

bootstrap.memory_lock: false

bootstrap.system_call_filter: false

http.cors.enabled: true

http.cors.allow-origin: "*"

http.cors.allow-headers: Authorization,Content-Type

script.engine.groovy.inline.search: on

script.engine.groovy.inline.aggs: on

#xpack.graph.enabled: false

#xpack.ml.enabled: false

#xpack.security.enabled: false

Create the data directories: mkdir -p /home/datas/es /home/logs/es; chown -R webadmin.webadmin /home/logs/es /home/datas/es

cd /home/htdocs/elasticsearch-5.5.3/bin

./elasticsearch -d  # start

scp the elasticsearch-5.5.3 directory to 10.1.14.40 and 10.1.14.41, then edit elasticsearch.yml on each node. The files are identical to the one above except for node.name and network.host:

On 10.1.14.40:

node.name: test-20160224-1.novalocal

network.host: 10.1.14.40

On 10.1.14.41:

node.name: test-20160224-2.novalocal

network.host: 10.1.14.41

Start each node:

cd /home/htdocs/elasticsearch-5.5.3/bin

./elasticsearch -d

3. Install the head monitoring plugin

Install node.js:

wget https://nodejs.org/dist/v6.10.2/node-v6.10.2-linux-x64.tar.xz

xz -d node-v6.10.2-linux-x64.tar.xz

tar xvf node-v6.10.2-linux-x64.tar

mv node-v6.10.2-linux-x64 /usr/local/node

vim /etc/profile  # add the node environment variables, alongside the existing JDK entries
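The highlighted lines were lost in formatting; the additions to /etc/profile were presumably along these lines (NODE_HOME is an assumed variable name):

```shell
# Put node on the PATH alongside the existing JDK entries.
export NODE_HOME=/usr/local/node
export PATH=$NODE_HOME/bin:$PATH
```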

 

# node -v

# npm -v

Install the head plugin (elasticsearch-head):

chown -R webadmin.webadmin elasticsearch-head

npm install -g grunt

npm install -g grunt-cli

cd elasticsearch-head

npm install

In elasticsearch-head/_site/app.js, change the default connection address http://localhost:9200 to http://10.1.14.39:9200.

In elasticsearch-head/Gruntfile.js, add hostname: '*' to the connect server options so it listens on all interfaces.

 

Open http://10.1.14.39:9100/ in a browser.

After a reboot, start it again with:

grunt server &

Install bigdesk:

wget http://yellowcong.qiniudn.com/bigdesk-master.zip

Unpack it,

then point nginx at the directory and it can be accessed directly:

 

# Install the cerebro plugin

wget https://github.com/lmenezes/cerebro/releases/download/v0.6.5/cerebro-0.6.5.tgz

tar zxvf cerebro-0.6.5.tgz

cd cerebro-0.6.5/

bin/cerebro


Install ZooKeeper:

Version: zookeeper-3.4.10.tar.gz

On the same three nodes.

Unpack: tar -zxvf zookeeper-3.4.10.tar.gz

mv zookeeper-3.4.10 /usr/local/zookeeper

Edit /usr/local/zookeeper/conf/zoo.cfg:
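The zoo.cfg content was lost in formatting; a minimal sketch consistent with the rest of this section (dataDir taken from the /data/zk/zk0/data path used below, server IDs matching the myid values 0/1/2; the tick and limit values are common defaults, not from the original):

```ini
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zk/zk0/data
clientPort=2181
server.0=test-20160224.novalocal:2888:3888
server.1=test-20160224-1.novalocal:2888:3888
server.2=test-20160224-2.novalocal:2888:3888
```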

Copy the zookeeper directory to 10.1.14.40 and 10.1.14.41.

On each of the three hosts, create a file named myid under the dataDir path; on 10.1.14.39, 10.1.14.40 and 10.1.14.41 its content is 0, 1 and 2 respectively.

For example, on 10.1.14.39:

cd /data/zk/zk0/data/

cat myid

0

On 10.1.14.40 the myid content is 1, and on 10.1.14.41 it is 2.
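The three myid files can be created with one loop; this demo uses paths under /tmp, while on the real hosts the directory is the dataDir from zoo.cfg:

```shell
# Create data dirs zk0..zk2 and write the matching server id
# (0, 1, 2) into each myid file.
for i in 0 1 2; do
  d=/tmp/zkdemo/zk$i/data
  mkdir -p "$d"
  echo "$i" > "$d/myid"
done
cat /tmp/zkdemo/zk2/data/myid   # → 2
```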

Start:

cd /usr/local/zookeeper/

bin/zkServer.sh start

Stop:

bin/zkServer.sh stop

Check status:

bin/zkServer.sh status

Note: by default ZooKeeper writes console output to zookeeper.out in the directory it was started from, which is not acceptable in production. To make it write size-rotated log files instead:

In conf/log4j.properties, change zookeeper.root.logger=INFO, CONSOLE to zookeeper.root.logger=INFO, ROLLINGFILE.

In bin/zkEnv.sh, change ZOO_LOG4J_PROP="INFO,CONSOLE" to ZOO_LOG4J_PROP="INFO,ROLLINGFILE".

Then restart ZooKeeper.

4. Install Kafka: download kafka_2.12-1.0.0.tgz and unpack it.

mv kafka_2.12-1.0.0 /usr/local/

Enter the config directory and edit server.properties:
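The server.properties content was lost in formatting; a minimal sketch for the 10.1.14.39 broker, consistent with the broker.id changes, ports, and ZooKeeper ensemble used in this section (log.dirs is an assumption):

```ini
broker.id=0
listeners=PLAINTEXT://10.1.14.39:9092
log.dirs=/home/datas/kafka-logs
num.partitions=3
zookeeper.connect=10.1.14.39:2181,10.1.14.40:2181,10.1.14.41:2181
```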

 

After editing, copy the directory to the other two hosts and change only broker.id:

10.1.14.40   broker.id=1

10.1.14.41   broker.id=2

Start on each of the three hosts:

./kafka-server-start.sh ../config/server.properties &

Create the topic:

 

Producer:

bin/kafka-console-producer.sh --broker-list 10.1.14.39:9092,10.1.14.40:9092,10.1.14.41:9092 --topic my-replicated-topic

Type:

Hello kafka

Consumer:

bin/kafka-console-consumer.sh --bootstrap-server 10.1.14.39:9092,10.1.14.40:9092,10.1.14.41:9092 --from-beginning --topic my-replicated-topic

It receives: Hello kafka.

# Create the nginx topic

./kafka-topics.sh --create --zookeeper 10.1.14.39:2181,10.1.14.40:2181,10.1.14.41:2181 --replication-factor 3 --partitions 3 --topic nginx-visitor-access-log

Install Filebeat:

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.1.1-x86_64.rpm

sudo rpm -vi filebeat-5.1.1-x86_64.rpm

# Check the data in ES:

curl '10.1.14.39:9200/_cat/indices?v'

# Configure filebeat
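The filebeat configuration itself is not shown; a sketch of /etc/filebeat/filebeat.yml (5.x prospector syntax) that ships the nginx access log to the Kafka topic created above — the log path is an assumption:

```yaml
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/nginx/access.log   # assumed nginx access-log path
output.kafka:
  hosts: ["10.1.14.39:9092", "10.1.14.40:9092", "10.1.14.41:9092"]
  topic: nginx-visitor-access-log
```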

/etc/init.d/filebeat start  # start

# Install logstash:

Download logstash-6.1.2 from the official site,

unpack it.

mv logstash-6.1.2 /usr/local/

Write a conf file that outputs to ES:

logstash_to_es.conf
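The conf file content was lost in formatting; a sketch of logstash_to_es.conf consistent with the pipeline in this section (Kafka topic in, the three ES nodes out; the index name pattern is an assumption):

```
input {
  kafka {
    bootstrap_servers => "10.1.14.39:9092,10.1.14.40:9092,10.1.14.41:9092"
    topics => ["nginx-visitor-access-log"]
  }
}
output {
  elasticsearch {
    hosts => ["10.1.14.39:9200", "10.1.14.40:9200", "10.1.14.41:9200"]
    index => "nginx-visitor-access-log-%{+YYYY.MM.dd}"
  }
}
```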

 

./logstash -f logstash_to_es.conf &  # start Logstash

# Configure the nginx log format

log_format main '$remote_addr - $remote_user [$time_local] '

'"$request" $status $body_bytes_sent '

'"$http_referer" "$http_user_agent"';
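The fields this log_format produces can be checked quickly with awk; the request line below is made up for illustration (whitespace-split, field 9 is the status and field 10 the body bytes, since the quoted request counts as three fields):

```shell
# A made-up access-log line in the `main` format above.
line='10.0.0.1 - - [01/Jan/2024:00:00:00 +0800] "GET /index.html HTTP/1.1" 200 612 "-" "curl/7.61.1"'
echo "$line" | awk '{print $9, $10}'   # → 200 612
```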

# Configure Kibana

Download kibana-5.3.2-linux-x86_64 from the official site.

Unpack it and enter the config directory.

Edit kibana.yml as follows:

server.port: 5601

server.host: "0.0.0.0"

elasticsearch.url: "http://10.1.14.39:9200"

kibana.index: ".kibana"

bin/kibana &  # start

# Access test: open Kibana in a browser on port 5601.

GlusterFS installation and deployment

Prepare three machines running CentOS 7.

Configure /etc/hosts:

 

192.168.137.131 gluster1

192.168.137.132 gluster2

192.168.137.133 gluster3

 

It is recommended to disable the firewalls first; re-enable them after deployment and then check the overall cluster status. If it is healthy, you are done.

 

Firewall rule to add: iptables -I INPUT -p tcp --dport 24007 -j ACCEPT

 

 

Install on all three hosts:

yum install centos-release-gluster -y

yum install -y glusterfs glusterfs-server glusterfs-fuse glusterfs-rdma glusterfs-geo-replication glusterfs-devel

 

mkdir /opt/glusterd

 

sed -i 's/var\/lib/opt/g' /etc/glusterfs/glusterd.vol
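What this sed does can be demonstrated on a sample file under /tmp containing the stock working-directory line from glusterd.vol:

```shell
# Rewrite /var/lib/glusterd to /opt/glusterd, as the sed above does.
printf 'volume management\n    option working-directory /var/lib/glusterd\nend-volume\n' > /tmp/glusterd.vol.demo
sed -i 's/var\/lib/opt/g' /tmp/glusterd.vol.demo
grep working-directory /tmp/glusterd.vol.demo   # → option working-directory /opt/glusterd
```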

systemctl start glusterd.service

systemctl enable glusterd.service

systemctl status glusterd.service

 

Create the storage directory:

mkdir /opt/gfs_data

 

 

Add the peers. This only needs to be done on the first node (I used 192.168.137.131):

gluster peer probe gluster2

gluster peer probe gluster3

gluster peer status

 

Create the volume; for other volume types see: http://www.cnblogs.com/jicki/p/5801712.html

 

gluster volume create k8s-volume transport tcp gluster1:/opt/gfs_data gluster2:/opt/gfs_data gluster3:/opt/gfs_data force

For production, an 8-node stripe + replica layout is recommended.

gluster volume info

gluster volume quota k8s-volume limit-usage / 3GB

gluster volume set k8s-volume performance.cache-size 500MB

gluster volume set k8s-volume performance.io-thread-count 16

gluster volume set k8s-volume network.ping-timeout 10

gluster volume set k8s-volume performance.write-behind-window-size 200MB

gluster volume info

 

The parameters above were tuned for my own machines; another reference configuration is:

# Enable quota on the volume

$ gluster volume quota k8s-volume enable

# Limit the volume's quota

$ gluster volume quota k8s-volume limit-usage / 1TB

# Set the cache size (default 32MB)

$ gluster volume set k8s-volume performance.cache-size 4GB

# Set the io thread count (too large can crash the process)

$ gluster volume set k8s-volume performance.io-thread-count 16

# Set the network ping timeout (default 42s)

$ gluster volume set k8s-volume network.ping-timeout 10

# Set the write-behind window size (default 1MB)

$ gluster volume set k8s-volume performance.write-behind-window-size 1024MB

 

 

 

Client installation:

yum install -y glusterfs glusterfs-fuse

 

Configure /etc/hosts, then mount:

mount -t glusterfs gluster1:k8s-volume /mnt/

 

Watch out for firewall issues; the firewall can be stopped with:

systemctl stop firewalld.service