
Deploy a Laravel Application to Kubernetes using Gitlab CI

Prerequisites

This article assumes you have a basic understanding of Docker, Kubernetes, and Gitlab CI, and that you have already set up a Kubernetes cluster.

Start a Laravel Project

The first thing you’ll need is a Laravel application; use Composer to start a new project.
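For example (the project name here is arbitrary):

composer create-project --prefer-dist laravel/laravel my-app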

Dockerize your Laravel Project

There are a number of ways to dockerize your Laravel project. You may use the official Nginx and PHP images from Docker Hub, but I found them a bit troublesome to set up.

So instead of messing around with all the different kinds of Docker images, I came across thecodingmachine/docker-images-php, a set of production-ready Docker images.

To build a production-ready image, we will use thecodingmachine/php:7.3-v2-slim-apache as our base image. The Dockerfile looks like this:

We will configure Gitlab CI to build the docker image automatically later.

Create Kubernetes Deployment Files

Here are all the yaml files we need to deploy our Laravel Application.

Deployment

Our deployment.yaml actually contains two deployments. One is for our main Laravel application, while the other is for Laravel Horizon. If you do not plan to use Horizon, you can simply remove it.

There’s an init container to optimize configuration and route loading. To share the application code between the init container and the app container, it uses an emptyDir volume.

There’s also an affinity setting that tells Kubernetes to try its best to schedule the pods across different nodes, so as to avoid downtime.
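A trimmed sketch of the relevant parts (the names and image are placeholders, the init command is one way to warm Laravel’s caches, and the Horizon deployment is omitted):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: laravel-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: laravel-app
  template:
    metadata:
      labels:
        app: laravel-app
    spec:
      # Prefer spreading pods across nodes to avoid downtime
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              topologyKey: kubernetes.io/hostname
              labelSelector:
                matchLabels:
                  app: laravel-app
      # Warm config/route caches, then hand the code to the app container
      initContainers:
      - name: optimize
        image: <your-registry>/laravel-app:latest
        command: ["sh", "-c", "php artisan config:cache && php artisan route:cache && cp -a /var/www/html/. /app/"]
        volumeMounts:
        - name: app-code
          mountPath: /app
      containers:
      - name: app
        image: <your-registry>/laravel-app:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 80
        volumeMounts:
        - name: app-code
          mountPath: /var/www/html
      volumes:
      - name: app-code
        emptyDir: {}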

CronJob

Our cronjob.yaml leverages Kubernetes’ CronJob to run php artisan schedule:run every minute. We feel this is a more robust way of scheduling cron: fine-tuning activeDeadlineSeconds, backoffLimit and startingDeadlineSeconds makes sure our cron actually gets scheduled.
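A sketch of such a cronjob.yaml (the deadline and retry values shown are illustrative, not necessarily the exact ones we use):

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: laravel-scheduler
spec:
  schedule: "* * * * *"
  concurrencyPolicy: Forbid
  # Count the run as missed if it cannot be scheduled within a minute
  startingDeadlineSeconds: 60
  jobTemplate:
    spec:
      # Kill the job if it runs longer than two minutes
      activeDeadlineSeconds: 120
      backoffLimit: 1
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: scheduler
            image: <your-registry>/laravel-app:latest
            command: ["php", "artisan", "schedule:run"]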

ConfigMap, Ingress & Service

Our ingress.yaml and service.yaml are pretty standard; we use Cloudflare DNS verification to obtain HTTPS certificates from Let’s Encrypt (it’s commented out).

As for the configmap, it’s recommended to use a Secret to store sensitive information like the database password.
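A minimal sketch (the Secret name, key, and value here are hypothetical):

apiVersion: v1
kind: Secret
metadata:
  name: laravel-secrets
type: Opaque
stringData:
  DB_PASSWORD: change-me

The deployment’s container spec can then reference it:

env:
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: laravel-secrets
      key: DB_PASSWORD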

Here’s a GitHub repo in case you would like to clone it. Merge requests are also appreciated!

Set up Gitlab CI

Now that we have our app and Kubernetes config ready, we can go ahead and set up Gitlab CI to automate our deployment. (We assume you are using Kubernetes token authentication.)

Set up CI/CD Environment Variables

Within Repo > Settings > CI / CD, we need to store our Kubernetes cluster credentials in Gitlab CI’s environment variables.

Move the Dockerfile to your project’s root directory, then create a folder called k8 and store all the Kubernetes yaml files inside. Create a file called .gitlab-ci.yml that contains the following.
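A condensed sketch of such a pipeline (the $IMAGE variable and the kubectl image are placeholders; ECR authentication is elided):

stages:
  - build
  - deploy

build:
  stage: build
  image: docker:stable
  services:
    - docker:dind
  script:
    # Registry login elided; $IMAGE is your registry/repo:tag
    - docker build -t $IMAGE .
    - docker push $IMAGE

deploy:
  stage: deploy
  image: <an image with kubectl>   # placeholder
  script:
    - kubectl config set-cluster k8s --server="$KUBE_URL"
    - kubectl config set-credentials gitlab --token="$KUBE_TOKEN"
    - kubectl config set-context k8s --cluster=k8s --user=gitlab
    - kubectl config use-context k8s
    - kubectl apply -f k8/
    # Force a rolling restart so pods re-pull the :latest image
    - kubectl patch deployment laravel-app -p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"redeploy\":\"$(date +%s)\"}}}}}"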

The CI/CD script contains two steps. The first builds our Laravel application image and pushes it to Amazon ECR (you may configure a different Docker image registry if you like). It then deploys the image to our Kubernetes cluster.

The $KUBE_URL and $KUBE_TOKEN used in the deploy step are the two environment variables we set up above.

Next, we ask kubectl to apply our k8s configuration files.

The last command is a hack to trigger a redeployment of the pods. Since we have set imagePullPolicy to Always, Kubernetes will automatically re-pull the latest version of our Docker image. Combined with our deployment’s rolling update strategy, there should be no downtime during deployment updates.

In production, we actually use Kustomize to maintain deployments for multiple Git branches and environments. But we are looking to switch to Helm, as it seems to be an easier and more popular deployment method.


And that’s it! The joy of CI/CD: just push your code, wait four minutes, and it’s deployed automatically, without downtime.


Fixing pycurl installation errors under pip

 

Error 1: Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-install-u_NBeS/pycurl/

Fix:

pip install --upgrade pip

pip install pycurl==7.43.0

 

Error 2: an error at runtime:

 

Setting up a selenium + chrome crawler environment

Create a new file google-chrome.repo in the /etc/yum.repos.d/ directory,

with the following content:
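The standard Google Chrome yum repo definition is:

[google-chrome]
name=google-chrome
baseurl=http://dl.google.com/linux/chrome/rpm/stable/x86_64
enabled=1
gpgcheck=1
gpgkey=https://dl.google.com/linux/linux_signing_key.pub

Then install Chrome: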

yum -y install google-chrome-stable --nogpgcheck

After installation, check where it was installed:

which google-chrome-stable

which google-chrome

Check the version:

google-chrome --version

To see more options:

google-chrome --help  # lists all the arguments

However, an error came up when using selenium + chrome, and two more arguments had to be added (see the Python sketch after the list below).

Other arguments include:

['--headless', '--disable-gpu', '--no-sandbox', '--disable-extensions', '--disable-dev-shm-usage']
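A minimal sketch of wiring these flags into selenium. It assumes the two required flags were --headless and --no-sandbox, which is the usual pair when running headless Chrome as root (the original error output is lost):

# selenium 3.x-era API
from selenium import webdriver

options = webdriver.ChromeOptions()
for arg in ['--headless', '--disable-gpu', '--no-sandbox',
            '--disable-extensions', '--disable-dev-shm-usage']:
    options.add_argument(arg)

driver = webdriver.Chrome(chrome_options=options)
driver.get('https://example.com')
print(driver.title)
driver.quit()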

Even after all this, Chrome still wouldn’t open when logging in via VNC; this post has a fix:

https://blog.csdn.net/tiandaochouqin99/article/details/79643248

How to kill a stuck Tomcat process

Sometimes Tomcat won’t stop, and the following changes are needed:

In catalina.sh, after the line PRGDIR=`dirname "$PRG"`, add:

if [ -z "$CATALINA_PID" ]; then
    CATALINA_PID="$PRGDIR/CATALINA_PID"
    cat "$CATALINA_PID"
fi
In the last line of shutdown.sh, add -force after stop, so the script calls catalina.sh stop -force; with CATALINA_PID set, this falls back to forcibly killing the PID recorded in that file.

Create the PID file in the bin directory:
touch CATALINA_PID

ELK enterprise log platform: detailed deployment + monitoring plugins

1. Operating system tuning:

Add to /etc/sysctl.conf:

vm.max_map_count=262144

Add to /etc/security/limits.conf:

*                soft   nofile          65536

*                hard   nofile          65536

*                soft   nproc           16384

*                hard   nproc           32768

Modify /etc/security/limits.d/90-nproc.conf as follows:

*          soft    nproc     2048

root       soft    nproc     unlimited

Add the hostnames to /etc/hosts:

10.1.14.39  test-20160224.novalocal

10.1.14.40  test-20160224-1.novalocal

10.1.14.41  test-20160224-2.novalocal

2. Download elasticsearch-5.5.3.tar.gz:

tar -zxvf elasticsearch-5.5.3.tar.gz

mv elasticsearch-5.5.3 /home/htdocs/

Elasticsearch must be started as a non-root user, so create a webadmin user.

chown -R webadmin.webadmin /home/htdocs/elasticsearch-5.5.3

Edit the config file:

# ======================== Elasticsearch Configuration =========================

cluster.name: es-cluster

node.name: test-20160224.novalocal

#node.master: true

#node.data: true

path.data: /home/datas/es

path.logs: /home/logs/es

network.host: 10.1.14.39

http.port: 9200

transport.tcp.port: 9300

transport.tcp.compress: true

discovery.zen.ping.unicast.hosts: ["10.1.14.39:9300","10.1.14.40:9300","10.1.14.41:9300"]

discovery.zen.minimum_master_nodes: 1

#gateway.recover_after_nodes: 3

#action.destructive_requires_name: true

bootstrap.memory_lock: false

bootstrap.system_call_filter: false

http.cors.enabled: true

http.cors.allow-origin: "*"

http.cors.allow-headers: Authorization,Content-Type

script.engine.groovy.inline.search: on

script.engine.groovy.inline.aggs: on

#xpack.graph.enabled: false

#xpack.ml.enabled: false

#xpack.security.enabled: false

Create the data and log directories: mkdir -p /home/datas/es /home/logs/es; chown -R webadmin.webadmin /home/logs/es /home/datas/es

cd /home/htdocs/elasticsearch-5.5.3/bin

./elasticsearch -d  # start

Use scp to copy the elasticsearch-5.5.3 directory to 10.1.14.40 and 10.1.14.41, then adjust the config files:

10.1.14.40  elasticsearch.yml

 

# ======================== Elasticsearch Configuration =========================

cluster.name: es-cluster

node.name: test-20160224-1.novalocal

#node.master: true

#node.data: true

path.data: /home/datas/es

path.logs: /home/logs/es

network.host: 10.1.14.40

http.port: 9200

transport.tcp.port: 9300

transport.tcp.compress: true

discovery.zen.ping.unicast.hosts: ["10.1.14.39:9300","10.1.14.40:9300","10.1.14.41:9300"]

discovery.zen.minimum_master_nodes: 1

#gateway.recover_after_nodes: 3

#action.destructive_requires_name: true

bootstrap.memory_lock: false

bootstrap.system_call_filter: false

http.cors.enabled: true

http.cors.allow-origin: "*"

http.cors.allow-headers: Authorization,Content-Type

script.engine.groovy.inline.search: on

script.engine.groovy.inline.aggs: on

#xpack.graph.enabled: false

#xpack.ml.enabled: false

#xpack.security.enabled: false

10.1.14.41  elasticsearch.yml

# ======================== Elasticsearch Configuration =========================

cluster.name: es-cluster

node.name: test-20160224-2.novalocal

#node.master: true

#node.data: true

path.data: /home/datas/es

path.logs: /home/logs/es

network.host: 10.1.14.41

http.port: 9200

transport.tcp.port: 9300

transport.tcp.compress: true

discovery.zen.ping.unicast.hosts: ["10.1.14.39:9300","10.1.14.40:9300","10.1.14.41:9300"]

discovery.zen.minimum_master_nodes: 1

#gateway.recover_after_nodes: 3

#action.destructive_requires_name: true

bootstrap.memory_lock: false

bootstrap.system_call_filter: false

http.cors.enabled: true

http.cors.allow-origin: "*"

http.cors.allow-headers: Authorization,Content-Type

script.engine.groovy.inline.search: on

script.engine.groovy.inline.aggs: on

#xpack.graph.enabled: false

#xpack.ml.enabled: false

#xpack.security.enabled: false

Start it:

cd /home/htdocs/elasticsearch-5.5.3/bin

./elasticsearch -d

3. Install the head monitoring plugin

Install node.js:

wget https://nodejs.org/dist/v6.10.2/node-v6.10.2-linux-x64.tar.xz

xz -d node-v6.10.2-linux-x64.tar.xz

tar xvf node-v6.10.2-linux-x64.tar

mv node-v6.10.2-linux-x64 /usr/local/node

vim /etc/profile  # add the node environment variables alongside the existing JDK entries; see the sketch below
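The node-related additions are presumably along these lines (the original post highlighted them among the JDK entries, which are omitted here):

export NODE_HOME=/usr/local/node
export PATH=$NODE_HOME/bin:$PATH

Then source /etc/profile and verify: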

 

node -v

npm -v

Install the head plugin:
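The plugin itself comes from the mobz repo on GitHub:

git clone https://github.com/mobz/elasticsearch-head.git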

chown -R webadmin.webadmin elasticsearch-head

npm install -g grunt

npm install -g grunt-cli

cd elasticsearch-head

npm install

Edit elasticsearch-head/_site/app.js to point the default connection at our ES node, and edit elasticsearch-head/Gruntfile.js so grunt listens on all interfaces; both edits are sketched below:
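The customary edits are as follows (exact line numbers vary by version). In _site/app.js, change the default base_uri:

this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://10.1.14.39:9200";

In Gruntfile.js, under connect.server.options, add:

hostname: '*',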

 

Then visit: http://10.1.14.39:9100/

After a reboot, start it again with:

grunt server &

Install bigdesk:

wget http://yellowcong.qiniudn.com/bigdesk-master.zip

Unzip it, then point an nginx server at that directory and access it directly:
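A minimal nginx server block for that (the port and path are whatever you chose when unzipping):

server {
    listen 9101;                        # hypothetical port
    server_name _;
    root /home/htdocs/bigdesk-master;   # the unzipped directory
    index index.html;
}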

 

# Install the cerebro plugin

wget https://github.com/lmenezes/cerebro/releases/download/v0.6.5/cerebro-0.6.5.tgz

tar zxvf cerebro-0.6.5.tgz

cd cerebro-0.6.5/

bin/cerebro


Install Zookeeper:

Version: zookeeper-3.4.10.tar.gz

Also on these same three nodes.

Extract it: tar -zxvf zookeeper-3.4.10.tar.gz

mv zookeeper-3.4.10 /usr/local/zookeeper

Edit /usr/local/zookeeper/conf/zoo.cfg:
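A sketch of the zoo.cfg content; the dataDir matches the myid path used below, the client port matches the Kafka config later, and 2888/3888 are the standard quorum ports:

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zk/zk0/data
clientPort=2181
server.0=10.1.14.39:2888:3888
server.1=10.1.14.40:2888:3888
server.2=10.1.14.41:2888:3888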

Copy the zookeeper directory to 10.1.14.40 and 10.1.14.41.

On each of the three hosts, create a file named myid under the dataDir path; on 10.1.14.39, 10.1.14.40 and 10.1.14.41 the values are 0, 1 and 2 respectively.

For example, on 10.1.14.39:

cd /data/zk/zk0/data/

cat myid
0

On 10.1.14.40 the myid file contains 1, and on 10.1.14.41 it contains 2.

Start it:

cd /usr/local/zookeeper/

bin/zkServer.sh start

To stop:

bin/zkServer.sh stop

To check status:

bin/zkServer.sh status

Note: by default Zookeeper writes its console output to zookeeper.out in the startup directory, which we obviously can’t allow in production. The following change makes Zookeeper write size-rotated log files instead:

In conf/log4j.properties, change zookeeper.root.logger=INFO, CONSOLE to zookeeper.root.logger=INFO, ROLLINGFILE.

In bin/zkEnv.sh, change ZOO_LOG4J_PROP="INFO,CONSOLE" to ZOO_LOG4J_PROP="INFO,ROLLINGFILE".

Then restart zookeeper and you’re done.

4. Install Kafka: download kafka_2.12-1.0.0.tgz and extract it.

mv kafka_2.12-1.0.0  /usr/local/

Go into the config directory and edit server.properties with the following content:
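A sketch of server.properties for 10.1.14.39; broker.id=0 is implied by the other two hosts below, the listener port matches the producer/consumer commands later, and log.dirs is a guess:

broker.id=0
listeners=PLAINTEXT://10.1.14.39:9092
num.network.threads=3
num.io.threads=8
log.dirs=/data/kafka-logs
num.partitions=3
log.retention.hours=168
zookeeper.connect=10.1.14.39:2181,10.1.14.40:2181,10.1.14.41:2181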

 

After editing, copy the directory to the other two hosts and edit the file there; only broker.id needs to change:

10.1.14.40   broker.id=1

10.1.14.41   broker.id=2

Start it on all three hosts:

./kafka-server-start.sh ../config/server.properties &

Create a topic:
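Presumably with something like the following (the partition count is a guess; replication factor 3 matches the nginx topic created later):

bin/kafka-topics.sh --create --zookeeper 10.1.14.39:2181,10.1.14.40:2181,10.1.14.41:2181 --replication-factor 3 --partitions 3 --topic my-replicated-topic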

 

Producer:

bin/kafka-console-producer.sh --broker-list 10.1.14.39:9092,10.1.14.40:9092,10.1.14.41:9092 --topic my-replicated-topic

Type:

Hello kafka

Consumer:

bin/kafka-console-consumer.sh --bootstrap-server 10.1.14.39:9092,10.1.14.40:9092,10.1.14.41:9092 --from-beginning --topic my-replicated-topic

It will receive: Hello kafka.

# Create the nginx topic

./kafka-topics.sh --create --zookeeper 10.1.14.39:2181,10.1.14.40:2181,10.1.14.41:2181 --replication-factor 3 --partitions 3 --topic nginx-visitor-access-log

Install Filebeat:

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.1.1-x86_64.rpm

sudo rpm -vi filebeat-5.1.1-x86_64.rpm

# Check the data in ES:

curl '10.1.14.39:9200/_cat/indices?v'

# Configure filebeat:
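A sketch of /etc/filebeat/filebeat.yml (Filebeat 5.x syntax) shipping nginx access logs to the Kafka topic created above; the log path is an assumption:

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/nginx/access.log

output.kafka:
  hosts: ["10.1.14.39:9092", "10.1.14.40:9092", "10.1.14.41:9092"]
  topic: nginx-visitor-access-log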

/etc/init.d/filebeat start  # start it

# Install logstash:

Download logstash-6.1.2 from the official site and extract it.

mv logstash-6.1.2 /usr/local/

Write a conf file that outputs to ES:

logstash_to_es.conf
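A sketch of logstash_to_es.conf reading from the Kafka topic above and writing to ES (the index name is illustrative):

input {
  kafka {
    bootstrap_servers => "10.1.14.39:9092,10.1.14.40:9092,10.1.14.41:9092"
    topics => ["nginx-visitor-access-log"]
  }
}

output {
  elasticsearch {
    hosts => ["10.1.14.39:9200"]
    index => "nginx-access-%{+YYYY.MM.dd}"
  }
}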

 

./logstash -f logstash_to_es.conf &  # start Logstash

# Configure the nginx log format

log_format main '$remote_addr - $remote_user [$time_local] '
                '"$request" $status $body_bytes_sent '
                '"$http_referer" "$http_user_agent"';

# Configure Kibana

Download kibana-5.3.2-linux-x86_64 from the official site, extract it, and go into the config directory.

Edit kibana.yml as follows:

server.port: 5601

server.host: "0.0.0.0"

elasticsearch.url: "http://10.1.14.39:9200"

kibana.index: ".kibana"

bin/kibana &  # start

# Test access: open Kibana on port 5601 in a browser.