EFK Logging: Configuring geo_point for Geolocation

EFK docker-compose

A docker-compose setup that includes elasticsearch, kibana, and fluentd.

Note that fluentd needs a customized Dockerfile to support the plugins; the Dockerfile is provided below.
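
The compose file reads $HOST, $ELK_USER, and $ELK_PASSWORD from the environment. A minimal .env sketch (the values are placeholders, not taken from the original repo):

HOST=127.0.0.1
ELK_USER=elastic
ELK_PASSWORD=changeme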

version: '2'

services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.3.1
    ports:
      - "$HOST:9200:9200"
      - "$HOST:9300:9300"
    environment:
      cluster.name: docker-cluster
      discovery.type: single-node
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      ELASTIC_PASSWORD: $ELK_PASSWORD
    volumes:
      - ./elasticsearch/data0:/usr/share/elasticsearch/data
    networks:
      - efk

  kibana:
    image: docker.elastic.co/kibana/kibana:7.3.1
    ports:
      - "$HOST:5601:5601"
    environment:
      elasticsearch.hosts: http://elasticsearch:9200
      elasticsearch.username: $ELK_USER
      elasticsearch.password: $ELK_PASSWORD
      xpack.monitoring.ui.container.elasticsearch.enabled: "true"
      server.host: "0"
    networks:
      - efk
    depends_on:
      - elasticsearch

  fluentd:
    image: horan/fluentd # built locally from the Dockerfile below
    ports:
      - "$HOST:24224:24224"
      - "$HOST:24224:24224/udp"
    environment:
      ELK_USER: $ELK_USER
      ELK_PASSWORD: $ELK_PASSWORD
    volumes:
      - ./fluentd/config/fluent.conf:/fluentd/etc/fluent.conf
      - ./fluentd/geoip:/geoip
    networks:
      - efk
    depends_on:
      - elasticsearch

networks:
  efk:
    driver: bridge
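
For the docker.** filters in fluent.conf below to match, application containers have to ship their logs to this fluentd instance with a tag starting with docker. — a sketch using Docker's fluentd logging driver (the image name and tag value are placeholders):

docker run -d \
  --log-driver=fluentd \
  --log-opt fluentd-address=$HOST:24224 \
  --log-opt tag=docker.myapp \
  my-app-image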

fluentd dockerfile

FROM fluent/fluentd:v1.7-debian-1

# Use root account to use apt
USER root

# The RUN below installs the plugins this setup needs (geoip and elasticsearch);
# customize the plugin list as you wish
RUN buildDeps="make autoconf gcc g++ libc-dev build-essential libgeoip-dev libmaxminddb-dev" \
   && apt-get update \
   && apt-get install -y --no-install-recommends $buildDeps \
   && gem install fluent-plugin-geoip \
   && gem install fluent-plugin-elasticsearch

USER fluent
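
To make the image name in the compose file resolve, build the image from this Dockerfile first (assuming it lives at ./fluentd/Dockerfile next to docker-compose.yml):

docker build -t horan/fluentd ./fluentd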

fluent.conf (the file mounted at /fluentd/etc/fluent.conf in the compose file)

<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<filter docker.**>
  @type parser
  key_name log
  reserve_data true
  <parse>
    @type json
    time_key time
    time_format %Y-%m-%dT%H:%M:%S%:z
    keep_time_key true
  </parse>
</filter>

<filter docker.**>
  @type geoip

  # Specify one or more record keys that contain the IP address to look up (default: host)
  geoip_lookup_key  ip
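
  # Note (assumption): the compose file mounts ./fluentd/geoip at /geoip; if you
  # keep a GeoLite2 database there, you can point the plugin at it instead of
  # the database bundled with the gem:
  #backend_library   geoip2_c
  #geoip2_database   /geoip/GeoLite2-City.mmdb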

  # Add fields using placeholders (at least one setting inside <record> is required)
  <record>
    geoip.city            ${city.names.en["ip"]}
    geoip.latitude        ${location.latitude["ip"]}
    geoip.longitude       ${location.longitude["ip"]}
    geoip.country         ${country.iso_code["ip"]}
    geoip.country_name    ${country.names.en["ip"]}
    geoip.postal_code     ${postal.code["ip"]}
    geoip.region_code     ${subdivisions.0.iso_code["ip"]}
    geoip.region_name     ${subdivisions.0.names.en["ip"]}
    geoip.location        ${location.latitude["ip"]},${location.longitude["ip"]}

    # lat lon as properties
    # ex. {"lat" => 37.4192008972168, "lon" => -122.05740356445312 }
    #location_properties  '{"lat":${location.latitude["ip"]},"lon":${location.longitude["ip"]}}'
    # lat lon as string
    # ex. "37.4192008972168,-122.05740356445312"
    #location_string      '${location.latitude["ip"]},${location.longitude["ip"]}'
    # lat lon as array (it is useful for Kibana's bettermap.)
    # ex. [-122.05740356445312, 37.4192008972168]
    #location_array      '[${location.longitude["ip"]},${location.latitude["ip"]}]'
  </record>

  # Skip records where the lookup failed, so we don't send `[null, null]` arrays that make Elasticsearch raise errors
  skip_adding_null_record  true
</filter>

<match docker.**>
  @type copy
  <store>
    @type stdout
  </store>
  <store>
    @type                   elasticsearch
    host                    elasticsearch
    port                    9200
    user                    "#{ENV['ELK_USER']}"
    password                "#{ENV['ELK_PASSWORD']}"
    logstash_format         true
    flush_interval          1s
  </store>
</match>
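
To make the flow concrete, here is a sketch of a record as it moves through the pipeline (all field values are made up):

# what the fluentd logging driver delivers (the log field is a JSON string)
{"log": "{\"ip\":\"8.8.8.8\",\"msg\":\"hello\"}", "container_name": "/myapp", "source": "stdout"}

# with reserve_data true, the parsed fields are merged in alongside the
# originals, and the geoip filter adds its dotted keys at the top level
{"ip": "8.8.8.8", "msg": "hello", "geoip.city": "Mountain View", "geoip.location": "37.42,-122.08", ...}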

Defining the geo_point mapping

As our fluent.conf shows, geoip.location ends up as a dotted key at the top level of the parsed JSON by the time it is shipped to Elasticsearch. Because logstash_format true creates a new index per day, we need a template mapping in Elasticsearch up front so that every new logstash-* index maps the field as geo_point. The request can be sent from Kibana's Dev Tools:

PUT _template/logstash
{
  "index_patterns": [
    "logstash-*"
  ],
  "mappings": {
    "properties": {
      "geoip.location": {
        "type": "geo_point"
      }
    }
  }
}
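
Once a new daily index has been created, the mapping can be verified from Dev Tools with the field-mapping API:

GET logstash-*/_mapping/field/geoip.location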

If logs are already being written, all the old data has to be deleted before the mapping can take effect, like so:

DELETE /logstash-*

Finally, delete the old index pattern in Kibana and recreate it; geoip.location will then show up as a geo_point.

Full details on GitHub: https://github.com/horan-geeker/docker-efk

fluentd doesn't seem to be able to convert the field to geo_point by itself; the mapping has to be specified manually

2019-09-08 16:37:18