364 posts tagged 'elasticsearch'

  1. 2020.06.19 [Elasticsearch] Trying out the X-Pack Security API Key
  2. 2020.06.17 [Elasticsearch] "#! Deprecation: Deprecated field [inline] used, expected [source] instead" when using scripts
  3. 2020.06.03 [Filebeat] Template setting: setup.template.append_fields
  4. 2020.05.27 [Elasticsearch] 7.x breaking changes
  5. 2020.05.25 [Filebeat] Getting setup.ilm & setup.template right
  6. 2020.04.08 [Elasticsearch] Why UTC is used
  7. 2020.04.07 [Kibana] Visualize Metric JSON Input example
  8. 2020.04.07 [Elasticsearch] permission denied when running Docker Compose
  9. 2020.04.03 [Elastic] Elasticsearch, Metricbeat, Docker Compose trial and error
  10. 2020.04.03 [Elasticsearch] Index Lifecycle Management execution interval

[Elasticsearch] Trying out the X-Pack Security API Key

Elastic/Elasticsearch 2020. 6. 19. 11:07

One of the best things about the Elastic Stack is how much you get under the free Basic license.

There are plenty of other reasons too, of course.

 

https://www.elastic.co/subscriptions

 

As the subscriptions page shows, API keys management is included right up through the Basic tier.

 

Before you can use it, you need to configure X-Pack in both Elasticsearch and Kibana.

 

[Elasticsearch]

- elasticsearch.yml

xpack.monitoring.enabled: true
xpack.ml.enabled: true
xpack.security.enabled: true

xpack.security.authc.api_key.enabled: true
xpack.security.authc.api_key.hashing.algorithm: "pbkdf2"
xpack.security.authc.api_key.cache.ttl: "1d"
xpack.security.authc.api_key.cache.max_keys: 10000
xpack.security.authc.api_key.cache.hash_algo: "ssha256"

The settings above are just a baseline, so tune them for your environment.

https://www.elastic.co/guide/en/elasticsearch/reference/7.8/security-settings.html#api-key-service-settings

 

[Kibana]

- kibana.yml

xpack:
  security:
    enabled: true
    encryptionKey: "9c42bff2e04f9b937966bda03e6b5828"
    session:
      idleTimeout: 600000
    audit:
      enabled: true

 

With that in place, set up the passwords for the built-in users.

 

# bin/elasticsearch-setup-passwords interactive
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y

Enter password for [elastic]:
Reenter password for [elastic]:
Enter password for [apm_system]:
Reenter password for [apm_system]:
Enter password for [kibana]:
Reenter password for [kibana]:
Enter password for [logstash_system]:
Reenter password for [logstash_system]:
Enter password for [beats_system]:
Reenter password for [beats_system]:
Enter password for [remote_monitoring_user]:
Reenter password for [remote_monitoring_user]:
Changed password for user [apm_system]
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]

Changed password for user [remote_monitoring_user]
Changed password for user [elastic]

 

Once that's done, connect to Kibana and create an API key.

The documents below are helpful when creating one.

 

www.elastic.co/guide/en/elasticsearch/reference/current/security-privileges.html

www.elastic.co/guide/en/elasticsearch/reference/7.7/security-api-put-role.html

www.elastic.co/guide/en/elasticsearch/reference/7.7/defining-roles.html

www.elastic.co/guide/en/elasticsearch/reference/7.7/security-api-create-api-key.html

 

You can create one in the Kibana Console like this:

POST /_security/api_key
{
  "name": "team-index-command",
  "expiration": "10m", 
  "role_descriptors": { 
    "role-team-index-command": {
      "cluster": ["all"],
      "index": [
        {
          "names": ["*"],
          "privileges": ["all"]
        }
      ]
    }
  }
}

The response returns the generated key's id and api_key:

{
  "id" : "87cuynIBjKAXtnkobGgo",
  "name" : "team-index-command",
  "expiration" : 1592529999478,
  "api_key" : "OlVGT_Q8RGq1C_ASHW7pGg"
}

To use the key after creating it:

- The ApiKey credential is the base64 encoding of id:api_key.

$ echo -n "87cuynIBjKAXtnkobGgo:OlVGT_Q8RGq1C_ASHW7pGg" | base64
ODdjdXluSUJqS0FYdG5rb2JHZ286T2xWR1RfUThSR3ExQ19BU0hXN3BHZw==

$ curl -H "Authorization: ApiKey ODdjdXluSUJqS0FYdG5rb2JHZ286T2xWR1RfUThSR3ExQ19BU0hXN3BHZw==" \
  http://localhost:9200/_cluster/health

Now just create and use API keys to match each purpose and use case.

 



[Elasticsearch] "#! Deprecation: Deprecated field [inline] used, expected [source] instead" when using scripts

Elastic/Elasticsearch 2020. 6. 17. 08:20

The answer is right there in the error message:

it's telling you to use source instead of inline.

 

[ASIS]

  "aggs": {
    "3": {
      "date_histogram": {
        "field": "@timestamp",
        "fixed_interval": "30s",
        "time_zone": "Asia/Seoul",
        "min_doc_count": 1
      },
      "aggs": {
        "1": {
          "max": {
            "field": "system.cpu.total.pct",
            "script": {
              "inline": "doc['system.cpu.total.pct'].value *100",
              "lang": "painless"
            }
          }
        }
      }
    }
  }

 

[TOBE]

  "aggs": {
    "3": {
      "date_histogram": {
        "field": "@timestamp",
        "fixed_interval": "30s",
        "time_zone": "Asia/Seoul",
        "min_doc_count": 1
      },
      "aggs": {
        "1": {
          "max": {
            "field": "system.cpu.total.pct",
            "script": {
              "source": "doc['system.cpu.total.pct'].value *100",
              "lang": "painless"
            }
          }
        }
      }
    }
  }

That's it.



[Filebeat] Template setting: setup.template.append_fields

Elastic/Beats 2020. 6. 3. 14:15

Reference)

https://www.elastic.co/guide/en/beats/filebeat/current/configuration-template.html

 

I needed this while configuring Filebeat, so I'm writing it down.

 

setup.template.append_fields

 

A list of fields to be added to the template and Kibana index pattern. This setting adds new fields; it does not overwrite or change existing fields. It is useful when your data contains fields that Filebeat doesn't know about in advance.

If append_fields is specified along with overwrite: true, Filebeat overwrites the existing template and applies the new template when creating new indices. Existing indices are not affected.

If you're running multiple instances of Filebeat with different append_fields settings, the last one writing the template takes precedence.

Any changes to this setting also affect the Kibana index pattern.

 

Example config:

setup.template.overwrite: true
setup.template.append_fields:
  - name: test.name
    type: keyword
  - name: test.hostname
    type: long

 

This setting is a simple way around the hassles of dynamic mapping and template management.

I used it because a few of my fields needed to be mapped as date.
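
A minimal sketch of that date case, following the same pattern (the field name here is hypothetical):

setup.template.overwrite: true
setup.template.append_fields:
  # hypothetical field; replace with the field your data actually carries
  - name: app.event_time
    type: date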

 



[Elasticsearch] 7.x breaking changes

Elastic/Elasticsearch 2020. 5. 27. 14:05

You should probably know them all, but these are the ones I'd single out as worth knowing before you move on.

 

Reference)

https://www.elastic.co/guide/en/elasticsearch/reference/7.7/breaking-changes-7.0.html

 

- thread_pool.listener.size and thread_pool.listener.queue_size have been deprecated
- cluster.remote.connect is deprecated in favor of node.remote_cluster_client
- auth.password is deprecated; use the auth.secure_password setting instead
- Sparse vector fields are deprecated
- The vector functions of the form function(query, doc['field']) are deprecated; the form function(query, 'field') should be used instead
- The nGram and edgeNGram tokenizer names have been deprecated with 7.6 (see the sketch after this list)
	-> The tokenizer name should be changed to the fully equivalent ngram or edge_ngram names for new indices and in index templates
- Starting in version 7.4, a + in a URL is encoded as %2B by all REST API functionality
	-> es.rest.url_plus_as_space to true
- After starting each shard, the elected master node must perform a reroute to search for other shards that could be allocated (7.4+)
- Auto-release of the read-only-allow-delete block
	-> es.disk.auto_release_flood_stage_block: true
- The pidfile setting is replaced by node.pidfile
- The processors setting is replaced by node.processors
- The common query has been deprecated
	-> use the match query
- Only a single port may be given for each seed host
	-> Given "10.11.12.13:9300-9310", Elasticsearch will only use 10.11.12.13:9300 for discovery
- The http.tcp_no_delay setting is deprecated in 7.1; it is replaced by http.tcp.no_delay
- The network.tcp.connect_timeout setting is deprecated in 7.1
	-> transport.connect_timeout
- transport.tcp.port is replaced by transport.port
- transport.tcp.compress is replaced by transport.compress
- transport.tcp.connect_timeout is replaced by transport.connect_timeout
- transport.tcp_no_delay is replaced by transport.tcp.no_delay
- transport.profiles.profile_name.tcp_no_delay is replaced by transport.profiles.profile_name.tcp.no_delay
- transport.profiles.profile_name.tcp_keep_alive is replaced by transport.profiles.profile_name.tcp.keep_alive
- transport.profiles.profile_name.reuse_address is replaced by transport.profiles.profile_name.tcp.reuse_address
- transport.profiles.profile_name.send_buffer_size is replaced by transport.profiles.profile_name.tcp.send_buffer_size
- transport.profiles.profile_name.receive_buffer_size is replaced by transport.profiles.profile_name.tcp.receive_buffer_size
- delimited_payload_filter has been renamed
	-> delimited_payload
- The standard token filter has been removed
- The standard_html_strip analyzer has been deprecated
	-> use a combination of the standard tokenizer and the html_strip char_filter
- The shard preferences _primary, _primary_first, _replica, and _replica_first are removed
- indices.breaker.fielddata.limit has been lowered from 60% to 40% of the JVM heap size
- The index_options field for numeric fields, deprecated in 6.x, has now been removed
- The classic similarity has been removed
- Adaptive replica selection is enabled by default
- Semantics changed for max_concurrent_shard_requests
	-> In 7.0 this changed to be the max number of concurrent shard requests per node; the default is now 5
- Negative boosts are not allowed
- The filter context has been removed from Elasticsearch's query builders; the distinction between queries and filters is now decided in Lucene depending on whether queries need to access the score or not
- Tribe node functionality has been removed in favor of Cross-Cluster Search
- Camel case and underscore parameters deprecated in 6.x have been removed
- The default for node.name is now the hostname
- The setting node.store.allow_mmapfs has been renamed to node.store.allow_mmap
- The setting http.enabled previously allowed disabling the HTTP binding, leaving only the transport client; it has been removed, as the transport client will be removed in the future, which requires HTTP to always be enabled
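
To make one of these concrete: the nGram -> ngram tokenizer rename only requires updating the type name in your analysis settings. A minimal sketch (the index and tokenizer names are made up):

PUT /my-index
{
  "settings": {
    "analysis": {
      "tokenizer": {
        "my_tokenizer": {
          "type": "ngram",
          "min_gram": 2,
          "max_gram": 3
        }
      }
    }
  }
}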


[Filebeat] Getting setup.ilm & setup.template right

Elastic/Beats 2020. 5. 25. 17:09

Sometimes everything is configured correctly in the yml, but it still doesn't work.

It depends on where the setup was done and with what.

 

Normally, when you use Beats, the alias is created and configured automatically.

If you then configure ILM and the template manually in Elasticsearch to match, things may stop working properly.

 

The big difference comes down to the is_write_index setting.

It can play out differently case by case, but in the typical setup you can assume that setting is the cause.

 

In Beats, libbeat sets it automatically:

 

// CreateAlias sends request to Elasticsearch for creating alias.
func (h *ESClientHandler) CreateAlias(alias Alias) error {
	// Escaping because of date pattern
	// This always assumes it's a date pattern by surrounding it with <...>
	firstIndex := fmt.Sprintf("<%s-%s>", alias.Name, alias.Pattern)
	firstIndex = url.PathEscape(firstIndex)

	body := common.MapStr{
		"aliases": common.MapStr{
			alias.Name: common.MapStr{
				"is_write_index": true,
			},
		},
	}

	// Note: actual aliases are accessible via the index
	status, res, err := h.client.Request("PUT", "/"+firstIndex, "", nil, body)
	if status == 400 {
		// HasAlias fails if there is an index with the same name, that is
		// what we want to check here.
		_, err := h.HasAlias(alias.Name)
		if err != nil {
			return err
		}
		return errOf(ErrAliasAlreadyExists)
	} else if err != nil {
		return wrapErrf(err, ErrAliasCreateFailed, "failed to create alias: %s", res)
	}

	return nil
}

But if you set this up manually in Elasticsearch, remember that you must set "is_write_index": true for the alias to be created and registered properly.
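
For reference, the manual equivalent in the Kibana Console would look something like the sketch below; the index pattern and alias name are placeholders, and the date-math index name is URL-encoded just as the Go code above does with url.PathEscape:

PUT /%3Cfilebeat-7.8.0-%7Bnow%2Fd%7D-000001%3E
{
  "aliases": {
    "filebeat-7.8.0": {
      "is_write_index": true
    }
  }
}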

 



[Elasticsearch] Why UTC is used

Elastic/Elasticsearch 2020. 4. 8. 12:28

I'm sure I saw this in an official document or post somewhere, but I can't find it anymore.

 

https://www.elastic.co/guide/en/elasticsearch/reference/current/date.html

https://discuss.elastic.co/t/elastic-utc-time/191877

 

On Discuss, there's a comment from an Elastic team member:

Elasticsearch doesn't have a native concept of a "pure" date, only of instants in time. 
Given that, I think that it is reasonable to represent a pure date as the instant of 
midnight UTC on that date. 
If you use a different timezone then you will encounter problems as some "dates" 
will not be a whole number of days apart, because of occasions 
when the clocks go forwards or back for daylight savings.

Formatting a date as an instant without an explicit timezone is probably a bad idea, 
because something somewhere will have to guess what timezone to use 
and you may get inconsistent results if those guesses are inconsistent. 
Always specify the timezone, and in this case I recommend UTC as I mentioned above.

When you run an aggregation involving times, 
you can tell Elasticsearch what timezone to use. 
It defaults to UTC according to those docs.

I do not, however, know how to get Kibana to interpret 
these dates as you want in a visualisation. 
It's probably best to ask on the Kibana forum for help with that.

And the reference documentation says:

Internally, dates are converted to UTC (if the time-zone is specified) 
and stored as a long number representing milliseconds-since-the-epoch.

In short, understand that the Elastic Stack stores @timestamp values relative to UTC by default, and accept that, for your own sanity, you should either convert at query time or add a separate custom timestamp field matching local time.

 

Additionally,

when making date-related queries:

- the "time_zone" parameter does not affect the value of now.

Keep that in mind.
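
As a quick sketch (the index name and dates are hypothetical), a range query can interpret the supplied date strings in a local time zone, while now is still evaluated in UTC:

GET /my-index/_search
{
  "query": {
    "range": {
      "@timestamp": {
        "gte": "2020-04-01 00:00:00",
        "lte": "now",
        "format": "yyyy-MM-dd HH:mm:ss",
        "time_zone": "+09:00"
      }
    }
  }
}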



[Kibana] Visualize Metric JSON Input example

Elastic/Kibana 2020. 4. 7. 20:11

This applies when configuring a Metric visualization in Kibana, and it's similar for other visualization types.

You can enter a JSON Input like the following:

{
  "script": {
    "inline": "doc['system.filesystem.free'].value / 1024 / 1024 /1024",
    "lang": "painless"
  }
}

If you're curious about the details, the document below is worth a look.

 

Official docs)

https://www.elastic.co/guide/en/elasticsearch/reference/master/modules-scripting-fields.html

 

 



[Elasticsearch] permission denied when running Docker Compose

Elastic/Elasticsearch 2020. 4. 7. 15:54

You may run into permission denied errors when starting Elasticsearch with Docker Compose.

The approaches below can solve it; apply whichever fits your environment.

 

docker-compose.yml)

    volumes:
      - /elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
      - /elasticsearch/data:/usr/share/elasticsearch/data:rw
      - /elasticsearch/logs:/usr/share/elasticsearch/logs:rw

 

Case 1)

$ chmod 777 /elasticsearch/data

$ chmod 777 /elasticsearch/logs

 

Case 2)

- Set the uid and gid inside docker-compose.yml

user: "1002:0" # uid: the host user; gid: the container group

 

When you run Elasticsearch in Docker, the process inside the container runs as 1000:0.

That is, uid 1000 (elasticsearch) and gid 0 (root).

So with bind mounts, you have to align the permissions on the host with those in the container; the commands below help you check.

 

$ id -u username

$ id -g username

$ docker exec -it container-name /bin/bash
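
As a tighter alternative to chmod 777 in Case 1, you can hand ownership of the bind-mounted directories to the container user instead (a sketch, assuming the paths above):

$ sudo chown -R 1000:0 /elasticsearch/data /elasticsearch/logs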

 

Have fun with Elastic!



[Elastic] Elasticsearch, Metricbeat, Docker Compose trial and error

Elastic 2020. 4. 3. 18:48

When using Metricbeat, if the template was set up incorrectly, be sure to delete it and create it again.

Alternatively, set:

- setup.template.overwrite: true
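
If you go the delete-and-recreate route, removing the legacy template looks like this; the template name is an assumption, so check yours first with GET _cat/templates:

$ curl -X DELETE "http://localhost:9200/_template/metricbeat-7.8.0"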

 

If Metricbeat is sending data fine but documents are not arriving as expected, reset the template or check the setting below.

- setup.template.settings._source.enabled: true

 

I had run into this before and even wrote it down, but I couldn't find my notes right away and ended up wasting time rediscovering it.

Duly reflecting on that.



[Elasticsearch] Index Lifecycle Management execution interval

Elastic/Elasticsearch 2020. 4. 3. 14:04

I kept wondering why ILM didn't kick in right after I configured it; it turns out there's a reason.

 

[LifecycleSettings.java]

/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License;
 * you may not use this file except in compliance with the Elastic License.
 */
package org.elasticsearch.xpack.core.ilm;

import org.elasticsearch.common.Strings;
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.xpack.core.scheduler.CronSchedule;

/**
 * Class encapsulating settings related to Index Lifecycle Management X-Pack Plugin
 */
public class LifecycleSettings {
    public static final String LIFECYCLE_POLL_INTERVAL = "indices.lifecycle.poll_interval";
    public static final String LIFECYCLE_NAME = "index.lifecycle.name";
    public static final String LIFECYCLE_INDEXING_COMPLETE = "index.lifecycle.indexing_complete";
    public static final String LIFECYCLE_ORIGINATION_DATE = "index.lifecycle.origination_date";
    public static final String LIFECYCLE_PARSE_ORIGINATION_DATE = "index.lifecycle.parse_origination_date";
    public static final String LIFECYCLE_HISTORY_INDEX_ENABLED = "indices.lifecycle.history_index_enabled";

    public static final String SLM_HISTORY_INDEX_ENABLED = "slm.history_index_enabled";
    public static final String SLM_RETENTION_SCHEDULE = "slm.retention_schedule";
    public static final String SLM_RETENTION_DURATION = "slm.retention_duration";


    public static final Setting<TimeValue> LIFECYCLE_POLL_INTERVAL_SETTING = Setting.positiveTimeSetting(LIFECYCLE_POLL_INTERVAL,
        TimeValue.timeValueMinutes(10), Setting.Property.Dynamic, Setting.Property.NodeScope);
    public static final Setting<String> LIFECYCLE_NAME_SETTING = Setting.simpleString(LIFECYCLE_NAME,
        Setting.Property.Dynamic, Setting.Property.IndexScope);
    public static final Setting<Boolean> LIFECYCLE_INDEXING_COMPLETE_SETTING = Setting.boolSetting(LIFECYCLE_INDEXING_COMPLETE, false,
        Setting.Property.Dynamic, Setting.Property.IndexScope);
    public static final Setting<Long> LIFECYCLE_ORIGINATION_DATE_SETTING =
        Setting.longSetting(LIFECYCLE_ORIGINATION_DATE, -1, -1, Setting.Property.Dynamic, Setting.Property.IndexScope);
    public static final Setting<Boolean> LIFECYCLE_PARSE_ORIGINATION_DATE_SETTING = Setting.boolSetting(LIFECYCLE_PARSE_ORIGINATION_DATE,
        false, Setting.Property.Dynamic, Setting.Property.IndexScope);
    public static final Setting<Boolean> LIFECYCLE_HISTORY_INDEX_ENABLED_SETTING = Setting.boolSetting(LIFECYCLE_HISTORY_INDEX_ENABLED,
        true, Setting.Property.NodeScope);


    public static final Setting<Boolean> SLM_HISTORY_INDEX_ENABLED_SETTING = Setting.boolSetting(SLM_HISTORY_INDEX_ENABLED, true,
        Setting.Property.NodeScope);
    public static final Setting<String> SLM_RETENTION_SCHEDULE_SETTING = Setting.simpleString(SLM_RETENTION_SCHEDULE,
        // Default to 1:30am every day
        "0 30 1 * * ?",
        str -> {
        try {
            if (Strings.hasText(str)) {
                // Test that the setting is a valid cron syntax
                new CronSchedule(str);
            }
        } catch (Exception e) {
            throw new IllegalArgumentException("invalid cron expression [" + str + "] for SLM retention schedule [" +
                SLM_RETENTION_SCHEDULE + "]", e);
        }
    }, Setting.Property.Dynamic, Setting.Property.NodeScope);
    public static final Setting<TimeValue> SLM_RETENTION_DURATION_SETTING = Setting.timeSetting(SLM_RETENTION_DURATION,
        TimeValue.timeValueHours(1), TimeValue.timeValueMillis(500), Setting.Property.Dynamic, Setting.Property.NodeScope);
}
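
The key line is LIFECYCLE_POLL_INTERVAL_SETTING: ILM only polls every 10 minutes by default, so a new policy can easily look like it's doing nothing. If you want it to react faster (while testing, say), the poll interval is a dynamic cluster setting; a quick sketch:

PUT /_cluster/settings
{
  "transient": {
    "indices.lifecycle.poll_interval": "1m"
  }
}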

Once you've configured it, all that's left is to wait for the next poll.
