
[Elasticsearch - The Definitive Guide] Making Changes Persistent

Elastic/TheDefinitiveGuide 2015.12.09 17:02

Posting this since it seems worth knowing alongside how per-segment search works.



Snippets from the original:

Elasticsearch uses this commit point during startup or when reopening an index to decide which segments belong to the current shard.


Elasticsearch added a translog, or transaction log, which records every operation in Elasticsearch as it happens.

Below is a summary of the Making Changes Persistent flow described in the original:

1) write to the in-memory buffer & translog

2) write a new segment file, without fsync

3) clear the in-memory buffer (this is a refresh, not yet a flush)

4) write to the in-memory buffer & append to the translog

5) write a new segment file (as part of a flush)

6) clear the in-memory buffer

7) write a commit point to disk

8) fsync the filesystem cache

9) delete the old translog

10) create a new translog
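The steps above can be sketched as a toy model. All class and method names here are illustrative, not the Elasticsearch implementation; the point is only to show which state each step touches:

```python
# Toy model of the persistence flow above. Purely illustrative;
# this is not how Elasticsearch is actually implemented.

class ToyShard:
    def __init__(self):
        self.buffer = []        # in-memory indexing buffer
        self.translog = []      # transaction log (durable on disk)
        self.segments = []      # segment files
        self.commit_point = 0   # segments covered by the last commit

    def index(self, doc):
        # steps 1/4: every operation goes to the buffer AND the translog
        self.buffer.append(doc)
        self.translog.append(doc)

    def refresh(self):
        # steps 2-3: write a new segment WITHOUT fsync and clear the
        # buffer; docs become searchable, but the translog is kept
        if self.buffer:
            self.segments.append(list(self.buffer))
            self.buffer.clear()

    def flush(self):
        # steps 5-10: commit -- refresh, fsync segments, write a commit
        # point, then drop the old translog and start a new one
        self.refresh()
        self.commit_point = len(self.segments)
        self.translog = []

shard = ToyShard()
shard.index("doc1")
shard.index("doc2")
shard.refresh()   # searchable, but the translog still holds 2 entries
shard.flush()     # committed; translog truncated
```

After the refresh, a crash could still be replayed from the translog; only the flush makes the segment data itself durable and lets the old translog go.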

In short, refresh and flush can be understood as follows:

- refresh

Makes documents searchable.

- flush

Performs an fsync (writes a commit point and truncates the translog).


[ElasticSearch] Refresh/Flush/Optimize - by elasticsearch.org

Elastic/Elasticsearch 2014.08.04 17:48

Source: http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/inside-a-shard.html

1. near real-time search

refresh API

In Elasticsearch, this lightweight process of writing and opening a new segment is called a refresh. By default, every shard is refreshed automatically once every second. This is why we say that Elasticsearch has near real-time search: document changes are not visible to search immediately, but will become visible within one second.
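The near real-time behavior can be sketched with a toy index that only exposes documents after a periodic refresh. The class and the 1-second default are taken from the quoted text; everything else is an illustrative assumption:

```python
import time

# Toy sketch of near real-time search: newly indexed docs stay
# invisible until the next refresh. Names are illustrative.

class NRTIndex:
    def __init__(self, refresh_interval=1.0):   # ES default: 1s per shard
        self.refresh_interval = refresh_interval
        self.pending = []        # indexed but not yet searchable
        self.searchable = []     # visible to search
        self.last_refresh = time.monotonic()

    def index(self, doc):
        self.pending.append(doc)

    def maybe_refresh(self, now=None):
        # refresh only once the interval has elapsed
        now = time.monotonic() if now is None else now
        if now - self.last_refresh >= self.refresh_interval:
            self.searchable.extend(self.pending)
            self.pending.clear()
            self.last_refresh = now

    def search(self):
        return list(self.searchable)

idx = NRTIndex(refresh_interval=1.0)
idx.index("new doc")
idx.maybe_refresh(now=idx.last_refresh + 0.5)   # too soon: not visible
assert idx.search() == []
idx.maybe_refresh(now=idx.last_refresh + 1.0)   # interval elapsed: visible
assert idx.search() == ["new doc"]
```

This is why a freshly indexed document may not show up in a search for up to a second, unless you refresh explicitly.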

2. making changes persistent

flush API

The action of performing a commit and truncating the translog is known in Elasticsearch as a flush. Shards are flushed automatically every 30 minutes, or when the translog becomes too big. See the translog documentation for settings that can be used to control these thresholds.
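The two automatic flush triggers mentioned above can be expressed as a tiny predicate. The 30-minute interval comes from the quoted text; the size threshold and all names here are illustrative assumptions (the real settings live in the translog documentation):

```python
# Toy sketch of the automatic flush triggers: flush when the translog
# grows past a size threshold OR when enough time has passed since the
# last flush. Threshold values are illustrative, not ES defaults you
# should rely on.

def should_flush(translog_bytes, seconds_since_flush,
                 max_translog_bytes=512 * 1024 * 1024,  # illustrative size cap
                 max_interval_seconds=30 * 60):         # 30 minutes
    return (translog_bytes >= max_translog_bytes
            or seconds_since_flush >= max_interval_seconds)

assert not should_flush(1024, 60)            # small and recent: keep going
assert should_flush(600 * 1024 * 1024, 60)   # translog too big
assert should_flush(1024, 31 * 60)           # 30 minutes elapsed
```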

3. segment merging 

optimize API

The optimize API is best described as the forced merge API. It forces a shard to be merged down to the number of segments specified in the max_num_segments parameter. The intention is to reduce the number of segments (usually to 1) in order to speed up search performance.
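A forced merge down to `max_num_segments` can be sketched as repeatedly combining segments until few enough remain. This is a toy illustration of what the optimize API asks a shard to do; real merging happens inside Lucene and the merge policy here is invented:

```python
# Toy sketch of a forced merge: combine segments until at most
# max_num_segments remain. The "merge two smallest" policy is an
# illustrative assumption, not Lucene's actual merge policy.

def force_merge(segments, max_num_segments=1):
    segments = [list(s) for s in segments]
    while len(segments) > max_num_segments:
        segments.sort(key=len)               # pick the two smallest
        a, b = segments.pop(0), segments.pop(0)
        segments.append(a + b)               # merge them into one
    return segments

merged = force_merge([["d1"], ["d2", "d3"], ["d4"]], max_num_segments=1)
# one segment holding all four documents
assert len(merged) == 1
assert sorted(merged[0]) == ["d1", "d2", "d3", "d4"]
```

Fewer segments means fewer per-segment data structures to consult at search time, which is why merging down (usually to 1) can speed up searches on an index that is no longer being written to.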
