11 posts tagged 'solr'

  1. 2018.01.31 [Search] Be careful with the functions you use for re-ranking.
  2. 2013.01.03 Applying routing in elasticsearch
  3. 2012.11.22 Setting up an Elasticsearch cluster (shards, replicas)
  4. 2012.11.16 Installing elasticsearch and applying a Korean morphological analyzer, step by step
  5. 2012.11.14 Korean search issue in the solr admin UI when running solr on tomcat
  6. 2012.11.14 Installing solr quick and dirty
  7. 2012.04.27 [solr] A taste of Indexing & Searching - wrapping up the solr basics
  8. 2012.04.25 [solr] A taste of solrconfig.xml
  9. 2012.04.24 [solr] A taste of schema.xml
  10. 2012.04.18 Just following the solr docs

[Search] Be careful with the functions you use for re-ranking.

ITWeb/Search (General) 2018. 1. 31. 11:19

Both Elasticsearch and Solr ship with built-in functions (they are Lucene functions under the hood).

These built-in functions are widely used for re-ranking.

That is fine in itself, but you have to be careful not to use functions that strain system resources or hurt performance.

Use one carelessly and you may end up wondering why your queries are suddenly slow.

Always review this point when you re-rank with a Function Score Query or a script.

Re-ranking normally runs at query time, so its operation cost is inherently high.
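For example, the termfreq function implemented by the class below is exposed as a Solr function query and can be used for sorting/re-ranking along these lines (a rough sketch against the example data from the Solr posts further down; the core URL, field, and term are assumptions). The value has to be computed per matching document by walking the postings, which is exactly where the cost comes from.

    curl 'http://localhost:8983/solr/select?q=ipod&fl=id,name,score&sort=termfreq(name,ipod)+desc&indent=on'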


lucene/queries/function/valuesource/TermFreqValueSource.java

/**
* Function that returns {@link org.apache.lucene.index.PostingsEnum#freq()} for the
* supplied term in every document.
* <p>
* If the term does not exist in the document, returns 0.
* If frequencies are omitted, returns 1.
*/
public class TermFreqValueSource extends DocFreqValueSource {
  public TermFreqValueSource(String field, String val, String indexedField, BytesRef indexedBytes) {
    super(field, val, indexedField, indexedBytes);
  }

  @Override
  public String name() {
    return "termfreq";
  }

  @Override
  public FunctionValues getValues(Map context, LeafReaderContext readerContext) throws IOException {
    Fields fields = readerContext.reader().fields();
    final Terms terms = fields.terms(indexedField);

    return new IntDocValues(this) {
      PostingsEnum docs;
      int atDoc;
      int lastDocRequested = -1;

      { reset(); }

      public void reset() throws IOException {
        // no one should call us for deleted docs?

        if (terms != null) {
          final TermsEnum termsEnum = terms.iterator();
          if (termsEnum.seekExact(indexedBytes)) {
            docs = termsEnum.postings(null);
          } else {
            docs = null;
          }
        } else {
          docs = null;
        }

        if (docs == null) {
          docs = new PostingsEnum() {
            @Override
            public int freq() {
              return 0;
            }

            @Override
            public int nextPosition() throws IOException {
              return -1;
            }

            @Override
            public int startOffset() throws IOException {
              return -1;
            }

            @Override
            public int endOffset() throws IOException {
              return -1;
            }

            @Override
            public BytesRef getPayload() throws IOException {
              throw new UnsupportedOperationException();
            }

            @Override
            public int docID() {
              return DocIdSetIterator.NO_MORE_DOCS;
            }

            @Override
            public int nextDoc() {
              return DocIdSetIterator.NO_MORE_DOCS;
            }

            @Override
            public int advance(int target) {
              return DocIdSetIterator.NO_MORE_DOCS;
            }

            @Override
            public long cost() {
              return 0;
            }
          };
        }
        atDoc = -1;
      }

      @Override
      public int intVal(int doc) {
        try {
          if (doc < lastDocRequested) {
            // out-of-order access.... reset
            reset();
          }
          lastDocRequested = doc;

          if (atDoc < doc) {
            atDoc = docs.advance(doc);
          }

          if (atDoc > doc) {
            // term doesn't match this document... either because we hit the
            // end, or because the next doc is after this doc.
            return 0;
          }

          // a match!
          return docs.freq();
        } catch (IOException e) {
          throw new RuntimeException("caught exception in function " + description() + " : doc=" + doc, e);
        }
      }
    };
  }
}




Applying routing in elasticsearch

Elastic/Elasticsearch 2013. 1. 3. 14:47

Reference URL

What is routing?
  - At index time a document is routed to a specific shard, and at search time only the designated shard is searched
  - The routing path is set from a unique-key field among the indexed fields
  - At search time the routing value is passed along with the query, so only the designated shards are searched instead of all of the distributed shards
  - The routing field should be set to store: yes, index: not_analyzed
 
  - Basic configuration
        "routing" : {
            "required" : true,
            "path" : "test.user_uniq_id"
        }
Setting up routing
  - Include the routing settings when creating the index (replicas: 1, shards: 50)
    "settings" : {
        "number_of_shards" : 50,
        "number_of_replicas" : 1,
        "index" : {
            "analysis" : {
                "analyzer" : {
                    "kr_analyzer" : {
                        "type" : "custom",
                        "tokenizer" : "kr_tokenizer",
                        "filter" : ["trim", "kr_filter", "kr_synonym"]
                    },
                    "kr_analyzer" : {
                        "type" : "custom",
                        "tokenizer" : "kr_tokenizer",
                        "filter" : ["trim", "kr_filter", "kr_synonym"]
                    }
                },
                "filter" : {
                    "kr_synonym" : {
                        "type" : "synonym",
                        "synonyms_path" : "analysis/synonym.txt"
                    }
                }
            }
        },
        "routing" : {
            "required" : true,
            "path" : "test.user_uniq_id"
        }
    },
    "mappings" : {
        "test" : {
            "properties" : {
                "docid" : { "type" : "string", "store" : "yes", "index" : "not_analyzed"},
                "title" : { "type" : "string", "store" : "yes", "index" : "analyzed", "term_vector" : "yes", "analyzer" : "kr_analyzer" },
                "user_uniq_id" : { "type" : "string", "store" : "yes", "index" : "not_analyzed" },
                "ymdt" : { "type" : "date", "format" : "yyyyMMddHHmmss", "store" : "yes", "index" : "not_analyzed" }
                }
            }
        }
    }
}'
 
  - When indexing a document, setRouting must be set for routing to work correctly
    IndexRequestBuilder requestBuilder = client.prepareIndex(indexName, indexType);
    requestBuilder.setId(docId);
    requestBuilder.setRouting(docMeta.getUserUniqId());
    requestBuilder.setSource(jsonBuilder);
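The same thing can be done over plain HTTP by passing the routing value as a query parameter at index time (a sketch; the index/type/field names follow the example mapping above):

    curl -XPUT 'http://localhost:9200/index0/test/1?routing=honggildong@elastic.com' -d '{
        "docid" : "1",
        "title" : "routing test document",
        "user_uniq_id" : "honggildong@elastic.com",
        "ymdt" : "20130103144700"
    }'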
Routing search query URI
  - http://localhost:9200/index0/_search?source={"query":{"bool":{"must":[{"term":{"user_uniq_id":"honggildong@elastic.com"}}],"must_not":[],"should":[]}},"from":0,"size":50,"sort":[{"ymdt":"asc"}],"facets":{}}&routing=honggildong@elastic.com&pretty=true





Setting up an Elasticsearch cluster (shards, replicas)

Elastic/Elasticsearch 2012. 11. 22. 17:34

This is still rough, but I'm posting it to organize my own notes.. ^^;

[Reference]

    http://www.elasticsearch.org

    http://www.elasticsearchtutorial.com/elasticsearch-in-5-minutes.html


[URI Command]

    http://10.101.254.223:9200/_status

    http://10.101.254.223:9200/_cluster/state?pretty=true

    http://10.101.254.223:9200/_cluster/nodes?pretty=true

    http://10.101.254.223:9200/_cluster/health?pretty=true


[Checking index settings]

    http://10.101.254.223:9200/depth1_1/_settings?pretty=true


[Checking index mappings]

    http://10.101.254.223:9200/depth1_1/_mapping?pretty=true


[URI path structure of an index]

    /depth1/

        index name (the index name of each service, or the name of a vertical service)

        e.g.)

            /blog

            /cafe

    /depth1/depth2/

        index type name

        e.g.)

            /blog/user

            /blog/post

            /cafe/user

            /cafe/post

    /depth1/depth2/depth3

        the unique key (id) of an indexed document
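Putting the three levels together, a single document lives at a URL like the one below (using the blog example that appears later in this post):

    http://10.101.254.223:9200/blog/post/1    (index = blog, type = post, document id = 1)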


[Creating an index and setting shards/replicas]

    http://www.elasticsearch.org/guide/reference/api/admin-indices-create-index.html


    - case 1

        curl -XPUT 'http://10.101.254.221:9200/depth1/'


    - case 2

        curl -XPUT 'http://10.101.254.221:9200/depth1_2/' -d '

        index :

            number_of_shards : 3

            number_of_replicas : 2

        '


    - case 3 : recommended

        curl -XPUT 'http://10.101.254.223:9200/depth1_1/' -d '{

            "settings" : {

                "index" : {

                    "number_of_shards" : 3,

                    "number_of_replicas" : 2

                }

            }

        }'


    - case 4

        curl -XPUT 'http://10.101.254.223:9200/depth1_1/' -d '{

            "settings" : {

                "number_of_shards" : 3,

                "number_of_replicas" : 2

            }

        }'


[Setting the index mapping]

    http://www.elasticsearch.org/guide/reference/mapping/

    http://www.elasticsearch.org/guide/reference/mapping/core-types.html

    ※ This is where you specify the analyzer or tokenizer used at index or search time.

    ※ In solr, this corresponds to what you would define in schema.xml.


    curl -XPUT 'http://10.101.254.223:9200/depth1_1/depth2_1/_mapping' -d '

    {

        "depth2_1" : {

            "properties" : {

                "FIELD명" : {"type" : "string", "store" : "yes"}

            }

        }

    }'


[Indexing data]

    http://www.elasticsearch.org/guide/reference/api/index_.html


    curl -XPUT 'http://10.101.254.223:9200/blog/user/dilbert' -d '{ "name" : "Dilbert Brown" }'


    curl -XPUT 'http://10.101.254.223:9200/blog/post/1' -d '

    {

        "user": "dilbert",

        "postDate": "2011-12-15",

        "body": "Search is hard. Search should be easy." ,

        "title": "On search"

    }'


    curl -XPUT 'http://10.101.254.223:9200/blog/post/2' -d '

    {

        "user": "dilbert",

        "postDate": "2011-12-12",

        "body": "Distribution is hard. Distribution should be easy." ,

        "title": "On distributed search"

    }'


    curl -XPUT 'http://10.101.254.223:9200/blog/post/3' -d '

    {

        "user": "dilbert",

        "postDate": "2011-12-10",

        "body": "Lorem ipsum dolor sit amet, consectetuer adipiscing elit, sed diam nonummy nibh euismod tincidunt ut laoreet dolore magna aliquam erat volutpat" ,

        "title": "Lorem ipsum"

    }'


    curl -XPUT 'http://10.101.254.223:9200/blog/post/4' -d '

    {

        "user": "dilbert",

        "postDate": "2011-12-11",

        "body": "한글 형태소 분석기 테스트와 shard 그리고 replica 테스트" ,

        "title": "elastic search 분산설정"

    }'


    curl -XGET 'http://10.101.254.223:9200/blog/user/dilbert?pretty=true'

    curl -XGET 'http://10.101.254.221:9200/blog/post/1?pretty=true'

    curl -XGET 'http://10.101.254.223:9200/blog/post/2?pretty=true'

    curl -XGET 'http://10.101.254.221:9200/blog/post/3?pretty=true'


[Search tests]

    http://www.elasticsearch.org/guide/reference/api/search/uri-request.html

    - documents whose user field contains dilbert

    curl 'http://10.101.254.221:9200/blog/post/_search?q=user:dilbert&pretty=true'

    http://10.101.254.223:9200/blog/post/_search?q=user:dilbert&pretty=true


    - documents whose title does not contain search

    curl 'http://10.101.254.223:9200/blog/post/_search?q=-title:search&pretty=true'

    http://10.101.254.223:9200/blog/post/_search?q=-title:search&pretty=true


    - title contains search but not distributed

    curl 'http://10.101.254.223:9200/blog/post/_search?q=+title:search%20-title:distributed&pretty=true&fields=title'

    http://10.101.254.223:9200/blog/post/_search?q=+title:search%20-title:distributed&pretty=true&fields=title


    - range search

    curl -XGET 'http://10.101.254.223:9200/blog/_search?pretty=true' -d '

    {

        "query" : {

            "range" : {

                "postDate" : { "from" : "2011-12-10", "to" : "2011-12-12" }

            }

        }

    }'


    - searching across the entire blog index

        http://10.101.254.223:9200/blog/_search?q=user:dilbert&pretty=true

        http://10.101.254.223:9200/blog/_search?q=name:dilbert&pretty=true


    - routing search (I had a hard time pinning down exactly what this does)

        http://10.101.254.223:9200/blog/_search?routing=dilbert&pretty=true


[Clustering configuration]

    ※ Server 1

        cluster.name: cluster_es1

        node.name: node_es1

        node.master: true

        node.data: true

        node.rack: rack_es1

        index.number_of_shards: 3

        index.number_of_replicas: 2

        network.host: 10.101.254.223

        transport.tcp.port: 9300

        http.port: 9200

        gateway.type: local

        gateway.recover_after_nodes: 1

        gateway.recover_after_time: 5m

        gateway.expected_nodes: 2

        cluster.routing.allocation.node_initial_primaries_recoveries: 4

        cluster.routing.allocation.node_concurrent_recoveries: 2

        indices.recovery.max_size_per_sec: 0

        indices.recovery.concurrent_streams: 5

        discovery.zen.minimum_master_nodes: 1

        discovery.zen.ping.timeout: 3s

        discovery.zen.ping.unicast.hosts: ["10.101.254.223:9300", "10.101.254.221:9300"]

        cluster.routing.allocation.allow_rebalance: "indices_all_active"

        indices.recovery.concurrent_streams: 3

        action.auto_create_index: true

        index.mapper.dynamic: true


    ※ Server 2

        cluster.name: cluster_es1

        node.name: node_es2

        node.master: true

        node.data: true

        node.rack: rack_es1

        index.number_of_shards: 3

        index.number_of_replicas: 2

        network.host: 10.101.254.221

        transport.tcp.port: 9300

        http.port: 9200

        gateway.type: local

        gateway.recover_after_nodes: 1

        gateway.recover_after_time: 5m

        gateway.expected_nodes: 2

        cluster.routing.allocation.node_initial_primaries_recoveries: 4

        cluster.routing.allocation.node_concurrent_recoveries: 2

        indices.recovery.max_size_per_sec: 0

        indices.recovery.concurrent_streams: 5

        discovery.zen.minimum_master_nodes: 1

        discovery.zen.ping.timeout: 3s

        discovery.zen.ping.unicast.hosts: ["10.101.254.223:9300", "10.101.254.221:9300"]

        cluster.routing.allocation.allow_rebalance: "indices_all_active"

        indices.recovery.concurrent_streams: 3

        action.auto_create_index: true

        index.mapper.dynamic: true


[What the settings mean]

    ※ The cluster (group) name - servers whose elasticsearch.yml uses the same name will cluster together

        cluster.name: group1


    ※ Settings for a node used for both search and indexing

        node.master: true

        node.data : true


    ※ Settings for a search-only node (to spread out the search load)

        node.master: false

        node.data : false


    ※ Settings for an indexing (data-only) node

        node.master: false

        node.data : true


    ※ I was not sure what this combination is for (in practice it is a dedicated master-only node that holds no data)

        node.master: true

        node.data : false


    ※ Use a small value (e.g. 1) when the index (data) is small, and a larger value when it is big (default 5)

    ※ Defines how many shards a single index is split into

        index.number_of_shards: 5


    ※ Number of replica copies made of the index (default 1)

        index.number_of_replicas: 1


    ※ It is hard to see exactly how the servers cluster with each other after configuration

        If you start two servers with the same cluster.name, they cluster automatically
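    One way to confirm that the nodes actually joined the same cluster is the health API listed at the top of this post (a sketch; check that number_of_nodes matches the number of servers you started, 2 in this example):

        curl 'http://10.101.254.223:9200/_cluster/health?pretty=true'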


    ※ How to run multiple elasticsearch instances on a single server

        ./elasticsearch -p pidfile1 -Des.config=elasticsearch/config/elasticsearch1.yml

        ./elasticsearch -p pidfile2 -Des.config=elasticsearch/config/elasticsearch2.yml


        ※ Other options

        -Xmx1g -Xms1g -Des.max-open-files=true


    ※ What a node is

        a node is a single running elasticsearch instance (process); each node can hold shards of one or more indices


Installing elasticsearch and applying a Korean morphological analyzer, step by step

Elastic/Elasticsearch 2012. 11. 16. 13:03

[Installing ElasticSearch]

    ※ Reference URLs

    http://www.elasticsearch.org/tutorials/2010/07/01/setting-up-elasticsearch.html

    http://mimul.com/pebble/default/2012/02/23/1329988075236.html

    https://github.com/chanil1218/elasticsearch-analysis-korean

    http://apmlinux.egloos.com/2976457


    ※ Download

    wget --no-check-certificate https://github.com/downloads/elasticsearch/elasticsearch/elasticsearch-0.19.11.tar.gz


    ※ Extract

    tar -xvzf elasticsearch-0.19.11.tar.gz


    ※ Create a symlink

    ln -s elasticsearch-0.19.11 elasticsearch


    ※ Configure

    cd elasticsearch/config

    vi elasticsearch.yml

        # cluster.name: elasticsearch

        cluster.name: MyCluster


        # network.host: 192.168.0.1

        network.host: 10.101.254.223


        # http.port: 9200

        http.port: 9200


    ※ Run

    bin/elasticsearch -f

    OR

    bin/elasticsearch -p pidfile


    ※ Check that it is running

    curl -X GET http://10.101.254.223:9200/


    ※ Install the admin tool

    bin/plugin -install mobz/elasticsearch-head

    http://10.101.254.223:9200/_plugin/head/


    ※ Install the Korean morphological analyzer

    bin/plugin -install chanil1218/elasticsearch-analysis-korean/1.1.0


    ※ Configure the Korean analyzer (after restarting elasticsearch)

    curl -XPUT http://10.101.254.223:9200/test -d '{
        "settings" : {
            "index" : {
                "analysis" : {
                    "analyzer" : {
                        "kr_analyzer" : {
                            "type" : "custom",
                            "tokenizer" : "kr_tokenizer",
                            "filter" : ["trim", "kr_filter"]
                        }
                    }
                }
            }
        }
    }'


    ※ Testing the Korean analyzer

    curl -XGET 'http://10.101.254.223:9200/test/_analyze?analyzer=kr_analyzer&pretty=true' -d '전주비빔밥'

        ※ Analysis result

        {

          "tokens" : [ {

            "token" : "전주비빔밥",

            "start_offset" : 0,

            "end_offset" : 5,

            "type" : "word",

            "position" : 1

          }, {

            "token" : "전주",

            "start_offset" : 0,

            "end_offset" : 2,

            "type" : "word",

            "position" : 2

          }, {

            "token" : "비빔밥",

            "start_offset" : 2,

            "end_offset" : 5,

            "type" : "word",

            "position" : 3

          } ]

        }
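    Defining the analyzer alone does not make any field use it; it still has to be referenced from a mapping. A minimal sketch (the type name doc and the field title are assumptions; the analyzer name follows the settings above):

    curl -XPUT 'http://10.101.254.223:9200/test/doc/_mapping' -d '{
        "doc" : {
            "properties" : {
                "title" : { "type" : "string", "analyzer" : "kr_analyzer" }
            }
        }
    }'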


Korean search issue in the solr admin UI when running solr on tomcat

Elastic/Elasticsearch 2012. 11. 14. 15:50

How to hook solr up to tomcat is covered in an earlier post;
search this blog for solr and you will find it.

With a default tomcat install, to keep Korean from getting garbled in the solr admin you need to add the following to server.xml.

It is pretty basic, so most of you probably know it already.. ^^;

<Connector port="8080" protocol="HTTP/1.1"

               connectionTimeout="20000"

            URIEncoding="UTF-8"

               redirectPort="8443" />

You have to restart tomcat after changing this. ㅎㅎ
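To check from the command line as well as from the admin screen, a URL-encoded UTF-8 Korean query should now come back intact (a sketch; the solrdev context name comes from the multi-solr setup in the next post, and %ED%95%9C%EA%B8%80 is simply 한글 percent-encoded):

    curl 'http://localhost:8080/solrdev/select?q=%ED%95%9C%EA%B8%80&indent=on'

Without URIEncoding, the q parameter echoed back in the responseHeader typically shows up as broken characters instead of 한글.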


Installing solr quick and dirty

Elastic/Elasticsearch 2012. 11. 14. 13:00

[solr common]

    1. Download solr

        http://mirror.apache-kr.org/lucene/solr/4.0.0/apache-solr-4.0.0.tgz

        http://apache.mirror.cdnetworks.com/lucene/solr/3.6.1/apache-solr-3.6.1.tgz

    2. Extract it

        tar -xvzf apache-solr-4.0.0.tgz

    3. Move into the example directory

        cd apache-solr-4.0.0/example

    4. To launch Jetty with the Solr WAR, and the example configs, just run the start.jar ...

        [example]$ java -jar start.jar

    5. Open it in a browser

        http://10.101.254.223:8983/solr

        The page comes right up

    6. For the rest, just follow the tutorial and it all works.

        http://lucene.apache.org/solr/4_0_0/tutorial.html

        http://lucene.apache.org/solr/api-3_6_1/doc-files/tutorial.html


[solr-3.6.1 / running multiple solr instances]

    1. Use tomcat

    2. Download and extract apache-tomcat

    3. Create the following directories under tomcat

        tomcat/conf/Catalina

        mkdir -p tomcat/conf/Catalina/localhost

        tomcat/data

        tomcat/data/solr

        tomcat/data/solr/dev

        mkdir -p tomcat/data/solr/dev/conf

        mkdir -p tomcat/data/solr/dev/data

        tomcat/data/solr/prod

        mkdir -p tomcat/data/solr/prod/conf

        mkdir -p tomcat/data/solr/prod/data

    4. Copy apache-solr-3.6.1.war

        it is in the dist directory of the extracted solr distribution

        cp apache-solr-3.6.1/dist/apache-solr-3.6.1.war tomcat/data/solr/

    5. Create solrdev.xml

        cd tomcat/conf/Catalina/localhost

        <Context docBase="/home/user/app/tomcat/data/solr/apache-solr-3.6.1.war" debug="0" crossContext="true">

            <Environment name="solr/home" type="java.lang.String" value="/home/user/app/tomcat/data/solr/dev" override="true" />

        </Context>

    6. Create solrprod.xml

        cd /home/user/app/tomcat/conf/Catalina/localhost

        <Context docBase="/home/user/app/tomcat/data/solr/apache-solr-3.6.1.war" debug="0" crossContext="true">

            <Environment name="solr/home" type="java.lang.String" value="/home/user/app/tomcat/data/solr/prod" override="true" />

        </Context>

    7. Copy the solr conf files

        [conf]$ cp -R * /home/user/app/tomcat/data/solr/dev/conf/

        [conf]$ cp -R * /home/user/app/tomcat/data/solr/prod/conf/

        [conf]$ pwd

        /home/user/app/apache-solr-3.6.1/example/solr/conf
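After that you would start tomcat and hit both contexts to make sure they deploy (a sketch, assuming the default port 8080; the "Welcome to Solr!" page described in the install notes further down should appear):

        tomcat/bin/startup.sh

        http://localhost:8080/solrdev/
        http://localhost:8080/solrprod/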


This was just a quick, rough install.
For the rest of the configuration, indexing, searching and so on, see the earlier posts.


[solr] A taste of Indexing & Searching - wrapping up the solr basics..

Elastic/Elasticsearch 2012. 4. 27. 12:16

We have now looked at the two most important configuration files,
solrconfig.xml and schema.xml.
They control very important things, so they call for continued study and experimentation.
Let's keep studying.. ㅎㅎ

Basically, the documents below make it easy to understand.

First, let's take apart post.jar.
If you unpack post.jar, it contains SimplePostTool.class.

[SimplePostTool.java]
- This file has no dependencies within its package.
- You can take it and use it as is.
- Since my setup is solr + tomcat, I changed the URL in the code to http://localhost:8080/solrdev/update.
- So where does the data to index come from???
- Usually the content lives in a DB; select it from the DB and generate files in the format solr expects. XML is the most common, so write the selected rows out as XML files.
- I just created a Java project, changed the indexing URL, and repackaged SimplePostTool.java.

- This is the screen from when I ran it.
- You can see the Main-Class error up there..
- Create a MANIFEST file and include it; the important part is that the file must end with a newline.
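Roughly, the manifest and the repackaging look like this (a sketch; the Main-Class value assumes you rebuilt SimplePostTool without a package declaration, so adjust it to your own package, and remember the trailing newline after the last line):

    MANIFEST.MF
        Manifest-Version: 1.0
        Main-Class: SimplePostTool

    jar cvfm post.jar MANIFEST.MF SimplePostTool*.class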

- Now let's run a search.
- The search query is belkin.
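The query itself is just the usual select URL against my solrdev context (a sketch; the parameters follow the tutorial's example URL):

    http://localhost:8080/solrdev/select/?q=belkin&start=0&rows=10&indent=on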

- Oh, we haven't looked at the data that was indexed yet.

[ipod_other.xml]
- You will find it under example/exampledocs/ in the solr install.

<!--

 Licensed to the Apache Software Foundation (ASF) under one or more

 contributor license agreements.  See the NOTICE file distributed with

 this work for additional information regarding copyright ownership.

 The ASF licenses this file to You under the Apache License, Version 2.0

 (the "License"); you may not use this file except in compliance with

 the License.  You may obtain a copy of the License at


     http://www.apache.org/licenses/LICENSE-2.0


 Unless required by applicable law or agreed to in writing, software

 distributed under the License is distributed on an "AS IS" BASIS,

 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

 See the License for the specific language governing permissions and

 limitations under the License.

-->

<add>

<doc>

  <field name="id">F8V7067-APL-KIT</field>

  <field name="name">Belkin Mobile Power Cord for iPod w/ Dock</field>

  <field name="manu">Belkin</field>

  <field name="cat">electronics</field>

  <field name="cat">connector</field>

  <field name="features">car power adapter, white</field>

  <field name="weight">4</field>

  <field name="price">19.95</field>

  <field name="popularity">1</field>

  <field name="inStock">false</field>

  <!-- Buffalo store -->

  <field name="store">45.17614,-93.87341</field>

  <field name="manufacturedate_dt">2005-08-01T16:30:25Z</field>

</doc>

<doc>

  <field name="id">IW-02</field>

  <field name="name">iPod &amp; iPod Mini USB 2.0 Cable</field>

  <field name="manu">Belkin</field>

  <field name="cat">electronics</field>

  <field name="cat">connector</field>

  <field name="features">car power adapter for iPod, white</field>

  <field name="weight">2</field>

  <field name="price">11.50</field>

  <field name="popularity">1</field>

  <field name="inStock">false</field>

  <!-- San Francisco store -->

  <field name="store">37.7752,-122.4232</field>

  <field name="manufacturedate_dt">2006-02-14T23:55:59Z</field>

</doc>

</add>

- And here is the search result screen.


So far we have had a taste of installing, configuring, indexing and searching with solr.
From here on, study on your own and use as much of it as you need.


Good luck!!


[solr] A taste of solrconfig.xml

Elastic/Elasticsearch 2012. 4. 25. 14:42

We had not yet looked at solr's own configuration file.
Let's see what solrconfig.xml actually does.

The original documentation is at the link below.
That said, comparing the actual solrconfig.xml file with the documentation, they differ a bit..
so you will want to refer to both.
I am simply walking through the solrconfig.xml file itself. ^^;

[solrconfig.xml]
- They say it holds most of the parameters for configuring Solr itself.

solrconfig.xml is the file that contains most of the parameters for configuring Solr itself.

[lib directive]
- A directive used to load solr plugins or jar files.
- The commented sample below should make it easy to understand.
- One thing worth noting: if nothing matches a dir, it is simply ignored, but if you point at an exact path and it does not exist, you get an error.
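Roughly what lib directives look like (the paths here are placeholders rather than the stock ones; dir takes an optional regex, while path points at one specific jar):

    <lib dir="../../contrib/extraction/lib" regex=".*\.jar" />
    <lib path="./lib/my-custom-plugin.jar" />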

[dataDir directive]
- Be careful about absolute vs relative paths.
- This is the directory where the index data lives.

dataDir parameter

Used to specify an alternate directory to hold all index data other than the default ./data under the Solr home. If replication is in use, this should match the replication configuration. If this directory is not absolute, then it is relative to the current working directory of the servlet container.

<!-- Data Directory

       Used to specify an alternate directory to hold all index data
       other than the default ./data under the Solr home.  If
       replication is in use, this should match the replication
       configuration.
    -->
  <dataDir>${solr.data.dir:}</dataDir>

[luceneMatchVersion directive]
- Declares which lucene version solr should conform to.

  <!-- Controls what version of Lucene various components of Solr
       adhere to.  Generally, you want to use the latest version to
       get all bug fixes and improvements. It is highly recommended
       that you fully re-index after changing this setting as it can
       affect both how text is indexed and queried.
  --> 

<luceneMatchVersion>LUCENE_40</luceneMatchVersion>

[directoryFactory directive]
- Think of this as the counterpart of what an IndexWriter uses in lucene.
- It is the file-system abstraction for the index files; see lucene's Directory class.

<!-- The DirectoryFactory to use for indexes.

       
       solr.StandardDirectoryFactory, the default, is filesystem
       based and tries to pick the best implementation for the current
       JVM and platform.  One can force a particular implementation
       via solr.MMapDirectoryFactory, solr.NIOFSDirectoryFactory, or
       solr.SimpleFSDirectoryFactory.

       solr.RAMDirectoryFactory is memory based, not
       persistent, and doesn't work with replication.
    -->
  <directoryFactory name="DirectoryFactory" 
                    class="${solr.directoryFactory:solr.StandardDirectoryFactory}"/>

[indexConfig directive]
- This is where the detailed indexing settings go.
- It is hard to approach without the background knowledge. (Am I good at it? Hardly;;;;)
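A few of the knobs people usually end up touching, as a sketch (the values are illustrative, not recommendations):

    <indexConfig>
        <ramBufferSizeMB>32</ramBufferSizeMB>
        <mergeFactor>10</mergeFactor>
        <lockType>native</lockType>
    </indexConfig>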

[jmx directive]
http://wiki.apache.org/solr/SolrJmx

[updateHandler directive]
- Configures the conditions under which a commit is performed.
- It appears to apply to solr add/replace/commit/delete and delete-by-query.
- See the links in the comments.
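For instance, autoCommit under the updateHandler is what triggers commits automatically (a sketch; the maxDocs/maxTime values are illustrative, and maxTime is in milliseconds):

    <updateHandler class="solr.DirectUpdateHandler2">
        <autoCommit>
            <maxDocs>10000</maxDocs>
            <maxTime>15000</maxTime>
        </autoCommit>
    </updateHandler>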

[indexReaderFactory directive]
- Think of it as playing the same role as lucene's IndexReader.

<!-- IndexReaderFactory

       Use the following format to specify a custom IndexReaderFactory,
       which allows for alternate IndexReader implementations.

       ** Experimental Feature **

       Please note - Using a custom IndexReaderFactory may prevent
       certain other features from working. The API to
       IndexReaderFactory may change without warning or may even be
       removed from future releases if the problems cannot be
       resolved.


       ** Features that may not work with custom IndexReaderFactory **

       The ReplicationHandler assumes a disk-resident index. Using a
       custom IndexReader implementation may cause incompatibility
       with ReplicationHandler and may cause replication to not work
       correctly. See SOLR-1366 for details.

    -->
  <!--
  <indexReaderFactory name="IndexReaderFactory" class="package.class">
    <str name="someArg">Some Value</str>
  </indexReaderFactory >
  -->
  <!-- By explicitly declaring the Factory, the termIndexDivisor can
       be specified.
    -->
  <!--
     <indexReaderFactory name="IndexReaderFactory" 
                         class="solr.StandardIndexReaderFactory">
       <int name="setTermIndexDivisor">12</int>
     </indexReaderFactory > 

-->

[query directive]
- Configures the caches and the actions of the indexSearcher.
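The cache entries in question look like this (a sketch; the sizes are the illustrative values you will see in the example config):

    <filterCache class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="0"/>
    <queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>
    <documentCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>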

[requestDispatcher directive]
- Configures how requests like /select?qt=xxxx are handled.
[requestParsers directive]
- Configuration and limits on how solr parses requests.
[httpCaching directive]
- Configures HTTP cache control.
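These three sit together in the config, roughly like this (a sketch; the attribute values are illustrative ones in the style of the example config):

    <requestDispatcher handleSelect="false">
        <requestParsers enableRemoteStreaming="true" multipartUploadLimitInKB="2048000" />
        <httpCaching never304="true" />
    </requestDispatcher>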

[requestHandler & searchHandler directive]
- As the comment suggests, read up on searchHandler first and these become easier to follow.
- Easier said than done.. this is a genuinely difficult area.
- How you configure it directly shapes your search results.
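A minimal request handler looks something like this (a sketch based on the stock /select handler; the defaults shown are illustrative):

    <requestHandler name="/select" class="solr.SearchHandler">
        <lst name="defaults">
            <str name="echoParams">explicit</str>
            <int name="rows">10</int>
            <str name="df">text</str>
        </lst>
    </requestHandler>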

[searchComponent directive]
- This seems to be used in place of a searchHandler.
- And it is combined with a requestHandler.
- This part is also very hard to configure. ㅡ.ㅡ;;

[updateRequestProcessorChain directive]
- Seems to be for additional processing of update requests; I will have to run it to be sure.

[queryResponseWriter directive]
- Specifies the response format.

[queryParser, valueSourceParser, transformer, admin directive]
- queryParser and valueSourceParser deal with processing the search query.
- See the URL below.



[solr] A taste of schema.xml

Elastic/Elasticsearch 2012. 4. 24. 15:42

[Original link]


Let's take a look at how the Solr Wiki defines it.

The schema.xml file contains all of the details about which fields your documents can contain, and how those fields should be dealt with when adding documents to the index, or when querying those fields.

- So it is the file that holds the details about document fields for indexing and querying.
- In short, it provides the basic schema information for documents when Solr indexes or searches.
- The file name and the description make it clear enough without any further explanation from me.. ^^;;


[Data Types]

The <types> section allows you to define a list of <fieldtype> declarations you wish to use in your schema, along with the underlying Solr class that should be used for that type, as well as the default options you want for fields that use that type.

- The <types> section is made up of a list of <fieldtype> declarations.
- The <fieldtype> definition affects how indexing and searching are carried out for fields of that type.
- It is too long to explain everything here.. see the link.
- Do read schema.xml carefully, though; going through the references in its comments once helps a great deal in understanding it.
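For reference, fieldtype declarations in the stock schema look roughly like this (a sketch; the analyzer chain shown is a typical index-time chain and the filter list is illustrative):

    <fieldType name="string" class="solr.StrField" sortMissingLast="true" />
    <fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
        <analyzer type="index">
            <tokenizer class="solr.StandardTokenizerFactory"/>
            <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
            <filter class="solr.LowerCaseFilterFactory"/>
        </analyzer>
    </fieldType>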


[Fields]
- This section is used when adding documents or when searching.

The <fields> section is where you list the individual <field> declarations you wish to use in your documents. Each <field> has a name that you will use to reference it when adding documents or executing searches, and an associated type which identifies the name of the fieldtype you wish to use for this field. There are various field options that apply to a field. These can be set in the field type declarations, and can also be overridden at an individual field's declaration.


- As the comments also explain, this is finally where the name declared on a <fieldtype> above gets used as its label. You may have been wondering where it would show up.

 <!-- Valid attributes for fields:
     name: mandatory - the name for the field
     type: mandatory - the name of a previously defined type from the 
       <types> section
     indexed: true if this field should be indexed (searchable or sortable)
     stored: true if this field should be retrievable
     multiValued: true if this field may contain multiple values per document
     omitNorms: (expert) set to true to omit the norms associated with
       this field (this disables length normalization and index-time
       boosting for the field, and saves some memory).  Only full-text
       fields or fields that need an index-time boost need norms.
       Norms are omitted for primitive (non-analyzed) types by default.
     termVectors: [false] set to true to store the term vector for a
       given field.
       When using MoreLikeThis, fields used for similarity should be
       stored for best performance.
     termPositions: Store position information with the term vector.  
       This will increase storage costs.
     termOffsets: Store offset information with the term vector. This 
       will increase storage costs.
     required: The field is required.  It will throw an error if the
       value does not exist
     default: a value that should be used if no value is specified
       when adding a document.
   -->

- <dynamicField>: when a field name is not found, a name matching one of these patterns is used instead.
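The pattern-based declarations look like this (a sketch; the *_i / *_s suffix conventions follow the usual stock examples):

    <dynamicField name="*_i" type="int"    indexed="true" stored="true"/>
    <dynamicField name="*_s" type="string" indexed="true" stored="true"/>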


[Unique Key]
- Used as the unique key across all indexed documents.
- With this key you can update or delete individual documents.

The Unique Key Field

The <uniqueKey> declaration can be used to inform Solr that there is a field in your index which should be unique for all documents. If a document is added that contains the same value for this field as an existing document, the old document will be deleted.

It is not mandatory for a schema to have a uniqueKey field.

Note that if you have enabled the QueryElevationComponent in solrconfig.xml it requires the schema to have a uniqueKey of type StrField. It cannot be, for example, an int field.


[Others]
- I have set these apart as "others", but they are still important directives.

[defaultSearchField]
- The field searched by default when no field name is given in a query.

[solrQueryParser]
- Defines the default AND/OR operator.

[copyField]
- Copies the value of the source field into the dest field.

[similarity]
- Specifies the Similarity subclass solr uses at index time.
- If omitted, Lucene's DefaultSimilarity is used.
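Put together, the tail end of a schema.xml typically contains declarations like these (a sketch; the field names follow the stock example schema):

    <uniqueKey>id</uniqueKey>
    <defaultSearchField>text</defaultSearchField>
    <solrQueryParser defaultOperator="OR"/>
    <copyField source="name" dest="text"/>
    <copyField source="features" dest="text"/>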



Just following the solr docs

ITWeb/Server Administration 2012. 4. 18. 15:11

The solr install documentation is really well written.
Just follow the docs and anyone can easily get to a working screen, so give it a try.
The install links are in an earlier post.

Below is a straight scrap of what I followed from the docs.
Oh, and I installed tomcat and the jdk manually for these tests.

[Solr Tutorial]

Solr Tutorial

Overview

This document covers the basics of running Solr using an example schema, and some sample data.

Requirements

To follow along with this tutorial, you will need...

  1. Java 1.5 or greater. Some places you can get it are from Oracle, Open JDK, or IBM.
    Running java -version at the command line should indicate a version number starting with 1.5. Gnu's GCJ is not supported and does not work with Solr.
  2. Solr release.

Getting Started

Please run the browser showing this tutorial and the Solr server on the same machine so tutorial links will correctly point to your Solr server.

Begin by unzipping the Solr release and changing your working directory to be the "example" directory. (Note that the base directory name may vary with the version of Solr downloaded.) For example, with a shell in UNIX, Cygwin, or MacOS:

user:~solr$ ls
solr-nightly.zip
user:~solr$ unzip -q solr-nightly.zip
user:~solr$ cd solr-nightly/example/

Solr can run in any Java Servlet Container of your choice, but to simplify this tutorial, the example index includes a small installation of Jetty.

To launch Jetty with the Solr WAR, and the example configs, just run the start.jar ...

user:~/solr/example$ java -jar start.jar
2012-03-27 17:11:29.529:INFO::Logging to STDERR via org.mortbay.log.StdErrLog
2012-03-27 17:11:29.696:INFO::jetty-6.1-SNAPSHOT
...
2012-03-27 17:11:32.343:INFO::Started SocketConnector@0.0.0.0:8983

This will start up the Jetty application server on port 8983, and use your terminal to display the logging information from Solr.

You can see that the Solr is running by loading http://localhost:8983/solr/admin/ in your web browser. This is the main starting point for Administering Solr.

Indexing Data

Your Solr server is up and running, but it doesn't contain any data. You can modify a Solr index by POSTing XML Documents containing instructions to add (or update) documents, delete documents, commit pending adds and deletes, and optimize your index.

The exampledocs directory contains samples of the types of instructions Solr expects, as well as a java utility for posting them from the command line (a post.sh shell script is also available, but for this tutorial we'll use the cross-platform Java client).

To try this, open a new terminal window, enter the exampledocs directory, and run "java -jar post.jar" on some of the XML files in that directory, indicating the URL of the Solr server:

user:~/solr/example/exampledocs$ java -jar post.jar solr.xml monitor.xml
SimplePostTool: version 1.4
SimplePostTool: POSTing files to http://localhost:8983/solr/update..
SimplePostTool: POSTing file solr.xml
SimplePostTool: POSTing file monitor.xml
SimplePostTool: COMMITting Solr index changes..

You have now indexed two documents in Solr, and committed these changes. You can now search for "solr" using the "Make a Query" interface on the Admin screen, and you should get one result. Clicking the "Search" button should take you to the following URL...

http://localhost:8983/solr/select/?q=solr&start=0&rows=10&indent=on

You can index all of the sample data, using the following command (assuming your command line shell supports the *.xml notation):

user:~/solr/example/exampledocs$ java -jar post.jar *.xml
SimplePostTool: version 1.4
SimplePostTool: POSTing files to http://localhost:8983/solr/update..
SimplePostTool: POSTing file gb18030-example.xml
SimplePostTool: POSTing file hd.xml
SimplePostTool: POSTing file ipod_other.xml
SimplePostTool: POSTing file ipod_video.xml
SimplePostTool: POSTing file mem.xml
SimplePostTool: POSTing file money.xml
SimplePostTool: POSTing file monitor2.xml
SimplePostTool: POSTing file monitor.xml
SimplePostTool: POSTing file mp500.xml
SimplePostTool: POSTing file sd500.xml
SimplePostTool: POSTing file solr.xml
SimplePostTool: POSTing file utf8-example.xml
SimplePostTool: POSTing file vidcard.xml
SimplePostTool: COMMITting Solr index changes..

...and now you can search for all sorts of things using the default Solr Query Syntax (a superset of the Lucene query syntax)...

There are many other different ways to import your data into Solr, such as importing records from a database with the Data Import Handler (DIH), loading CSV files, or indexing rich documents such as Word and PDF with Solr Cell.

Updating Data

You may have noticed that even though the file solr.xml has now been POSTed to the server twice, you still only get 1 result when searching for "solr". This is because the example schema.xml specifies a "uniqueKey" field called "id". Whenever you POST instructions to Solr to add a document with the same value for the uniqueKey as an existing document, it automatically replaces it for you. You can see that that has happened by looking at the values for numDocs and maxDoc in the "CORE"/searcher section of the statistics page...

http://localhost:8983/solr/admin/stats.jsp

numDocs represents the number of searchable documents in the index (and will be larger than the number of XML files since some files contained more than one <doc>). maxDoc may be larger as the maxDoc count includes logically deleted documents that have not yet been removed from the index. You can re-post the sample XML files over and over again as much as you want and numDocs will never increase, because the new documents will constantly be replacing the old.

Go ahead and edit the existing XML files to change some of the data, and re-run the java -jar post.jar command, you'll see your changes reflected in subsequent searches.

Deleting Data

You can delete data by POSTing a delete command to the update URL and specifying the value of the document's unique key field, or a query that matches multiple documents (be careful with that one!). Since these commands are smaller, we will specify them right on the command line rather than reference an XML file.

Execute the following command to delete a document

java -Ddata=args -Dcommit=no -jar post.jar "<delete><id>SP2514N</id></delete>"

Now, if you go to the statistics page and scroll down to the UPDATE_HANDLERS section, you can verify that "deletesById : 1".

If you search for id:SP2514N it will still be found, because index changes are not visible until changes are committed and a new searcher is opened. To cause this to happen, send a commit command to Solr (post.jar does this for you by default):

java -jar post.jar

Now re-execute the previous search and verify that no matching documents are found. Also revisit the statistics page and observe the changes in both the UPDATE_HANDLERS section and the CORE section.

Here is an example of using delete-by-query to delete anything with DDR in the name:

java -Ddata=args -jar post.jar "<delete><query>name:DDR</query></delete>"

Commit can be an expensive operation so it's best to make many changes to an index in a batch and then send the commit command at the end. There is also an optimize command that does the same thing as commit, in addition to merging all index segments into a single segment, making it faster to search and causing any deleted documents to be removed. All of the update commands are documented here.

To continue with the tutorial, re-add any documents you may have deleted by going to the exampledocs directory and executing

java -jar post.jar *.xml

Querying Data

Searches are done via HTTP GET on the select URL with the query string in the q parameter. You can pass a number of optional request parameters to the request handler to control what information is returned. For example, you can use the "fl" parameter to control what stored fields are returned, and if the relevancy score is returned:
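For example (a sketch; the field list is illustrative and score is the pseudo-field holding the relevancy score):

...&q=video&fl=name,id
...&q=video&fl=name,id,score
...&q=video&fl=*,score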

Solr provides a query form within the web admin interface that allows setting the various request parameters and is useful when testing or debugging queries.

Sorting

Solr provides a simple method to sort on one or more indexed fields. Use the "sort" parameter to specify "field direction" pairs, separated by commas if there's more than one sort field:

"score" can also be used as a field name when specifying a sort:

Complex functions may also be used to sort results:
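A few sketches of what those sort parameters look like (field names follow the example schema; the function is illustrative):

...&q=video&sort=price desc
...&q=video&sort=inStock asc, price desc
...&q=video&sort=score desc
...&q=video&sort=div(popularity,price) desc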

If no sort is specified, the default is score desc to return the matches having the highest relevancy.

Highlighting

Hit highlighting returns relevant snippets of each returned document, and highlights terms from the query within those context snippets.

The following example searches for video card and requests highlighting on the fields name,features. This causes a highlighting section to be added to the response with the words to highlight surrounded with <em> (for emphasis) tags.

...&q=video card&fl=name,id&hl=true&hl.fl=name,features

More request parameters related to controlling highlighting may be found here.

Faceted Search

Faceted search takes the documents matched by a query and generates counts for various properties or categories. Links are usually provided that allow users to "drill down" or refine their search results based on the returned categories.

The following example searches for all documents (*:*) and requests counts by the category field cat.

...&q=*:*&facet=true&facet.field=cat

Notice that although only the first 10 documents are returned in the results list, the facet counts generated are for the complete set of documents that match the query.

We can facet multiple ways at the same time. The following example adds a facet on the boolean inStock field:

...&q=*:*&facet=true&facet.field=cat&facet.field=inStock

Solr can also generate counts for arbitrary queries. The following example queries for ipod and shows prices below and above 100 by using range queries on the price field.

...&q=ipod&facet=true&facet.query=price:[0 TO 100]&facet.query=price:[100 TO *]

One can even facet by date ranges. This example requests counts for the manufacture date (manufacturedate_dt field) for each year between 2004 and 2010.

...&q=*:*&facet=true&facet.date=manufacturedate_dt&facet.date.start=2004-01-01T00:00:00Z&facet.date.end=2010-01-01T00:00:00Z&facet.date.gap=+1YEAR

More information on faceted search may be found on the faceting overview and faceting parameters pages.

Search UI

Solr includes an example search interface built with velocity templating that demonstrates many features, including searching, faceting, highlighting, autocomplete, and geospatial searching.

Try it out at http://localhost:8983/solr/browse

Text Analysis

Text fields are typically indexed by breaking the text into words and applying various transformations such as lowercasing, removing plurals, or stemming to increase relevancy. The same text transformations are normally applied to any queries in order to match what is indexed.

The schema defines the fields in the index and what type of analysis is applied to them. The current schema your server is using may be accessed via the [SCHEMA] link on the admin page.

The best analysis components (tokenization and filtering) for your textual content depends heavily on language. As you can see in the above [SCHEMA] link, the fields in the example schema are using a fieldType named text_general, which has defaults appropriate for all languages.

If you know your textual content is English, as is the case for the example documents in this tutorial, and you'd like to apply English-specific stemming and stop word removal, as well as split compound words, you can use the text_en_splitting fieldType instead. Go ahead and edit the schema.xml in the solr/example/solr/conf directory, to use the text_en_splitting fieldType for the text and features fields like so:

   <field name="features" type="text_en_splitting" indexed="true" stored="true" multiValued="true"/>
   ...
   <field name="text" type="text_en_splitting" indexed="true" stored="false" multiValued="true"/>

Stop and restart Solr after making these changes and then re-post all of the example documents using java -jar post.jar *.xml. Now queries like the ones listed below will demonstrate English-specific transformations:

  • A search for power-shot can match PowerShot, and adata can match A-DATA by using the WordDelimiterFilter and LowerCaseFilter.
  • A search for features:recharging can match Rechargeable using the stemming features of PorterStemFilter.
  • A search for "1 gigabyte" can match 1GB, and the commonly misspelled pixima can match Pixma using the SynonymFilter.

A full description of the analysis components, Analyzers, Tokenizers, and TokenFilters available for use is here.

Analysis Debugging

There is a handy analysis debugging page where you can see how a text value is broken down into words, and shows the resulting tokens after they pass through each filter in the chain.

This URL shows how "Canon Power-Shot SD500" would be analyzed, listing the tokens that would be created using the text_en_splitting type. Each row of the table shows the resulting tokens after having passed through the next TokenFilter in the analyzer. Notice how both powershot and the split tokens power and shot are indexed. Tokens generated at the same position are shown in the same column, in this case shot and powershot. (Compare the previous output with the tokens produced using the text_general field type.)

Selecting verbose output will show more details, such as the name of each analyzer component in the chain, token positions, and the start and end positions of the token in the original text.

Selecting highlight matches when both index and query values are provided will take the resulting terms from the query value and highlight all matches in the index value analysis.

Other interesting examples:

Conclusion

Congratulations! You successfully ran a small Solr instance, added some documents, and made changes to the index and schema. You learned about queries, text analysis, and the Solr admin interface. You're ready to start using Solr on your own project! Continue on with the following steps:

  • Subscribe to the Solr mailing lists!
  • Make a copy of the Solr example directory as a template for your project.
  • Customize the schema and other config in solr/conf/ to meet your needs.

Solr has a ton of other features that we haven't touched on here, including distributed search to handle huge document collections, function queries, numeric field statistics, and search results clustering. Explore the Solr Wiki to find more details about Solr's many features.

Have Fun, and we'll see you on the Solr mailing lists!


[Installing Solr on Ubuntu Linux]

Installing Solr on Ubuntu Linux


Following are instructions for installing the Solr search server on Ubuntu linux. There are several manual steps in setting up Solr, and most of the other documents I came across on the internet are inadequate in some (or in many) ways so I enlisted the help of colleagues and documented the steps start-to-finish here. 

I found Solr not to my liking, encountering significant scaling issues while indexing beyond 4-5 million small documents and so I've abandoned this application in favor of more standard/robust solutions with a far larger community (e.g. mySQL) and more ubiquitous technology with long evolutionary histories (RDBMS) behind them. The problem of indexing XML documents is best solved by avoidance. Digitally born data should exist in normalized and relational states from the get-go. 

These instructions have been tested with Hardy Heron 8.04, and will likely work with other recent versions of Ubuntu and Debian-based distros with little or no modification.

Before You Start
Solr can be setup several ways -- these instructions lead up to a Solr environment deployed in Tomcat, with separate development and production areas. Once you've done this a couple times (or carefully read this document a few times), you could set up three environments, just one, or whatever layout suits your needs. There are hardcoded pathing dependencies of which you need to be aware. 

(1) Download and install the latest JDK from Sun.

You'll want to get the latest Java JDK from Sun http://java.sun.com/javase/downloads/index.jsp and install it first. At the time these instructions were written, I had installed Sun's jdk1.6.0_10. I'm unsure if it's required, but I also made sure that "ant" was installed on my Ubuntu box (for ant, I simply used Ubuntu's handy package installer Synaptic). 

I downloaded the Sun JDK to my user home directory and chmod +x'd the .bin executable. I sudo'd to root and executed the file. It made me scroll through the license agreement and decompressed itself. I then mv'd it to /opt/jdk1.6.0_10.

Java needs at least two environment settings in order to be useful. You'll eventually need to set up CLASSPATH as well, but that's not essential for the instructions in this document. I made the following .bashrc additions to both my ordinary user account (/home/{username}/.bashrc), as well as for the root account (/root/.bashrc). Go into each .bashrc file and add the following (which may be slightly different if you chose a different location or have a different version of the JDK): 

export PATH=/opt/jdk1.6.0_10/bin:$PATH
export JAVA_HOME=/opt/jdk1.6.0_10

Whenever you make changes to .bashrc you should issue a "source .bashrc" to instruct the shell to re-read the file (otherwise you'd have to logout, and then log back in). You should now be able to type "which java" and see something like this: /opt/jdk1.6.0_10/bin/java, depending on the version you downloaded. 

(2) Download and install the latest Tomcat.

Rather than lean on the Tomcat 5.5 version which was part of the Ubuntu repositories at the time of this writing, I downloaded the latest Tomcat: http://tomcat.apache.org. I brought it down to my user directory, decompressing it via gunzip and "tar xvf". It creates a Tomcat directory, populated with everything it needs. 

As you use Tomcat over the lifespan of your project/development you may want a more succinct name than something like "apache-tomcat-6.0.16" so I decided to rename (mv) this directory to simply "tomcat6". The instructions which follow in this document will use that abbreviated "tomcat6" convention. 

I then did this:

sudo su
mv tomcat6 /usr/local/

You can move it somewhere else -- I picked this location because a colleague who led me through most of these steps put it in that location on his box and I decided to remain consistent with his setup. Maybe you want it in /usr/share/ or somewhere else. Before going further, you should test Tomcat. At this stage, I'm still sudo'd as root. 

cd /usr/local/tomcat6/bin
./startup.sh

You should see a message like this:
Using CATALINA_BASE:   /usr/local/tomcat6
Using CATALINA_HOME:   /usr/local/tomcat6
Using CATALINA_TMPDIR: /usr/local/tomcat6/temp
Using JRE_HOME:       /opt/jdk1.6.0_10
(Note that JRE_HOME is the location of the Sun JDK installed in an earlier step. You really need this -- if Tomcat is aimed at a JRE that you don't want, or can't find it, you can't go any further.) Eventually you'll probably want to create a Tomcat specific user, and give it appropriate/minimal rights, instead of using root. 

Go to your browser and type this: 

http://localhost:8080/

Go to Tomcat servlet examples and click a couple of them, click a couple jsp examples also. They should execute without complaining. At this stage we've installed the latest JDK, the latest Tomcat, and things are talking to one another. If you're getting something wildly different, you can't go any further here. In order to complete this document, it should be "all systems go" at this point. 

Before going further, you should shut Tomcat back down:

cd /usr/local/tomcat6/bin
./shutdown.sh

(3) Download and install Solr

I downloaded the latest Solr here: http://www.apache.org/dyn/closer.cgi/lucene/solr/. As with Tomcat, I issued gunzip and "tar xvf" to decompress it to my home user directory. It creates a directory called "apache-solr-1.2.0". 

We need to manually create some directories within /usr/local/tomcat6. This setup will yield us two Solr locations within your Tomcat instance: one for development, another for production. There are other ways to set up Solr, but if this is your first attempt you may want to follow this convention. It's unclear why /Catalina and /Catalina/localhost aren't created automatically with a Tomcat install. Probably just to keep our salaries up. The /data/solr directory, as you can see, will have an identical structure below it for dev and prod. Each of those directories additionally has corresponding /conf and /data directories below it. 

Make these directories: 

/usr/local/tomcat6/conf/Catalina
/usr/local/tomcat6/conf/Catalina/localhost
/usr/local/tomcat6/data
/usr/local/tomcat6/data/solr
/usr/local/tomcat6/data/solr/dev
/usr/local/tomcat6/data/solr/dev/conf
/usr/local/tomcat6/data/solr/dev/data
/usr/local/tomcat6/data/solr/prod
/usr/local/tomcat6/data/solr/prod/conf
/usr/local/tomcat6/data/solr/prod/data

Now we should copy the solr "war" file into position for deployment. Go to the directory where you decompressed solr in an earlier step, and go into the dist subdirectory. For instance: apache-solr-1.2.0/dist. 

cp apache-solr-1.2.0.war /usr/local/tomcat6/data/solr

Now, in /usr/local/tomcat6/conf/Catalina/localhost we need to create and save two files which will be read the next time you start Tomcat, and (hopefully) properly deploy Solr. Use a text editor of your choice and create these two files in the /Catalina/localhost subdirectory. 

cd /usr/local/tomcat6/conf/Catalina/localhost

solrdev.xml 

<Context docBase="/usr/local/tomcat6/data/solr/apache-solr-1.2.0.war" debug="0" crossContext="true">
<Environment name="solr/home" type="java.lang.String" value="/usr/local/tomcat6/data/solr/dev" override="true" />
</Context>

solrprod.xml 

<Context docBase="/usr/local/tomcat6/data/solr/apache-solr-1.2.0.war" debug="0" crossContext="true">
<Environment name="solr/home" type="java.lang.String" value="/usr/local/tomcat6/data/solr/prod" override="true" />
</Context>

There are some sample configuration files which come with the Solr distribution you downloaded. Let's copy those into their proper position. Go to the working directory where you downloaded solr, and into the /example/solr/conf subdirectory: /apache-solr-1.2.0/example/solr/conf. You should see something like this: 
admin-extra.html  schema.xml    solrconfig.xml  synonyms.txt
protwords.txt     scripts.conf  stopwords.txt   xslt
Copy everything here to your development solr configuration directory: 

cp -R * /usr/local/tomcat6/data/solr/dev/conf

Do the same for your production location also: 

cp -R * /usr/local/tomcat6/data/solr/prod/conf

Time to test. Everything should now be in place. Sacrifice a chicken and restart Tomcat: 

cd /usr/local/tomcat6/bin
./startup.sh

Go to your browser and type this: 

http://localhost:8080/solrprod

and also: 

http://localhost:8080/solrdev

At this point you should see a "Welcome to Solr!" message with a "Solr Admin" link. If you can click the link and see an example search interface, you've probably successfully installed Solr.
