Prepare the environment

A Linux cloud host with JDK 8 installed

Set up the environment

  • Download:

wget mirror.bit.edu.cn/apache/luce… (download the Solr 7.2.1 archive into /usr/local)

  • Unpack the archive

tar -zxf solr-7.2.1.tgz

  • Start Solr

cd /usr/local/solr-7.2.1/bin
./solr start -p 30004

  • Check whether the startup is successful

./solr status

  • Access

Visit http://IP:port/solr/#/ in a browser
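
For example, with the port chosen above, the admin console address would look like http://<server-IP>:30004/solr/#/ (replace <server-IP> with your host's address).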

Create the core

The ./solr status output shows the solr_home path; go to that directory, /usr/local/solr-7.2.1/server/solr

cd configsets/

Copy the core template provided by Solr and place it in the solr directory

cp -r sample_techproducts_configs/ ../

Rename the core

mv sample_techproducts_configs guava_item

Go to the Solr Web console and create the core there
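
If you prefer not to use the Web console, the core can also be created through Solr's CoreAdmin API. A minimal sketch, assuming the port 30004 and the guava_item directory prepared above (adjust host and names to your setup):

curl "http://localhost:30004/solr/admin/cores?action=CREATE&name=guava_item&instanceDir=guava_item"

The instanceDir must already contain the conf directory copied in the previous steps.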

Chinese word segmentation

For Chinese word segmentation, this setup uses the smartcn analyzer that ships with Solr

Go to the /usr/local/solr-7.2.1/contrib/analysis-extras/lucene-libs directory

cp lucene-analyzers-smartcn-7.2.1.jar /usr/local/solr-7.2.1/server/solr-webapp/webapp/WEB-INF/lib

Go to the newly created core directory. In this example, the core directory is /usr/local/solr-7.2.1/server/solr/guava_item

Go to the conf directory and change the managed-schema configuration file

Add the Chinese field type to this file; for the details, refer to the blog: blog.csdn.net/a897180673/…
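
Since the linked post carries the details, only a hedged sketch is given here: a typical field type definition for the smartcn tokenizer placed inside the <schema> element of managed-schema (the name text_cn is an example, not taken from the original post):

<!-- text_cn is an example field type name; adjust to your schema -->
<fieldType name="text_cn" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="org.apache.lucene.analysis.cn.smart.HMMChineseTokenizerFactory"/>
  </analyzer>
</fieldType>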

Note: Solr must be restarted, because the newly added jar is not yet loaded into memory by the running instance

In /usr/local/solr-7.2.1/bin, run ./solr restart -p 30004

Revisit the Web console to verify that Chinese word segmentation works
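
One way to check, assuming the text_cn field type from the sketch above: open the core's Analysis page in the console, select text_cn as the FieldType, and enter some Chinese text. The same check can be run over HTTP with the field analysis handler:

curl "http://localhost:30004/solr/guava_item/analysis/field?analysis.fieldtype=text_cn&analysis.fieldvalue=中华人民共和国"

If segmentation works, the response lists the individual Chinese terms produced by the tokenizer.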