1. Foreword
Recently I have been thinking about a site search feature. I learned Lucene before, then heard about Solr, and Elasticsearch is also built on top of Lucene.
I looked at a couple of Elasticsearch tutorials and found that most of them are Linux-based, but I'm not very familiar with Linux and rarely use it beyond a few simple commands, so I'll dig into Linux properly when I actually need it.
I found a well-written tutorial:
Blog.csdn.net/laoyang360/…
The first step is to install Elasticsearch. The tutorial above provides a one-click setup script; here I just record my own installation process.
PS (update, March 22, 2018 18:58:12): I don't recommend installing Elasticsearch on Windows, because it's a bit of a hassle and has a few minor issues. Learn Elasticsearch on Linux instead; if you don't want to pay for a server, you can use a virtual machine.
2. Installing Elasticsearch on Windows (not recommended)
2.1 Installing Elasticsearch
To run Elasticsearch as a Windows service, version 2.3.3 is available, so that is the version I installed.
You can find the corresponding version via the search box on the official website.
For running it as a Windows service, the link given in the tutorial is dead; I found another one:
www.cnblogs.com/viaiu/p/571…
2.2 Installing the head plugin
To install the head plugin, go to the appropriate directory and use the following command:
plugin install mobz/elasticsearch-head
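Once the plugin is installed and Elasticsearch is running, the head UI for a 2.x site plugin is usually reachable in the browser at:
http://localhost:9200/_plugin/head/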
2.3 Installing Kibana and Logstash as services
The links given were Stack Overflow questions and YouTube videos. Stack Overflow didn't solve the problem I hit during setup, and I couldn't get onto YouTube at the time.
Later, I found another tutorial, which also went well:
Segmentfault.com/a/119000001…
The Logstash version is 2.3.3, matching Elasticsearch.
The Kibana version to download is 4.5.0.
If the following information is displayed, the installation was successful.
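To quickly sanity-check Logstash once it is unpacked, a minimal pipeline can be run from the Logstash directory. This is only a sketch; the file name logstash-simple.conf and the stdin/elasticsearch plugins are my own choices, not something from the tutorial:
# logstash-simple.conf: read lines from stdin and index them into the local Elasticsearch
input { stdin { } }
output {
elasticsearch { hosts => ["localhost:9200"] }
stdout { codec => rubydebug }
}
Start it with bin/logstash -f logstash-simple.conf, type a line, and it should show up in Elasticsearch.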
2.4 Installing Shield
plugin install license
plugin install shield
Restart, and execute:
Add an administrator:
bin/shield/esusers useradd adminName -r admin
Add a user for Kibana
esusers useradd kibanaserver -r kibana4_server
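If I remember the Shield tooling correctly, you can verify the users you just created with the esusers list command (treat this as an assumption and check it against the Shield documentation):
bin/shield/esusers list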
In the Kibana configuration file (kibana.yml), add:
kibana_elasticsearch_username: kibanaserver # the Kibana service will use this username to access the Elasticsearch server
kibana_elasticsearch_password: zhongfucheng # your password
You can only log in to Kibana once the kibanaserver account has been configured.
2.5 Installing a Chinese tokenizer (IK)
The tutorial gives the link to use; I'll just summarize the main points here:
The matching IK tokenizer version is 1.9.3. The article builds it with mvn package, so I downloaded the source package from GitHub.
Unzip the package into the current directory, then copy the conf data over to the target directory (just the contents, not the containing folder!).
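After restarting Elasticsearch, the tokenizer can be checked with the _analyze API, using one of the analyzers the IK plugin registers (ik_max_word / ik_smart; verify the exact analyzer names against the plugin's README):
curl -XGET 'http://localhost:9200/_analyze?analyzer=ik_max_word&text=中华人民共和国'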
2.6 Installing a pinyin tokenizer
I had already downloaded the Chinese tokenizer, and only found the pinyin tokenizer while following a later part of the tutorial. Installing the pinyin tokenizer is very similar to installing the Chinese one.
Version 1.7.3 matches Elasticsearch 2.3.3.
That's it for Elasticsearch on Windows.
3. Installing Elasticsearch on Linux
These are my notes from when I was setting it up for a project.
3.1 Downloading Elasticsearch
Elasticsearch is easy to download and the official website provides a search for releases, so I won't post the link here; just go to the official website, or see my Elasticsearch study notes.
I downloaded version 2.3.3 to stay consistent with the version I used when developing on Windows.
3.2 Installing Elasticsearch
tar -xzvf elasticsearch-2.3.3.tar.gz
Then just go to the bin directory and run it.
If you are running as root, you need to start Elasticsearch like this:
./elasticsearch -d -Des.insecure.allow.root=true
You can now check that it is running with the following request:
curl -X GET 'http://localhost:9200'
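If everything is fine, the response is a small JSON document describing the node, roughly like this (the node name and build details will differ):
{
"name" : "Some Node",
"cluster_name" : "elasticsearch",
"version" : {
"number" : "2.3.3",
...
},
"tagline" : "You Know, for Search"
}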
To access it over the network, you need to modify the configuration file; reference: blog.csdn.net/u012599988/…
You also need to open the port on the ECS server so that it can be reached:
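For reference, a minimal sketch of the relevant settings in config/elasticsearch.yml for a 2.x node (binding to 0.0.0.0 exposes the node to the network, so keep it behind the security group/firewall):
network.host: 0.0.0.0
http.port: 9200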
3.2.1 Installing the head plugin
When installing the head plugin, you need to change the user and group that Elasticsearch runs under, otherwise it won't let you install it. The commands are:
Add the user and group:
groupadd elasticsearch
useradd elasticsearch -g elasticsearch -p 123456
Change the folder ownership:
chown -R elasticsearch:elasticsearch elasticsearch-2.3.3
After that, you can run the command to install the head plugin:
./plugin install mobz/elasticsearch-head
Do not install the Shield plugin immediately after installing the head plugin. Create an index through the head plugin first!
Otherwise, if you install the Shield plugin and then open the head plugin, it will not be able to connect to the node!
This cost me a long time! A lot of people online have run into the same situation without a good answer; they all talk about configuration files.
I found the answer in an issue someone had raised on GitHub. Reference: https://github.com/mobz/elasticsearch-head/issues/191#issuecomment-132636493
Remember: create the index in the head plugin first, then install the Shield plugin, otherwise the head plugin will not be able to connect!
3.2.2 Installing the Shield security plugin
I had already installed Shield during Windows development, so to stay consistent I installed it here as well.
Enter the command:
plugin install license
plugin install shield
After installing, configure an administrator user:
bin/shield/esusers useradd adminName -r admin
If there is anything wrong in the article, please point it out so we can discuss it. If you prefer reading technical articles on WeChat and want more Java resources, you can follow the WeChat public account: Java3y
4. REST API
In the previous chapters we installed Elasticsearch and its various plugins. Now that the concepts are in place, it's time to look at how to actually use Elasticsearch.
Reference links:
Blog.csdn.net/laoyang360/…
The Elasticsearch REST API is also covered in this tutorial:
www.yiibai.com/elasticsear…
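As a quick taste of the REST API before moving on to the Java client, here are a few basic curl calls against the blog index and article type used in the demo later in this post (the index/type names and document body are just illustrative):
# index (create or overwrite) a document with id 1
curl -XPUT 'http://localhost:9200/blog/article/1' -d '{"title": "git getting started", "content": "learn git"}'
# fetch the document back by id
curl -XGET 'http://localhost:9200/blog/article/1'
# full-text search on the title field
curl -XGET 'http://localhost:9200/blog/article/_search?q=title:git'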
5. Java API
Since we use Java in the end, we need to know how to connect to Elasticsearch from Java. I found a few tutorials:
www.cnblogs.com/wenbronk/p/…
www.cnblogs.com/tutu21ybz/p…
Later I found a more systematic tutorial and recommend this:
Blog.csdn.net/napoay/arti…
I followed the tutorial and wrote a demo, making plenty of mistakes along the way. Here is the code:
import com.carrotsearch.hppc.cursors.ObjectObjectCursor;
import org.elasticsearch.action.admin.indices.close.CloseIndexResponse;
import org.elasticsearch.action.admin.indices.create.CreateIndexRequest;
import org.elasticsearch.action.admin.indices.create.CreateIndexResponse;
import org.elasticsearch.action.admin.indices.delete.DeleteIndexResponse;
import org.elasticsearch.action.admin.indices.exists.indices.IndicesExistsRequest;
import org.elasticsearch.action.admin.indices.exists.indices.IndicesExistsResponse;
import org.elasticsearch.action.admin.indices.exists.types.TypesExistsResponse;
import org.elasticsearch.action.admin.indices.open.OpenIndexResponse;
import org.elasticsearch.action.bulk.BulkRequestBuilder;
import org.elasticsearch.action.delete.DeleteResponse;
import org.elasticsearch.action.deletebyquery.DeleteByQueryAction;
import org.elasticsearch.action.deletebyquery.DeleteByQueryRequestBuilder;
import org.elasticsearch.action.get.GetResponse;
import org.elasticsearch.action.get.MultiGetItemResponse;
import org.elasticsearch.action.get.MultiGetResponse;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.action.update.UpdateRequest;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.cluster.metadata.MappingMetaData;
import org.elasticsearch.common.collect.ImmutableOpenMap;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.script.Script;
import org.elasticsearch.script.ScriptService;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.SearchHits;
import org.elasticsearch.shield.ShieldPlugin;
import org.junit.Test;
import java.io.*;
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.List;
import java.util.concurrent.ExecutionException;
import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;
import static org.elasticsearch.index.query.QueryBuilders.termQuery;
/** * Created by ozc on 2017/11/5. */
public class ElasticsearchDemo {
/**
 * create
 * @throws UnknownHostException
*/
@Test
public void CreateIndex() throws UnknownHostException {
// Connect to the client
Client client = TransportClient.builder().build()
.addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("127.0.0.1"), 9300));
// Convert the data into JSON strings and load them as collections
List<String> jsonData = DataFactory.getInitJsonData();
// Create blog index, article type, data is the above JSON string
for (int i = 0; i < jsonData.size(); i++) {
IndexResponse response = client.prepareIndex("blog", "article", "1").setSource(jsonData.get(i)).get(); // note: the fixed id "1" means each iteration overwrites the same document
if (response.isCreated()) {
System.out.println("Created successfully!");
}
}
client.close();
}
/**
 * query
 * @throws UnknownHostException
*/
@Test
public void selectIndex() throws UnknownHostException {
Settings settings = Settings.settingsBuilder()
.put("cluster.name"."elasticsearch")
.put("shield.user"."zhongfucheng:zhongfucheng")
.build();
TransportClient client = TransportClient.builder()
.addPlugin(ShieldPlugin.class)
.settings(settings).build();
client.addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("127.0.0.1"), 9300));
// Query the document containing the Hibernate keyword in the title field
QueryBuilder qb1 = termQuery("title", "hibernate");
// Query documents whose title or content field contains the git keyword
QueryBuilder qb2 = QueryBuilders.multiMatchQuery("git", "title", "content");
SearchResponse response = client.prepareSearch("blog").setTypes("article").setQuery(qb2).execute()
.actionGet();
SearchHits hits = response.getHits();
if (hits.totalHits() > 0) {
for (SearchHit hit : hits) {
System.out.println("score:"+hit.getScore()+":\t"+hit.getSource());// .get("title")}}else {
System.out.println("I got zero results."); }}/** * Update * with updateRequest@throws IOException
* @throws ExecutionException
* @throws InterruptedException
*/
@Test
public void update1() throws IOException, ExecutionException, InterruptedException {
// Connect to the client
Client client = TransportClient.builder().build()
.addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("127.0.0.1"), 9300));
UpdateRequest uRequest = new UpdateRequest();
uRequest.index("blog");
uRequest.type("article");
uRequest.id("1");
uRequest.doc(jsonBuilder().startObject().field("content", "Learning objective: understand the meaning of Java generics ssss").endObject());
client.update(uRequest).get();
}
/**
 * Update with a script (the configuration file needs to be changed to allow inline scripts)
 * @throws IOException
* @throws ExecutionException
* @throws InterruptedException
*/
@Test
public void update2() throws IOException, ExecutionException, InterruptedException {
// Connect to the client
Client client = TransportClient.builder().build()
.addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("127.0.0.1"), 9300));
client.prepareUpdate("blog"."article"."1")
.setScript(new Script("Ctx._source. title = "git entry "", ScriptService.ScriptType.INLINE, null.null))
.get();
}
/**
 * Update with doc
 * @throws IOException
* @throws ExecutionException
* @throws InterruptedException
*/
@Test
public void update3() throws IOException, ExecutionException, InterruptedException {
// Connect to the client
Client client = TransportClient.builder().build()
.addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("127.0.0.1"), 9300));
client.prepareUpdate("blog"."article"."1")
.setDoc(jsonBuilder().startObject().field("content"."Comparison between SVN and Git 222222...).endObject()).get();
}
/**
 * Use UpdateRequest to add a new field
 * @throws IOException
* @throws ExecutionException
* @throws InterruptedException
*/
@Test
public void update4() throws IOException, ExecutionException, InterruptedException {
// Connect to the client
Client client = TransportClient.builder().build()
.addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("127.0.0.1"), 9300));
UpdateRequest updateRequest = new UpdateRequest("blog", "article", "1")
.doc(jsonBuilder().startObject().field("commet", "0").endObject());
client.update(updateRequest).get();
}
/**
 * Update with UpdateRequest, creating the document (upsert) if it does not exist
 * @throws IOException
* @throws ExecutionException
* @throws InterruptedException
*/
@Test
public void update5() throws IOException, ExecutionException, InterruptedException {
// Connect to the client
Client client = TransportClient.builder().build()
.addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("127.0.0.1"), 9300));
IndexRequest indexRequest = new IndexRequest("blog", "article", "10").source(jsonBuilder().startObject()
.field("title", "Git installed 10").field("content", "Git... 10").endObject());
UpdateRequest uRequest2 = new UpdateRequest("blog", "article", "10").doc(
jsonBuilder().startObject().field("title", "Git installed").field("content", "Git...").endObject())
.upsert(indexRequest);
client.update(uRequest2).get();
}
/**
 * Delete a specific document (by index, type and id)
 * @throws IOException
* @throws ExecutionException
* @throws InterruptedException
*/
@Test
public void deleteSpecificIndex() throws IOException, ExecutionException, InterruptedException {
// Connect to the client
Client client = TransportClient.builder().build()
.addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("127.0.0.1"), 9300));
DeleteResponse dResponse = client.prepareDelete("blog", "article", "10").execute()
.actionGet();
if (dResponse.isFound()) {
System.out.println("Deleted successfully");
} else {
System.out.println("Delete failed"); }}/** * drop the entire index *@throws IOException
* @throws ExecutionException
* @throws InterruptedException
*/
@Test
public void deleteIndex() throws IOException, ExecutionException, InterruptedException {
// The name of the index library to drop
String indexName = "zhognfucheng";
if (!isIndexExists(indexName)) {
System.out.println(indexName + " not exists");
} else {
Client client = TransportClient.builder().build().addTransportAddress(
new InetSocketTransportAddress(InetAddress.getByName("127.0.0.1"),
9300));
DeleteIndexResponse dResponse = client.admin().indices().prepareDelete(indexName)
.execute().actionGet();
if (dResponse.isAcknowledged()) {
System.out.println("delete index "+indexName+" successfully!");
} else {
System.out.println("Fail to delete index " + indexName);
}
}
}
// Create an index library
@Test
public void createIndex() {
// The name of the index library to be created
String indexName = "shcool";
try {
Client client = TransportClient.builder().build().addTransportAddress(
new InetSocketTransportAddress(InetAddress.getByName("127.0.0.1"), 9300));
// Create index library
if (isIndexExists(indexName)) {
System.out.println("Index " + indexName + " already exits!");
} else {
CreateIndexRequest cIndexRequest = new CreateIndexRequest(indexName);
CreateIndexResponse cIndexResponse = client.admin().indices().create(cIndexRequest)
.actionGet();
if (cIndexResponse.isAcknowledged()) {
System.out.println("create index successfully!");
} else {
System.out.println("Fail to create index!"); }}}catch(UnknownHostException e) { e.printStackTrace(); }}/** * Export json from Elasticsearch */
@Test
public void ElasticSearchBulkOut(a) {
try {
// initialization
Settings settings = Settings.settingsBuilder()
.put("cluster.name", "elasticsearch").build(); // cluster.name as configured in elasticsearch.yml
// Connect to the client
Client client = TransportClient.builder().settings(settings).build()
.addTransportAddress(new InetSocketTransportAddress(
InetAddress.getByName("127.0.0.1"), 9300));
// Match all queries
QueryBuilder qb = QueryBuilders.matchAllQuery();
SearchResponse response = client.prepareSearch("blog")
.setTypes("article").setQuery(qb)
.execute().actionGet();
// Get the hit record
SearchHits resultHits = response.getHits();
// Walk through the hit record and write to the file
File article = new File("C:\\ElasticsearchDemo\\src\\java\\file\\bulk.txt");
FileWriter fw = new FileWriter(article);
BufferedWriter bfw = new BufferedWriter(fw);
if (resultHits.getHits().length == 0) {
System.out.println("Got zero data!");
} else {
for (int i = 0; i < resultHits.getHits().length; i++) {
String jsonStr = resultHits.getHits()[i]
.getSourceAsString();
System.out.println(jsonStr);
bfw.write(jsonStr);
bfw.write("\n");
}
}
bfw.close();
fw.close();
} catch (UnknownHostException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
}
/**
 * Import into Elasticsearch
 */
@Test
public void ElasticSearchBulkInput(a) {
try {
Settings settings = Settings.settingsBuilder()
.put("cluster.name"."elasticsearch").build();// cluster.name is configured in elasticSearch. yml
Client client = TransportClient.builder().settings(settings).build()
.addTransportAddress(new InetSocketTransportAddress(
InetAddress.getByName("127.0.0.1"), 9300));
File article = new File("C:\\ElasticsearchDemo\\src\\java\\file\\bulk.txt");
FileReader fr=new FileReader(article);
BufferedReader bfr=new BufferedReader(fr);
String line = null;
BulkRequestBuilder bulkRequest=client.prepareBulk();
int count=0;
while ((line = bfr.readLine()) != null) {
bulkRequest.add(client.prepareIndex("test", "article").setSource(line));
if (count % 10 == 0) { // flush a batch every 10 lines (note: the demo does not reset the builder between batches)
bulkRequest.execute().actionGet();
}
count++;
//System.out.println(line);
}
bulkRequest.execute().actionGet();
bfr.close();
fr.close();
} catch (UnknownHostException e) {
e.printStackTrace();
} catch (FileNotFoundException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
}
/**
 * Check whether an index exists by name
 * @param indexName
 * @return
 */
public boolean isIndexExists(String indexName) {
boolean flag = false;
try {
Client client = TransportClient.builder().build().addTransportAddress(
new InetSocketTransportAddress(InetAddress.getByName("127.0.0.1"), 9300));
IndicesExistsRequest inExistsRequest = new IndicesExistsRequest(indexName);
// Check whether multiple index names exist
//IndicesExistsResponse indexResponse = client.admin().indices().prepareExists("blog","blog1").execute().actionGet();
IndicesExistsResponse inExistsResponse = client.admin().indices()
.exists(inExistsRequest).actionGet();
if (inExistsResponse.isExists()) {
flag = true;
} else {
flag = false;
}
} catch (UnknownHostException e) {
e.printStackTrace();
}
return flag;
}
/**
 * Delete all documents under a type (like deleting all records in a table).
 * PS: the delete-by-query plugin must be installed and its dependency added to the POM.
 * If you run Elasticsearch as a Windows service, you need to restart the service from the bin directory after installing the plugin.
 */
@Test
public void deleteByQuery(a) throws UnknownHostException {
Client client = TransportClient.builder().build().addTransportAddress(
new InetSocketTransportAddress(InetAddress.getByName("127.0.0.1"), 9300));
String deletebyquery = "{\"query\": {\"match_all\": {}}}";
DeleteByQueryRequestBuilder response = new DeleteByQueryRequestBuilder(client,DeleteByQueryAction.INSTANCE);
response.setIndices("blog").setTypes("article").setSource(deletebyquery)
.execute()
.actionGet();
}
//-------------------------------------------------------------------
/**
 * Three-level nested queries in Java seem rarely used; link:
 * http://blog.csdn.net/napoay/article/details/52060659
 * <p>
 * Searching for documents with the same parent id also seems rarely used; link:
 * http://blog.csdn.net/napoay/article/details/52118408
 * <p>
 * Cluster configuration is something I do use now; link:
 * http://blog.csdn.net/napoay/article/details/52202877
 */
//-------------------------------------------------------------------
/**
 * Remove a field from the document with index "blog", type "article", id "1".
 * The following needs to be added to the configuration file, otherwise an exception occurs:
 * script.inline: on
 * script.indexed: on
 * script.engine.groovy.inline.aggs: on
 * @throws UnknownHostException
*/
@Test
public void deleteField() throws UnknownHostException {
Client client = TransportClient.builder().build().addTransportAddress(
new InetSocketTransportAddress(InetAddress.getByName("127.0.0.1"), 9300));
client.prepareUpdate("blog"."article"."1").setScript(new Script("ctx._source.remove(\"title\")",ScriptService.ScriptType.INLINE, null.null)).get();
// Delete attributes from attributes
//client.prepareUpdate("test", "document", "1").setScript(new Script( "ctx._source.processInstance.remove(\"id\")",ScriptService.ScriptType.INLINE, null, null)).get();
}
/**
 * After adding the Elasticsearch Shield security plugin, the way the client connects changes.
 * The Maven coordinates given on the official site could not be found, so I could only download the JAR package.
 * Both Kibana and Elasticsearch need to be configured.
 * Link: http://blog.csdn.net/sd4015700/article/details/50427852
 * Current Elasticsearch credentials: zhongfucheng:zhongfucheng
 * Current Kibana credentials: kibanaserver:zhongfucheng (ps: this part also requires modifying the configuration files, see the link above)
 */
@Test
public void changeClient() throws UnknownHostException {
Settings settings = Settings.settingsBuilder()
.put("cluster.name"."elasticsearch")
.put("shield.user"."zhongfucheng:zhongfucheng")
.build();
TransportClient client = TransportClient.builder()
.addPlugin(ShieldPlugin.class)
.settings(settings).build();
client.addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("127.0.0.1"), 9300));
System.out.println(client);
}
/**
 * Check whether a type exists
 * @throws UnknownHostException
*/
@Test
public void isTypeExists() throws UnknownHostException {
Settings settings = Settings.settingsBuilder()
.put("cluster.name"."elasticsearch")
.put("shield.user"."zhongfucheng:zhongfucheng")
.build();
TransportClient client = TransportClient.builder()
.addPlugin(ShieldPlugin.class)
.settings(settings).build();
client.addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("127.0.0.1"), 9300));
// blog is the index and article is the type
TypesExistsResponse typeResponse = client.admin().indices()
.prepareTypesExists("blog").setTypes("article")
.execute().actionGet();
System.out.println(typeResponse.isExists());
}
/** * close index */
@Test
public void closeIndex() throws UnknownHostException {
Settings settings = Settings.settingsBuilder()
.put("cluster.name"."elasticsearch")
.put("shield.user"."zhongfucheng:zhongfucheng")
.build();
TransportClient client = TransportClient.builder()
.addPlugin(ShieldPlugin.class)
.settings(settings).build();
client.addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("127.0.0.1"), 9300));
CloseIndexResponse cIndexResponse = client.admin().indices().prepareClose("shcool")
.execute().actionGet();
if (cIndexResponse.isAcknowledged()) {
System.out.println("Closing index succeeded"); }}/** * open index *@throws UnknownHostException
*/
@Test
public void openIndex() throws UnknownHostException {
Settings settings = Settings.settingsBuilder()
.put("cluster.name"."elasticsearch")
.put("shield.user"."zhongfucheng:zhongfucheng")
.build();
TransportClient client = TransportClient.builder()
.addPlugin(ShieldPlugin.class)
.settings(settings).build();
client.addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("127.0.0.1"), 9300));
OpenIndexResponse oIndexResponse = client.admin().indices()
.prepareOpen("shcool")
.execute().actionGet();
System.out.println(oIndexResponse.isAcknowledged());
}
/**
 * Get a collection of documents.
 * You can fetch documents with different ids from the same index and type (or from different indices).
 */
@Test
public void getDocCollection() throws UnknownHostException {
Settings settings = Settings.settingsBuilder()
.put("cluster.name"."elasticsearch")
.put("shield.user"."zhongfucheng:zhongfucheng")
.build();
TransportClient client = TransportClient.builder()
.addPlugin(ShieldPlugin.class)
.settings(settings).build();
client.addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("127.0.0.1"), 9300));
MultiGetResponse multiGetItemResponses = client.prepareMultiGet()
.add("blog"."article"."1") / / comment 1
Add ("twitter", "tweet", "2", "3", "4") // Comment 2
.add("test"."article"."AV-K5x1NAQtVzQCf247Q") / / comment 3
.get();
for (MultiGetItemResponse itemResponse : multiGetItemResponses) { / / comment 4
GetResponse response = itemResponse.getResponse();
if (response.isExists()) { / / comment 5
String json = response.getSourceAsString(); / / comment 6System.out.println(json); }}/** Note 1: Get a document with a single ID. Note 2: Pass in multiple ids to get multiple documents from the same index/type name. Note 3: Documents in different indexes can be retrieved simultaneously. Note 4: Iterate over the result set. Note 5: Verify that the document exists. Note 6: Get the document source. */
}
/**
 * Get every type and its mapping under an index
 * @throws IOException
*/
@Test
public void getIndexTypeAndMapping() throws IOException {
Settings settings = Settings.settingsBuilder()
.put("cluster.name"."elasticsearch")
.put("shield.user"."zhongfucheng:zhongfucheng")
.build();
TransportClient client = TransportClient.builder()
.addPlugin(ShieldPlugin.class)
.settings(settings).build();
client.addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("127.0.0.1"), 9300));
ImmutableOpenMap<String, MappingMetaData> mappings = client.admin().cluster().prepareState().execute()
.actionGet().getState().getMetaData().getIndices().get("blog").getMappings();
for (ObjectObjectCursor<String, MappingMetaData> cursor : mappings) {
System.out.println(cursor.key); // Each type under the index
System.out.println(cursor.value.getSourceAsMap()); // Mapping for each type
}
}
/**
 * Elasticsearch also provides aggregations for analysis, similar to grouping and statistics functions in MySQL. Link:
 * http://blog.csdn.net/napoay/article/details/53484730
 */
}
The code above follows blog.csdn.net/napoay/arti… up to this point.
I ran into several problems while practicing with the demo:
- With Shield authentication enabled, the Maven coordinates could not be downloaded, so I could only import the JAR package manually (the official Maven coordinates could not be found… Shield has since been scrapped).
- I found several articles on the Internet, but they could not solve my problem:
- Blog.csdn.net/sd4015700/a…
- Blog.csdn.net/lvyuan1234/…
- I finally found the answer in a blog post: update the Elasticsearch connection code as follows.
Settings settings = Settings.settingsBuilder()
.put("cluster.name"."elasticsearch")
.put("shield.user"."zhongfucheng:zhongfucheng")
.build();
TransportClient client = TransportClient.builder()
.addPlugin(ShieldPlugin.class)
.settings(settings).build();
client.addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("127.0.0.1"), 9300));
6. Summary
I used Elasticsearch briefly in my own project and ran into quite a few problems. For example, when creating the Elasticsearch client I had to add the setting .put("client.transport.sniff", true), otherwise Elasticsearch was very, very slow to use.
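For reference, a minimal sketch of building the client with sniffing enabled, using the same classes and imports as the demo above (the cluster name and address are whatever your setup uses):
Settings settings = Settings.settingsBuilder()
.put("cluster.name", "elasticsearch")
.put("client.transport.sniff", true) // let the client discover the other nodes of the cluster
.build();
TransportClient client = TransportClient.builder().settings(settings).build()
.addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("127.0.0.1"), 9300));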
I was using Elasticsearch for autocomplete, as shown below:
Elasticsearch provides a suggest feature (the completion suggester) for this. References:
Blog.csdn.net/gumpeng/art…
www.tcao.net/article/86….
Blog.csdn.net/liyantianmi…
The specific steps:
- When creating the mapping, add a field for auto-completion and give it the following mapping settings (a fuller put-mapping sketch follows after this snippet):
// Autocomplete attribute --------
.startObject("suggestName")
.field("type"."completion")
.field("analyzer"."standard")
.field("payloads"."true")
.endObject()
// Autocomplete attributes End --------
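For completeness, here is a sketch of how that fragment fits into a full put-mapping call with the 2.x Java client. The type name "website" and the extra webSiteName field are my own assumptions (based on the String2JSON fields below), not something the tutorial prescribes:
XContentBuilder mapping = jsonBuilder()
.startObject()
.startObject("website") // type name: an assumption for this sketch
.startObject("properties")
.startObject("webSiteName").field("type", "string").endObject()
// Autocomplete attribute --------
.startObject("suggestName")
.field("type", "completion")
.field("analyzer", "standard")
.field("payloads", "true")
.endObject()
// Autocomplete attributes end --------
.endObject()
.endObject()
.endObject();
client.admin().indices().preparePutMapping("blog").setType("website").setSource(mapping).execute().actionGet();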
- When a user adds a new record to the index, set the autocomplete field to our key field:
public static String String2JSON(String... strings) throws IOException {
String suggestName = strings[2];
if (suggestName.length() > 0) {
}
XContentBuilder builder = jsonBuilder()
.startObject()
.field("userId", strings[0])
.field("webSiteAddr", strings[1])
.field("webSiteName", strings[2])
// Autocomplete fields
.field("suggestName", strings[2])
.endObject();
return builder.string();
}
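Finally, a sketch of querying the completion suggester from the 2.x Java client to fetch suggestions for a prefix the user has typed. The suggestion name "site-suggest" and the prefix text are placeholders; verify the exact classes (CompletionSuggestionBuilder, SuggestResponse) against your Elasticsearch version:
// extra imports needed for this sketch:
// import org.elasticsearch.action.suggest.SuggestResponse;
// import org.elasticsearch.search.suggest.Suggest;
// import org.elasticsearch.search.suggest.completion.CompletionSuggestionBuilder;
CompletionSuggestionBuilder suggestion = new CompletionSuggestionBuilder("site-suggest")
.field("suggestName") // the completion field defined in the mapping
.text("Ja") // what the user has typed so far
.size(10);
SuggestResponse suggestResponse = client.prepareSuggest("blog")
.addSuggestion(suggestion)
.execute().actionGet();
// walk the returned options and print the suggested texts
for (Suggest.Suggestion.Entry.Option option : suggestResponse.getSuggest()
.getSuggestion("site-suggest").getEntries().get(0).getOptions()) {
System.out.println(option.getText().string());
}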
The rest of the basics essentially evolved from Lucene. If you haven't learned Lucene, check out my other blog post:
- Lucene is that simple
If there is anything wrong in the article, please point it out so we can discuss it. If you prefer reading technical articles on WeChat and want more Java resources, you can follow the WeChat public account: Java3y