Running aggregations on an analyzed field.
Let's look at a request first:

GET /test_index/test_type/_search
{
"aggs": {
"group_by_test_field": {
"terms": {
"field": "test_field"
}
}
}
}
Response:

{
"error": {
"root_cause": [
{
"type": "illegal_argument_exception",
"reason": "Fielddata is disabled on text fields by default. Set fielddata=true on [test_field] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory."
}
],
"type": "search_phase_execution_exception",
"reason": "all shards failed",
"phase": "query",
"grouped": true,
"failed_shards": [
{
"shard": 0,
"index": "test_index",
"node": "f57uV91xS_GRTQS2Ho81rg",
"reason": {
"type": "illegal_argument_exception",
"reason": "Fielddata is disabled on text fields by default. Set fielddata=true on [test_field] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory."
}
}
],
"caused_by": {
"type": "illegal_argument_exception",
"reason": "Fielddata is disabled on text fields by default. Set fielddata=true on [test_field] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory."
}
},
"status": 400
}
The request aggregates on test_field, and it fails at execution time. The error essentially says: fielddata is disabled on text fields by default, and you must set fielddata=true on the field to load fielddata into memory (by uninverting the inverted index) before you can aggregate on an analyzed field; note that this can use significant memory.
So, to run aggregations on an analyzed field, you need to set fielddata=true.
Setting fielddata to true
You can update the mapping in place; there is no need to delete and recreate the index:

POST /test_index/_mapping/test_type
{
"properties": {
"test_field":{
"type": "text",
"fielddata": true
}
}
}
Once that's done, check the mapping:

GET /test_index/_mapping/test_type
Response:

{
"test_index": {
"mappings": {
"test_type": {
"properties": {
"test_field": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
},
"fielddata": true
}
}
}
}
}
}
With fielddata enabled, run the original aggregation request again:

GET /test_index/test_type/_search
{
"size": 0,
"aggs": {
"group_by_test_field": {
"terms": {
"field": "test_field"
}
}
}
}
Response:

{
"took": 26,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"failed": 0
},
"hits": {
"total": 3,
"max_score": 0,
"hits": []
},
"aggregations": {
"group_by_test_field": {
"doc_count_error_upper_bound": 0,
"sum_other_doc_count": 0,
"buckets": [
{
"key": "test",
"doc_count": 3
}
]
}
}
}
Now the aggregation runs successfully.
Aggregating with the built-in keyword sub-field
Looking at this index's _mapping, test_field has a built-in sub-field, test_field.keyword. This sub-field is not analyzed, so you can aggregate on it without setting fielddata=true:

GET /test_index/test_type/_search
{
  "size": 0,
  "aggs": {
    "group_by_test_field": {
      "terms": {
        "field": "test_field.keyword"
      }
    }
  }
}
Response:

{
"took": 8,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"failed": 0
},
"hits": {
"total": 3,
"max_score": 0,
"hits": []
},
"aggregations": {
"group_by_test_field": {
"doc_count_error_upper_bound": 0,
"sum_other_doc_count": 0,
"buckets": [
{
"key": "test",
"doc_count": 3
}
]
}
}
}
One thing to note: the built-in keyword sub-field also has an ignore_above property, which defaults to 256. Strings longer than ignore_above are not indexed into the keyword sub-field at all (the full value is still kept in _source), so they will never show up in aggregations on it. Either keep field values within 256 characters or set ignore_above yourself.
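As a rough sketch of this behavior (a plain-Python simulation, not Elasticsearch code): with ignore_above, an over-long value is skipped entirely rather than truncated, so it never contributes a bucket to a terms aggregation on the keyword sub-field.

```python
from collections import Counter

IGNORE_ABOVE = 256  # default for the auto-generated keyword sub-field

def keyword_terms_agg(values, ignore_above=IGNORE_ABOVE):
    """Simulate a terms aggregation on a keyword sub-field:
    values longer than ignore_above are skipped, not truncated."""
    indexed = [v for v in values if len(v) <= ignore_above]
    return Counter(indexed)

docs = ["test", "test", "test", "x" * 300]  # the 300-char value exceeds ignore_above
print(keyword_terms_agg(docs))  # Counter({'test': 3}) -- the long value is absent
```

Raising ignore_above in the mapping (or shortening the values) is what makes such long strings aggregatable again.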
How analyzed fields and fielddata work
If a field is not analyzed, doc values are generated for it automatically at index time, and aggregations on that field automatically run against those doc values.
Analyzed fields have no doc values: at index time, doc values are not built for an analyzed field, because after analysis they would take up far too much space. That is why aggregating on analyzed fields is unsupported by default.
To aggregate on an analyzed field, you must enable and use fielddata, which lives entirely in memory. Its structure is similar to doc values, but fielddata is loaded into memory, and the aggregation on the analyzed field then runs against that in-memory fielddata. As the error message from the first request warned, this can consume a lot of memory.
Why must fielddata live in memory? Because aggregating analyzed strings by term requires more complex algorithms and operations, and running those against disk and the OS cache would perform poorly.
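To make the "uninverting" idea concrete, here is a toy sketch in Python (purely illustrative; the real Lucene structures are far more compact): starting from an inverted index (term → doc IDs), fielddata flips it into a doc → terms mapping held in memory, which is exactly what a terms aggregation then walks.

```python
from collections import defaultdict, Counter

# Toy inverted index for an analyzed field: term -> set of doc IDs
inverted_index = {
    "hello": {0, 2},
    "test":  {0, 1, 2},
}

def uninvert(inv):
    """Build a fielddata-like structure: doc ID -> list of terms (in memory)."""
    fielddata = defaultdict(list)
    for term, doc_ids in inv.items():
        for doc_id in doc_ids:
            fielddata[doc_id].append(term)
    return fielddata

def terms_agg(fielddata):
    """Count how many docs contain each term, like a terms aggregation."""
    counts = Counter()
    for terms in fielddata.values():
        counts.update(terms)
    return counts

fd = uninvert(inverted_index)
print(terms_agg(fd))  # Counter({'test': 3, 'hello': 2})
```

Note that the uninverted structure duplicates every term for every document containing it, which is why fielddata on a high-cardinality analyzed field can blow up memory usage.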