Manages data ingestion from Kafka
Create a new consumer
Request describing how a new consumer should be created
Kafka bootstrap servers
Kafka group ID
Summa `index_name` which will ingest data from Kafka topics
Consumer name, used for referencing the consumer in API calls and configs
List of topics to consume
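Put together, these fields suggest a request shaped roughly like the following proto3 sketch; the field names and numbers are assumptions reconstructed from the descriptions above, not the canonical Summa definitions.

```protobuf
// Hypothetical reconstruction of the consumer-creation request.
message CreateConsumerRequest {
  repeated string bootstrap_servers = 1;  // Kafka bootstrap servers
  string group_id = 2;                    // Kafka group ID
  string index_name = 3;                  // Summa index that will ingest the data
  string consumer_name = 4;               // Name used to reference the consumer in API calls and configs
  repeated string topics = 5;             // List of topics to consume
}
```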
Get a single consumer
Get a list of all consumers
(message has no fields)
Remove a consumer
Manages indices
Attaches an index to the Summa server. Attaching allows you to incorporate and start using downloaded or network indices
Attach index request
Index name for attaching
Attach index engine request
Description of the attached index
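A rough proto3 sketch of the attach request; the field numbers and the engine-request type name are assumptions, not the canonical definitions.

```protobuf
// Hypothetical reconstruction of the attach request.
message AttachIndexRequest {
  string index_name = 1;                      // Index name for attaching
  AttachIndexEngineRequest index_engine = 2;  // Engine-specific attach parameters (hypothetical type name)
}
```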
Commits all collected writes to the index
Stores the state of the index to the storage
Data returned from the commit command
Time spent purely on committing
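A minimal sketch of the commit exchange, assuming the request carries only the index name; both shapes are reconstructions, not the canonical definitions.

```protobuf
// Hypothetical reconstruction of the commit request/response pair.
message CommitIndexRequest {
  string index_name = 1;    // Index whose collected writes should be committed (assumed field)
}

message CommitIndexResponse {
  double elapsed_secs = 1;  // Time spent purely on committing
}
```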
Copy documents from one index to another
Copy documents from one index to another. Their schemas must be compatible
Where documents should be taken from
Where documents should be copied to
How to deal with conflicts on unique fields. It is recommended to set this to `DoNothing` for large updates and to maintain uniqueness in your application
Copy documents response
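A rough proto3 sketch of the copy request, assuming a conflict-strategy enum; names, numbers, and the `OVERWRITE` variant are illustrative assumptions (only `DoNothing` is mentioned above).

```protobuf
// Hypothetical reconstruction of the copy request; the enum is illustrative.
enum ConflictStrategy {
  DO_NOTHING = 0;  // Recommended for large updates
  OVERWRITE = 1;   // Illustrative alternative, not described above
}

message CopyDocumentsRequest {
  string source_index_name = 1;            // Where documents should be taken from
  string target_index_name = 2;            // Where documents should be copied to
  ConflictStrategy conflict_strategy = 3;  // How to deal with conflicts on unique fields
}
```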
Creates a new index from scratch
Request for index creation
Index name
Index engine
Index schema in Tantivy format
Compression for store
Size of store blocks
Optional index fields
Merge policy
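These fields map onto a request shaped roughly like the sketch below; field names, numbers, and the nested type names are assumptions reconstructed from the descriptions, not the canonical definitions.

```protobuf
// Hypothetical reconstruction of the index-creation request.
message CreateIndexRequest {
  string index_name = 1;                 // Index name
  IndexEngineConfig index_engine = 2;    // Index engine (hypothetical type name)
  string schema = 3;                     // Index schema in Tantivy format
  Compression compression = 4;           // Compression for store (hypothetical type name)
  uint32 blocksize = 5;                  // Size of store blocks
  IndexAttributes index_attributes = 6;  // Optional index fields (hypothetical type name)
  MergePolicy merge_policy = 7;          // Merge policy (hypothetical type name)
}
```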
Migrates an index from one engine to another; the target index is created from scratch
Request that changes the index engine. Currently it is only possible to convert `File` engine to `IPFS`
Name of index that will be migrated. It will be left intact after migration.
Name of index that will be created
Target index engine
Response describing the migrated index
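A minimal sketch of the migration exchange; all names, numbers, and nested types are assumptions reconstructed from the field descriptions above.

```protobuf
// Hypothetical reconstruction of the migration request/response pair.
message MigrateIndexRequest {
  string source_index_name = 1;               // Index to migrate; left intact after migration
  string target_index_name = 2;               // Name of the index that will be created
  IndexEngineConfig target_index_engine = 3;  // Target index engine (hypothetical type name)
}

message MigrateIndexResponse {
  IndexDescription index = 1;                 // Description of the migrated index (hypothetical type name)
}
```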
Deletes a single document from the index by its primary key (therefore, the index must have a primary key)
Deletes the index and physically removes files in the case of `FileEngine`
Stream of all documents from the index
Request a stream of all documents from the index
Single document from the index
Gets all existing index aliases
(message has no fields)
Gets index description
Gets all existing index descriptions
(message has no fields)
Adds documents to the index in a streaming way
Adds a document to the index
(message has no fields)
Merges multiple segments into a single one. Used for service purposes
Sets or replaces an existing index alias
If set, equals the previous alias of the index
Removes deletions from all segments
Loads all hot parts of the index into memory
If set to false, only term dictionaries will be warmed, otherwise the entire index will be read.
Time spent in the warming operation
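A minimal sketch of the warmup exchange, assuming the request carries the index name and the flag described above; the shapes are reconstructions, not the canonical definitions.

```protobuf
// Hypothetical reconstruction of the warmup request/response pair.
message WarmupIndexRequest {
  string index_name = 1;  // Index to load into memory (assumed field)
  bool is_full = 2;       // false: warm only term dictionaries; true: read the entire index
}

message WarmupIndexResponse {
  double elapsed_secs = 1;  // Time spent in the warming operation
}
```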
Searches documents in the stored indices
Performs a search in Summa
Analyzes indices
Searches documents in the stored indices
Performs a search in Summa
Used in:
Used in:
Used in:
(message has no fields)
Attach file engine request
Used in:
(message has no fields)
Used in:
Used in:
Used in:
Used in:
Used in:
Total cache size in bytes
Collectors and CollectorOutputs
Used in:
Used in:
Compression library for the store; affects both performance and occupied disk space
Used in:
Used in:
Consumer description
Used in:
Consumer name
Summa `index_name`
Used in:
(message has no fields)
Used in:
Used in:
(message has no fields)
Used in:
(message has no fields)
Used in:
Used in:
(message has no fields)
Used in:
(message has no fields)
Used in:
Used in:
Used in:
Used in:
Used in:
Used in:
Used in:
Timestamp when the index has been created
Unique fields of the index. Summa maintains a unique constraint on them and uses them for deduplicating data
Multi fields are ones that may have multiple values and are processed as lists. All other fields will be forcefully converted to a singular value
Text index description
Description containing `Index` metadata fields
Used in:
All index aliases
The number of committed documents
Compression used for the `store`
All custom index attributes
Indexing operation that contains a document serialized in JSON format
Used in:
Description of the `IndexEngine` responsible for managing files in the persistent storage
Used in:
Merge policy
Message that should be put into Kafka for ingestion by Summa consumers
Merge policy implementing Tantivy's [LogMergePolicy](https://docs.rs/tantivy/latest/tantivy/merge_policy/struct.LogMergePolicy.html)
Used in:
Set if the segment resulting from a merge should be left intact and not merged again
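A minimal sketch of this message; only the frozen flag is described above, and the field name and number are assumptions.

```protobuf
// Hypothetical reconstruction of the log merge policy message.
message LogMergePolicy {
  bool is_frozen = 1;  // Leave the merged segment intact instead of merging it further (assumed name)
}
```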
Used in:
Used in:
Used in:
(message has no fields)
Used in:
Used in:
Schema of the index for memory engine
Merge policy that describes how to merge committed segments
Used in:
Used in:
Used in:
Used in:
Used in:
Used in:
Recursive query DSL
Used in:
Used in:
Used in:
Used in:
Used in:
Used in:
Used in:
Which method should be used to request the remote endpoint
URL template which will be used to generate the real URL by variable substitution
Headers template which will be used to generate real headers by variable substitution
Description of the cache for the engine
Timeout for the request
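These fields suggest a remote engine configuration shaped roughly like the sketch below; all names, numbers, the nested cache type, and the timeout unit are assumptions.

```protobuf
// Hypothetical reconstruction of the remote engine configuration.
message RemoteEngineConfig {
  string method = 1;                         // Method used to request the remote endpoint
  string url_template = 2;                   // URL template, filled in by variable substitution
  map<string, string> headers_template = 3;  // Headers template, filled in the same way
  CacheConfig cache_config = 4;              // Cache description for the engine (hypothetical type name)
  uint32 timeout_seconds = 5;                // Timeout for the request (assumed unit)
}
```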
Used in:
Used in:
Used in:
Used in:
Used as request type in: PublicApi.search, SearchApi.search
The index name or alias
Query DSL. Use `MatchQuery` to pass a free-form query
Every collector is responsible for processing and storing documents and/or their derivatives (like counters) to return them to the caller
Whether fieldnorms are required for the query
Used as response type in: PublicApi.search, SearchApi.search
Time spent inside the `search` handler
An array of collector outputs
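Combining the request and response fields above gives roughly the following proto3 sketch; field names, numbers, and the `Query`, `Collector`, and `CollectorOutput` type names are assumptions reconstructed from the descriptions, not the canonical Summa definitions.

```protobuf
// Hypothetical reconstruction of the search request/response pair.
message SearchRequest {
  string index_alias = 1;             // The index name or alias
  Query query = 2;                    // Query DSL; use MatchQuery for free-form queries (hypothetical type name)
  repeated Collector collectors = 3;  // Collectors producing counters, top docs, etc. (hypothetical type name)
  optional bool is_fieldnorms_scoring_enabled = 4;  // Whether fieldnorms are required for the query (assumed name)
}

message SearchResponse {
  double elapsed_secs = 1;                         // Time spent inside the `search` handler
  repeated CollectorOutput collector_outputs = 2;  // One output per requested collector (hypothetical type name)
}
```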
Used in:
Used in:
Merge policy for compressing old segments
Used in:
Used in:
Used in:
Used in:
Used in: