The Protocol Buffers files changed in the following 11 commits:
| Commit | 953fe11 |
| --- | --- |
| Author | Rishabh Singh |
| Committer | GitHub |
gRPC query extension (#15982)

Revives #14024 and additionally supports:

* Native queries
* A gRPC health check endpoint

This PR doesn't include a shaded module for packaging the gRPC and Guava libraries, since the grpc-query module uses the same Guava version as Druid. The response is gRPC-specific: it provides the result schema along with the results as a binary "blob". Results can be returned as CSV, JSON array lines, or an array of Protobuf objects. If using Protobuf, the corresponding class must be installed along with the gRPC query extension so it is available to the Broker at runtime.
The documentation is generated from this commit.
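The gRPC health check endpoint mentioned above can, in typical gRPC setups, be probed with the standard `grpc.health.v1` protocol. Below is a hedged client sketch that assumes the extension exposes the standard health service and assumes a hypothetical listen port of 50051; check the extension's configuration for the actual port and service name.

```java
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import io.grpc.health.v1.HealthCheckRequest;
import io.grpc.health.v1.HealthCheckResponse;
import io.grpc.health.v1.HealthGrpc;

public class GrpcHealthProbe
{
  public static void main(String[] args)
  {
    // Hypothetical host/port; the real values come from the extension's configuration.
    ManagedChannel channel = ManagedChannelBuilder
        .forAddress("localhost", 50051)
        .usePlaintext()
        .build();
    try {
      HealthGrpc.HealthBlockingStub health = HealthGrpc.newBlockingStub(channel);
      // An empty service name asks about the server's overall health.
      HealthCheckResponse response =
          health.check(HealthCheckRequest.newBuilder().setService("").build());
      System.out.println("status: " + response.getStatus());
    }
    finally {
      channel.shutdownNow();
    }
  }
}
```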
| Commit | cbccde5 |
| --- | --- |
| Author | Kashif Faraz |
| Committer | GitHub |
allow string dimension indexer to handle byte[] as base64 strings (#13573) (#13582)

This PR expands `StringDimensionIndexer` to handle conversion of `byte[]` to base64 encoded strings, rather than the current behavior of calling Java `toString`. The issue was uncovered by a regression of sorts introduced by #13519, which updated the protobuf extension to convert values directly to Java types, resulting in `bytes` typed values being converted as `byte[]` instead of the base64 string which the previous JSON-based conversion produced.

While outputting `byte[]` is more consistent with other input formats, and preferable when the bytes can be consumed directly (such as complex type serde), feeding it to a `StringDimensionIndexer` resulted in an ugly Java `toString`, because `processRowValsToUnsortedEncodedKeyComponent` is fed the output of `row.getRaw(..)`. Converting `byte[]` to a base64 string within `StringDimensionIndexer` is consistent with the behavior of calling `row.getDimension(..)`, which does perform this coercion (and is why many tests on binary types appeared to be doing the expected thing).

I added some protobuf `bytes` tests, but they don't really hit the new `StringDimensionIndexer` behavior because they operate on the `InputRow` directly and call `getDimension` to validate values. The parser-based version still uses the old conversion mechanisms, so when not using a flattener it incorrectly calls `toString` on the `ByteString`. I have encoded this behavior in the test for now; if we either update the parser to use the new flattener or just remove parsers, we can remove this test code.

Co-authored-by: Clint Wylie <cwylie@apache.org>
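This commit (and its original, d9e5245 below) describes coercing `byte[]` dimension values to base64 strings instead of relying on Java's default `toString`. Here is a minimal sketch of that coercion using only the JDK's `java.util.Base64`; the class and method names are illustrative, not Druid's actual code.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class ByteArrayCoercionSketch
{
  // Illustrative only: coerce a raw dimension value the way the commit describes,
  // turning byte[] into a base64 string rather than the default array toString.
  static String coerceToString(Object raw)
  {
    if (raw instanceof byte[]) {
      return Base64.getEncoder().encodeToString((byte[]) raw);
    }
    return raw == null ? null : raw.toString();
  }

  public static void main(String[] args)
  {
    byte[] bytes = "druid".getBytes(StandardCharsets.UTF_8);
    System.out.println(bytes.toString());       // e.g. [B@7852e922 -- the "ugly" default
    System.out.println(coerceToString(bytes));  // ZHJ1aWQ= -- the base64 coercion
  }
}
```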
| Commit | d9e5245 |
| --- | --- |
| Author | Clint Wylie |
| Committer | GitHub |
allow string dimension indexer to handle byte[] as base64 strings (#13573)

This PR expands `StringDimensionIndexer` to handle conversion of `byte[]` to base64 encoded strings, rather than the current behavior of calling Java `toString`. The issue was uncovered by a regression of sorts introduced by #13519, which updated the protobuf extension to convert values directly to Java types, resulting in `bytes` typed values being converted as `byte[]` instead of the base64 string which the previous JSON-based conversion produced.

While outputting `byte[]` is more consistent with other input formats, and preferable when the bytes can be consumed directly (such as complex type serde), feeding it to a `StringDimensionIndexer` resulted in an ugly Java `toString`, because `processRowValsToUnsortedEncodedKeyComponent` is fed the output of `row.getRaw(..)`. Converting `byte[]` to a base64 string within `StringDimensionIndexer` is consistent with the behavior of calling `row.getDimension(..)`, which does perform this coercion (and is why many tests on binary types appeared to be doing the expected thing).

I added some protobuf `bytes` tests, but they don't really hit the new `StringDimensionIndexer` behavior because they operate on the `InputRow` directly and call `getDimension` to validate values. The parser-based version still uses the old conversion mechanisms, so when not using a flattener it incorrectly calls `toString` on the `ByteString`. I have encoded this behavior in the test for now; if we either update the parser to use the new flattener or just remove parsers, we can remove this test code.
| Commit | 44d6293 |
| --- | --- |
| Author | Abhishek Agarwal |
| Committer | GitHub |
handle timestamps of complex types when parsing protobuf messages (#11293)

* handle timestamps correctly when parsing protobuf
* Add timestamp handling to ProtobufReader
* disable checkstyle for generated sourcecode
* Fix test
* try this
* refactor tests
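The commit summary refers to handling protobuf `Timestamp` values during parsing. As a hedged illustration of the general technique rather than the extension's actual code, the sketch below converts the well-known `com.google.protobuf.Timestamp` (seconds plus nanos) to Unix epoch milliseconds.

```java
import com.google.protobuf.Timestamp;

public class ProtobufTimestampSketch
{
  // Convert a protobuf Timestamp (seconds + nanos) to Unix epoch milliseconds.
  static long toEpochMillis(Timestamp ts)
  {
    return ts.getSeconds() * 1000L + ts.getNanos() / 1_000_000L;
  }

  public static void main(String[] args)
  {
    Timestamp ts = Timestamp.newBuilder()
        .setSeconds(1_620_000_000L)  // 2021-05-03T00:00:00Z
        .setNanos(123_000_000)       // 123 ms
        .build();
    System.out.println(toEpochMillis(ts)); // 1620000000123
  }
}
```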
| Commit | 362feed |
| --- | --- |
| Author | Gian Merlino |
| Committer | Fangjin Yang |
add missing license headers, in particular to MD files; clean up RAT … (#6563) (#6611)

* add missing license headers, in particular to MD files; clean up RAT exclusions
* revert inadvertent doc changes
* docs
* cr changes
* fix modified druid-production.svg
| Commit | afb239b |
| --- | --- |
| Author | David Lim |
| Committer | Gian Merlino |
add missing license headers, in particular to MD files; clean up RAT … (#6563)

* add missing license headers, in particular to MD files; clean up RAT exclusions
* revert inadvertent doc changes
* docs
* cr changes
* fix modified druid-production.svg
| Commit | 431d3d8 |
| --- | --- |
| Author | Gian Merlino |
| Committer | GitHub |
Rename io.druid to org.apache.druid. (#6266)

* Rename io.druid to org.apache.druid.
* Fix META-INF files and remove some benchmark results.
* MonitorsConfig update for metrics package migration.
* Reorder some dimensions in inner queries for some reason.
* Fix protobuf tests.
| Commit | 3400f60 |
| --- | --- |
| Author | Kenji Noguchi |
| Committer | Fangjin Yang |
Protobuf extension (#4039)

* move ProtoBufInputRowParser from processing module to protobuf extensions
* Ported PR #3509
* add DynamicMessage
* fix local test stuff that slipped in
* add license header
* removed redundant type name
* removed commented code
* fix code style
* rename ProtoBuf -> Protobuf
* pom.xml: shade protobuf classes, handle .desc resource file as binary file
* clean up error messages
* pick first message type from descriptor if not specified
* fix protoMessageType null check. add test case
* move protobuf-extension from contrib to core
* document: add new configuration keys, and descriptions
* update document. add examples
* move protobuf-extension from contrib to core (2nd try)
* touch
* include protobuf extensions in the distribution
* fix whitespace
* include protobuf example in the distribution
* example: create new pb obj everytime
* document: use properly quoted json
* fix whitespace
* bump parent version to 0.10.1-SNAPSHOT
* ignore Override check
* touch
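This commit's bullets mention `DynamicMessage`, compiled `.desc` descriptor files, and falling back to the first message type when `protoMessageType` is not specified. The following is a hedged sketch of that general pattern using the protobuf-java API; the file name and payload handling are illustrative assumptions, not Druid's exact implementation.

```java
import com.google.protobuf.DescriptorProtos.FileDescriptorProto;
import com.google.protobuf.DescriptorProtos.FileDescriptorSet;
import com.google.protobuf.Descriptors;
import com.google.protobuf.Descriptors.Descriptor;
import com.google.protobuf.DynamicMessage;
import java.io.FileInputStream;

public class DescriptorSketch
{
  public static void main(String[] args) throws Exception
  {
    // Load a descriptor set produced by `protoc --descriptor_set_out=metrics.desc ...`
    // (the file name is an illustrative assumption).
    FileDescriptorSet set;
    try (FileInputStream in = new FileInputStream("metrics.desc")) {
      set = FileDescriptorSet.parseFrom(in);
    }

    // Build the first file descriptor; a .desc with imports would need its
    // dependencies resolved here as well.
    FileDescriptorProto fileProto = set.getFile(0);
    Descriptors.FileDescriptor file =
        Descriptors.FileDescriptor.buildFrom(fileProto, new Descriptors.FileDescriptor[0]);

    // "pick first message type from descriptor if not specified"
    Descriptor messageType = file.getMessageTypes().get(0);

    // Parse an incoming binary payload generically, without generated classes.
    byte[] payload = readPayloadSomehow(); // placeholder for the raw message bytes
    DynamicMessage message = DynamicMessage.parseFrom(messageType, payload);
    System.out.println(message);
  }

  private static byte[] readPayloadSomehow()
  {
    return new byte[0]; // stand-in; a real reader would supply actual message bytes
  }
}
```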
| Commit | 5ab6710 |
| --- | --- |
| Author | cheddar |
| Committer | cheddar |
No more com.metamx.druid, it is now all io.druid!
| Commit | 56e2b95 |
| --- | --- |
| Author | cheddar |
| Committer | cheddar |
OMG!!! A lot of stuff has been moved. Modules have been created and destroyed, but everything is compiling and unit tests are passing, OMFG this is awesome!
| Commit | 89b0c84 |
| --- | --- |
| Author | Jan Rudert |
initial implementation of a protocol buffers firehose