The following commits changed the Protocol Buffers files (only the last 100 relevant commits are shown):
Commit: 31d998e | Author: Xinyuan Lin | Committer: GitHub
Merge branch 'master' into xinyuan-migrate-marker1

Commit: 0db49f6 | Author: Xiao-zhen-Liu
Merge branch 'refs/heads/master' into xiaozhen-input-port-storage
# Conflicts:
#   core/amber/src/main/python/core/architecture/packaging/input_manager.py
#   core/amber/src/test/scala/edu/uci/ics/amber/engine/architecture/worker/DPThreadSpec.scala
#   core/amber/src/test/scala/edu/uci/ics/amber/engine/architecture/worker/DataProcessorSpec.scala

Commit: 832a2dd | Author: Xinyuan Lin | Committer: GitHub
Add ASF header and RAT CI (#3415)

Add the ASF header to all source code in the repo and introduce RAT CI. [Apache Release Audit Tool (RAT)](https://creadur.apache.org/rat) is a release audit tool focused on licenses. This PR adds the tool to the GitHub Actions workflow (CI), which allows developers to quickly detect whether files added in a PR or commit push are missing the license header. Add files or folders to the optional .ratignore file to exclude them from the check.
Commit: e01d1e3 | Author: Xinyuan Lin
init
Commit: 10acf4d | Author: Xiao-zhen-Liu
Add partitionings for port mat readers in the InputManager of the Java side.
Commit: 4c1cd86 | Author: Xinyuan Lin
init
Commit: c2c2cdf | Author: Xinyuan Lin | Committer: GitHub
Xinyuan migrate marker (#3390)

Commit: bb9f262 | Author: Xiao-zhen-Liu
replace storageURI with storageURIs

Commit: 82b6096 | Author: Shengquan Ni
update

Commit: b26a0e4 | Author: Xinyuan Lin
update

Commit: 0244757 | Author: Xinyuan Lin | Committer: GitHub
Merge branch 'master' into xinyuan-basic-for-loop

Commit: ff30ae9 | Author: Xinyuan Lin | Committer: GitHub
Merge branch 'master' into xinyuan-migrate-marker

Commit: a6a83f7 | Author: Xiaozhen Liu | Committer: GitHub
Remove Sink Operator in the Backend (#3312)

### Contents of this PR
1. Completely removes sink operators in the backend, including the physical sink operator, the sink operator executor, related proto definitions, and places that refer to the sink operator.
2. Replaces the logic of `getPorts` of a region by saving ports separately (previously we were using links to get the ports).
3. Fixes the timing of flushing the storage buffer for port storage writer threads.
4. Refactors the Python worker to send `PortCompleted` even if no links are connected to an output port.

Relevant test cases are also modified to remove sink operators from the test cases.

### Regarding region completion and `getPorts`
Currently our implementation of the region completion logic is not correct, as it uses the links of a region to get the ports of the region, and uses the completion of these ports to indicate completion of the region. This hacky implementation worked in the past when we created sink operators explicitly, but will not work in this PR. Ideally:
- A region should be defined using only the operators and links in the region.
- The completion of a region should be based on the completion of its operators, and the completion of an operator is defined by the completion of all of its ports.

However, we currently have a hacky implementation of "hash-join"-like operators that have dependee input ports, and we include an operator with a dependee input port in two regions, which prevents us from relying on operators to define the completion of a region. Our ultimate goal is to have a clean separation of regions, but that requires removing the cache read operator first. To proceed with removing the sink operator, in this PR we implement a workaround in the region completion logic and do not rely on the operators of a region to get the ports. After the clean separation of regions is done, we can implement the clean definition of the region completion logic and remove the redundant port information in regions.

### Progressive Computation
**Note: This PR removed ProgressiveSinkOp. We will remove ProgressiveUtils soon due to the high cost of implementing the retraction operation. Since we now use Iceberg for storage with an append-only default, removing a record requires a full scan, making retraction impractical. We will revisit this if there are more use cases requiring progressive computation.**

### TODOs that are not part of this PR
- Rename "sinkStorageTTLInSecs" and "sinkStorageCleanUpCheckIntervalInSecs" in the AmberConfig.
- Remove/rename mentions of sink in the frontend.
Commit: 34bc51a | Author: Xinyuan Lin | Committer: GitHub
Merge branch 'master' into xinyuan-basic-for-loop

Commit: 2796c7a | Author: Xiaozhen Liu | Committer: GitHub
Use Output Ports of an Operator to Write Storage (#3295)

Currently the Amber engine creates and uses sink operators to write the results of output ports of an operator. This design causes many problems, mainly because it alters the physical plan. Ideally a physical plan should not be changed once it is compiled. This PR updates the design to use output ports of an operator, instead of sink operators, to write port results to storage.

#### Core Design Changes
- The logic to add sink operators during compilation and scheduling is not removed in this PR. However, all the execution logic in the sink operator is removed; a sink operator does not do anything during execution. We will remove sink operators in the next PR.
- `GlobalPortIdentity` is moved from `Region` to `workflow-core` so that it is accessible by all the modules.
- The compiler does not create storage objects anymore. Instead, it produces a set of `GlobalPortIdentity` that need view results. This set is passed along to the scheduler as part of `WorkflowContext.workflowSettings`. In the future, this information will be produced directly by the frontend instead of by the compiler.
- The scheduler combines the ports that need view results and the materialized ports needed by the scheduler as part of a region. Ideally this information should be fixed once a region is created by the ScheduleGenerator. However, as the physical plan currently still needs to be changed (because of additional cache read operators), additional logic is implemented in the ScheduleGenerator to make sure this information is correct for all the regions.
- The scheduler and resource allocator do not create storage objects directly, and only assign storage URIs for each region as part of `resourceConfig`.
- When a region is scheduled to execute, during the initialization of the region, these URIs are used to create storage objects.
- `AssignPortRequest` is used to indicate whether an output port of a worker needs storage and to pass the storage URI information to a worker. Note that this request is used for both input ports and output ports, and this PR only updates output ports. As a result, for input ports, empty storage URIs will be provided in `AssignPortRequest`. In the future, after we also use input ports to read storage, we will update and use these storage URIs as well.
- Note that since operators with dependee inputs currently belong to multiple regions, and `AssignPortRequest` is only used once for each worker, I had to implement additional logic to make sure all the regions that such an operator belongs to have the proper storage information (specifically, the output port connecting a dependee link belongs to two regions, and both regions need to have the storage URI for this port).
- Inside a worker (both Java and Python), the OutputManager is used to create writer threads for each output port that needs storage. The writing does not block the data processor, but does block the completion status of the operator/port, as illustrated in the sketch below.

#### Relevant fixes
This PR also contains a fix for an issue introduced by #3304, where the status of a workflow was not updated after running a workflow for the 2nd time.

#### TODOs
- Completely remove the sink op.
- Use `GlobalPortIdentity` for storage URIs.
- Remove the mention of "result" in the storage layer.
- Let the frontend specify view results on the port level.
- Remove the cache read op and use input ports for reading storage.

### Important: Postgres Catalog Is Required After this PR
This PR requires the Postgres catalog to be set up, because the Python Iceberg storage layer will now be used (previously it was added in the codebase but not used), and Python Iceberg only works with the Postgres catalog. If you have not switched to the Postgres catalog, please refer to #3243 to set it up. We will soon make the Postgres catalog the default and possibly remove the Hadoop catalog.

Co-authored-by: Shengquan Ni <13672781+shengquan-ni@users.noreply.github.com>
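The non-blocking write path in the last design bullet can be illustrated with a small standalone sketch. All names here (`PortStorageWriter`, the poison-pill string) are illustrative assumptions, not the actual Texera classes: the data processor only enqueues, a dedicated thread drains the queue to storage, and only the port's completion waits on the writer.

```scala
import java.util.concurrent.LinkedBlockingQueue

// Sketch only: a per-port writer thread. The DP thread enqueues and moves on;
// blocking storage I/O happens on the writer thread. A poison pill marks the
// end of the port's output so completion can be reported after the flush.
final class PortStorageWriter(portId: Int, write: String => Unit) {
  private val EndOfPort = "__END_OF_PORT__" // sketch-only sentinel
  private val queue = new LinkedBlockingQueue[String]()

  private val thread = new Thread(() => {
    var tuple = queue.take()
    while (tuple != EndOfPort) {
      write(tuple) // storage I/O, off the data-processor thread
      tuple = queue.take()
    }
    println(s"port $portId: storage flushed, safe to report PortCompleted")
  })
  thread.start()

  def enqueue(tuple: String): Unit = queue.put(tuple) // cheap for the DP thread
  def close(): Unit = { queue.put(EndOfPort); thread.join() } // blocks completion only
}

object WriterDemo extends App {
  val writer = new PortStorageWriter(0, t => println(s"wrote: $t"))
  Seq("t1", "t2", "t3").foreach(writer.enqueue)
  writer.close()
}
```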
Commit: 22f0a70 | Author: Xinyuan Lin | Committer: GitHub
Merge branch 'master' into xinyuan-basic-for-loop

Commit: 100d7bd | Author: Xinyuan Lin
init

Commit: 47266d3 | Author: Jiadong Bai | Committer: Jiadong Bai
Revert "Revert "Enhance Runtime Statistics with Cumulative Metrics and Tuple Size Measurement (#3262)""

This reverts commit e11f1b5e20d76b30521eb452d93495682b9e6bd5.
Commit: 199897a | Author: Jiadong Bai | Committer: Jiadong Bai
Revert "Enhance Runtime Statistics with Cumulative Metrics and Tuple Size Measurement (#3262)"

This reverts commit d3861cccdbcbe23499a8763672c10a5ee7c83635.
Commit: 74da616 | Author: Shengquan Ni | Committer: GitHub
Add ChannelMarker support on pyAmber (#3245)

This PR:
1. Added ChannelMarkerManager to track and align ChannelMarkers on the Python side.
2. Messages to Python now also attach ChannelIdentity to distinguish different channels; previously we used ActorVirtualIdentity.
3. Due to the ChannelIdentity change, the generated proto message on the Python side has to assign the `is_control` field explicitly, otherwise its hash is not consistent. To give an example:
```python
channel_id = ChannelIdentity(from_worker_id=..., to_worker_id=...)
new_channel_id = bytes(channel_id)
assert hash(channel_id) == hash(new_channel_id)  # this will fail
```
4. The InputQueue now also uses the channel ID as a key, instead of using 2 fixed keys: CONTROL and DATA.

Co-authored-by: Xinyuan Lin <xinyual3@uci.edu>
Commit: d3861cc | Author: Chris | Committer: GitHub
Enhance Runtime Statistics with Cumulative Metrics and Tuple Size Measurement (#3262)

This PR updates the runtime statistics by refining how metrics are stored and introducing new tuple size measurements.

### Changes
1. Cumulative Statistics: Instead of storing differential (diff) values for tuple counts and processing times, we now maintain cumulative values. This change simplifies performance tracking and analysis over time.
2. Tuple Size Metrics: Added metrics for both input and output tuple sizes. These sizes are computed using deep size calculations. The deep size calculation traverses the object graph starting from the target object, including all fields, array elements, and referenced objects.
   - Scala: Uses the `deepSizeOf` method.
   - Python: Uses the `pympler.asizeof` method.

### Migration
To enable the new tuple size measurement in Python, please run: `pip install -r core/amber/requirements.txt`

Demo video: https://github.com/user-attachments/assets/1fc194bd-95d3-4946-b47a-e6e22b7b28a6
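The cumulative bookkeeping can be sketched in a few lines. This is an illustrative stand-in, not the actual Texera StatisticsManager, and the `deepSizeOf` here is a placeholder for the deep-size utility the PR describes:

```scala
// Sketch only: counters grow monotonically, so consumers derive diffs themselves.
final class CumulativeStats {
  private var inputTupleCount: Long = 0L
  private var inputTupleSizeBytes: Long = 0L

  // Placeholder for a real deep-size walk of the object graph
  // (Scala: deepSizeOf, Python: pympler.asizeof, per the PR).
  private def deepSizeOf(t: Any): Long = t.toString.getBytes("UTF-8").length.toLong

  def recordInput(tuple: Any): Unit = {
    inputTupleCount += 1                      // cumulative, never reset per interval
    inputTupleSizeBytes += deepSizeOf(tuple)  // size accumulated alongside count
  }

  def snapshot: (Long, Long) = (inputTupleCount, inputTupleSizeBytes)
}

object StatsDemo extends App {
  val stats = new CumulativeStats
  Seq("a", "bb", "ccc").foreach(stats.recordInput)
  println(stats.snapshot) // (3, 6) with the placeholder size function
}
```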
Commit: 8e64347 | Author: Kunwoo Park
Refactor codes

Commit: 29ebbfc | Author: Kunwoo Park
tuple size

Commit: 7f6dddc | Author: Shengquan Ni
Merge branch 'shengquan-add-channel-marker-mgr-py' into iced-tea-vldb-submission

Commit: ef177fd | Author: Shengquan Ni
WIP

Commit: c0a95a0 | Author: Shengquan Ni
update

Commit: 9ea9d5d | Author: Shengquan Ni
Merge remote-tracking branch 'icedtea/master' into iced-tea-vldb-submission

Commit: dab8dc0 | Author: Xinyuan Lin
init

Commit: 4617890 | Author: Yicong Huang | Committer: GitHub
Make OpExecInitInfo serializable (#3183)

Previously, creating a physical operator during compilation also required the creation of its corresponding executor instances. To delay this process so that executor instances are created within workers, we used a lambda function (in `OpExecInitInfo`). However, the lambda approach had a critical limitation: it was not serializable, and it was language-dependent. This PR addresses this issue by replacing the lambda functions in `OpExecInitInfo` with fully serializable Protobuf entities. The serialized information ensures compatibility with distributed environments and is language-independent.

Two primary types of `OpExecInitInfo` are introduced:
1. **`OpExecWithClassName`**
   - **Fields**: `className: String`, `descString: String`.
   - **Behavior**: The language compiler dynamically loads the class specified by `className` and uses `descString` as its initialization argument.
2. **`OpExecWithCode`**
   - **Fields**: `code: String`, `language: String`.
   - **Behavior**: The language compiler compiles the provided `code` based on the specified `language`. The arguments are already pre-populated into the code string.

### Special Cases
The `ProgressiveSink` and `CacheSource` executors are treated as special cases. These executors require additional unique information (e.g., `storageKey`, `workflowIdentity`, `outputMode`) to initialize their executor instances. While this PR preserves the handling of these special cases, these executors will eventually be refactored or removed as part of the plan to move storage management to the port layer.
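A hedged sketch of why the string-based `OpExecWithClassName` is serializable and language-independent, unlike a lambda: the class name and descriptor travel as plain strings, and a worker can reconstruct the executor via reflection. The factory shape and constructor signature below are assumptions for illustration, not the exact Texera code:

```scala
// Sketch only: load an executor class by name, pass the descriptor string
// to its single-String constructor.
trait Executor { def open(): Unit }

final class EchoExecutor(descString: String) extends Executor {
  def open(): Unit = println(s"opened with desc: $descString")
}

object ExecutorFactory {
  def fromClassName(className: String, descString: String): Executor =
    Class
      .forName(className)                     // className arrived as a proto string
      .getConstructor(classOf[String])
      .newInstance(descString)                // descString is the init argument
      .asInstanceOf[Executor]
}

object InitDemo extends App {
  // In the real system the fully-qualified name comes from the proto message.
  val exec = ExecutorFactory.fromClassName("EchoExecutor", """{"limit": 10}""")
  exec.open()
}
```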
Commit: 1d3561b | Author: Yicong Huang | Committer: GitHub
Remove duplicated scalapb definition (#3182)

The scalapb proto definition was present in both workflow-core and amber. This PR removes the second copy.
Commit: 35f1849 | Author: Yicong Huang | Committer: GitHub
Move protobuf definitions under core package (#3181)

This PR moves all protobuf definitions under workflow-core to be within the core package name.
Commit: 5524099 | Author: Yicong Huang | Committer: GitHub
Move output mode on port (#3169)

This PR unifies the design of the output mode and makes it a property of an output port. Previously we had two modes:
1. SET_SNAPSHOT (return a snapshot of a table)
2. SET_DELTA (return a delta of tuples)

and different chart types (e.g., HTML, bar chart, line chart). However, we only use the HTML type after switching to pyplot. Additionally, the output mode and chart type were associated with a logical operator and passed along to the downstream sink operator, which does not support operators with multiple output ports.

### New design
1. Move OutputMode onto an output port's property.
2. Unify to three modes:
   a. SET_SNAPSHOT (return a snapshot of a table)
   b. SET_DELTA (return a delta of tuples)
   c. SINGLE_SNAPSHOT (only used for visualizations to return an HTML)

SINGLE_SNAPSHOT is needed for now because we need a way to differentiate an HTML output from a normal data table output. This is because MongoDB-backed storage is limited to 16 MB per record, and HTMLs are usually larger than 16 MB. After we remove this limitation on the storage, we will remove SINGLE_SNAPSHOT and fall back to SET_SNAPSHOT.
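In the spirit of the protobuf-generated case classes quoted elsewhere in this log, the new design can be sketched roughly as follows. The field shapes are assumptions for illustration, not the generated code:

```scala
// Sketch only: the output mode lives on the port itself, so an operator with
// multiple output ports can mix modes without a downstream sink operator.
sealed trait OutputMode
case object SET_SNAPSHOT extends OutputMode    // snapshot of a table
case object SET_DELTA extends OutputMode       // delta of tuples
case object SINGLE_SNAPSHOT extends OutputMode // one HTML blob (visualizations)

final case class OutputPort(id: Int, displayName: String = "", mode: OutputMode = SET_SNAPSHOT)

object PortModeDemo extends App {
  val dataPort = OutputPort(0, "table")
  val vizPort  = OutputPort(1, "chart", SINGLE_SNAPSHOT)
  println(Seq(dataPort, vizPort).map(p => s"${p.displayName}:${p.mode}"))
}
```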
Commit: 147d02e | Author: Shengquan Ni | Committer: Noah Wang
Update amber to depend on sub projects (#3111)

This PR removes two copies of the code for workflow-core and workflow-operator and makes amber directly depend on those projects. Important points:
1. Downgraded `snakeyaml` in `workflow-core` from 2.0 to 1.30 due to a conflict in the `amber` project.
2. Overrode all operator definitions in `workflow-compiling-service` with the latest changes in `amber`, then deleted all operator definitions in `amber`.
3. We still have 2 implementations of compilation, in `workflow-compiling-service` and `amber`. We should soon remove the implementation in `amber`.

Co-authored-by: Jiadong Bai <bobbaicloudwithpants@gmail.com>
Co-authored-by: Yicong Huang <17627829+Yicong-Huang@users.noreply.github.com>
Commit: 3a8382f | Author: Shengquan Ni | Committer: Noah Wang
Flatten micro-service folder (#3098)

This PR flattens the micro-services folder into the core folder. Our next step will be to let the amber project depend on the micro-services so we can get rid of the two-copy issue ASAP.

Co-authored-by: Jiadong Bai <43344272+bobbai00@users.noreply.github.com>
Commit: 0b1dec5 | Author: Xinyuan Lin | Committer: Noah Wang
Fix error reporting on python UDF (#3099)

Fix the issue where the console consistently displays the error message "NoneType has no attribute" when encountering issues in the Python UDF. The root cause is that when the UDF contains errors, the executor fails to initialize. Despite this failure, the execution continues until the system later detects that the executor is None, leading to the same generic error message being reported. This fix ensures that errors are immediately detected, reported, and handled at the time they occur, providing more accurate and timely feedback.
Commit: 83b079c | Author: Jiadong Bai | Committer: Noah Wang
Add FatalErrorType and WorkflowFatalError proto definitions to the workflow-core (#3008)

As titled, this PR adds the proto definitions of `FatalErrorType` and `WorkflowFatalError` to the `edu.uci.ics.amber` package. These two definitions will be used by `workflow-compiling-service`.
Commit: bfe797d | Author: Shengquan Ni | Committer: Noah Wang
Migrate scala control messages to protobuf (#2950)

This PR migrates all Scala control messages and their corresponding handlers to use gRPC.

### Main Changes (For both Scala and Python Engine)
1. **Rewrote `AsyncRPCServer` and `AsyncRPCClient`**: Adjusted both to fit into the generated gRPC interface, while keeping RPCs sent and received through actor messages.
2. **Centralized RPC Definitions**: All RPC messages and method definitions are now located in `src/main/protobuf/edu/uci/ics/amber/engine/architecture/rpc`.

### Steps to Create New RPCs
1. **Create a request proto message** in `controlcommands.proto` with all necessary inputs.
2. **Create a response proto message** in `controlreturns.proto`.
3. **Define the RPC**:
   - For **controller-handled RPCs**, add the definition to `controllerservice.proto`.
   - For **worker-handled RPCs**, add it to `workerservice.proto`.
4. **Generate Scala code** by running `protocGenerate` in the SBT shell.
5. **Generate Python code** by running `./scripts/python-proto-gen.sh` from the `core` folder.
6. **Implement the RPC handler**:
   - Scala Engine: in either `ControllerAsyncRPCHandlerInitializer` or `WorkerAsyncRPCHandlerInitializer`, and move it to the `promisehandlers` folder for better organization.
   - Python Engine: create a new handler under `python/core/architecture/handlers/control` and override the corresponding `async` method for the new RPC defined in the worker service.

### Changes in Sending RPCs (Scala Engine)
Example: Using the `StartWorkflow` RPC, which is handled by the controller.

Before this PR:
```scala
send(StartWorkflow(), CONTROLLER).map { ret =>
  // handle the return value
}
```

After this PR:
```scala
controllerInterface.startWorkflow(EmptyRequest(), mkContext(CONTROLLER)).map { resp =>
  // handle the response
}
```

Co-authored-by: Yicong Huang <17627829+Yicong-Huang@users.noreply.github.com>
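As a standalone illustration of the request/response shape described above: each RPC becomes an async method returning a Future of its response proto. The types below are simplified assumptions, not the generated Texera traits:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._

// Sketch only: one request message, one response message, one async method
// per RPC, mirroring the generated gRPC-style interface.
final case class EmptyRequest()
final case class StartWorkflowResponse(state: String)

trait ControllerInterface {
  def startWorkflow(req: EmptyRequest): Future[StartWorkflowResponse]
}

object RpcDemo extends App {
  // A toy in-process "controller" standing in for the real actor-backed client.
  val controller: ControllerInterface =
    (_: EmptyRequest) => Future.successful(StartWorkflowResponse("RUNNING"))

  val resp = Await.result(controller.startWorkflow(EmptyRequest()), 1.second)
  println(resp.state) // handle the response, as in the post-PR calling style
}
```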
Commit: 09e6017 | Author: Yicong Huang | Committer: Noah Wang
Encapsulate workflow basic dependencies into a workflow-core package (#2942)

This PR adds a new sub-project in sbt called `workflow-core`, under micro-services. It contains a duplicated codebase of workflow core dependencies, including:
- `Tuple`, `Attribute`, `Schema`;
- `OperatorExecutor`;
- `PhysicalOp`, `PhysicalPlan`;
- Identities;
- and other utility functions.

### Migration plan
After exporting workflow-core as a local dependency, we can build workflow-compiling-service and workflow-execution-service based on it.
Commit: 8a64429 | Author: Yicong Huang | Committer: Noah Wang
Move Tuple, State, and WorkflowRuntimeStatistics into Amber package (#2913)

As titled. For a clean separation between Texera and Amber, we need to move some engine definitions into Amber. For now, we keep LogicalPlan, LogicalOperators (DESC), and the majority of Executors in the texera package. Tuple, State, Source/Sink Executors, and WorkflowRuntimeStatistics-related definitions are moved into Amber.
Commit: 0937690 | Author: Xinyuan Lin | Committer: Noah Wang
Introduce Marker for Enhanced Data Handling (#2772)

This PR introduces a new `Marker` concept, reclassifying `EndOfUpstream` as a `Marker` and enhancing how data payloads are processed. Upon encountering a `Marker`, the current payload is immediately sent downstream, optimizing the data flow. The introduction of the `State` Marker is planned for a future update. This change is foundational, keeping core functionality intact while setting the stage for further enhancements. A marker is used to transmit information downstream, while an internal marker is used to convey information within each operator.

**Key changes:**
- Reclassification of `EndOfUpstream` as a `Marker`.
- Immediate downstream transmission of data upon `Marker` detection.
Commit: 3b24665 | Author: Shengquan Ni | Committer: GitHub
Update amber to depend on sub projects (#3111)

This PR removes two copies of the code for workflow-core and workflow-operator and makes amber directly depend on those projects. Important points:
1. Downgraded `snakeyaml` in `workflow-core` from 2.0 to 1.30 due to a conflict in the `amber` project.
2. Overrode all operator definitions in `workflow-compiling-service` with the latest changes in `amber`, then deleted all operator definitions in `amber`.
3. We still have 2 implementations of compilation, in `workflow-compiling-service` and `amber`. We should soon remove the implementation in `amber`.

Co-authored-by: Jiadong Bai <bobbaicloudwithpants@gmail.com>
Co-authored-by: Yicong Huang <17627829+Yicong-Huang@users.noreply.github.com>
Commit: 66b946d | Author: Xinyuan Lin
update

Commit: 15a1b9e | Author: Xinyuan Lin | Committer: GitHub
Merge branch 'xinyuan-datatostate' into xinyuan-control-block
Commit: 187ff49 | Author: Shengquan Ni | Committer: GitHub
Flatten micro-service folder (#3098)

This PR flattens the micro-services folder into the core folder. Our next step will be to let the amber project depend on the micro-services so we can get rid of the two-copy issue ASAP.

Co-authored-by: Jiadong Bai <43344272+bobbai00@users.noreply.github.com>
Commit: a0b636a | Author: Xinyuan Lin | Committer: GitHub
Fix error reporting on python UDF (#3099)

Fix the issue where the console consistently displays the error message "NoneType has no attribute" when encountering issues in the Python UDF. The root cause is that when the UDF contains errors, the executor fails to initialize. Despite this failure, the execution continues until the system later detects that the executor is None, leading to the same generic error message being reported. This fix ensures that errors are immediately detected, reported, and handled at the time they occur, providing more accurate and timely feedback.
Commit: 84c910e | Author: Xinyuan Lin | Committer: GitHub
Merge branch 'master' into xinyuan-control-block
Commit: 23a7c75 | Author: Xinyuan Lin
update
Commit: d4c3f00 | Author: Jiadong Bai | Committer: GitHub
Add FatalErrorType and WorkflowFatalError proto definitions to the workflow-core (#3008)

As titled, this PR adds the proto definitions of `FatalErrorType` and `WorkflowFatalError` to the `edu.uci.ics.amber` package. These two definitions will be used by `workflow-compiling-service`.
Commit: 632aaf8 | Author: Xinyuan Lin
init
Commit: 9aaca71 | Author: Shengquan Ni
renaming
Commit: 84eba2b | Author: Shengquan Ni | Committer: GitHub
Migrate scala control messages to protobuf (#2950)

This PR migrates all Scala control messages and their corresponding handlers to use gRPC.

### Main Changes (For both Scala and Python Engine)
1. **Rewrote `AsyncRPCServer` and `AsyncRPCClient`**: Adjusted both to fit into the generated gRPC interface, while keeping RPCs sent and received through actor messages.
2. **Centralized RPC Definitions**: All RPC messages and method definitions are now located in `src/main/protobuf/edu/uci/ics/amber/engine/architecture/rpc`.

### Steps to Create New RPCs
1. **Create a request proto message** in `controlcommands.proto` with all necessary inputs.
2. **Create a response proto message** in `controlreturns.proto`.
3. **Define the RPC**:
   - For **controller-handled RPCs**, add the definition to `controllerservice.proto`.
   - For **worker-handled RPCs**, add it to `workerservice.proto`.
4. **Generate Scala code** by running `protocGenerate` in the SBT shell.
5. **Generate Python code** by running `./scripts/python-proto-gen.sh` from the `core` folder.
6. **Implement the RPC handler**:
   - Scala Engine: in either `ControllerAsyncRPCHandlerInitializer` or `WorkerAsyncRPCHandlerInitializer`, and move it to the `promisehandlers` folder for better organization.
   - Python Engine: create a new handler under `python/core/architecture/handlers/control` and override the corresponding `async` method for the new RPC defined in the worker service.

### Changes in Sending RPCs (Scala Engine)
Example: Using the `StartWorkflow` RPC, which is handled by the controller.

Before this PR:
```scala
send(StartWorkflow(), CONTROLLER).map { ret =>
  // handle the return value
}
```

After this PR:
```scala
controllerInterface.startWorkflow(EmptyRequest(), mkContext(CONTROLLER)).map { resp =>
  // handle the response
}
```

Co-authored-by: Yicong Huang <17627829+Yicong-Huang@users.noreply.github.com>
Commit: 78b0a49 | Author: Yicong Huang | Committer: GitHub
Encapsulate workflow basic dependencies into a workflow-core package (#2942)

This PR adds a new sub-project in sbt called `workflow-core`, under micro-services. It contains a duplicated codebase of workflow core dependencies, including:
- `Tuple`, `Attribute`, `Schema`;
- `OperatorExecutor`;
- `PhysicalOp`, `PhysicalPlan`;
- Identities;
- and other utility functions.

### Migration plan
After exporting workflow-core as a local dependency, we can build workflow-compiling-service and workflow-execution-service based on it.
Commit: 25faefd | Author: Yicong Huang | Committer: GitHub
Move Tuple, State, and WorkflowRuntimeStatistics into Amber package (#2913)

As titled. For a clean separation between Texera and Amber, we need to move some engine definitions into Amber. For now, we keep LogicalPlan, LogicalOperators (DESC), and the majority of Executors in the texera package. Tuple, State, Source/Sink Executors, and WorkflowRuntimeStatistics-related definitions are moved into Amber.
Commit: db9b842 | Author: Shengquan Ni
update

Commit: 00edf9a | Author: Xinyuan Lin | Committer: GitHub
Introduce Marker for Enhanced Data Handling (#2772)

This PR introduces a new `Marker` concept, reclassifying `EndOfUpstream` as a `Marker` and enhancing how data payloads are processed. Upon encountering a `Marker`, the current payload is immediately sent downstream, optimizing the data flow. The introduction of the `State` Marker is planned for a future update. This change is foundational, keeping core functionality intact while setting the stage for further enhancements. A marker is used to transmit information downstream, while an internal marker is used to convey information within each operator.

**Key changes:**
- Reclassification of `EndOfUpstream` as a `Marker`.
- Immediate downstream transmission of data upon `Marker` detection.
Commit: 8cbb603 | Author: linxinyuan
update
Commit: 9e7a2ee | Author: Xinyuan Lin | Committer: GitHub
Fix One-To-One Partitioner Logic (#2684)

The OneToOnePartitioner was not functioning correctly, causing issues in the partitioning logic. This update resolves the problem to ensure accurate partitioning behavior.
- **Channel-Based Receiver Calculation:** The receivers in the Partitioner are now calculated from the channels instead of directly from the receivers.
- **Removal of Duplicate Receivers:** Eliminated the possibility of having duplicate receivers when multiple channels connect to the same receiver worker.
- **SinglePartitioner Functionality:** The SinglePartitioner sends all data to a single receiver node.
- **OneToOnePartitioner Behavior:** When there are multiple workers, the OneToOnePartitioner sends data to the corresponding receiver node in the downstream operator. For example, in a hash join with two workers, the build-phase workers can use the OneToOnePartitioner to send their hash tables to their corresponding downstream probe-phase workers.
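A minimal standalone sketch of the corrected selection logic. The `Channel` type and worker indices are illustrative, not the actual Texera classes: receivers are derived from channels with duplicates removed, and worker i always sends to downstream worker i.

```scala
// Sketch only: channels are (sender, receiver) pairs.
final case class Channel(fromWorker: Int, toWorker: Int)

object PartitionerDemo extends App {
  val channels = Seq(Channel(0, 0), Channel(0, 1), Channel(1, 0), Channel(1, 1))
  val receivers = channels.map(_.toWorker).distinct // dedup across channels

  // SinglePartitioner: everything goes to one receiver node.
  def singleTarget: Int = receivers.head

  // OneToOnePartitioner: worker i sends to the matching receiver i, e.g. a
  // build-phase worker ships its hash table to its matching probe-phase worker.
  def oneToOneTarget(selfWorkerIndex: Int): Int = receivers(selfWorkerIndex)

  println(singleTarget)      // 0
  println(oneToOneTarget(1)) // 1
}
```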
Commit: 839e63f | Author: Xiao-zhen-Liu
assign storage in PhysicalPlan.
Commit: cf5a509 | Author: Kevin Wu | Committer: GitHub
Add R Source UDF and R UDF (Table API) to Texera (#2644)

This PR is for the official inclusion of an R UDF and an R Source UDF into Texera, which can support R code in Texera workflows and Texera's Table API. The Tuple API is currently not supported for R UDFs.

# Software versions required/supported
- Python: 3.9.18
- rpy2 (Python package): 3.5.11
- rpy2-arrow (Python package): 0.0.8
- R: 4.3.3
- reticulate (R package): 1.36.1
- arrow (R package): 14.0.01

# Use cases/user requirements
- First, the user must make sure that their Python and R versions are configured in the udf.conf file located at [/core/amber/src/main/resources/udf.conf](https://github.com/Texera/texera/pull/2644/commits/c8e35e9ee5fb360358797108bb24297567eb7d4a).
- To use the R Source UDF:
  - This should be used when the user wishes to write R code to provide source data to any pipeline that uses R UDFs.
  - The user does not need any input. The output of the R UDF must be an R object that can be easily converted to a data.frame, more specifically such that it can produce tuples.
  - When the operator is finished, the output is an Arrow Table.
- To use the R UDF:
  - This should be used when the user wishes to receive some input data, read and modify it, and then return either different data or the same input data.
  - The user should expect an input of both a Table and a port.
  - Currently the port argument is unused, although it may be used in the future.
  - When the operator is finished, the output is an Arrow Table with any modifications/accesses from the R UDF.

# Showcase
(Demo screenshots omitted from this log.)

Co-authored-by: Yicong Huang <17627829+Yicong-Huang@users.noreply.github.com>
Commit: c1eea92 | Author: kunwp1 | Committer: GitHub
Enhancing Statistics Manager to Include Internal Ports (#2595)

This PR enhances the existing statistics manager by extending its functionality to include internal ports' statistics. Previously, the statistics manager only stored data related to external ports. With this update, the statistics manager will collect and manage information on both external and internal ports. This expansion allows researchers to analyze and study internal data traffic patterns alongside external ones.
Commit: 27669a1 | Author: kunwp1 | Committer: GitHub
Fix The Issue That Python Operators Cannot Be Paused And Resumed (#2590)

This pull request addresses an issue preventing Python operators from properly pausing and resuming. The problem was traced back to a modification made in PR #2570, where the return type of the ControlReturnV2 message no longer included WorkerState. As a result, the async RPC server was unable to process the message. This PR fixes the issue by adding WorkerState back as a valid return type for ControlReturnV2. Now, both the pause and resume handlers will return a message that reflects the worker's current state.
Commit: 6259864 | Author: kunwp1 | Committer: GitHub
Enhancing Workflow Runtime Statistics Measurement at Port Level (#2570)

This pull request proposes an improved approach to measuring workflow runtime statistics. The current method gathers statistics at the worker level, resulting in inaccuracies for operators divided into multiple physical operators. For example, the `aggregate` and `join` operators did not display the correct count of output tuples. Key changes include:
1. **Port-Level Measurement:** The PR introduces a new method of measuring input/output tuple counts for each port ID within the statistics manager. This change ensures a more accurate representation of runtime statistics.
2. **External Port Statistics:** The updated statistics manager will exclusively measure the runtime statistics of external ports, further enhancing the precision of the data collected.
3. **Improved Statistics Aggregation:** The previous implementation had shortcomings regarding operators like `join` or `aggregate`, which can be divided into several physical operators. To address this issue, the new implementation has improved the aggregation of runtime statistics. The statistics from these operators are now summed up to display the total stats.
4. **Separation of Worker State and Runtime Statistics:** The PR separates worker state from worker runtime statistics. The state manager will continue to manage the worker state, while the statistics manager will handle worker runtime statistics. A new wrapper, `worker info`, has been introduced to encapsulate worker state and runtime statistics.

(Screenshots omitted: correct tuple counts in the hash join and aggregate operators.)
Commit: 8db29a7 | Author: Shengquan Ni
update
Commit: 95280bb | Author: Xiaozhen Liu | Committer: GitHub
Add Blocking Output Port (#2472)

This PR adds an interface for specifying a blocking output port in `PhysicalOp`. All the outgoing links connected to a blocking output port will be blocking links and will be used to create regions in RegionPlanGenerator.

Definition of a `blocking` output port: if the operator does not produce anything on an output port until it has received all its inputs from all input ports, then this output port should be specified as blocking. The developer of an operator is responsible for specifying this property. In this PR, based on the implementation, both physical operators of the Aggregate operator have blocking output ports.

We can also remove `blockingInputPorts` in the future, as they are not meaningful. Only blocking output ports and input port dependencies should be used by `CostBasedRegionPlanGenerator`. For `ExpansionBasedGreedyRegionPlanGenerator`, we only use blocking inputs for creating regions.

Co-authored-by: Yicong Huang <17627829+Yicong-Huang@users.noreply.github.com>
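The definition can be restated as a tiny sketch. The types here are illustrative assumptions, not the actual Texera classes: a link is blocking iff its source port is blocking, and such links become region boundaries.

```scala
// Sketch only: blocking-ness is a property of the output port, and any link
// leaving a blocking port is a blocking link, i.e. a region boundary.
final case class OutputPort(id: Int, blocking: Boolean = false)
final case class Link(fromOp: String, fromPort: OutputPort, toOp: String)

object RegionCutDemo extends App {
  // Aggregate produces nothing until it has consumed all of its input,
  // so its developer declares the output port blocking.
  val aggOut = OutputPort(0, blocking = true)
  val links = Seq(
    Link("scan", OutputPort(0), "aggregate"),
    Link("aggregate", aggOut, "sort")
  )
  val regionBoundaries = links.filter(_.fromPort.blocking)
  println(regionBoundaries.map(l => s"${l.fromOp} -> ${l.toOp}")) // List(aggregate -> sort)
}
```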
Commit: 38965e3 | Author: Shengquan Ni
WIP, still need generate python proto
Commit: 2fe9877 | Author: Yicong Huang | Committer: GitHub
Rename Operator to Executor in workers (#2460)

The name "operator" has multiple meanings in the system. To avoid confusion, we now name the component that runs in a worker `Executor`. This also changes all the corresponding managers, control messages, and function names. For backward compatibility, on the Python side we keep the Operator class for now.
Commit: 46eaf05 | Author: Yicong Huang | Committer: GitHub
Refactor InputManager and OutputManager (#2442)

This PR refactors both Java/Scala and Python to have `InputManager` and `OutputManager`. The two managers manage `WorkerPort`s accordingly. On the Java/Scala side, they also manage input and output iterators, while the Python side has a dedicated `TupleProcessingManager` for that; this will be refactored later. We now add schemas on WorkerPorts, which allows schema enforcement to happen on ports without PhysicalOp. The schema information is sent into workers through the `AssignPort` control message.
Commit: 665ceac | Author: Yicong Huang | Committer: GitHub
Use attribute names for range partitioning (#2440)

This PR uses the attribute names instead of indices (of a schema) to specify a range partition. This is to relax the dependency on schema when specifying partitioning logic.
Commit: 8777965 | Author: Yicong Huang | Committer: GitHub
Use attribute names for hash partitioning (#2427)

This PR uses the attribute names instead of indices (of a schema) to specify a hash partition. This is to relax the dependency on schema when specifying partitioning logic. Additionally, a default HashPartition which has no attribute names would hash the entire tuple.
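A hedged sketch of the name-based hashing, with the tuple model simplified to a Map (the real partitioner and Tuple types differ): named attributes are hashed when given, and the whole tuple is hashed otherwise.

```scala
// Sketch only: attribute names decouple partitioning from column positions.
object HashPartitionDemo extends App {
  type Tuple = Map[String, Any]

  def partition(t: Tuple, attributeNames: Seq[String], numPartitions: Int): Int = {
    val key =
      if (attributeNames.isEmpty) t            // default: hash the entire tuple
      else attributeNames.map(t(_))            // hash only the named attributes
    math.floorMod(key.hashCode, numPartitions) // floorMod avoids negative buckets
  }

  val tuple: Tuple = Map("id" -> 42, "name" -> "texera", "score" -> 3.14)
  println(partition(tuple, Seq("id"), 4)) // stable regardless of column order
  println(partition(tuple, Seq.empty, 4)) // falls back to the whole tuple
}
```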
Commit: dcbaf6c | Author: Shengquan Ni
revert some changes
Commit: 46b57f0 | Author: Shengquan Ni
apply changes
Commit: d88b2da | Author: Xinyuan Lin | Committer: GitHub
Merge branch 'master' into xinyuan-for-loop
Commit: 3a38422 | Author: Xiaozhen Liu | Committer: GitHub
Remove Code Related to Skew-handling (#2379)

Since we are not actively using skew-handling, and the maintenance cost of these pieces of code is too high, this PR removes them from Amber.
Commit: 6fdcf6b | Author: Xinyuan Lin | Committer: GitHub
Merge branch 'master' into xinyuan-for-loop
Commit: ffc24eb | Author: Yicong Huang | Committer: GitHub
Let workers report port completion (#2357)

This PR lets the workers report port completion instead of link completion. The scheduler will use the port completion to determine a region's completion. We removed callbacks on link and worker completion. Detailed changes:
1. Instead of passing input links to workers, we now assign ports to workers during initialization.
2. We then connect input channels to those ports. Note that a worker may not necessarily have all of its ports connected with channels. For example, a UDF with two ports will only have channels connected to the first port during the first region, and the second port may be connected in the next region.
3. When a port is finished, the worker reports `PortCompletion` to the controller:
   - input port completion is after the worker processes all data from this input port;
   - output port completion is after the worker sends all data to this output port.

At the PhysicalOp level, a port is completed when all its workers' ports are completed.
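The completion rule in the last sentence can be captured in a few lines. This is an illustrative sketch, not the actual scheduler code: the controller marks an operator-level port complete once every worker hosting it has reported `PortCompletion`.

```scala
import scala.collection.mutable

// Sketch only: track PortCompletion reports per (portId, workerId); a
// PhysicalOp-level port is complete when all of its workers have reported.
final class PortCompletionTracker(workersPerPort: Map[Int, Set[String]]) {
  private val reported = mutable.Map[Int, mutable.Set[String]]()

  def onPortCompletion(portId: Int, workerId: String): Boolean = {
    reported.getOrElseUpdate(portId, mutable.Set.empty) += workerId
    reported(portId) == workersPerPort(portId) // true => port fully complete
  }
}

object PortDemo extends App {
  val tracker = new PortCompletionTracker(Map(0 -> Set("w1", "w2")))
  println(tracker.onPortCompletion(0, "w1")) // false: w2 still running
  println(tracker.onPortCompletion(0, "w2")) // true: operator port complete
}
```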
Commit: eb343ba | Author: Jiadong Bai
add env id to the workflow context
Commit: e4d4c1a | Author: Xinyuan Lin
update
Commit: de1cea9 | Author: kunwp1 | Committer: GitHub
Storing Worker's Data Processing Time, Control Processing Time, and Idle Time as Workflow Runtime Statistics (#2337)

This PR stores a worker's data processing time, control processing time, and idle time as runtime statistics in MySQL. This information is saved in the `workflow_runtime_statistics` table. Because a single operator can have multiple workers, the number of workers for each operator is also stored in MySQL. The description of each time metric is shown below:
- The worker's total runtime is defined as the duration from when the worker's state becomes "READY" to when the worker's state becomes "COMPLETED". The `runDPThreadMainLogic` API executes the DP thread, and code has been inserted before and after calling the API to measure the total runtime.
- The worker's data processing time is defined as the duration that the worker processes data messages. The `processDataPayload` and `continueDataProcessing` APIs process the data, and code has been inserted inside these APIs to measure the data processing time.
- The worker's control processing time is defined as the duration that the worker processes control messages. The `processControlPayload` API processes control messages, and code has been inserted inside the API to measure the control processing time.
- The worker's idle time is measured by subtracting the data processing time and control processing time from the total runtime.

MySQL stores data processing time, control processing time, and idle time as nanoseconds. If an operator has two workers, the sum of each worker's time is stored in the corresponding column. Also, if the interval for collecting runtime statistics is two seconds, the data processing time, control processing time, and idle time within those two seconds are stored in the database. In the frontend, data processing time, control processing time, idle time, and number of workers are added as new metrics shown in a line chart, with aggregated values displayed.
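The three metrics reduce to simple nanoTime bookkeeping, sketched below with hypothetical helpers (not the actual DP-thread code): data and control time are measured around their handlers, and idle time is the remainder of the total runtime.

```scala
// Sketch only: wrap the data/control handlers with System.nanoTime, then
// derive idle time as total - data - control.
final class WorkerTimers {
  var dataNanos: Long = 0L
  var controlNanos: Long = 0L

  private def timed(record: Long => Unit)(body: => Unit): Unit = {
    val start = System.nanoTime()
    body
    record(System.nanoTime() - start)
  }

  def processData(body: => Unit): Unit = timed(dataNanos += _)(body)
  def processControl(body: => Unit): Unit = timed(controlNanos += _)(body)

  def idleNanos(totalRuntimeNanos: Long): Long =
    totalRuntimeNanos - dataNanos - controlNanos // idle = total - data - control
}

object TimerDemo extends App {
  val t = new WorkerTimers
  val start = System.nanoTime()
  t.processData { Thread.sleep(5) }    // stands in for processDataPayload
  t.processControl { Thread.sleep(2) } // stands in for processControlPayload
  val total = System.nanoTime() - start
  println(s"idle ns: ${t.idleNanos(total)}")
}
```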
Commit: c30b46e | Author: Shengquan Ni | Committer: GitHub
Rename ChannelID to ChannelIdentity (#2322)

This PR refactors the Scala class `ChannelID` into a proto class `ChannelIdentity` to better align with other IDs.
Commit: 6cadd8f | Author: Yicong Huang | Committer: GitHub
Refactor backend ports in compiler (#2299)

This PR refactors the ports on the backend.

### PortIdentity
Previously, we stored ports in a list and used the index to refer to a port. This is not useful when the ports are mapped from a logical operator to physical operators (the exact sequence of the ports may change), or when supporting dynamic ports (removing a port would cause the indices to change). Now we introduce `PortIdentity` as the id of a port. The same port and identity will be used across logical, physical, and in workers. The exact structure of PortIdentity is shown below (generated by protobuf):
```
final case class PortIdentity(
    id: Int = 0,
    internal: Boolean = false
)
```
A PortIdentity has a field indicating `internal` or not. This is to differentiate internal ports and external ports. Internal ports are ports that are added between the physical operators of a logical operator. External ports are those set on the logical operator and link to other ports from other logical operators.

### New InputPort and OutputPort
The `InputPort` has been updated (generated by protobuf):
```
final case class InputPort(
    id: PortIdentity = PortIdentity(),
    displayName: String = "",
    allowMultiLinks: Boolean = false,
    dependencies: Seq[PortIdentity] = Seq.empty
)
```
Notably, we moved `dependencies` from the PhysicalOp into InputPort. It is now a sequence of PortIdentities which denotes the ports that this port depends on. The `OutputPort` has minor changes, and we will add the `blocking` property to it in the future.

### LogicalLink and PhysicalLink
Both links are updated to use PortIdentities instead of port indices (Int):
```
case class LogicalLink(
    @JsonProperty("fromOpId") fromOpId: OperatorIdentity,
    fromPortId: PortIdentity,
    @JsonProperty("toOpId") toOpId: OperatorIdentity,
    toPortId: PortIdentity
)
```
```
final case class PhysicalLink(
    fromOpId: PhysicalOpIdentity,
    fromPortId: PortIdentity,
    toOpId: PhysicalOpIdentity,
    toPortId: PortIdentity
)
```

### Frontend ports plan
For now, we add some adapter functions to hook up with the new backend design and fully support backward compatibility. Future plan:
- The frontend has other redundant definitions such as `PortDescription` and `PortProperty` which can be further merged and refactored.
- Also, we need to hook up `dynamic port` with the new backend.

### In-worker ports plan
Workers (both Python and Java) are using ports as well. Right now they are still using the port index, or `PortOrdinal`. We plan to unify them in the future as well.
Commit: 465b6b9 | Author: Shengquan Ni | Committer: GitHub
Integrating Interaction marker propagation into Amber (#2271)

This PR:
1. Added support for interaction (Chandy-Lamport) marker propagation to the existing fries (epoch) marker propagation. Interaction markers propagate through both control and data channels. Unlike reconfiguration markers, alignment for interaction markers is optional.
2. Added a new control called `RetrieveWorkflowState` as an interaction that utilizes the new marker propagation logic. Later, the checkpoint implementation will also use this mechanism.
3. Marked the processing step for an interaction with a special entry so that the worker can replay its processing up to a specific interaction.
4. Removed the support for replaying to an arbitrary step number, since we don't need this in the replay framework.

### Illustration of marker propagation process
(Diagram omitted from this log.)

Co-authored-by: Yicong Huang <17627829+Yicong-Huang@users.noreply.github.com>
Commit: db3ad25 | Author: Shengquan Ni
resolve comments
Commit: 3560833 | Author: Yicong Huang | Committer: GitHub
Merge PhysicalLink and PhysicalLinkIdentity (#2297)

As those two objects now contain equivalent information, we merge them into one. We will use `PhysicalLink` in the future.
Commit: 02f6d0f | Author: Yicong Huang | Committer: GitHub
Refactor execution related identities (#2264)

In this PR we unify jobs and executions.
- Rename all "job"-related services and entities to "execution". We will not use the term "job" in the future.
- Pass the workflow id as `WorkflowIdentity` and explicitly create `ExecutionIdentity` for identifying an execution.
- In dev mode, both identities are set to "1L" by default.
- Pass `WorkflowContext` (which contains all the ids) in each stage of the workflow compilation logic, so that each stage has access to the context information.
Commit: 448ae0a | Author: Yicong Huang | Committer: GitHub
Rename physical level entities (#2261)

This PR refactors the PhysicalPlan with code cleanup and renamings, without functionality changes.

### Renames the physical-plan-level entities (old -> new):
- `OpExecConfig` -> `PhysicalOp`
- `LinkStrategy` -> `PhysicalLink`
- `LinkIdentity` -> `PhysicalLinkIdentity`
- Removes `PartitioningPlan` and merges the contents into `PhysicalPlan`. The partitionings are now properties of each `PhysicalLink`.
- Removes the term `layer` and unifies it to `PhysicalOp`. Each `PhysicalOpIdentity` has a field `layerName` to specify the name of the PhysicalOp.

### Redesigns:
- `PhysicalOp`: changes the port map from
```
inputToOrdinalMapping: Map[PhysicalLinkIdentity, Int] = Map(),
outputToOrdinalMapping: Map[PhysicalLinkIdentity, Int] = Map()
```
to
```
inputPortToLinkMapping: Map[Int, List[PhysicalLink]] = Map(),
outputPortToLinkMapping: Map[Int, List[PhysicalLink]] = Map()
```
- `PhysicalPlan`: changes the content from
```
operators: List[PhysicalOp],
links: List[PhysicalLinkIdentity]
```
to
```
operators: List[PhysicalOp],
links: List[PhysicalLink]
```
Commit: 5d512ea | Author: Yicong Huang
change to PhysicalLinkIdentity

Commit: 955e4e5 | Author: Yicong Huang
clean up

Commit: 570bb3e | Author: Yicong Huang
change to PhysicalLink

Commit: f85df6b | Author: Yicong Huang
fix proto gen

Commit: 586f235 | Author: Yicong Huang
change names to Physical-prefix

Commit: 13cb1fd | Author: Yicong Huang | Committer: GitHub
Rename logical level entities (#2257)

This PR refactors the LogicalPlan with code cleanup and renamings, without functionality changes.

### Renames the logical-plan-level entities (old -> new):
1. `OperatorDescriptor` -> `LogicalOp`
2. `OperatorLink` -> `LogicalLink`
3. `OperatorPort` -> `LogicalPort`

### Redesigns identities (old -> new):
1. `OperatorIdentity(workflow: String, operator: String)` -> `OperatorIdentity(id: String)`: We have been using the `workflow` field in `OperatorIdentity` as the executionId. It is redundant, as we also pass the executionId from `WorkflowIdentity` (see below). This change removes the redundant field.
2. `WorkflowIdentity(id: String)` -> `WorkflowIdentity(executionId: Long)`: We have been using `id` in `WorkflowIdentity` as the executionId; this rename reflects the actual usage of the field.
3. `LayerIdentity(workflow: String, operator: String, layerID: String)` -> `LayerIdentity(operator: String, layerID: String)`: This change belongs to the physical layer; we removed `workflow: String` due to its removal from `OperatorIdentity`.

### As a result, the Logical Plan is changed as follows:
Old:
```
operatorMap: Map[String, OperatorDescriptor] // where String is the operator Id
jgraphtDag: DirectedAcyclicGraph[String, OperatorLink] // where String is the operator Id
```
New:
```
operatorMap: Map[OperatorIdentity, LogicalOp]
jgraphtDag: DirectedAcyclicGraph[OperatorIdentity, LogicalLink]
```
Commit: d76684a | Author: Yicong Huang
rename to executionId

Commit: 5414b36 | Author: Yicong Huang
extract out executionId

Commit: cde854c | Author: Yicong Huang | Committer: GitHub
Use ActorMessage for flow control (#2237)

### Introducing a new type of message: `ActorMessage`
This message is in parallel to `DataMessage` and `ControlMessage`. Each ActorMessage contains an `ActorCommand`, whose handler has the following properties:
1. It will be handled by the receiver (e.g., `NetworkReceiver` on the Python side).
2. It does not access or alter any data processor's state. It can only access, including read and write, the input queue. The operations are:
   - enable/disable the input queue;
   - read from the input queue (e.g., get credit);
   - create and put a headless Control Message into the input queue.

### Flow control with ActorMessage
1. Backpressure implementation: `Backpressure` is now an `ActorCommand`. The handler will disable the data subqueue of the input queue to realize the operator being pressured, as sketched below. This implementation also pressures the source operators (e.g., PythonSourceOp) correctly. We introduced a `NoOp` Control command to resume the Python main loop when backpressure is lifted.
2. PollCredit implementation: `CreditUpdate` is now an `ActorCommand`. The handler does nothing but return the latest queue credit back in its ack, in a synchronized manner.

### Other applications of ActorMessage
ActorMessage is designed for messages that are to be handled by the actor without modifying or altering the Data Processor's state. Besides flow control, it can also be used to implement TimeService messages, in which the actor needs to send itself a message to trigger some actions. This is not to be confused with `ControlMessage`, since control messages will be logged to ensure fault tolerance.

### ActorMessage's fault tolerance and replay
ActorMessage should not be logged and does not need to be replayed.

Co-authored-by: Shengquan Ni <13672781+shengquan-ni@users.noreply.github.com>
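The backpressure handler's contract, restated as a toy sketch (illustrative types, not the actual Texera queue): the command only flips the data subqueue on or off and never touches data-processor state.

```scala
import scala.collection.mutable

// Sketch only: the handler may only touch the input queue, modeled here as a
// data subqueue guarded by an enabled flag.
sealed trait ActorCommand
final case class Backpressure(enable: Boolean) extends ActorCommand

final class InputQueue {
  private val dataSubQueue = mutable.Queue[String]()
  private var dataEnabled = true

  def handle(cmd: ActorCommand): Unit = cmd match {
    case Backpressure(enable) => dataEnabled = !enable // pressure disables data
  }

  def offer(tuple: String): Unit = dataSubQueue.enqueue(tuple)
  def poll(): Option[String] =
    if (dataEnabled && dataSubQueue.nonEmpty) Some(dataSubQueue.dequeue()) else None
}

object BackpressureDemo extends App {
  val q = new InputQueue
  q.offer("t1")
  q.handle(Backpressure(enable = true))
  println(q.poll()) // None: data subqueue disabled under pressure
  q.handle(Backpressure(enable = false))
  println(q.poll()) // Some(t1): resumed once pressure is lifted
}
```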
Commit: fb3030c | Author: Shengquan Ni | Committer: GitHub
Enhance frontend error reporting (#2195)

This PR improves the front-end error reporting process. Here are a few important notes:

### We now differentiate two types of errors:
1. (Editing-time) Static Errors: shown while editing a workflow.
   - Compilation Error: **Unfixable**; an error that happens during 1) operator compilation, 2) workflow compilation, or 3) schema propagation. These errors can be updated while the user edits a workflow.
2. Runtime Errors: errors that happen during an execution.
   - Runtime Exception: **Fixable**; can retry, debug, etc. The original Breakpoints for local operator exceptions are refactored as runtime exceptions.
   - Execution Failure: **Unfixable**; the runtime is dead. This includes errors that happen while initiating a worker and its operator, e.g., Python code compilation, PVM handshake failure, actor crash, etc.

### Error view on the UI:
- This PR adds a new global view of the errors for the entire workflow when the user unhighlights any specific operator.
- For each specific operator, its errors will be displayed in different components:
  1. Unfixable errors, including compilation errors and execution failures, will be displayed in the Static Error Frame of each operator.
  2. Fixable errors, which currently only include runtime errors, will be displayed as a console message in the Console Frame of each operator.

### Demo
(Screenshots omitted from this log: the global Static Error Frame, each operator's Static Error Frame, and runtime exceptions shown in the corresponding operator's Console Frame.)

Co-authored-by: Yicong Huang <17627829+Yicong-Huang@users.noreply.github.com>
Commit: bbd400a | Author: Shengquan Ni
update

Commit: a719181 | Author: Shengquan Ni
update

Commit: a34bce4 | Author: Shengquan Ni
update
Commit: 4b2c091 | Author: Shengquan Ni | Committer: GitHub
Update core/amber/src/main/protobuf/edu/uci/ics/amber/engine/architecture/worker/controlcommands.proto

Co-authored-by: Yicong Huang <17627829+Yicong-Huang@users.noreply.github.com>
Commit: ae22dee | Author: Shengquan Ni
update