package yggdrasil_decision_forests.model.gradient_boosted_trees.proto

message GradientBoostedTreesSerializedModel

gradient_boosted_trees.proto:124

message GradientBoostedTreesTrainingConfig

gradient_boosted_trees.proto:27

Training configuration for the Gradient Boosted Trees algorithm.

Next ID: 39

Used in: distributed_gradient_boosted_trees.proto.DistributedGradientBoostedTreesTrainingConfig

message GradientBoostedTreesTrainingConfig.DART

gradient_boosted_trees.proto:243

Used in: GradientBoostedTreesTrainingConfig

enum GradientBoostedTreesTrainingConfig.EarlyStopping

gradient_boosted_trees.proto:146

Decision trees are trained sequentially. Training too many trees leads to overfitting of the training dataset. The "early stopping" policy controls the detection of such overfitting and halts the training (before "num_trees" trees have been trained). Overfitting is estimated on the validation dataset, so "validation_set_ratio" should be >0 if early stopping is enabled. The early stopping policy runs every "validation_interval_in_trees" trees; the number of trees in the final model is therefore a multiple of "validation_interval_in_trees".

Used in: GradientBoostedTreesTrainingConfig
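
As a rough illustration of this policy, here is a minimal Python sketch (not YDF code; "train_one_tree", "validation_loss", and the loop structure are all assumptions for illustration) of a loop that evaluates the validation loss every "validation_interval_in_trees" trees and stops once it no longer improves:

  def train_with_early_stopping(train_one_tree, validation_loss, num_trees,
                                validation_interval_in_trees):
      """Sketch of a validation-based early-stopping loop (hypothetical).

      `train_one_tree` appends one tree to the ensemble; `validation_loss`
      evaluates the current ensemble on the held-out validation dataset.
      """
      best_loss = float("inf")
      best_num_trees = 0
      for tree_idx in range(1, num_trees + 1):
          train_one_tree()
          # The policy only runs every "validation_interval_in_trees" trees,
          # which is why the final model size is a multiple of that interval.
          if tree_idx % validation_interval_in_trees == 0:
              loss = validation_loss()
              if loss < best_loss:
                  best_loss, best_num_trees = loss, tree_idx
              else:
                  break  # Validation loss stopped improving: likely overfitting.
      return best_num_trees

The actual behavior is selected by the EarlyStopping enum below; the sketch corresponds to the simplest "stop when the validation loss degrades" style of policy, and other policies may instead train all trees and keep the best prefix.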

message GradientBoostedTreesTrainingConfig.GradientOneSideSampling

gradient_boosted_trees.proto:311

"Gradient-based One-Side Sampling" (GOSS) is a sampling algorithm proposed in the following paper: "LightGBM: A Highly Efficient Gradient Boosting Decision Tree.' The paper claims that GOSS speeds up training without hurting quality by way of a clever sub-sampling methodology. Briefly, at the start of every iteration, the algorithm selects a subset of examples for training. It does so by sorting examples in decreasing order of absolute gradients, placing the top \alpha percent into the subset, and finally sampling \beta percent of the remaining examples.

Used in: GradientBoostedTreesTrainingConfig
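
The selection step described above can be sketched as follows (illustrative Python, not the YDF implementation; the function name and array-based interface are assumptions):

  import numpy as np

  def goss_select(gradients, alpha, beta, rng=None):
      """Selects example indices with Gradient-based One-Side Sampling.

      Keeps the `alpha` fraction of examples with the largest absolute
      gradients, then uniformly samples a `beta` fraction of the rest.
      """
      rng = rng or np.random.default_rng()
      gradients = np.asarray(gradients)
      # Example indices sorted by decreasing absolute gradient.
      order = np.argsort(-np.abs(gradients))
      top_k = int(alpha * len(gradients))
      top, rest = order[:top_k], order[top_k:]
      # Uniformly sample a `beta` fraction of the remaining examples.
      sampled = rng.choice(rest, size=int(beta * len(rest)), replace=False)
      return np.concatenate([top, sampled])

Note that the LightGBM paper additionally up-weights the sampled small-gradient examples by a constant factor so that the estimated information gain stays unbiased; that correction is omitted from the sketch.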

message GradientBoostedTreesTrainingConfig.Internal

gradient_boosted_trees.proto:360

Used in: GradientBoostedTreesTrainingConfig

message GradientBoostedTreesTrainingConfig.MART

gradient_boosted_trees.proto:241

Used in: GradientBoostedTreesTrainingConfig

(message has no fields)

message GradientBoostedTreesTrainingConfig.SampleInMemory

gradient_boosted_trees.proto:101

Used in: GradientBoostedTreesTrainingConfig

(message has no fields)

message GradientBoostedTreesTrainingConfig.SampleWithShards

gradient_boosted_trees.proto:103

Used in: GradientBoostedTreesTrainingConfig

message GradientBoostedTreesTrainingConfig.SelectiveGradientBoosting

gradient_boosted_trees.proto:296

Selective Gradient Boosting (SelGB) is a method proposed in the SIGIR 2018 paper "Selective Gradient Boosting for Effective Learning to Rank" by Lucchese et al. The algorithm always selects all positive examples but only the most difficult negative examples (i.e., those with the largest scores). Note: Selective Gradient Boosting is only available for ranking tasks; it is disabled for all other tasks.

Used in: GradientBoostedTreesTrainingConfig
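
A sketch of this selection rule (illustrative Python only; the name and array-based interface are assumptions, not the YDF implementation):

  import numpy as np

  def selgb_select(relevance, scores, negative_fraction):
      """Keeps all positive examples plus the hardest negatives.

      `relevance` holds the ranking labels and `scores` the current model
      predictions; both are 1-D NumPy arrays over the training examples.
      """
      positives = np.flatnonzero(relevance > 0)
      negatives = np.flatnonzero(relevance <= 0)
      # High-scoring negatives are the "difficult" ones: the current model
      # confuses them with positive examples.
      hardest_first = negatives[np.argsort(-scores[negatives])]
      keep = hardest_first[: int(negative_fraction * len(negatives))]
      return np.concatenate([positives, keep])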

message GradientBoostedTreesTrainingConfig.StochasticGradientBoosting

gradient_boosted_trees.proto:330

Stochastic Gradient Boosting samples training examples uniformly at random at each iteration.

Used in: GradientBoostedTreesTrainingConfig
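
For contrast with the samplers above, a one-step sketch (illustrative Python; the name and parameters are assumptions):

  import numpy as np

  def stochastic_gb_select(num_examples, sampling_ratio, rng=None):
      """Uniform random subsample drawn at each boosting iteration."""
      rng = rng or np.random.default_rng()
      return rng.choice(num_examples, size=int(sampling_ratio * num_examples),
                        replace=False)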

message Header

gradient_boosted_trees.proto:24

Header for the gradient boosted trees model.

Next ID: 10

Used in: GradientBoostedTreesSerializedModel

enum Loss

gradient_boosted_trees.proto:52

Used in: GradientBoostedTreesTrainingConfig, Header

message LossConfiguration

gradient_boosted_trees.proto:133

Used in: Header

message LossConfiguration.BinaryFocalLossOptions

gradient_boosted_trees.proto:175

Used in: GradientBoostedTreesTrainingConfig, LossConfiguration

message LossConfiguration.LambdaMartNdcg

gradient_boosted_trees.proto:134

Used in: GradientBoostedTreesTrainingConfig, LossConfiguration

message LossConfiguration.XeNdcg

gradient_boosted_trees.proto:151

Used in: GradientBoostedTreesTrainingConfig, LossConfiguration

enum LossConfiguration.XeNdcg.Gamma

gradient_boosted_trees.proto:152

Used in: XeNdcg

message TrainingLogs

gradient_boosted_trees.proto:81

Training log. This proto is generated during model training and optionally exported (as a plot) to the training logs directory.

Used in: Header

message TrainingLogs.Entry

gradient_boosted_trees.proto:97

Used in: TrainingLogs