Used in:
Indices refer to 'original_text' of a question or 'text' of a cell. Inclusive begin byte.
Exclusive end byte.
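As a minimal illustration (not part of the schema), such a span can be read in Python as follows; the string and offsets below are made up:

    # Inclusive begin byte, exclusive end byte, applied to the UTF-8 bytes of the text.
    text = "Population of France in 2020"
    begin_byte, end_byte = 24, 28
    span = text.encode("utf-8")[begin_byte:end_byte].decode("utf-8")
    assert span == "2020"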
An identifier, for example a Wikipedia URL.
For each entity that appears in the interaction, the map has a textual description of the entity, like the first section of its Wikipedia page.
Used in:
Coordinates of cells that contain the answers.
Answers in text format.
Present if the answer can be represented as a single float value, for example produced by an aggregation ('the average population of all countries').
If true, this answer can be used to construct training/test examples. If false, some error was triggered while parsing this answer.
Present if the answer can be represented as a single integer value, for example when it's a classification or entailment task.
A function that is applied to the answer cells in order to obtain the final answer.
Used in:
Sums all cell values. Numeric cells only.
Averages all cell values. Numeric cells only.
Counts the number of answers.
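As an illustrative sketch (the helper below is hypothetical, not part of the schema), applying one of these aggregation functions to numeric answer cell values could look like this:

    # Hypothetical helper: reduces numeric answer cell values with one of the
    # aggregation functions listed above.
    def aggregate(cell_values, function):
        if function == "SUM":
            return sum(cell_values)
        if function == "AVERAGE":
            return sum(cell_values) / len(cell_values)
        if function == "COUNT":
            return len(cell_values)
        raise ValueError("Unsupported aggregation: %s" % function)

    aggregate([2.0, 3.0, 4.0], "AVERAGE")  # 3.0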
Used in:
Uses the average cosine similarity to score the tokens.
Used in:
Enables the use of positional embeddings to compute the average cosine similarity.
The loss used to learn the weights.
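One plausible reading of this scorer, sketched with NumPy (an assumption, not the actual implementation): each token is scored by its average cosine similarity to all tokens, with positional embeddings optionally added first.

    import numpy as np

    def avg_cos_similarity_scores(token_embeddings, positional_embeddings=None):
        # token_embeddings: [num_tokens, dim]; positional embeddings are added
        # first when the option above is enabled.
        emb = token_embeddings
        if positional_embeddings is not None:
            emb = emb + positional_embeddings
        normed = emb / np.linalg.norm(emb, axis=-1, keepdims=True)
        sim = normed @ normed.T          # pairwise cosine similarities
        return sim.mean(axis=-1)         # average similarity per token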
Used in:
Used in:
Used in:
Selects the first k tokens up to max_num_tokens. If max_num_tokens = tapas_max_num_tokens, no table pruning is used.
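A minimal sketch of this strategy (illustrative only): plain truncation of the token sequence.

    # First-K selection: keep the first max_num_tokens tokens, drop the rest.
    def select_first_k(tokens, max_num_tokens):
        return tokens[:max_num_tokens]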
Used in:
(message has no fields)
An interaction represents a sequence of questions answerable from a single table.
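As an illustrative sketch (field names here are approximate and not guaranteed to match the proto), an interaction bundles one table with the sequence of questions asked against it:

    # Hypothetical plain-Python view of an interaction.
    interaction = {
        "id": "example-interaction-1",
        "table": {
            "columns": ["Country", "Population"],
            "rows": [["France", "67 million"], ["Italy", "59 million"]],
        },
        "questions": [
            {"original_text": "What is the population of France?"},
            {"original_text": "And of Italy?"},
        ],
    }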
The loss used to learn the weights.
Used in:
Enables the pruning model to use a loss similar to the TAPAS model's loss.
The hard selection strategy used for training and/or for testing.
Used in:
Used in:
No hard selection is used; all the tokens are selected.
Selects the best tokens up to max_num_tokens. Returns the TOP_K scores.
Selects the best tokens up to max_num_tokens. Returns the TOP_K mask with values in {0, 1}.
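A sketch of one plausible reading of the two TOP_K variants (illustrative, not the actual implementation):

    import numpy as np

    def top_k_scores(scores, max_num_tokens):
        # Keep the best max_num_tokens scores; zero out the others.
        keep = np.argsort(scores)[::-1][:max_num_tokens]
        out = np.zeros_like(scores)
        out[keep] = scores[keep]
        return out

    def top_k_mask(scores, max_num_tokens):
        # Same selection, but returns a 0/1 mask instead of the scores.
        keep = np.argsort(scores)[::-1][:max_num_tokens]
        mask = np.zeros_like(scores)
        mask[keep] = 1.0
        return mask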
Uses an unsupervised model to learn the required columns. Back-propagation is always activated.
Used in:
Used in:
No regularization is used.
Computes L1 over all the token scores.
Computes L2 over all the token scores.
Computes L1 over each token sequence, then L2 over the batch.
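Sketched below (the exact reductions are an assumption, not the actual implementation), these regularizers over a batch of token scores could look like:

    import numpy as np

    def l1(scores):
        # L1 over all token scores.
        return np.abs(scores).sum()

    def l2(scores):
        # L2 over all token scores.
        return np.sqrt((scores ** 2).sum())

    def l1_then_l2(scores):
        # scores: [batch, seq_len]. L1 over each token sequence,
        # then L2 over the batch.
        per_sequence = np.abs(scores).sum(axis=-1)
        return np.sqrt((per_sequence ** 2).sum())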
Used in:
One negative table.
If the table was retrieved by the baseline, its rank.
The similarity score between the negative table and the question. A positive score represents high similarity.
Used in:
The examples correspond to the negative tables.
Used in:
Used in:
Used in:
Used in:
The question string after normalization.
The original raw question string.
Numeric value spans in 'text'.
Uses a TAPAS model to score the columns or the tokens.
Used in:
The loss used to learn the weights.
Specifies whether to use the column scores or the token scores.
Used in:
Represents a simple table with m rows and n columns.
Used in:
The names of the n columns.
m rows containing n cells each.
Some unique identifier of this table.
The title of the document the table appears in.
Title or caption of the table.
The URL the table was found on.
Other versions of the same document that the table occurs on.
Other versions of the same table.
Heading of the table on the document.
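For illustration only (field names are approximate), a populated table with its document metadata might look like:

    # Hypothetical plain-Python view of a table and its provenance fields.
    table = {
        "table_id": "wikipedia:countries-by-population",
        "document_title": "List of countries by population",
        "caption": "Countries by population (2020)",
        "document_url": "https://en.wikipedia.org/wiki/List_of_countries_by_population",
        "columns": ["Country", "Population"],
        "rows": [["France", "67 million"], ["Italy", "59 million"]],
    }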
Options for table pruning models.
Used in:
Used in:
Index of the column.
The score assigned by some scorer.
True if the column was selected as relevant by some column selector.
True if the column is needed to find the final answer (gold data).
Used in:
Model predictions for the unmodified inputs.
Models 2, 3, and 4 didn't answer the question correctly even when running on the whole table. When column 0 was removed from the input, model 1 answered the question correctly and model 5 incorrectly.
Used in:
Column index.
Used in:
Identifier of the model that produced this result.
Whether the model's prediction is correct.
Tokens that should be added to the TF example. Must not be empty!
Used in:
The header row has index 0, the first data row index 1.
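A minimal illustration of this indexing convention (the helper below is hypothetical):

    # Row index 0 addresses the header row; row index i >= 1 addresses data row i - 1.
    columns = ["Country", "Population"]
    rows = [["France", "67 million"], ["Italy", "59 million"]]

    def cell_at(row_index, column_index):
        if row_index == 0:
            return columns[column_index]
        return rows[row_index - 1][column_index]

    assert cell_at(0, 1) == "Population"
    assert cell_at(2, 0) == "Italy"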