FeatHub is a stream-batch unified feature store that simplifies feature development, deployment, monitoring, and sharing for machine learning applications.

Introduction

FeatHub is an open-source feature store designed to simplify the development and deployment of machine learning models. It supports feature ETL and provides an easy-to-use Python SDK that abstracts away the complexities of point-in-time correctness needed to avoid training-serving skew. With FeatHub, data scientists can speed up the feature deployment process and optimize feature ETL by automatically compiling declarative feature definitions into performant distributed ETL jobs using state-of-the-art computation engines of their choice, such as Flink or Spark.
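Point-in-time correctness means that, when building training data, each training row may only use the latest feature value observed at or before that row's timestamp; using a later value would leak future information and cause training-serving skew. FeatHub handles this automatically, but the semantics can be sketched in a few lines of standalone Python (illustrative only, not FeatHub code; all names here are made up for the example):

```python
from bisect import bisect_right

def point_in_time_value(updates, ts):
    """Return the latest feature value whose timestamp is <= ts.

    `updates` is a list of (timestamp, value) pairs sorted by timestamp.
    Taking any value with a timestamp after `ts` would leak future
    information into the training data.
    """
    i = bisect_right([t for t, _ in updates], ts)
    return updates[i - 1][1] if i > 0 else None

# Price updates for one item, as (epoch_seconds, price) pairs.
price_updates = [(100, 9.99), (200, 12.49), (300, 11.00)]

# For a training row observed at t=250, only the t=200 update is visible.
point_in_time_value(price_updates, 250)
```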

Check out the Documentation for guidance on compute engines, connectors, the expression language, and more.

Core Benefits

Similar to other feature stores, FeatHub provides the following core benefits:

In addition to the above benefits, FeatHub provides several architectural benefits compared to other feature stores, including:

Usability is a crucial factor that sets feature store projects apart. Our SDK is designed to be Pythonic, declarative, intuitive, and highly expressive, supporting all the necessary feature transformations. We understand that a feature store's success depends on its usability, as it directly affects developers' productivity. Check out the FeatHub SDK Highlights section below to learn more about the usability of our SDK.

What you can do with FeatHub

With FeatHub, you can:

Architecture Overview

The architecture of FeatHub and its key components are shown in the figure below.

The workflow of defining, computing, and serving features using FeatHub is illustrated in the figure below.

See Basic Concepts for more details about the key components in FeatHub.

Supported Compute Engines

FeatHub supports the following compute engines to execute feature ETL pipelines:

FeatHub SDK Highlights

The following examples demonstrate how to define a variety of features concisely using FeatHub SDK. See FeatHub SDK for more details.

See NYC Taxi Demo to learn more about how to define, generate and serve features using FeatHub SDK.

# A feature obtained by joining the latest "price" from the
# "price_update_events" table, keyed by item_id.
f_price = Feature(
    name="price",
    transform=JoinTransform(
        table_name="price_update_events",
        feature_name="price"
    ),
    keys=["item_id"],
)

# Total payment per user over the last 2 minutes, recomputed for each
# incoming row (over window).
f_total_payment_last_two_minutes = Feature(
    name="total_payment_last_two_minutes",
    transform=OverWindowTransform(
        expr="item_count * price",
        agg_func="SUM",
        window_size=timedelta(minutes=2),
        group_by_keys=["user_id"]
    )
)

# Total payment per user over the last 2 minutes, emitted once per
# minute (sliding window).
f_total_payment_last_two_minutes = Feature(
    name="total_payment_last_two_minutes",
    transform=SlidingWindowTransform(
        expr="item_count * price",
        agg_func="SUM",
        window_size=timedelta(minutes=2),
        step_size=timedelta(minutes=1),
        group_by_keys=["user_id"]
    )
)

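Unlike the over window above, which recomputes the aggregate on every incoming row, the sliding window emits one result per step. The semantics can be sketched in plain Python (illustrative only; the function and variable names are made up for this example):

```python
def sliding_sums(events, window, step, start, end):
    """events: list of (timestamp_seconds, value) pairs.

    Emit (window_end, total) once per step, where total sums the values
    with window_end - window < timestamp <= window_end, mirroring a
    sliding-window SUM aggregation.
    """
    results = []
    t = start + step
    while t <= end:
        total = sum(v for ts, v in events if t - window < ts <= t)
        results.append((t, total))
        t += step
    return results

# Payments of 5.0, 3.0, and 2.0 at t=30s, 90s, and 150s.
events = [(30, 5.0), (90, 3.0), (150, 2.0)]

# 2-minute window, 1-minute step, over the first 3 minutes.
sums = sliding_sums(events, window=120, step=60, start=0, end=180)
```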
# Trip duration in seconds, written in FeatHub's built-in expression
# language.
f_trip_time_duration = Feature(
    name="f_trip_time_duration",
    transform="UNIX_TIMESTAMP(taxi_dropoff_datetime) - UNIX_TIMESTAMP(taxi_pickup_datetime)",
)

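For intuition, the expression above subtracts two epoch timestamps to get a duration in seconds. The equivalent computation in plain Python looks like this (the helper name and sample timestamps are made up for the example; UTC is assumed):

```python
from datetime import datetime, timezone

def unix_timestamp(s):
    # Parse "YYYY-MM-DD HH:MM:SS" as UTC and return epoch seconds,
    # mirroring the UNIX_TIMESTAMP calls in the expression above.
    return int(
        datetime.strptime(s, "%Y-%m-%d %H:%M:%S")
        .replace(tzinfo=timezone.utc)
        .timestamp()
    )

# A 25-minute trip yields a duration of 1500 seconds.
duration = (
    unix_timestamp("2023-01-01 10:30:00")
    - unix_timestamp("2023-01-01 10:05:00")
)
```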
# A feature computed by applying a Python UDF to each row.
f_lower_case_name = Feature(
    name="lower_case_name",
    dtype=types.String,
    transform=PythonUdfTransform(lambda row: row["name"].lower()),
)
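The callable passed to PythonUdfTransform receives a row and returns the feature value, so it can be sanity-checked on a plain dict without any FeatHub runtime (the variable names below are made up for the example):

```python
# The UDF is an ordinary Python callable that maps a row (a mapping
# from column name to value) to the feature value.
lower_case_name = lambda row: row["name"].lower()

# Exercise it directly on a plain dict.
result = lower_case_name({"name": "Alice SMITH"})
```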

User Guide

Check out the Documentation for guidance on compute engines, connectors, the expression language, and more.

Prerequisites

To run FeatHub installed with pip, you need the following:

Install FeatHub Nightly Build

To install the nightly version of FeatHub and the corresponding extra requirements based on the compute engine you plan to use, run one of the following commands:

# Run the following command if you plan to run FeatHub using a local process
$ python -m pip install --upgrade feathub-nightly

# Run the following command if you plan to use an Apache Flink cluster
$ python -m pip install --upgrade "feathub-nightly[flink]"

# Run the following command if you plan to use an Apache Spark cluster, or to
# use Spark-supported storage in a local process.
$ python -m pip install --upgrade "feathub-nightly[spark]"

Quickstart

Quickstart using Local Processor

Execute the following command to compute the features defined in nyc_taxi.py in a local Python process.

$ python python/feathub/examples/nyc_taxi.py

Quickstart using Flink Processor

You can use the following quickstart guides to compute features in a Flink cluster with different deployment modes:

Quickstart using Spark Processor

You can use the following quickstart guides to compute features in a standalone Spark cluster.

Examples

The following examples can be run on Google Colab.

NYC Taxi Demo: Quickstart notebook that demonstrates how to define, extract, transform, and materialize features with NYC taxi-fare prediction sample data.
Feature Embedding Demo: FeatHub UDF example showing how to define and use feature embedding with a pre-trained Transformer model and hotel review sample data.
Fraud Detection Demo: An example demonstrating usage with multiple data sources, such as user account and transaction data.

Examples in this repo can be run using docker-compose.

Developer Guide

Prerequisites

You need the following to build FeatHub from source:

Install Development Dependencies

  1. Install the required Python libraries.
$ python -m pip install -r python/dev-requirements.txt
  2. Start the Docker engine and pull the required images.
$ docker image pull redis:latest
$ docker image pull confluentinc/cp-kafka:5.4.3
  3. Increase the open file limit to at least 1024.
$ ulimit -n 1024

Build and Install FeatHub from Source

$ mvn clean package -DskipTests -f ./java
$ python -m pip install "./python[flink]"
$ python -m pip install "./python[spark]"

Run Tests

Please execute the following commands from FeatHub's root folder to run the tests.

$ mvn clean package -f ./java
$ pytest --tb=line -W ignore::DeprecationWarning ./python

While the commands above cover most of FeatHub's tests, some of the FlinkProcessor's Python tests, such as those related to the Parquet format, are ignored by default because they require a Hadoop environment to function correctly. To run these tests, install Hadoop on your local machine and set the following environment variable before executing the commands above.

export FEATHUB_TEST_HADOOP_CLASSPATH=`hadoop classpath`

You may refer to Flink's Hive connector documentation for the supported Hadoop and Hive versions.

Format Code Style

FeatHub uses the following tools to maintain code quality:

Before uploading pull requests (PRs) for review, format the code, check the code style, and check the type annotations using the following commands:

# Format python code
$ python -m black ./python

# Check python code style
$ python -m flake8 --config=python/setup.cfg ./python

# Check python type annotation
$ python -m mypy --config-file python/setup.cfg ./python

Roadmap

Here is a list of key features that we plan to support:

Contact Us

We recommend that Chinese-speaking users join the DingTalk group below for questions and discussion. To do so, you first need to join the "Apache Flink China" DingTalk organization via this link.

English-speaking users can use this invitation link to join our Slack channel for questions and discussion.

We are actively looking for user feedback and contributors from the community. Please feel free to create pull requests and open GitHub issues with feedback and feature requests.

Come join us!

Additional Resources