Run Background Tasks at Scale

Hatchet Cloud · Documentation · Website · Issues

What is Hatchet?

Hatchet is a platform for running background tasks, built on top of Postgres. Instead of managing your own task queue or pub/sub system, you can use Hatchet to distribute your functions across a set of workers with minimal configuration or infrastructure.

When should I use Hatchet?

Background tasks are critical for offloading work from your main web application. Background tasks are usually sent through a FIFO (first-in-first-out) queue, which helps guard against traffic spikes (queues can absorb a lot of load) and ensures that tasks are retried when your handlers error out. Most stacks begin with a library-based queue backed by Redis or RabbitMQ (like Celery or BullMQ), but as your tasks become more complex, these queues become difficult to debug and monitor, and they start to fail in unexpected ways.

This is where Hatchet comes in. Hatchet is a full-featured background task management platform, with built-in support for chaining complex tasks together into workflows, alerting on failures, making tasks more durable, and viewing tasks in a real-time web dashboard.

Features

📥 Queues

Hatchet is built on a durable task queue that enqueues your tasks and delivers them to your workers at a rate they can handle. Hatchet tracks the progress of each task and ensures that the work gets completed (or you get alerted), even if your application crashes.

This is particularly useful for offloading long-running work from your request path, absorbing traffic spikes, and guaranteeing that failed tasks are retried.

Read more ➶
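
To make the model concrete, here is a minimal sketch using the Hatchet Python SDK. The shape follows recent v1-style SDKs (`EmptyModel`, the `workflows=` worker argument); treat exact names and signatures as assumptions and check the docs for your SDK version.

```python
from hatchet_sdk import Context, EmptyModel, Hatchet

hatchet = Hatchet()  # reads connection details (e.g. HATCHET_CLIENT_TOKEN) from the environment

# Declaring a function as a task makes it enqueueable; Hatchet delivers
# runs to workers at a rate they can handle and retries on failure.
@hatchet.task(name="send-welcome-email")
def send_welcome_email(input: EmptyModel, ctx: Context) -> dict:
    # ... do the actual work here ...
    return {"status": "sent"}

def main() -> None:
    # A worker registers the tasks it can execute and starts polling.
    worker = hatchet.worker("email-worker", workflows=[send_welcome_email])
    worker.start()

if __name__ == "__main__":
    main()
```

From your application code, something like `send_welcome_email.run()` then enqueues a run and waits for the result; the SDKs also offer fire-and-forget variants.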

🎻 Task Orchestration

Hatchet allows you to build complex workflows that can be composed of multiple tasks. For example, if you'd like to break a workload into smaller tasks, you can use Hatchet to create a fanout workflow that spawns multiple tasks in parallel.

Hatchet supports several mechanisms for task orchestration, including DAG-based workflows, procedurally spawned child tasks (fanout), conditional triggering, and durable execution.
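
For example, a fanout workflow might look like the following sketch. `aio_run` and `input_validator` are our reading of the Python SDK's API and may differ by version; the structure (a parent task spawning children in parallel) is the point.

```python
import asyncio

from hatchet_sdk import Context, Hatchet
from pydantic import BaseModel

hatchet = Hatchet()

class ChildInput(BaseModel):
    index: int

class ParentInput(BaseModel):
    n: int

@hatchet.task(input_validator=ChildInput)
def process_chunk(input: ChildInput, ctx: Context) -> dict:
    # Each child handles one slice of the workload.
    return {"index": input.index, "ok": True}

@hatchet.task(input_validator=ParentInput)
async def fanout_parent(input: ParentInput, ctx: Context) -> dict:
    # Spawn one child run per chunk and wait for all of them to finish.
    results = await asyncio.gather(
        *(process_chunk.aio_run(ChildInput(index=i)) for i in range(input.n))
    )
    return {"children_completed": len(results)}
```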

🚦 Flow Control

Don't let busy users crash your application. With Hatchet, you can throttle execution on a per-user, per-tenant, and per-queue basis, increasing system stability and limiting the impact of busy users on the rest of your system.

Hatchet's flow control primitives include concurrency limits and rate limits, both of which can be keyed on properties like user or tenant ID.
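
As a sketch, per-user concurrency control might be declared like this. The `ConcurrencyExpression` name, its CEL-style `input.user_id` expression, and the strategy enum are assumptions based on the Python SDK; verify against the current docs.

```python
from hatchet_sdk import (
    ConcurrencyExpression,
    ConcurrencyLimitStrategy,
    Context,
    Hatchet,
)
from pydantic import BaseModel

hatchet = Hatchet()

class JobInput(BaseModel):
    user_id: str

# At most 2 concurrent runs per user_id; additional runs for the same
# user wait in the queue rather than starving everyone else.
@hatchet.task(
    input_validator=JobInput,
    concurrency=ConcurrencyExpression(
        expression="input.user_id",
        max_runs=2,
        limit_strategy=ConcurrencyLimitStrategy.GROUP_ROUND_ROBIN,
    ),
)
def per_user_job(input: JobInput, ctx: Context) -> None:
    ...  # work that should never be monopolized by one busy user
```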

📅 Scheduling

Hatchet has full support for scheduling, including cron, one-time scheduled runs, and pausing execution for a specified duration. This is particularly useful for recurring jobs and for tasks that must run at a specific time in the future.
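
A cron-triggered task is a one-liner in the SDKs; this sketch assumes the Python SDK's `on_crons` parameter (names may vary by version):

```python
from hatchet_sdk import Context, EmptyModel, Hatchet

hatchet = Hatchet()

# Runs every day at 02:00 UTC, with the same retry, monitoring, and
# alerting behavior as any other task.
@hatchet.task(name="nightly-cleanup", on_crons=["0 2 * * *"])
def nightly_cleanup(input: EmptyModel, ctx: Context) -> None:
    ...  # e.g. expire stale records
```

One-time runs at a specific datetime are created through a separate scheduling call in the SDKs rather than the decorator.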

🚏 Task routing

By default, Hatchet behaves as a FIFO queue, but it also supports additional scheduling mechanisms that route your tasks to the ideal worker.
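
For instance, label-based routing (worker affinity) might look like the following. `DesiredWorkerLabel` and the `labels=`/`desired_worker_labels=` parameters reflect our reading of the SDK's affinity API and should be treated as assumptions:

```python
from hatchet_sdk import Context, DesiredWorkerLabel, EmptyModel, Hatchet

hatchet = Hatchet()

# Only run this task on workers that advertise a GPU.
@hatchet.task(
    name="gpu-inference",
    desired_worker_labels={"gpu": DesiredWorkerLabel(value="true", required=True)},
)
def gpu_inference(input: EmptyModel, ctx: Context) -> None:
    ...

def main() -> None:
    # Workers advertise their labels at registration time.
    worker = hatchet.worker(
        "gpu-worker",
        labels={"gpu": "true"},
        workflows=[gpu_inference],
    )
    worker.start()
```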

⚡️ Event triggers and listeners

Hatchet supports event-based architectures: tasks can be triggered by external events, and running tasks and workflows can pause execution while waiting for a specific event to arrive.
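
As a sketch, event-triggered tasks look roughly like this in the Python SDK (`on_events` and `hatchet.event.push` are the calls we believe exist; confirm against the docs):

```python
from hatchet_sdk import Context, EmptyModel, Hatchet

hatchet = Hatchet()

# This task runs whenever a matching event is pushed to Hatchet.
@hatchet.task(name="on-user-created", on_events=["user:created"])
def on_user_created(input: EmptyModel, ctx: Context) -> None:
    ...  # e.g. provision resources for the new user

def notify_user_created(user_id: str) -> None:
    # Called from your application code: push the event that triggers
    # every task listening on "user:created".
    hatchet.event.push("user:created", {"user_id": user_id})
```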

🖥️ Real-time Web UI

Hatchet comes bundled with a number of features to help you monitor your tasks, workflows, and queues.

Real-time dashboards and metrics

Monitor your tasks, workflows, and queues with live updates to quickly detect issues. Alerting is built in so you can respond to problems as soon as they occur.

https://github.com/user-attachments/assets/b1797540-c9da-4057-b50f-4780f52a2cb9

Logging

Hatchet supports logging from your tasks, allowing you to easily correlate task failures with logs in your system. No more digging through your logging service to figure out why your tasks failed.

https://github.com/user-attachments/assets/427c15cd-8842-4b54-ab2e-3b1cabc01c7b
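
A sketch of task-level logging, assuming the context object exposes a `log` method as in the Python SDK:

```python
from hatchet_sdk import Context, EmptyModel, Hatchet

hatchet = Hatchet()

@hatchet.task(name="logged-task")
def logged_task(input: EmptyModel, ctx: Context) -> None:
    # These lines are captured and displayed next to this specific run
    # in the dashboard, so failures are easy to correlate with logs.
    ctx.log("starting expensive step")
    ctx.log("finished expensive step")
```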

Alerting

Hatchet supports Slack and email-based alerting for when your tasks fail. Alerts are real-time with adjustable alerting windows.

Quick Start

Hatchet is available as a cloud version or self-hosted. See the quickstart guides in the documentation to get up and running quickly with either option.

Documentation

The most up-to-date documentation can be found at https://docs.hatchet.run.

Community & Support

For questions, support, or discussion, join us on Discord or open a GitHub issue.

Hatchet vs...

Hatchet vs Temporal

Hatchet is designed to be a general-purpose task orchestration platform: it can be used as a queue, a DAG-based orchestrator, a durable execution engine, or all three. As a result, Hatchet covers a wider array of use cases, like multiple queueing strategies, rate limiting, DAG features, conditional triggering, streaming features, and much more.

Temporal is narrowly focused on durable execution, but it supports a wider range of database backends and result stores, like Apache Cassandra, MySQL, PostgreSQL, and SQLite.

When to use Hatchet: when you'd like more control over the underlying queue logic, want to run DAG-based workflows, or want to simplify self-hosting by running only the Hatchet engine and Postgres.

When to use Temporal: when you'd like to use a non-Postgres result store, or when your workload is purely suited to durable execution.

Hatchet vs Task Queues (BullMQ, Celery)

Hatchet is a durable task queue, meaning it persists the history of all executions (up to a retention period). This allows for easy monitoring and debugging and powers many of the durability features above. This isn't the standard behavior of Celery and BullMQ, where you need to rely on third-party UI tools, like Celery Flower, that are extremely limited in functionality.

When to use Hatchet: when you'd like results to be persisted and observable in a UI.

When to use a task queue library like BullMQ/Celery: when you need very high throughput (>10k tasks/s) without retention, or when you'd like to use a single library (instead of a standalone service like Hatchet) to interact with your queue.

Hatchet vs DAG-based platforms (Airflow, Prefect, Dagster)

These tools are usually built with data engineers in mind, and aren’t designed to run as part of a high-volume application. They’re usually higher latency and higher cost, with their primary selling point being integrations with common datastores and connectors.

When to use Hatchet: when you'd like to use a DAG-based framework, write your own integrations and functions, and need higher throughput (>100 tasks/s).

When to use other DAG-based platforms: when you'd like to use the data stores and connectors that they support out of the box.

Hatchet vs AI Frameworks

Most AI frameworks are built to run in-memory, with horizontal scaling and durability as an afterthought. While you can use an AI framework in conjunction with Hatchet, most of our users discard their AI framework and use Hatchet’s primitives to build their applications.

When to use Hatchet: when you'd like full control over your underlying functions and LLM calls, or you require high availability and durability for your functions.

When to use an AI framework: when you'd like to get started quickly with simple abstractions.

Issues

Please submit any bugs that you encounter via GitHub issues.

I'd Like to Contribute

Please let us know what you're interested in working on in the #contributing channel on Discord. This will help us shape the direction of the project and will make collaboration much easier!