Sifflet implements data quality checks ranging from the simple (detecting null values) to the very complex (time-series forecasting models that validate that the distribution of a set of columns hasn't changed in an unexpected way, taking seasonality and one-off events into account). This foundation powers many features, such as automatically merging related alerts into incidents.
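To give a flavour of that spectrum, here is an illustrative sketch (not Sifflet's actual engine, and the function names are hypothetical): a basic null-rate check, escalated into a crude anomaly test that compares the current rate against historical values.

```python
import statistics

def null_rate(rows, column):
    """Fraction of rows where `column` is None."""
    values = [row.get(column) for row in rows]
    return sum(v is None for v in values) / len(values)

def is_anomalous(history, current, threshold=3.0):
    """Flag `current` if it deviates more than `threshold` standard
    deviations from historical rates. A real check would go further and
    model seasonality and one-off events, as described above."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return current != mean
    return abs(current - mean) / stdev > threshold

rows = [{"id": 1, "email": "a@x.io"}, {"id": 2, "email": None}]
rate = null_rate(rows, "email")                    # 0.5
print(is_anomalous([0.01, 0.02, 0.01, 0.02], rate))  # True
```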
The monitoring team is responsible for evolving the data quality checks performed by Sifflet, and all related features. As a member of this team, you will:
design and implement new types of data quality checks
build features to allow users to efficiently monitor their entire data stack, such as automated monitor suggestions
design advanced solutions to cut the alerting noise, such as automated incident root cause analysis
scale our monitoring engine to support a growing number of customers, some of whom require monitoring massive data sets
add support for automated data profiling (understanding the expected distribution of values in all columns), and build a statistical root cause analysis feature on top of that
build an API to allow users to submit their own asset quality metrics, and surface unmonitored assets to users
design and implement automated quality check suggestions
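As a rough idea of what automated data profiling involves, the hypothetical sketch below summarises the distribution of values in a single column, so that later checks can compare fresh data against the expected profile (the `profile_column` helper is an assumption for illustration, not Sifflet's API):

```python
from collections import Counter

def profile_column(values):
    """Summarise one column: size, null ratio, cardinality, and the
    most frequent values."""
    non_null = [v for v in values if v is not None]
    counts = Counter(non_null)
    return {
        "row_count": len(values),
        "null_ratio": values.count(None) / len(values) if values else 0.0,
        "distinct_count": len(counts),
        "top_values": counts.most_common(3),
    }

print(profile_column(["a", "b", "a", None, "a"]))
# {'row_count': 5, 'null_ratio': 0.2, 'distinct_count': 2,
#  'top_values': [('a', 3), ('b', 1)]}
```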
the monitoring engine is built with Python 3 and its large data/ML ecosystem.
the web API is written in (modern) Java with Spring Boot 3, and the web frontend is a Vue.js application written in TypeScript. You may occasionally need to make minor changes to these code bases.
infrastructure: Kubernetes (AWS EKS clusters), MySQL (on AWS RDS), Temporal for job orchestration.
and a few supporting services: GitLab CI, Prometheus/Loki/Grafana, Sentry…
While they are not directly part of our stack, expect to gain deep knowledge of many products in the modern data ecosystem; the subtleties of BigQuery and Snowflake will soon be very familiar to you.