Data Engineer
We’re building a Realtime Data Platform that ingests data from multiple external sources, normalizes it into canonical models, and serves it for analytics, dashboards, and client-facing reports.
As a BI Data Engineer, you’ll design and operate data models and transformation pipelines, ensure data quality and reliability for decision-making, and collaborate with analysts and business stakeholders to deliver trusted insights.
Responsibilities
Data Ingestion & Transformation
- Design and maintain ingestion & transformation flows (dbt, SQL, Python).
- Build reliable pipelines that load data from CDC feeds, APIs, CSVs, and event streams into analytical models.
- Ensure deduplication and data integrity for BI reporting.
Data Modeling & Warehousing
- Develop semantic and analytical models (fact/dimension tables, star/snowflake schemas).
- Optimize query performance in the BI warehouse (currently Athena with Iceberg; potentially ClickHouse, Redshift, BigQuery, or StarRocks in the future).
- Implement versioned entities and audit trails for business data.
Analytics Enablement
- Build pre-aggregations and semantic layers (Cube, dbt metrics and models).
- Support dashboards and client-facing reports in Grafana.
- Design KPIs and metrics with product, risk, and operations teams.
Data Quality & Observability
- Implement validation checks (dbt tests, Pydantic).
- Monitor data freshness, pipeline SLAs, and dashboard accuracy.
- Set up alerts and documentation for business users.
Collaboration & Stakeholder Support
- Work closely with analysts, product managers, and operations to understand BI needs.
- Expose data via APIs/semantic layers for self-service exploration.
- Contribute to the design of reporting policies and metric definitions.
Requirements
- Experience: 3+ years in BI engineering, data engineering, or analytics engineering.
- Data Warehouses: strong SQL skills, experience with at least one warehouse (Athena, Redshift, ClickHouse, Snowflake, BigQuery).
- Modeling: hands-on with dimensional modeling, dbt, star/snowflake schemas, metric layers.
- Programming: Python for transformations.
- Data Quality: experience with validation, deduplication, audit logs.
- CI/CD & Infra: GitHub Actions or similar, Terraform/CDK basics, containerized pipelines.
- Mindset: detail-oriented, reliable, collaborative with business users.
Nice to Have
- Knowledge of semantic layers (Cube, dbt metrics).
- Experience with AWS and cloud infrastructure.
- Experience with client-facing dashboards.
- Experience with Grafana or another BI visualization tool.
- Familiarity with real-time analytics and pre-aggregations.
- Esports fan.
What We Offer
- Opportunity to shape the BI architecture from the ground up.
- Modern real-time analytics stack (dbt, Cube, Grafana, Postgres, Iceberg).
- Collaborative, engineering-led environment with strong focus on analytics.
- Enjoy strong potential for personal growth while contributing to an innovative team.
- Work remotely or in a hybrid style – whatever suits you best.
- Enjoy our luxurious Norse-myth-inspired offices in Prague.
- Improve your English with company-sponsored language courses.
- Participate in quarterly team-building activities to bond with your team.