Data Engineer – Data Warehouse, ETL
We’re looking for a Data Engineer who’s excited to work behind the scenes for global giants like McDonald’s, Dunkin’, Arby’s, and many more, helping run data infrastructure for over 30,000 locations across the globe.
This isn’t a back-office reporting job. You’ll be building core data systems that fuel real-time digital menu boards, drive-thrus, and internal analytics platforms used by tens of thousands of people every day, from franchise owners to C-level execs.
You’ll:
Design and maintain data pipelines that move billions of records daily
Own the architecture for scalable, bulletproof data systems
Work closely with engineers, consultants, and product teams to shape what real-world data products look like
When you deploy something, it gets used immediately: on thousands of screens, in live restaurants, by real people making real decisions. The stakes (and the scale) are real.
What will you do?
Design and build data infrastructure for the analytics platform
Develop ETL/ELT pipelines to collect data from POS systems, CMS, devices, and logs
Work in real time – ensuring data arrives on time and in the right format
Design data warehouses, optimize performance, and manage monitoring and alerting
Manage backend services supporting the analytics workflows
Maintain high system quality, reliability, and readiness for scale
What are we looking for?
Data Infrastructure & Architecture
Data warehouse implementation (e.g., Snowflake, ClickHouse, BigQuery)
ETL/ELT pipeline development and orchestration
Nice to have - Real-time data streaming (Kafka, Kinesis)
Data Transformations
Advanced SQL query optimization
API development and integration
Pipeline monitoring and alerting
Nice to have - Python, Java, or Scala
Nice to have - Experience with Node.js – we use it for several software integrations
Data Management
Data model and schema design
Data quality management and governance
Database administration
Nice to have - Security and compliance implementation
Integration & Processing
3rd-party API synchronization
Optimization of batch and stream processing
Data aggregation from multi-location sources
Nice to have - POS system integration
Cloud & DevOps
Familiarity with AWS Cloud
Nice to have - CI/CD and containerization (Docker, Kubernetes)
Nice to have - System monitoring and performance tuning
What do we offer?
A global project with real operational data
The chance to build things from scratch and influence their design
Opportunities for growth – in engineering, analytics, and product thinking
Freelance contract / 60-100k CZK
Freedom in how you approach your work – results matter, not screen time
Great colleagues who know their craft and enjoy sharing knowledge
This position is with an international company, and you’ll be working with colleagues in the US and Australia, so you should be comfortable working afternoon or evening hours.