About the Company
Our client is pioneering the field of AI strategy and applied AI, partnering with large-scale, ambitious businesses worldwide to ideate, design, and build AI products that transform operations. They don’t just help organizations imagine what’s possible; they develop the software that turns that vision into reality.
Operating across various sectors and functions, our client has assembled a world-class team that unites commercial expertise, smart strategy, and technology. If you’re passionate about building enterprise AI products from 0→1 and leading projects at the forefront of AI innovation, this could be the perfect opportunity for you!
About the Role
Our client is hiring a Lead Data Engineer to design, build, and manage scalable data pipelines supporting AI-powered tools and applications, including agentic tools that adapt to user behavior. This role involves harmonizing and transforming data from disparate sources to ensure its readiness for foundation model integrations. Candidates with prior experience in a consulting or agency environment who thrive in project-based settings are highly valued.
This is a hands-on position where you will be responsible for building and implementing systems from the ground up. You’ll write production-level code while defining processes and best practices to support future team growth.
Responsibilities
* Develop and manage ETL pipelines to extract, transform, and load data from various internal and external sources into harmonized datasets.
* Design, optimize, and maintain databases and data storage systems (e.g., PostgreSQL, MongoDB, Azure Data Lake, AWS S3).
* Collaborate with AI Application Engineers to prepare data for use in foundation model workflows (e.g., embeddings and retrieval-augmented generation setups).
* Ensure data integrity, quality, and security across all pipelines and workflows.
* Monitor, debug, and optimize data pipelines for performance and reliability.
Requirements
* Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.
* A minimum of 6 years of professional experience in data engineering.
* Proven experience working in a consulting or agency environment on project-based work.
* Proficiency in Python, SQL, and data transformation libraries such as pandas or PySpark.
* Hands-on experience with data pipeline orchestration tools like Apache Airflow or Prefect.
* Solid understanding of database design and optimization for relational and non-relational databases.
* Familiarity with API integration for ingesting and processing data.
* Advanced English skills, both written and verbal, with the ability to communicate effectively in an international team.
Benefits
* 25 days of annual leave
* Company car or mobility budget
* Work-from-home allowance
* Company laptop & phone
* Meal vouchers
* Social allowance
* Ecocheques
* Sports and culture cheques
* Annual bonus