Your Scope
Within the Ardo Analytics & Reporting Team, the Data Engineer plays a pivotal role in designing, developing, and maintaining data pipelines that transform raw data into structured, high-quality datasets. These datasets support decision-making for both end users and the Analytics & Reporting team.
Over the past year, Ardo has taken great strides in setting up a modern data platform on Azure as the foundation for becoming a data-driven organization. This role will be critical in expanding the Ardo Data Lake and unlocking additional data sources for use.
Your Key Responsibilities
1. Based on your deep understanding of Ardo’s IT infrastructure and data models, you ensure that business and client objectives are at the forefront of all data handling activities, aligning technical solutions with organizational goals.
2. You own, develop, and enforce the ETL framework, guidelines, and processes, ensuring that the Ardo Data Lake follows best practices for scalability, efficiency, and data quality.
3. You analyze data needs and design, build, and optimize data pipelines, automating the flow from raw data ingestion to curated data models.
4. You efficiently connect diverse data sources to Ardo's modern data platform on Azure and maintain and develop the IoT architecture for sensor data from the Ardo plants.
5. You create and maintain data layers that support reporting and data visualization tools for both internal BI experts and self-service business users.
6. You monitor and troubleshoot ETL processes, data pipelines, and data-related issues to safeguard data quality, reliability, and performance.
7. You collaborate with all stakeholders, including the Analytics & Reporting team.
8. You stay up to date on industry trends and new technologies to improve the data engineering framework.
Your Profile
1. You have a master’s degree in a relevant field and proficiency with the Azure ecosystem (Azure Data Factory, Azure Event Hubs, …).
2. You have strong experience in SQL and PySpark programming, as well as with Azure DevOps or similar code-versioning and deployment tools. Familiarity with Python, Databricks, and/or SAP databases is a plus.
3. You have experience in developing ETL frameworks and coding standards as well as a strong understanding of data modeling concepts and best practices.
4. You enjoy analyzing complex datasets and solving problems promptly so that they lead to meaningful insights, setting the right priorities along the way.
5. You have a proactive and collaborative mindset with a focus on accuracy as well as continuous improvement.
6. Above all, you combine excellent communication skills in English with strong troubleshooting skills to address data pipeline and ETL issues.
We Offer
We offer you the opportunity to be part of an authentic and sustainable international company, with real growth prospects, the freedom to actively help shape the business, and room to develop professionally. You will receive a full remuneration package in line with the level of this position. We care for our people and foster a family-friendly environment by offering you the flexibility to work from home two days per week.