Data Engineer {#data-engineer}
----------------------------------
As a Data Engineer at DKL, you will play a critical role in designing, building, and optimizing our data infrastructure. Working alongside cross-functional teams, you'll develop reliable data pipelines and maintain the integrity of the large datasets used for analysis and reporting, directly supporting data-driven decision-making across the company.
* SALARY: 40.000 - 60.000 EUR
* REMOTE: 100% (Spain-based candidates only)
* SCHEDULE: Flexible
* Growth opportunities
* 500€/year for educational purposes
You can work remotely from anywhere within Spain. DKL is a fully remote company with no designated headquarters. While we have team members across Spain and internationally, this position is only open to candidates who are legally authorized to work and reside in Spain.
You will work 40 hours per week, with the flexibility to arrange your schedule in a way that works best for you. The only requirement is to be available for daily meetings or client appointments. We understand that personal wellness is crucial for maintaining focus and achieving results.
What’s the Role?
------------------
As a Data Engineer at DKL, you will be responsible for developing, operating, and maintaining scalable data architectures that support analysis, reporting, and machine learning applications. Your role will involve managing ETL processes, building and operating data warehouses, and ensuring the high performance and reliability of data systems. You will collaborate closely with product owners, data scientists, and analysts to translate business requirements into effective technical solutions while maintaining data quality and accessibility. As one of the primary contributors to DKL's data infrastructure, you will ensure our data solutions are efficient, accurate, and aligned with client goals.
Responsibilities
Your responsibilities will encompass a wide range of tasks, including but not limited to:
* Designing, building, and optimizing data pipelines to handle large volumes of data from various sources and at varying frequencies, including real-time data.
* Developing and maintaining the data warehouse architecture, weighing organizational requirements to determine the optimal design while ensuring scalability and performance.
* Implementing ETL/ELT processes to extract, transform, and load data for reporting and analytics.
* Collaborating with data scientists and analysts to support machine learning workflows and advanced analytics.
* Monitoring and troubleshooting data systems to ensure high availability and reliability.
* Ensuring data quality and compliance with company data governance standards.
* Documenting data processes and infrastructure for internal use and continuous improvement.
How will you work?
You’ll be part of DKL’s data team, working remotely and collaborating with data scientists, analysts, and software engineers to support DKL’s data-driven goals. Daily check-ins and regular project meetings are held online, ensuring open communication and alignment. Our data tools include Google Cloud Platform (GCP) and Microsoft Azure for cloud services, Databricks and Snowflake for big data processing and data warehousing, and Airflow for workflow orchestration. GitHub is used for version control and collaboration, while Jira and Confluence help with project management and documentation.
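To give a flavor of the day-to-day work, here is a deliberately simple extract-transform-load sketch in Python. It is purely illustrative (the table, fields, and quality rules are invented for this example, not DKL code), but it mirrors the kind of pipeline step you would build and orchestrate with tools like Airflow:

```python
import sqlite3


def extract(raw_rows):
    """Simulate pulling records from a source system."""
    return [dict(r) for r in raw_rows]


def transform(rows):
    """Normalize field names and types, dropping incomplete records."""
    cleaned = []
    for row in rows:
        if row.get("user_id") is None or row.get("amount") is None:
            continue  # basic data-quality gate: skip incomplete rows
        cleaned.append({
            "user_id": int(row["user_id"]),
            "amount_eur": round(float(row["amount"]), 2),
        })
    return cleaned


def load(rows, conn):
    """Write transformed rows into a warehouse-style table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS sales (user_id INTEGER, amount_eur REAL)"
    )
    conn.executemany(
        "INSERT INTO sales VALUES (:user_id, :amount_eur)", rows
    )
    conn.commit()


if __name__ == "__main__":
    source = [
        {"user_id": "1", "amount": "19.99"},
        {"user_id": None, "amount": "5.00"},  # dropped by the quality gate
        {"user_id": "2", "amount": "42.50"},
    ]
    conn = sqlite3.connect(":memory:")
    load(transform(extract(source)), conn)
    print(conn.execute("SELECT COUNT(*) FROM sales").fetchone()[0])  # 2
```

In production, each of these steps would typically be an Airflow task, the quality gate a Great Expectations check, and the load target Snowflake or Databricks rather than SQLite.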
Who will you work with?
You will collaborate closely with the data team, working alongside data scientists and analysts to build, optimize, and maintain DKL's data infrastructure. You will report directly to the Head of Data and CTO, receiving guidance on data strategy and infrastructure development. Together, you’ll ensure that our data-driven insights align with business objectives and remain accessible across the organization. You’ll also work alongside the Project Manager to align on project timelines and deliverables, collaborating with engineering leads from Backend, DevOps, and Frontend teams to ensure smooth data integration and effective data utilization across all projects.
What Makes You a Fit?
------------------------
Requirements
* Bachelor’s degree in Computer Science or a related field.
* Proven experience in data engineering, including designing and maintaining data pipelines.
* Strong Python programming and Software Engineering skills.
* Strong SQL and analytical skills.
* Proficiency with at least one of the leading cloud platforms (AWS, GCP, or Azure) and data warehousing tools (Snowflake, Databricks, Redshift, or BigQuery).
* Proficiency with a workflow orchestration tool, preferably Airflow.
* Familiarity with data governance and security best practices.
* Excellent problem-solving skills and the ability to work independently while collaborating remotely with a larger team.
Nice-to-Have
* Experience with data streaming technologies, such as Kafka or Kinesis.
* Experience with machine learning pipelines and MLOps.
* Experience implementing a data mesh architecture.
* Experience with functional data engineering.
* Experience with Apache Spark.
* Experience with a data quality framework such as Great Expectations.
* Experience using dbt to orchestrate SQL transformations in a data warehouse.
* Cloud or data engineering certifications.
* Previous experience in a fast-paced, agile environment.
What will the First 6 Months be Like?
---------------------------------------
Your first six months will be structured to support your learning, integration, and progression as you settle into your role. This period aligns with our review checkpoints at 1, 3, and 6 months, ensuring you have a clear pathway to success during your probation period.
Month 1
Your first month will focus on onboarding and getting grounded in our data platforms, engineering practices, and team workflows. You’ll have access to comprehensive technical documentation and training resources, meet key stakeholders across data, analytics, and product teams, and start familiarizing yourself with our data architecture, pipelines, and development tools. This phase is all about building a strong foundation—setting up your local environment, understanding our deployment processes, and reviewing active projects. At the end of the month, we'll have a check-in to reflect on your experience, answer any technical or process-related questions, and ensure you have the support you need to move forward confidently.
Months 2-3
By month two, you'll start taking on defined responsibilities within our data engineering projects, collaborating closely with your team to plan deliverables, estimate workloads, and coordinate progress across stakeholders. During this phase, you'll begin owning smaller data pipelines or components within larger initiatives—whether that's building new data ingestion processes, optimizing existing workflows, or contributing to infrastructure improvements. This hands-on experience will help you build confidence with our tech stack and development practices. At the three-month mark, we’ll have a dedicated review to reflect on your progress, discuss any technical or operational challenges, and identify growth opportunities as you continue to deepen your impact on the team.
Months 4-6
With solid experience under your belt, by month four, you'll be ready to lead your own data engineering projects more independently. During this stage, you'll take ownership of end-to-end delivery—designing, building, testing, and deploying scalable data solutions that support our business needs. You'll also focus on refining your technical skills, improving system performance, and contributing to best practices within the team. The six-month review will serve as a key milestone to evaluate your overall impact, technical growth, and collaboration while closing out the probation period and setting clear goals for your continued development within the team.
What’s the Selection Process?
--------------------------------
We aim to make our selection process smooth, informative, and enjoyable, ensuring it’s a two-way street where we get to know each other.
1/ Initial Meet & Greet
A casual video call to introduce ourselves, discuss the role at a high level, and get to know each other’s backgrounds and motivations. This call is all about seeing if we're a mutual fit.
2/ Role-Focused Interview
A more focused discussion, diving into the role’s specifics and exploring key data engineering scenarios you might encounter with us. This is where we’ll go over some example cases, discuss your experience, and answer any questions you have about the day-to-day.
3/ Meet the Team Leads
In this call, you’ll meet some of our key team leads. This conversation helps you understand the company culture, our team dynamics, and the kind of cross-functional work you’ll be doing. It’s also a chance to talk more about the projects we’re passionate about.
4/ Decision & Offer
After the final discussion, we’ll circle back with a decision. If we’re a match, we’ll be excited to extend an offer and welcome you aboard! If it turns out this isn’t the right fit, we’ll let you know as well and share our feedback, wishing you all the best in your career journey.