Please note that this position is open only to applicants from SK, UA, PL, CZ, and BG.
Location: Europe
Status: Full-Time - Traditional Employee Contract (from 3,500 EUR gross) or B2B Contract.
Salary depends on the seniority level, and we are open to discussing it.
The job will offer you opportunities to participate in a wide variety of international projects.
We are looking for people who share our belief that working with data and people is fun and a “mission”, not just a “job”.
Key Responsibilities
Design & Develop Data Pipelines: Build and maintain ETL/ELT pipelines using Databricks and Apache Spark to process large datasets efficiently.
Optimize Data Processing: Improve performance, scalability, and cost-efficiency of data workflows on Databricks.
Cloud Data Integration: Work with AWS (Glue, Redshift, S3), Azure (ADF, Synapse, ADLS), or GCP (BigQuery, Dataflow) to implement robust data solutions.
Data Modeling & Warehousing: Develop data models, star/snowflake schemas, and data warehouse solutions to support business analytics.
Streaming & Batch Processing: Implement both real-time and batch data processing solutions using Spark Structured Streaming on Databricks.
Personality requirements and skills
5+ years of experience in data engineering with a strong focus on Databricks and Apache Spark.
Hands-on experience with Python, SQL, Scala, and Spark optimizations (partitioning, caching, etc.).
Strong knowledge of cloud data services (AWS, Azure, or GCP) and experience with Databricks on cloud environments.
Expertise in ETL/ELT pipeline design, performance tuning, and data orchestration tools like Apache Airflow.
Experience with data lake architectures, Delta Lake, and lakehouse concepts.
Knowledge of CI/CD for data pipelines, including version control (Git) and automation tools (Terraform, Jenkins).
Familiarity with machine learning pipelines and MLOps on Databricks is a plus.
Strong problem-solving skills and the ability to work in a fast-paced, collaborative environment.
Databricks certifications (e.g., Databricks Certified Data Engineer Associate/Professional).