What You’ll Work On
- Designing, developing, and maintaining data pipelines and workflows in AWS.
- Working daily with Python (Conda environments), PySpark, Git, Airflow, and JupyterHub.
- Collaborating with data scientists and engineers to ensure efficient and scalable data processing.
Required Skills
- Strong understanding of, and hands-on experience with:
  - Python (Conda environments)
  - PySpark
  - Git
  - Airflow
  - JupyterHub
- You will use these tools daily; strong Python and Git skills are essential.
Nice to Have
- Experience with CI/CD tools such as Bitbucket, Jenkins, Nexus, or Concourse (experience with GitLab CI/CD or a similar system is equally valuable)
- Familiarity with Pandas and Polars
Basic Understanding
- Exposure to Docker, Kubernetes, CloudWatch, and AWS Lambda
- You’ll come across these technologies occasionally, but deeper expertise can be developed on the job.
Preferred Background
- Experience on large-scale projects developing with PySpark and Airflow, with an understanding of how these tools work under the hood
Employee Perks and Benefits
- An immense opportunity, and our full support, to grow professionally while working on the region's leading data analytics projects and technologies
- An environment that stands behind you, supports your growth, and values your work through a competitive salary and bonus structure
Information About the Selection Process
We look forward to working with you and to you becoming an Adastran!
If you are interested, do not hesitate to send us your Curriculum Vitae in English. We will contact only candidates who match the profile we are looking for. Thank you for your understanding.