Note: The application period for this ad has passed.
Job description
Technologies you will get to work with:
1. Azure Databricks
2. Azure Data Factory
3. Azure DevOps
4. Spark with Python and Scala, and Airflow scheduling
What you will do:
Build large-scale batch and real-time data pipelines with data processing frameworks such as Spark and Scala on the Azure platform.
Collaborate with other software engineers, ML engineers and stakeholders, taking the learning and leadership opportunities that arise every day.
Use best practices in continuous integration and delivery.
Share technical knowledge with other members of the Data Engineering team.
Work in multi-functional agile teams to continuously experiment, iterate and deliver on new product objectives.
You will work with massive data sets and learn to apply the latest big data technologies on a leading-edge platform.
Required technical and professional expertise:
Professional programming experience and hands-on experience building modern data platforms and pipelines.
Programming languages: Scala and Java; Python is a plus
Frameworks: Spark, Hadoop, Hive, Kafka
Databases: Azure Synapse and NoSQL; strong relational database skills
Data pipelining: Azure Data Factory, ELT
Scheduling: Airflow
Cloud: Azure Stack, Azure DevOps, Azure Databricks
Experience with Kubernetes, Docker, and virtualisation
DevOps: experience in Agile, CI/CD
Previous experience gained in mid-size to large international companies