Note: The application period for this job advertisement has passed.
Job description
CEVT is an innovation centre for the future cars of the Geely Group with the purpose of being at the forefront of new developments in the automotive industry. Since 2013 CEVT has grown rapidly and now keeps some 2000 people busy. China Euro Vehicle Technology AB is a subsidiary of Zhejiang Geely Holding Group.
Data Engineer – Cloud & Data Architecture
Mission:
The software industry is evolving at furious speed. Change in the industry is inevitable and brand-new business areas are continuously emerging. CEVT is on a transformation journey, and new technology will act as an enabler for the transition, support our core values and open up new opportunities.
Job Context:
The Cloud & Data Architecture team belongs to the Intelligent Platform department, whose mission is to enable the organization with capabilities within Data Management, Continuous Integration / Development and Cloud Engineering. As a Data Engineer you will be part of our organization's core deliveries, such as connectivity, innovation and architecture.
As a Data Engineer, your focus will be to support our Data Architecture function, whose responsibilities mainly consist of Data Management, Data Platforms and Data Governance. You will work closely with Product Owners, Data Scientists, Developers and DevOps Engineers.
Responsibilities
Source, process and cleanse data, expose it to the Data Scientists and ML algorithm developers, then bring the results of their analysis to users via products
Build real-time and big data processing pipelines
Optimize for scalability and performance
Build large-scale ETL jobs leveraging big data infrastructure (Hadoop, Spark, Kafka)
Build, run, test and ship data pipelines to serve ML/AI workloads
Work with cross-functional teams to translate their data needs into solutions
Create data tooling that assists data scientists and analysts in building low latency, scalable and resilient pipelines for machine learning and optimization workloads
Help establish a data-centric culture within engineering teams
Deploy, upgrade and configure solutions
Translate technical requirements
Develop interface and simulators
Troubleshoot various environments
Assist with planning activities and provide input to operational cost and risks in deliveries
Competence
Previous experience in Data Engineering and its processes
Experience of creating CI/CD pipelines
Experience of working with Infrastructure-as-Code
Working knowledge of container technologies is a must (Docker, Kubernetes)
Experience from Google Cloud Platform is preferred
Experience from connectivity area is preferred
Experience of ML algorithm development is preferred
Familiarity with monitoring tools
Holistic perspective on deliveries and technology roadmaps
Curious mindset
Passion for technology
Last application date: 2021-05-14.
Apply today, we will perform ongoing selection during the application period.
Contact:
Kristina Larsson, Senior Recruiter, 072-9888544.