AI Ops Data Scientist

Job description

Are you passionate about using data to unlock the potential of science, develop life-changing medicines and bring them to patients faster?



AstraZeneca Science & Enabling Units IT is building an outstanding organization at the forefront of the digital revolution in healthcare. We're applying technologies such as AI, machine learning, software and data engineering, and analytics to provide mission-critical insights. Come and thrive in our culture of excellence, engagement and development - and make a positive difference to patient outcomes.



Main Duties and Responsibilities

We are looking for an experienced data scientist to join our new AI Ops platform team in Science IT, applying your skills to some of the most exciting data and prediction problems in drug discovery.



You will get the opportunity to work in a range of cloud environments, devising and deploying large-scale production infrastructure and platforms for data science.

The successful candidate will belong to a new, close-knit team of deeply technical experts and together have the chance to create tools that will advance the standard of healthcare, improving the lives of millions of patients across the globe. Our data science environments will support major AI initiatives such as clinical trial data analysis, knowledge graphs, and imaging and omics for our therapy areas. You will also help provide the frameworks that enable colleagues in our growing data science community to develop scalable machine learning and predictive models in a safe and robust manner.



As a strong software leader and an expert in building complex systems, you will be responsible for inventing how we use technology, machine learning and data to enable the productivity of AstraZeneca. You will help envision, build, deploy and evolve our next generation of data engines and tools at scale. This role bridges the gap between science and engineering and requires deep expertise in both worlds.


Responsibilities

* Liaise with R&D data scientists to understand their challenges and work with them to help productionise models and algorithms.
* Be part of the development roadmap to build and operationalise our data science environment, platforms and tooling.
* Support external opportunities through close partnership and engagement, such as the Benevolent.AI collaboration.
* Adapt standard machine learning methods to best exploit modern parallel environments (e.g. distributed clusters, multicore SMP, and GPU).
* Implement custom machine learning code and develop benchmarking capabilities to monitor drift of analyses over time (see the sketch after this list).
* Liaise with other teams to enhance our technology stack, enabling adoption of the latest advances in data processing and AI.
* Be an active member of the Data Science team, benefiting from and contributing to our expanding bank of algorithms, and working efficiently with our data science infrastructure.
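To make the drift-monitoring responsibility concrete, here is a minimal sketch of the kind of benchmarking check this could involve: comparing a reference snapshot of model features against recent production data with a two-sample Kolmogorov-Smirnov test. It assumes Python with NumPy and SciPy; the function name, threshold and data layout are illustrative assumptions, not an AstraZeneca standard.

import numpy as np
from scipy.stats import ks_2samp

def detect_feature_drift(reference: np.ndarray,
                         current: np.ndarray,
                         p_threshold: float = 0.01) -> dict:
    """Compare each feature's current distribution against a reference
    snapshot using a two-sample Kolmogorov-Smirnov test.

    Returns a mapping of feature index -> statistic, p-value and a drift flag.
    """
    report = {}
    for i in range(reference.shape[1]):
        stat, p_value = ks_2samp(reference[:, i], current[:, i])
        report[i] = {"statistic": stat,
                     "p_value": p_value,
                     "drifted": p_value < p_threshold}
    return report

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    reference = rng.normal(0.0, 1.0, size=(1000, 3))  # training-time snapshot
    current = reference.copy()
    current[:, 2] += 0.5                              # simulate drift in one feature
    for idx, result in detect_feature_drift(reference, current).items():
        print(idx, result)

In practice such a check would run on a schedule against production scoring data, with drift flags feeding alerting or retraining workflows.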


The Profile

First, you just get things done. Second, you have a demonstrable track record and deep technical skills in one or more of the following areas: machine learning, recommendation systems, pattern recognition, natural language processing or computer vision. You have a creative and collaborative approach, with experience of managing an enterprise platform and service and handling new customer demand and feature requests with a product focus in mind. Since we work with sensitive data, you need to understand the guardrails required for different use cases and data sensitivities.


Candidate Knowledge, Skills & Experience

* BSc/MSc/PhD degree in Computer Science or a related quantitative field.
* Understanding of the latest AI web services and data science tools, from DataBricks to citizen tools such as Dataiku, C3.AI and Domino.
* Strong software coding skills, with proficiency in Python and Scala preferred.
* Significant experience with AWS cloud environments; working knowledge of the Google Cloud and Azure platforms.
* Knowledge of Kubernetes, S3, EC2, Sagemaker, Athena, RDS and Glue is essential.
* Certification in appropriate areas will be viewed favourably.
* Experience with best practices for data transport and storage within cloud systems.
* Experience building large-scale data processing pipelines, e.g. Hadoop/Spark and SQL.
* Experience provisioning computational resources in a variety of environments.
* Experience with containers and microservice architectures (e.g. Docker) and serverless approaches.
* Experience with automation strategies, e.g. CI/CD and GitOps.
* Use of data science modelling tools (e.g. R, Python, SAS) and data science notebooks (e.g. Jupyter).

Contact persons at this company

AstraZeneca

Summary

  • Workplace: AstraZeneca AB
  • 1 position
  • Permanent
  • Full-time
  • Fixed monthly, weekly or hourly salary
  • Published: 30 March 2020
  • Apply by: 16 April 2020

Visiting address

Pepparedsleden 1

Postal address

431 20
