Note: The application period for this posting has passed.
Job description
We’re looking for a Machine Learning Engineer to join the Credit Infrastructure Team in a progressive organization that cares about empowering individuals, personal development, and delegating responsibility, shaping the way our 3 million users handle their payments. Every time a customer makes a purchase, applies for a loan, or uses any of Qliro’s other services, the Decision Engine runs credit risk models, fraud detection algorithms, and AML and CTF checks in real time.
The Decision Engine is a core system built in-house by a highly proficient development team in close cooperation with the Credit department. Through data and technology, the Credit department creates the ecosystem that enables Qliro to learn and deliver quickly while scaling safely and easily to millions of customers, creating a great experience for our customers and merchants.
As a Machine Learning Engineer, you will take a key role in the Credit Infrastructure Team and contribute to our analytical capabilities and real-time decision making. You will be a driving force in automation and in setting up scalable machine learning frameworks, while also improving data structures and data flows. The Credit Infrastructure Team is part of the Credit department and acts as the glue between analytics, big data, and our development teams. Because of the team’s consulting nature, you will work cross-functionally with analysts, developers, data scientists, and data engineers on a daily basis.
What you will do:
Collaborate with infrastructure, data, and software engineers to improve service and data reliability, scalability, and tooling.
Build and maintain framework solutions that let data scientists train and deploy machine learning models written in R with minimal effort.
Design and lay out relational and non-relational data structures and flows for real-time credit decisions and reporting.
Take ownership of any technical or data-related questions arising in the Credit organization.
We believe that you:
Have a degree in Engineering, IT, Computer Science, Mathematics, or equivalent experience.
Would enjoy working at the interface between data and software development.
Believe in automation, DevOps, and Docker.
Understand the importance of data.
We believe that with a curious mind about topics such as orchestration tools, CI/CD, cluster setup, and database management, and a passion for technologies like Docker, Scala, Spark, and Kafka, almost everything is possible.