Note: The application period for this posting has passed.
Job description
Nordea is a leading Nordic universal bank. We are helping our customers realise their dreams and aspirations - and we have done that for 200 years.
Job ID: 27376
Are you ready to help shape the future of AI & Generative AI at Scandinavia's largest bank, working alongside a dynamic, fast-paced team?
The Group AI Center of Excellence is an established unit within Group Data Management focusing on delivering and enabling generative AI use cases across the organisation in a secure and compliant way. The unit provides reusable AI Assets that are developed in conjunction with business stakeholders to be leveraged by a wide variety of use cases.
We are now looking for an AI Risk Lead who is interested in Risk and Responsible AI topics, to apply them in the field of Generative AI with use cases across the bank. You will work closely with data scientists, ML engineers, and business stakeholders to ensure all AI and GenAI applications and use cases are developed and deployed in compliance with the EU AI Act.
About this opportunity
Welcome to the Applied Data Science team. Your passion for working with people from both a technical and a business background, coupled with your curiosity for AI and new innovations like Generative AI, makes you well-suited for this position. As Nordea pioneers its way into an AI and Gen AI driven future, you will thrive by embracing a dynamic environment, fostering collaboration, and maintaining an open and adaptable mindset.
What you will be doing:
* Support the design and implementation of the EU AI Act compliance plan across the organisation
* Disseminate best practices, serving as a point of contact for all AI efforts and ensuring that best practices on responsible AI are consistently adopted across the Bank
* Educate use case owners and other relevant stakeholders in bank-wide risk protocols and external regulations
* Conduct research and ensure commitment to ethical standards of AI governance, such as fairness, accountability, transparency, privacy, and human rights
Our AI team culture is built on collaboration, continuous learning and innovation. We foster an open environment where diverse ideas thrive, encouraging everyone to push the boundaries of technology while supporting each other's growth. With a strong focus on well-being, we prioritise work-life balance and encourage open communication.
Who you are
Collaboration. Ownership. Passion. Courage. These are the values that guide us in being at our best – and that we imagine you share with us.
To succeed in this role, we believe that you:
* Embrace a growth mindset, where you enjoy taking initiative and bringing solutions
* Enjoy problem solving, and tackling challenges
* Are an awesome team member and you have a strong ability to inspire people to take action
Your experience and background:
* BSc, MSc or PhD in Computer Science, Data Science, Risk Management, or a related discipline
* 10+ years of experience in AI Risk, Model Risk Management and related activities, with strong proven experience and familiarity with the EU AI Act and other relevant regulations (e.g. GDPR)
* Strong communication skills, with the ability to prepare and drive presentations in meetings with multiple stakeholders, including clear alignment with C-level executives, as well as the ability to lead organisation-wide communication and deliver training to educate stakeholders on responsible AI
* Extensive knowledge of the GenAI and AI development and deployment process, e.g. data exploration, feature engineering, production testing, and output validation
* Proven experience in managing AI risk projects, including identifying potential risks and designing mitigation strategies
* Professional experience in developing and implementing Data Management and Governance frameworks, including data quality, security, privacy and compliance processes and procedures
* Knowledge of mitigation processes for risks associated with AI & GenAI models, e.g. hallucination, fairness, explainability
* Knowledge of AI/ML product development and experience working with MLOps best practices
If this sounds like you, get in touch!
Next steps
We kindly ask you to submit your application as soon as possible, but no later than 31/12/2024. We review applications and conduct interviews on an ongoing basis and might close the recruitment process before the posting end date. Applications or CVs sent by email, direct message, or any channel other than our application forms will not be accepted or considered.
If you have any questions about the role or this recruitment process, please reach out to our tech recruiter and main point of contact Anna Dahlström, anna.dahlstrom@consult.nordea.com.
For union information, please contact Finansförbundet at finansforbundet@nordea.com or SACO at SacoNordea@nordea.com.