Principal AI Engineer
Mastercard
- Toronto, ON
- Permanent
- Full-time
- Architects, designs, develops, and maintains advanced AI and Machine Learning systems to address specific business challenges.
- Oversees the deployment of models into production, ensuring scalability, reliability, and compliance with ethical guidelines.
- Defines and implements the engagement model between the Data Engineering and AI Engineering teams, ensuring that data ingestion, preprocessing, and feature engineering workflows support model training and inference.
- Develops and refines the technical strategy for AI/ML model optimization and deployment, ensuring alignment with business objectives and industry best practices.
- Undertakes research and prototyping efforts for deploying and managing scalable, maintainable AI/ML models, leveraging cutting-edge industry techniques.
- Supports high-impact AI/ML projects, providing technical mentorship and ensuring adherence to quality standards.
- Collaborates with product teams and stakeholders to translate business needs into effective AI/ML engineering solutions.
- Stays abreast of emerging AI/ML engineering trends and incorporates innovative techniques into Mastercard’s AI ecosystem.
- Mentors team members by sharing best practices, innovative techniques, and emerging trends to build expertise and capability within the discipline.
- Progressive leadership experience in AI and Data Science roles.
- Be energized by solving complex problems and delivering innovative solutions in collaborative, fast-paced environments.
- Possess deep expertise in modern software engineering principles, system design, and architectural best practices.
- Maintain a strong commitment to code quality, maintainability, and engineering excellence.
- Consistently demonstrate initiative and the ability to take on high-impact, ambiguous challenges.
- Exhibit exceptional written and verbal communication skills, with the ability to influence and collaborate across technical and non-technical stakeholders.
- Be highly motivated, driven, and a trusted team contributor and technical leader.
- Operate autonomously while mentoring and guiding other engineers, making sound technical decisions, and solving complex problems independently.
- Distinguished expertise in architecting and building Agentic AI systems using frameworks such as LangGraph, CrewAI, and AutoGen, with deep command of Agentic AI design patterns, Context Management, LLMOps, AgentOps, Guardrails, Agent Validation, and Evaluation.
- Advanced mastery of prompt engineering and extensive experience working with both closed-source and open-source LLMs.
- Proven leadership in guiding teams through experimentation and strategic decision-making across RAG, few-shot prompting, LLM fine-tuning, and hybrid approaches to improve model context and reasoning.
- Strong command of MLOps practices and platforms, including MLflow.
- Demonstrated expertise in LLM fine-tuning techniques, including model quantization and distillation.
- Extensive hands-on experience implementing and optimizing multiple RAG paradigms, including graph-based RAG, vector database search, and performance tuning.
- Deep understanding of LLM cost structures, architectural trade-offs, and scalability considerations, with experience conducting ROI modeling and cost simulations.
- Extensive experience designing, implementing, and sustaining enterprise-grade CI/CD pipelines that enable automated integration, testing, and deployment with high reliability and speed.
- High proficiency in Python and the broader data science ecosystem, including NumPy, pandas, sklearn, spaCy, Keras, PyTorch, Transformers, and LangGraph.
- Strong practical experience applying Machine Learning, Deep Learning, and NLP models across supervised and unsupervised learning to solve complex, real-world business problems.
- Hands-on experience designing and developing Python-based APIs using frameworks such as FastAPI, with strong fluency in JSON-based integrations.
- Solid experience working with PySpark and a strong conceptual grasp of distributed and parallel processing for large-scale data workloads.
- Extensive experience using Unix/Linux environments to access systems, interact with databases, and deploy, operate, and manage services and APIs.
- Experience designing and deploying solutions on cloud platforms such as Azure, leveraging cloud-native services at scale.
- Familiarity with the Databricks platform is a plus.