
Data Engineer
- Halifax, NS
- Permanent
- Full-time
- Hands-on data engineering and technical oversight in the development of batch and real-time data pipelines, DevOps pipelines, and end-to-end processes supporting data ingestion, curation, virtualization and delivery at scale, leveraging Microsoft Azure Data & Analytics PaaS services.
- Lead the agile software development process, including roadmap planning, cross-team coordination, task assignments, code reviews, sprint demos, and design review and sign-off.
- Translate business requirements to technical solutions working with development teams, architects and business analysts conformant to enterprise standards, architecture and technologies.
- Deliver modernized enterprise data solutions using Azure cloud data technologies.
- Design and build modern data pipelines and data streams.
- Design and build data service APIs.
- Develop and maintain data warehouse schemas, layouts, architectures, and relational/non-relational databases for data access and advanced analytics.
- Expose data to end users using Power BI, Azure API Apps, or other modern visualization platforms.
- Play a key role in mentoring other staff members, with a proven track record of knowledge transfer to other team members and departments.
- Identify opportunities for new architectural initiatives and make recommendations to increase the scalability and robustness of platforms and solutions.
- 10+ years of professional experience designing and developing data warehouses and ETL tools and technologies.
- Strong experience as an Azure Data Engineer; Azure Databricks experience is required.
- Expert level understanding of Azure Data Lake, Azure SQL databases, Azure Data Factory and Azure Databricks is required.
- Hands-on experience designing and developing scripts for ETL processes and automation in Azure Data Factory, Azure Databricks and PySpark.
- Experience preparing data for use in data science/advanced analytics using Azure ML.
- Experienced with the Hadoop ecosystem and toolset – HDFS, MapReduce, Spark, Python, Scala, Hive and Oozie.
- Experience with CI/CD tools such as GitHub, Jenkins and Azure DevOps to configure and build CI/CD pipelines.
- Knowledge of BI tools such as Power BI.
- Strong interpersonal, written and oral communication skills.
- Proven ability to effectively prioritize and execute tasks in a team-oriented, collaborative workplace.
- Self-reliant and comfortable in a rapidly changing environment.
- Customer service orientation.
- Ability to assimilate new skills and adapt to new service needs as they arise.