
Data Engineer
- Brampton, ON
- Permanent
- Full-time

Responsibilities:
- Design and maintain high-performance SQL-based data transformation pipelines.
- Build reusable, modular SQL code using software engineering best practices.
- Develop Python applications for data ingestion, transformation, and pipeline orchestration (a minimal sketch follows this list).
- Optimize complex SQL queries for performance, scalability, and reliability.
- Implement robust data quality checks and maintain metadata and documentation.
- Automate ETL/ELT workflows using Python and cloud-native tools.
- Work with analytics and business teams to translate business logic into SQL data models.
- Implement version control (Git) and CI/CD workflows for testing and deployment of pipelines.
- Monitor data workflows and identify opportunities for performance improvement.
- Mentor junior team members on SQL optimization and Python scripting practices.
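
To give a concrete flavor of the ingestion-and-transformation work described above, here is a minimal Python ETL sketch. The API endpoint, connection string, and table/column names (stg_orders, order_id, gross_revenue) are illustrative placeholders, not details from this posting.

```python
import pandas as pd
import requests
from sqlalchemy import create_engine

# Placeholders only -- the endpoint, DSN, and table/column names below
# are illustrative assumptions, not details from this posting.
API_URL = "https://api.example.com/v1/orders"
WAREHOUSE_DSN = "postgresql://user:password@localhost:5432/analytics"


def extract() -> pd.DataFrame:
    """Pull raw records from the source API into a DataFrame."""
    response = requests.get(API_URL, timeout=30)
    response.raise_for_status()
    return pd.DataFrame(response.json())


def transform(raw: pd.DataFrame) -> pd.DataFrame:
    """Basic cleaning: dedupe, parse dates, derive a metric."""
    df = raw.drop_duplicates(subset="order_id").copy()
    df["order_date"] = pd.to_datetime(df["order_date"])
    df["net_revenue"] = df["gross_revenue"] - df["discount"]
    return df


def load(df: pd.DataFrame) -> None:
    """Write the cleaned data to a warehouse staging table."""
    engine = create_engine(WAREHOUSE_DSN)
    df.to_sql("stg_orders", engine, if_exists="replace", index=False)


if __name__ == "__main__":
    load(transform(extract()))
```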

Qualifications:
- Bachelor's degree in Computer Science, Engineering, Data Science, or a related field.
- 2+ years of experience in data engineering roles, with strong emphasis on SQL and Python.
- Expert-level SQL skills: CTEs, window functions, query optimization, analytical queries (see the example after this list).
- Solid Python programming experience: data processing, scripting, automation, APIs.
- Hands-on experience with modern cloud data warehouses (Snowflake, BigQuery, Redshift, or Databricks).
- Strong understanding of data warehouse design, dimensional modeling, and ELT/ETL pipelines.
- Experience with version control systems like Git and collaborative development workflows.
- Knowledge of data quality frameworks and testing strategies using SQL and Python.
- Experience with cloud data platforms and their native data services.
- Familiarity with workflow orchestration tools such as Airflow, Prefect, or Dagster.
- Knowledge of data visualization tools (Looker, Tableau, Power BI).
- Exposure to real-time data processing and streaming architectures.
- Understanding of DataOps and analytics engineering best practices.
- Experience with Infrastructure as Code tools like Terraform or CloudFormation.
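
As a small illustration of the expert-level SQL expected (a CTE feeding a window function), here is a hedged sketch that runs an analytical query through SQLAlchemy. The stg_orders table and the DSN are the hypothetical ones from the earlier sketch.

```python
from sqlalchemy import create_engine, text

# Hypothetical warehouse connection -- placeholder DSN.
engine = create_engine("postgresql://user:password@localhost:5432/analytics")

# A CTE feeding a window function: rank each customer's orders by recency.
QUERY = text("""
    WITH customer_orders AS (
        SELECT customer_id, order_id, order_date, net_revenue
        FROM stg_orders
    )
    SELECT customer_id,
           order_id,
           net_revenue,
           ROW_NUMBER() OVER (
               PARTITION BY customer_id
               ORDER BY order_date DESC
           ) AS order_recency_rank
    FROM customer_orders
""")

with engine.connect() as conn:
    for row in conn.execute(QUERY):
        print(row.customer_id, row.order_id, row.order_recency_rank)
```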

Technical Skills:
- SQL: Advanced querying, performance tuning, data modeling, optimization.
- Python: pandas, requests, sqlalchemy, API integration, ETL development.
- Data Warehouses: Snowflake, BigQuery, Redshift, Databricks (or similar platforms).
- Tools: Git, Docker, CI/CD pipelines, orchestration tools (Airflow, Prefect); a skeletal DAG sketch follows this list.
- Concepts: Dimensional modeling, data testing, DataOps, analytics engineering.
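
For the orchestration tooling named above, here is a skeletal Airflow DAG wiring extract, transform, and load tasks in sequence. The DAG id and task bodies are placeholders, and the `schedule=` parameter assumes Airflow 2.4 or later.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_fn():
    """Placeholder -- a real task would call the ingestion code."""
    print("extracting...")


def transform_fn():
    print("transforming...")


def load_fn():
    print("loading...")


# DAG id and schedule are illustrative; `schedule=` requires Airflow 2.4+.
with DAG(
    dag_id="daily_orders_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_fn)
    transform = PythonOperator(task_id="transform", python_callable=transform_fn)
    load = PythonOperator(task_id="load", python_callable=load_fn)

    extract >> transform >> load  # linear dependency chain
```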

Key Deliverables:
- Efficient, scalable SQL pipelines that transform raw data into analytics-ready datasets.
- Python-based ETL pipelines for data ingestion, transformation, and automation.
- Automated data quality checks and monitoring systems (illustrated in the sketch after this list).
- Modular and reusable SQL components for consistent data logic.
- CI/CD-enabled workflows for reliable and maintainable data pipeline deployments.
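
One common shape for the automated data quality checks deliverable is a set of SQL probes that must each return zero rows on healthy data. The sketch below carries over the hypothetical stg_orders table and DSN from the earlier examples.

```python
import pandas as pd
from sqlalchemy import create_engine

# Placeholder DSN and table names -- assumptions carried over from the
# earlier sketches, not details from this posting.
engine = create_engine("postgresql://user:password@localhost:5432/analytics")

# Each check is a SQL probe that should return zero rows on healthy data.
CHECKS = {
    "no_null_order_ids": "SELECT 1 FROM stg_orders WHERE order_id IS NULL LIMIT 1",
    "no_negative_revenue": "SELECT 1 FROM stg_orders WHERE net_revenue < 0 LIMIT 1",
    "no_duplicate_orders": (
        "SELECT order_id FROM stg_orders "
        "GROUP BY order_id HAVING COUNT(*) > 1 LIMIT 1"
    ),
}


def run_checks() -> list[str]:
    """Run every probe and collect the names of those that return rows."""
    return [name for name, sql in CHECKS.items()
            if not pd.read_sql(sql, engine).empty]


if __name__ == "__main__":
    failed = run_checks()
    if failed:
        raise SystemExit(f"Data quality checks failed: {failed}")
    print("All data quality checks passed.")
```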

Benefits:
- Competitive Salary
- Healthcare Benefits Package
- Career Growth