Data Engineer

Charger Logistics

  • Brampton, ON
  • Permanent
  • Full-time
  • Posted 2 days ago
Job Description:

Charger Logistics Inc. is a world-class asset-based carrier with locations across North America. With over 20 years of experience providing the best logistics solutions, Charger Logistics has transformed into a world-class transport provider and continues to grow. Charger Logistics invests time and support in its employees, giving them room to learn, grow their expertise, and work their way up. We are an entrepreneurial-minded organization that welcomes and supports individual ideas and strategies.

We are seeking a skilled Data Engineer with strong SQL and Python expertise to join our modern data team. The successful candidate will build scalable, maintainable data transformation pipelines in SQL and Python that power our analytics and business intelligence initiatives.

Responsibilities:
  • Design and maintain high-performance SQL-based data transformation pipelines.
  • Build reusable, modular SQL code using software engineering best practices.
  • Develop Python applications for data ingestion, transformation, and pipeline orchestration.
  • Optimize complex SQL queries for performance, scalability, and reliability.
  • Implement robust data quality checks and maintain metadata and documentation (see the Python sketch after this list).
  • Automate ETL/ELT workflows using Python and cloud-native tools.
  • Work with analytics and business teams to translate business logic into SQL data models.
  • Implement version control (Git) and CI/CD workflows for testing and deployment of pipelines.
  • Monitor and optimize data workflows and identify opportunities for performance improvement.
  • Mentor junior team members on SQL optimization and Python scripting practices.
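For a concrete flavour of these responsibilities, below is a minimal sketch of a modular pandas transformation with a built-in data quality check. The shipment columns (picked_up_at, delivered_at) and the transit-days rule are hypothetical illustrations, not Charger Logistics code.

```python
# Minimal sketch: a modular transform plus a fail-fast quality check.
# Assumes pandas; the shipment schema below is hypothetical.
import pandas as pd

def add_transit_days(df: pd.DataFrame) -> pd.DataFrame:
    """Derive transit time in days from pickup/delivery timestamps."""
    out = df.copy()
    out["transit_days"] = (
        pd.to_datetime(out["delivered_at"]) - pd.to_datetime(out["picked_up_at"])
    ).dt.days
    return out

def check_non_negative(df: pd.DataFrame, column: str) -> pd.DataFrame:
    """Fail fast if a derived column contains impossible values."""
    bad = df[df[column] < 0]
    if not bad.empty:
        raise ValueError(f"{len(bad)} rows have negative {column}")
    return df

if __name__ == "__main__":
    raw = pd.DataFrame({
        "picked_up_at": ["2024-01-02", "2024-01-05"],
        "delivered_at": ["2024-01-04", "2024-01-09"],
    })
    print(check_non_negative(add_transit_days(raw), "transit_days"))
```

Keeping each step a small, pure function is what makes a pipeline reusable and unit-testable.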
Requirements:
Required Qualifications:
  • Bachelor's degree in Computer Science, Engineering, Data Science, or a related field.
  • 2+ years of experience in data engineering roles, with strong emphasis on SQL and Python.
  • Expert-level SQL skills: CTEs, window functions, query optimization, analytical queries (a worked example follows this list).
  • Solid Python programming experience: data processing, scripting, automation, APIs.
  • Hands-on experience with modern cloud data warehouses (Snowflake, BigQuery, Redshift, or Databricks).
  • Strong understanding of data warehouse design, dimensional modeling, and ELT/ETL pipelines.
  • Experience with version control systems like Git and collaborative development workflows.
  • Knowledge of data quality frameworks and testing strategies using SQL and Python.
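As an illustration of the SQL level expected, here is a sketch combining a CTE with a window function, run from Python via SQLAlchemy. It uses an in-memory SQLite database (window functions require SQLite 3.25+) so it is self-contained; the orders table is hypothetical.

```python
# Sketch: top order per customer using a CTE + ROW_NUMBER() window.
# Assumes SQLAlchemy and an SQLite build with window-function support.
from sqlalchemy import create_engine, text

engine = create_engine("sqlite:///:memory:")

with engine.begin() as conn:
    conn.execute(text("CREATE TABLE orders (customer TEXT, amount REAL)"))
    conn.execute(text(
        "INSERT INTO orders VALUES ('acme', 100), ('acme', 250), ('zenith', 75)"
    ))
    rows = conn.execute(text("""
        WITH ranked AS (
            SELECT customer,
                   amount,
                   ROW_NUMBER() OVER (
                       PARTITION BY customer ORDER BY amount DESC
                   ) AS rn
            FROM orders
        )
        SELECT customer, amount FROM ranked WHERE rn = 1
    """)).fetchall()

for customer, amount in rows:
    print(customer, amount)  # acme 250.0 / zenith 75.0
```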
Preferred Qualifications:
  • Experience with cloud data platforms and their native data services.
  • Familiarity with workflow orchestration tools such as Airflow, Prefect, or Dagster (a minimal Airflow sketch follows this list).
  • Knowledge of data visualization tools (Looker, Tableau, Power BI).
  • Exposure to real-time data processing and streaming architectures.
  • Understanding of DataOps and analytics engineering best practices.
  • Experience with Infrastructure as Code tools like Terraform or CloudFormation.
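For orientation, a minimal Airflow DAG sketch is shown below, assuming Airflow 2.4+ with the TaskFlow API; the task bodies, schedule, and conversion rate are placeholders, not a production pipeline.

```python
# Sketch: a three-step extract/transform/load DAG with the TaskFlow API.
from datetime import datetime
from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def daily_load():
    @task
    def extract() -> list[dict]:
        # Pull raw records from a source system (stubbed here).
        return [{"id": 1, "amount": 100.0}]

    @task
    def transform(records: list[dict]) -> list[dict]:
        # Apply business logic before loading (illustrative rate).
        return [{**r, "amount_cad": r["amount"] * 1.35} for r in records]

    @task
    def load(records: list[dict]) -> None:
        # Replace with a warehouse write (Snowflake, BigQuery, etc.).
        print(f"loaded {len(records)} records")

    load(transform(extract()))

daily_load()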
Technical Skills:
  • SQL: Advanced querying, performance tuning, data modeling, optimization.
  • Python: pandas, requests, SQLAlchemy, API integration, ETL development (see the ingestion sketch after this list).
  • Data Warehouses: Snowflake, BigQuery, Redshift, Databricks (or similar platforms).
  • Tools: Git, Docker, CI/CD pipelines, orchestration tools (Airflow, Prefect).
  • Concepts: Dimensional modeling, data testing, DataOps, analytics engineering.
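Tying several of these tools together, here is a sketch of API-to-warehouse ingestion with requests, pandas, and SQLAlchemy. The endpoint URL and table name are placeholders, and the payload is assumed to be a JSON array.

```python
# Sketch: fetch a JSON API and append the rows to a warehouse table.
import pandas as pd
import requests
from sqlalchemy import create_engine

def ingest(url: str, table: str, engine) -> int:
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()            # surface HTTP errors early
    df = pd.DataFrame(resp.json())     # assumes a JSON array payload
    df.to_sql(table, engine, if_exists="append", index=False)
    return len(df)

if __name__ == "__main__":
    engine = create_engine("sqlite:///:memory:")
    # Hypothetical call:
    # ingest("https://api.example.com/shipments", "raw_shipments", engine)
```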
What You'll Build:
  • Efficient, scalable SQL pipelines that transform raw data into analytics-ready datasets.
  • Python-based ETL pipelines for data ingestion, transformation, and automation.
  • Automated data quality checks and monitoring systems.
  • Modular and reusable SQL components for consistent data logic.
  • CI/CD-enabled workflows for reliable and maintainable data pipeline deployments (see the test sketch after this list).
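As one example of how data tests slot into CI/CD, below is a pytest-style sketch; in a real pipeline load_shipments would query a staging warehouse rather than return a stub frame.

```python
# Sketch: data quality tests that a CI pipeline runs before deployment.
# Assumes pytest and pandas; the shipment data is stubbed.
import pandas as pd

def load_shipments() -> pd.DataFrame:
    # In CI this would query a staging warehouse; stubbed for the sketch.
    return pd.DataFrame({"shipment_id": [1, 2, 3],
                         "weight_kg": [120.0, 80.5, 42.0]})

def test_shipment_ids_are_unique():
    assert load_shipments()["shipment_id"].is_unique

def test_weights_are_positive():
    assert (load_shipments()["weight_kg"] > 0).all()
```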
Benefits:
  • Competitive Salary
  • Healthcare Benefits Package
  • Career Growth
