Senior Data Engineer - Big Data & Analytics Platforms
Parent Organization
- Canada
- Permanent
- Full-time
Job Location: Hybrid (3 days remote), Toronto, Canada
Experience: 4 to 7 Years
Rate: 65 to 72 CAD per hour

Role Summary: NearSource is looking for a Senior Data Engineer to design and scale enterprise-grade data platforms supporting high-volume analytics and reporting. The role focuses on building resilient data pipelines and enabling data-driven decision-making for a Fortune 500 product company.

Key Responsibilities
- Architect and implement scalable data and analytics solutions aligned with business objectives
- Design and optimize batch and real-time data pipelines processing large-scale datasets
- Drive development of robust data infrastructure to ensure reliability, scalability, and performance
- Execute ETL workflows and data transformation processes using modern data engineering tools
- Automate cloud infrastructure, monitoring, and observability to enhance operational efficiency
- Develop and maintain pipelines to improve deployment quality and velocity
- Collaborate with cross-functional teams to gather requirements and deliver data-driven solutions
- Analyze complex datasets to identify trends, insights, and optimization opportunities
- Deliver dashboards and reporting solutions for product, marketing, and business performance tracking
- Define and maintain key metrics, KPIs, and data models to support analytics initiatives
- Present technical insights and recommendations to both technical and non-technical stakeholders
- Enable self-service analytics through well-designed dashboards and data access patterns
Required Skills
- Proficient in programming for data engineering and pipeline development
- Strong experience with Apache Spark, Spark SQL, and PySpark for large-scale data processing
- Expertise in SQL and analytical data modeling, including dimensional modeling techniques
- Hands-on experience with data warehousing platforms such as Snowflake and Hive
- Experience with ETL orchestration tools such as Apache Airflow
- Strong experience building dashboards using BI tools such as Power BI, Tableau, or Qlik
- Experience with version control systems and pipelines, including Git
- Hands-on experience working with notebook environments such as Jupyter or Apache Zeppelin
- Familiarity with Looker for data visualization and reporting
- Experience with Presto for distributed querying
- Exposure to cloud services for data infrastructure
- Experience working with large-scale, real-time data streaming systems