
Data Engineer / Full Stack Engineer – Databricks & Data Platform
- Toronto, ON
- Permanent
- Full-time

Responsibilities:
- Design and build data pipelines (batch-first, with an increasing move toward stream processing) on the Databricks Lakehouse
- Support Unity Catalog migration from legacy Databricks setups
- Integrate data services with front-end applications using APIs
- Participate in end-to-end delivery of data products (ingestion, transformation, storage, and consumption)
- Collaborate with platform, app, and AI teams to deliver reusable frameworks for use cases like call center transcript analytics

Qualifications:
- Strong experience with Databricks (including Lakehouse, Delta Lake, and Unity Catalog)
- Hands-on experience building batch data pipelines and working knowledge of streaming frameworks (e.g., Spark Structured Streaming)
- Proficient in PySpark, Python, SQL, and data transformation logic
- Experience with API development and integration (to connect data layers with apps)
- Ability to collaborate across data engineering, full-stack dev, and AI/ML teams
- Understanding of data product principles and platform-as-a-product models
- Good understanding of data governance, access control, and security models
- Solid written, verbal, and presentation communication skills
- Strong team player who also works well independently
- Maintains composure in all situations and is collaborative by nature
- High standards of professionalism, consistently producing high-quality results
- Self-sufficient and independent, requiring very little supervision or intervention
- Demonstrates flexibility and openness, bringing creative solutions to address issues