
Job Description
About us: iconic brand, tiny company.
Polaroid was founded in 1937 by one of the most seminal innovators of the 20th century, Edwin Land. His motto was, “don't undertake a project unless it is manifestly important and nearly impossible.” In 2008, Polaroid shuttered its last factory, but a group of diehard fans came together as The Impossible Project to save instant film. Over 10 years later, that startup acquired what was left of Polaroid, and today we’re again a small group of people passionate about changing the world through great products.
Job Summary
As a Senior Data Engineer, you will be responsible for architecting and designing data solutions, developing data models, and building scalable API-driven data platforms using Microsoft Fabric's Lakehouse architecture. You will lead the design of our data infrastructure, establish data modeling standards, and create robust API strategies to support data-driven decision making across our creative and business operations. You will leverage Python, Spark, and other tools within the Fabric ecosystem to build enterprise-grade ETL processes, design and implement data APIs, including GraphQL and REST, and architect comprehensive data solutions across our modern data lakehouse infrastructure.
The ideal candidate will be an experienced data engineering professional with expertise in data modeling, system architecture, and API development. You will have excellent problem-solving skills and proven leadership capabilities in a fast-paced, creative environment. You will mentor junior engineers, collaborate with analysts and senior stakeholders, and drive technical excellence in Polaroid's data engineering practices.
Key Responsibilities:
Lead the architectural design, implementation, and maintenance of enterprise-wide data platforms and solutions using Microsoft Fabric Lakehouse architecture.
Design and implement comprehensive dimensional and relational data models following industry best practices.
Architect, develop, maintain and document RESTful and GraphQL APIs for data consumption across the organization.
Design, develop, and optimize complex data pipelines and ELT processes using Microsoft Fabric Lakehouse, Data Factory, and Synapse Data Engineering, with Python and Apache Spark (a minimal sketch follows this list).
Mentor junior and mid-level data engineers on best practices, design patterns, and technical skills.
Qualifications:
Bachelor’s or master’s degree in Computer Science, Data Engineering, Computer Engineering, or a related field.
5+ years of hands-on experience in data engineering with at least 2 years in a senior or lead role.
Expert-level proficiency in Python with deep experience in data engineering frameworks (PySpark, pandas, numpy, Airflow).
Advanced SQL skills including complex query optimization, window functions, CTEs, and stored procedures (a sample query follows this list).
Extensive experience with Apache Spark including performance tuning, custom transformations, and distributed computing patterns.
Strong experience with Microsoft Fabric ecosystem (Lakehouse, Data Factory, Synapse, Power BI) or equivalent cloud platforms (Azure, AWS, GCP).
Proven experience in designing enterprise-scale data models (dimensional modeling, data vault, normalized schemas).
Hands-on experience in designing and implementing RESTful and GraphQL APIs.
Soft Skills:
Excellent problem-solving and communication skills.
Leadership and mentoring capabilities.
Read more about our applicant privacy policy here:
https://www.polaroid.com/legal/careers-job-applicant-privacy-policy