
Job Information

Shake Shack Analytics Engineer – Data Quality Lead in New York, New York

Our secret to leading the way in hospitality? We put our people first!

At Shake Shack, our mission is to Stand For Something Good in all that we do. From our teams to our neighborhoods, we're committed to always doing the right thing. As one of the fastest-growing hospitality brands, we're all about crafting unforgettable experiences for our guests. We offer endless learning opportunities and the chance to make a lasting impact on our business, restaurants, and communities. As a member of the #ShackFam, you’ll have access to hands-on mentorship, training, and growth potential, all in a fun and inclusive environment.

Join us and Be a Part of Something Good.

Job Summary

We are seeking an Analytics Engineer with a quality-first mindset to join our Data & Analytics team. This role is responsible for designing, building, and maintaining robust data models, pipelines, and analytics infrastructure across a broad, multi-domain portfolio, while simultaneously serving as the internal technical quality gate for a delivery model that includes our internal team and external service partners. The Analytics Engineer: Data Quality Lead bridges hands-on engineering with oversight and standards-setting, ensuring that what gets built is not only functional but reliable, documented, and trustworthy. This role operates within a modern data stack environment and is expected to leverage AI tooling as a core part of day-to-day workflow, accelerating both personal output and broader team capability.

The ideal candidate has 3+ years of analytics engineering experience with strong dbt and SQL proficiency, a track record of working in vendor or offshore delivery models, and the technical judgment to review others' code with precision and confidence. They understand that data quality is not a phase at the end of a project but a discipline embedded in every model, every test, and every deployment decision. They get energy from making systems more reliable, not just shipping their own work. They are genuinely AI-native, using tools like GitHub Copilot, Cursor, or Claude not occasionally, but as a primary accelerant. And they have the communication skills to hold service partners to a high standard while remaining a collaborative, trusted partner to the broader team.

Responsibilities:

Hands-On Build & Engineering

  • Design, develop, and maintain dbt models, SQL transformations, and data pipelines that produce clean, analytics-ready datasets supporting reporting, analysis, ML/AI, and strategic initiatives across multiple business domains

  • Build and optimize dimensional data models that enable self-service analytics and support advanced use cases including machine learning feature engineering and AI model training

  • Own high-complexity internal workstreams such as semantic layer definitions, cross-domain data models, and metrics standardization where internal technical ownership is critical

  • Support query performance optimization and data warehouse efficiency to reduce cost and improve end-user experience

  • Develop and maintain clear documentation of data models, business logic, and data lineage to promote transparency and enable knowledge sharing across the team
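To give a flavor of the dimensional modeling mentioned above, here is a minimal sketch of a star-schema join producing an analytics-ready aggregate. The table and column names are illustrative only (plain Python stands in for warehouse SQL; amounts are in cents to keep arithmetic exact):

```python
# Hypothetical star schema, represented as plain Python rows.
dim_location = {1: "Northeast", 2: "West"}   # location_id -> region
dim_item = {10: "ShackBurger", 11: "Fries"}  # item_id -> item_name
fact_sales = [
    {"location_id": 1, "item_id": 10, "net_sales_cents": 899},
    {"location_id": 1, "item_id": 11, "net_sales_cents": 399},
    {"location_id": 2, "item_id": 10, "net_sales_cents": 899},
]

# Denormalize the fact table against its dimensions (the star-schema join),
# then aggregate a metric by a dimension attribute.
sales_by_region = {}
for row in fact_sales:
    region = dim_location[row["location_id"]]
    sales_by_region[region] = sales_by_region.get(region, 0) + row["net_sales_cents"]

print(sales_by_region)  # {'Northeast': 1298, 'West': 899}
```

In a dbt project the same shape would live as a SQL model joining the fact to its conformed dimensions; the point is the structure, not the tooling.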

Technical Quality & Service Partner Oversight

  • Serve as the internal technical quality gate for service partner deliverables, reviewing pull requests and outputs against established data modeling standards, testing requirements, and documentation expectations

  • Use AI-assisted code review tooling to conduct scalable first-pass analysis of service partner code, focusing human judgment on highest-risk decisions and architectural patterns

  • Own and continuously improve the team's data observability posture by deploying and tuning monitoring tools (e.g., Elementary, re_data, Monte Carlo, or Soda) to detect anomalies, freshness failures, and quality regressions before they surface in dashboards or downstream systems

  • Build and enforce pre-deployment checklists and release gate criteria, including automated downstream impact assessments so no change ships without a known blast radius

  • Define and maintain data contracts between data producers and consumers, creating explicit, documented agreements about what each dataset guarantees to reduce silent failures and undocumented assumptions

  • Provide technical guidance and mentorship to service partner resources and extended team members, raising overall delivery quality across the ecosystem
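The data-contract responsibility above can be sketched as an explicit, machine-checkable agreement. Everything here is illustrative, not a description of any actual Shake Shack dataset: the contract fields, thresholds, and the `orders` example are hypothetical:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical data contract: what the `orders` dataset guarantees to consumers.
ORDERS_CONTRACT = {
    "required_columns": {"order_id", "location_id", "order_total"},
    "max_null_rate": {"order_total": 0.01},  # at most 1% nulls
    "max_staleness": timedelta(hours=24),    # freshness guarantee
}

def validate_contract(rows, loaded_at, contract, now=None):
    """Return a list of contract violations (empty list means the dataset passes)."""
    now = now or datetime.now(timezone.utc)
    violations = []

    # Schema check: every guaranteed column must be present in every row.
    for col in contract["required_columns"]:
        if any(col not in row for row in rows):
            violations.append(f"missing required column: {col}")

    # Null-rate check against the contracted threshold.
    for col, threshold in contract["max_null_rate"].items():
        nulls = sum(1 for row in rows if row.get(col) is None)
        if rows and nulls / len(rows) > threshold:
            violations.append(f"null rate too high for {col}")

    # Freshness check: data must be newer than the contracted staleness bound.
    if now - loaded_at > contract["max_staleness"]:
        violations.append("dataset is stale")

    return violations
```

A passing dataset returns an empty list, so a pre-deployment release gate reduces to checking that the list is empty before a change ships.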

Cross-Functional Collaboration & Stakeholder Partnership

  • Partner with the Business Analyst, Data Product Lead, and Product Manager to translate business requirements into scalable, well-scoped data solutions

  • Collaborate with Data Engineering to ensure reliable upstream pipelines and with analytics consumers across Operations, Finance, Marketing, and other business units to understand data needs and validate that delivered solutions drive intended outcomes

  • Act as a trusted technical voice in program and project delivery conversations, flagging quality or capacity risks before they become deployment failures

Minimum Qualifications:

  • 3+ years of experience in analytics engineering, data modeling, or a closely related data delivery role focused on transforming raw data into analytics-ready datasets

  • Strong proficiency in SQL and hands-on experience with dbt, including testing frameworks, documentation standards, and model governance

  • Experience working with cloud data warehouses (Snowflake, BigQuery, Redshift, or similar)

  • Demonstrated understanding of data modeling concepts including dimensional modeling, star/snowflake schemas, and normalization

  • Experience working in a delivery model that includes offshore, vendor, or service partner resources, with direct accountability for reviewing and accepting their technical output

  • Familiarity with data quality monitoring concepts including row count validation, freshness checks, null rate monitoring, and referential integrity testing

  • Active, daily use of AI coding assistants (GitHub Copilot, Cursor, Claude, or similar) as a core part of engineering workflow rather than occasional experimentation

  • Familiarity with version control (Git) and software engineering best practices including branching, code review, and CI/CD concepts

  • Strong analytical thinking and attention to detail, with a demonstrated ability to identify and resolve data quality issues before they reach end users

  • Effective communication and stakeholder management skills, with the ability to give direct technical feedback to partners and translate data concepts for non-technical audiences

  • Bachelor's Degree in Computer Science, Information Systems, Data Analytics, Statistics, or a related field, or equivalent practical experience
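Two of the data quality monitoring concepts named in the qualifications above (row count validation and referential integrity testing) can be sketched in a few lines. The tables and keys here are invented for illustration:

```python
def check_row_count(rows, expected_min):
    """Row-count validation: flag loads that are suspiciously small."""
    return len(rows) >= expected_min

def check_referential_integrity(child_rows, fk, parent_keys):
    """Referential integrity: return child rows whose foreign key has no parent."""
    return [row for row in child_rows if row[fk] not in parent_keys]

# Hypothetical tables: order line items referencing an items dimension.
items = {10, 11}
line_items = [
    {"order_id": 1, "item_id": 10},
    {"order_id": 1, "item_id": 99},  # orphan: no such item
]

assert check_row_count(line_items, expected_min=1)
orphans = check_referential_integrity(line_items, "item_id", items)
print(orphans)  # [{'order_id': 1, 'item_id': 99}]
```

In practice these checks run as dbt tests or observability-tool monitors rather than ad-hoc scripts, but the underlying assertions are the same.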

Preferred Qualifications:

  • Hands-on experience with data observability platforms such as Elementary, re_data, Monte Carlo, or Soda Core

  • Experience defining or enforcing data contracts or SLA-style quality agreements between data producers and consumers

  • Experience with CI/CD pipeline concepts applied to dbt projects, including automated testing gates and deployment workflows

  • Knowledge of Python for data transformation, automation, or pipeline scripting

  • Experience with data orchestration tools such as Airflow, Dagster, or Prefect

  • Familiarity with business intelligence and data visualization tools (Tableau, Looker, Power BI, or similar), with an understanding of how upstream model decisions affect downstream reporting

  • Understanding of machine learning workflows and feature engineering requirements, including how to structure data models for ML model training and validation

  • Experience in retail, hospitality, QSR, or restaurant operations environments

  • Exposure to event tracking, product analytics, or marketing analytics data domains

  • Experience collaborating with Data Science teams on analytical or ML projects, translating modeling requirements into data product specifications

  • 5+ years of total experience in analytics engineering, data analysis, or a related field

Benefits at Shake Shack:

A work environment where you can come as you are, share your ideas, have fun, and work collaboratively:

  • Weekly Pay and Performance bonuses

  • Shake Shack Meal Discounts

  • Exclusive corporate discounts for travel, electronics, wellness, leisure activities and more

  • Medical, Dental, and Vision Insurance*

  • Employer Paid Life and Disability Insurance*

  • 401k Plan with Company Match*

  • Paid Time Off*

  • Paid Parental Leave*

  • Access to Employee Assistance Program on Day 1

  • Pre-Tax Commuter and Parking Benefits

  • Flexible Spending and Dependent Care Accounts*

  • Development and Growth Opportunities

*Eligibility criteria apply

Pay Range - $121,650.00 - $159,650.00

Click the "Apply" button above to apply for this opening.

About Us

Beginning as a hot dog cart in New York City’s Madison Square Park, Shake Shack was created by Danny Meyer, Founder and CEO of Union Square Hospitality Group and best-selling author of Setting the Table. Shack Fans lined up daily, making the cart a resounding success, with all proceeds donated back to park beautification efforts. A permanent stand was eventually built…and the rest is Shack history! With our roots in fine dining and giving back to the community, we are committed to high quality food served with a high level of hospitality. Our team members enjoy a positive work environment that is deeply committed to the philosophy that we "Stand for Something Good."

Shake Shack is an Equal Opportunity Employer 

All qualified applicants will receive consideration for employment without regard to any protected characteristic, including race, color, ancestry, national origin, religion, creed, age, disability (mental and physical), sex, gender identity, sexual orientation, gender expression, medical condition, genetic information, marital status, and military and veteran status.
