Job Information

Henry Schein Director, Data Engineering in Seattle, Washington

We are open to this role working 100% remote within the United States. Unfortunately, Henry Schein One is unable to hire individuals residing in Alaska, North Dakota, Hawaii, West Virginia, Maryland, Delaware, Puerto Rico, or other US territories at this time. This role is a W2 role and will not consider C2C candidates.

Do you like creating new paradigms? Are you an innovator? Are you a leader? Do you love Big Data? If your answers are emphatically yes, consider joining us!

As a Director or Executive Director and technical leader of data, you and your team will develop and deliver a robust, scalable, and performant data platform for our customers, subsidiaries, and developers. At Henry Schein One, data provides key competitive advantages and is critical to our future growth. Our Data Platform Team is made up of key individuals responsible for implementing and using a modern data mesh from the ground up. We are looking for an incredibly talented hands-on leader with a background in large-scale back-end engineering.

Our data platform must support near-real-time ingestion, processing, and access to data for customers, subsidiaries, and internal development teams. We need someone who is passionate about processing Big Data in scalable and performant ways. This position is responsible for leading the data platform team; developing and refining data strategy and architecture; and building data pipelines, data structures, transformations, and final output datasets for consumption by various customers, subsidiaries, and dev teams. You will also perform advanced research and analysis, including data ingestion, cleansing, and exploration, to drive actionable insights.
What You Will Do

- Lead and grow the data platform team
- Build highly scalable and performant data pipelines for our Big Data, event-sourced, and traditional OLTP database architecture at Henry Schein One
- Support the ongoing growth, design, and expansion of our data platform
- Evaluate new tools and approaches to continuously evolve the team's data sourcing, transformation, data lineage, and scaling approaches
- Lead the company's data validation and governance
- Use your knowledge of star and snowflake modeling to influence our approaches to near-real-time queries (Spark/Kafka), Big Data batch queries, and traditional OLAP SQL queries
- Use your knowledge of and experience with the global compliance ecosystem (PHI/PII, HIPAA, CCPA, GDPR, etc.) to ensure our work respects the rights, regulations, and consent preferences of all stakeholders
- Challenge the status quo and help push our organization forward

Qualifications

What You Will Have

- 10+ years of professional experience working with Big Data in both batch and streaming, at a minimum on the multiple-terabyte level
- 5+ years of managing and leading small to medium teams
- 5+ years of developing and maintaining data pipelines
- 5+ years of working with Java
- 4+ years of hands-on experience with Big Data ecosystems such as Spark, HDFS, Kafka, Elasticsearch, or NoSQL (such as graph databases, Cassandra, etc.)
- 2+ years of hands-on experience with Kafka
- Experience with event-driven architecture/design of highly resilient systems with microservices, event sourcing, and CQRS
- Experience with API gateways and the related levers available (e.g., throttling, rate limiting, etc.)
- Comfort with solving unknown unknowns
- Understanding of data lineage
- Strong understanding of traditional OLAP databases (SQL) and data modeling strategies
- Experience with data validation; automation is preferred
- Experience operationalizing data pipelines with reproducibility, audit tracking, and reconciliation of data
- Experience with stream processing engines (Kafka Streams API, Spark, Beam, etc.)
- Strong knowledge of Linux-based operating systems
