
Job Information

Sedgwick Senior Data Engineer - Data Science & AI in Tucson, Arizona

By joining Sedgwick, you'll be part of something truly meaningful. It’s what our 33,000 colleagues do every day for people around the world who are facing the unexpected. We invite you to grow your career with us, experience our caring culture, and enjoy work-life balance. Here, there’s no limit to what you can achieve.

Recognized by Newsweek as one of America’s Greatest Workplaces (National Top Companies)

Certified as a Great Place to Work®

Fortune Best Workplaces in Financial Services & Insurance

Senior Data Engineer - Data Science & AI

Role Overview

As a Senior Data Engineer within the Transformation Office, you are the hands-on architect of the data supply chain for our most advanced initiatives. You will be responsible for the "heavy lifting" required to fuel Data Science models and AI applications with high-fidelity data. Your mission is to build the pipelines that bridge our legacy on-prem systems (Mainframes, SQL Server, DB2) with our modern Snowflake environment and AWS/Azure AI stacks. You are a "day-one" builder who ensures that data is not just moved, but engineered for the specific requirements of model training, feature stores, and RAG-based AI systems.

Key Responsibilities

• Hybrid Data Pipeline Execution: Design and implement robust ETL/ELT pipelines to ingest data from legacy on-prem sources, AWS (S3/RDS), and Azure (Blob/SQL), centralizing it for consumption in Snowflake and AI services (a minimal load sketch follows this list).

• Engineering for Data Science: Build and maintain Feature Stores and specialized datasets optimized for machine learning, ensuring Data Scientists have immediate access to clean, versioned, and statistically valid data.

• Engineering for AI (RAG & LLMs): Develop the data pipelines required for Generative AI, including the automated extraction, chunking, and loading of unstructured data into vector stores across AWS and Azure (see the chunking sketch after this list).

• Snowflake Power-User Execution: Act as the technical lead for our Snowflake data warehouse, implementing sophisticated data modeling, Snowpipe automation, and compute optimization to support high-concurrency AI workloads.

• Legacy "Back-Reach" Engineering: Execute non-invasive data extraction patterns to unlock mission-critical data from decades-old on-premise systems without disrupting core business operations.

• Multi-Cloud Orchestration: Manage complex, cross-platform data workflows using Airflow, Step Functions, or Azure Data Factory, ensuring the synchronization of data across our multi-cloud AI posture (an orchestration sketch follows this list).

• IT & Security Diplomacy: Partner directly with central IT, Database Administrators, and Security teams to solve connectivity hurdles (PrivateLink, IAM, firewalls) and secure "license to operate" for new data flows.

• Data Quality for Model Integrity: Implement automated validation and observability layers to detect data drift and quality issues that could compromise the accuracy of production AI and Data Science models (a drift-check sketch follows this list).

• Cost & Performance Management: Drive the efficiency of our data stack by optimizing storage and query performance in Snowflake, AWS, and Azure to manage the ROI of the Transformation Office.

• Direct Stakeholder Collaboration: Work as a dedicated engineering partner to MLOps and Data Science teams to rapidly iterate on data requirements for evolving AI use cases.
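
For illustration, here is a minimal sketch of the hybrid load pattern the first bullet describes, assuming the snowflake-connector-python package, a pre-created external S3 stage, and hypothetical account, stage, and table names; the same COPY INTO statement is what a Snowpipe (CREATE PIPE ... AS COPY INTO ...) would automate on S3 event notifications:

```python
# Minimal ELT load sketch. All names (account, warehouse, stage, table)
# are illustrative assumptions, not details from this posting.
import os

import snowflake.connector

def load_s3_batch() -> None:
    # Credentials would come from a secrets manager in practice.
    conn = snowflake.connector.connect(
        account="my_account",                       # hypothetical
        user="etl_service_user",                    # hypothetical
        password=os.environ["SNOWFLAKE_PASSWORD"],  # or key-pair auth
        warehouse="INGEST_WH",
        database="RAW",
        schema="LANDING",
    )
    try:
        # COPY INTO pulls any new files from an external S3 stage into a
        # landing table; a Snowpipe automates this exact statement.
        conn.cursor().execute(
            """
            COPY INTO RAW.LANDING.CLAIMS_EVENTS
            FROM @RAW.LANDING.S3_CLAIMS_STAGE
            FILE_FORMAT = (TYPE = PARQUET)
            MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
            """
        )
    finally:
        conn.close()

if __name__ == "__main__":
    load_s3_batch()
```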
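
A minimal chunking sketch for the RAG ingestion work described above; the window sizes are arbitrary, and the embed callable and vector_store client are hypothetical stand-ins for whichever embedding service and vector store (AWS or Azure) the pipeline targets:

```python
# RAG ingestion sketch: split a document into overlapping chunks and
# upsert one embedding per chunk. embed() and vector_store are
# hypothetical interfaces, not a specific product's API.
from typing import Iterator

def chunk_text(text: str, max_chars: int = 1000, overlap: int = 200) -> Iterator[str]:
    """Yield overlapping character windows over the input text."""
    step = max_chars - overlap
    for start in range(0, len(text), step):
        chunk = text[start : start + max_chars]
        if chunk.strip():
            yield chunk

def ingest_document(doc_id: str, text: str, embed, vector_store) -> None:
    """embed: callable str -> list[float]; vector_store: any client
    exposing upsert(id, vector, metadata). Both are assumptions here."""
    for i, chunk in enumerate(chunk_text(text)):
        vector_store.upsert(
            id=f"{doc_id}-{i}",
            vector=embed(chunk),
            metadata={"doc_id": doc_id, "chunk_index": i, "text": chunk},
        )
```

Character windows are the simplest possible strategy; token- or structure-aware splitting usually retrieves better, but the pipeline shape stays the same.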
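
A skeletal Airflow DAG showing the kind of dependency chain such orchestration involves; the DAG id, schedule, and task bodies are placeholders, and Step Functions or Azure Data Factory could express the same flow:

```python
# Orchestration sketch (Airflow 2.x): extract from a legacy source,
# load into Snowflake, then refresh downstream features.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_legacy():    # placeholder for an on-prem extract
    ...

def load_snowflake():    # placeholder for a COPY INTO step
    ...

def refresh_features():  # placeholder for a feature-store build
    ...

with DAG(
    dag_id="hybrid_ai_data_pipeline",  # hypothetical
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_legacy", python_callable=extract_legacy)
    load = PythonOperator(task_id="load_snowflake", python_callable=load_snowflake)
    features = PythonOperator(task_id="refresh_features", python_callable=refresh_features)

    extract >> load >> features
```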
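
One common drift check is a two-sample Kolmogorov-Smirnov test comparing a live feature sample against its training baseline; this sketch assumes NumPy and SciPy, and the alpha threshold and simulated data are arbitrary choices:

```python
# Drift-check sketch: flag a numeric feature whose live distribution
# has shifted away from the training baseline.
import numpy as np
from scipy.stats import ks_2samp

def feature_drifted(baseline: np.ndarray, live: np.ndarray, alpha: float = 0.01) -> bool:
    """A small KS p-value means the live sample is unlikely to come
    from the baseline distribution."""
    statistic, p_value = ks_2samp(baseline, live)
    return p_value < alpha

if __name__ == "__main__":
    rng = np.random.default_rng(seed=7)
    baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)
    live = rng.normal(loc=0.4, scale=1.0, size=5_000)  # simulated shift
    print("drift detected:", feature_drifted(baseline, live))
```

In practice a check like this would run inside the orchestration layer and alert the owning team before drifted data reaches model training or inference.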

Qualifications

• Education: Bachelor’s degree in Computer Science, Data Engineering, or a related field is required. A Master’s degree is highly desirable.

• Proven Execution: 6+ years of hands-on data engineering experience, with a track record of building production-grade pipelines for Data Science and AI in multi-cloud environments.

• Snowflake Mastery: Expert-level proficiency in Snowflake architecture, including data sharing, performance tuning, and the integration of Snowflake with external cloud AI services.

• Multi-Cloud Proficiency: Advanced, hands-on knowledge of AWS (S3, Glue, Lambda) and Azure (Data Factory, Synapse) data services.

• Technical Stack: Mastery of Python, SQL, and PySpark. Deep experience with data orchestration and containerization (Docker).

• Legacy Expertise: Proven ability to interface with "old world" tech (on-premise SQL, Mainframe extracts, flat files) and transform it for modern cloud consumption.

• AI/DS Fluency: A strong understanding of the specific data needs for Machine Learning (feature engineering) and Generative AI (vectorization and embedding pipelines).

• Execution Mindset: A "get-it-done" attitude, capable of navigating enterprise bureaucracy and technical debt to ship code at the speed required by a Transformation Office.

#LI-TS1 #remote

Sedgwick is an Equal Opportunity Employer and a Drug-Free Workplace.

If you're excited about this role but your experience doesn't align perfectly with every qualification in the job description, consider applying for it anyway! Sedgwick is building a diverse, equitable, and inclusive workplace and recognizes that each person possesses a unique combination of skills, knowledge, and experience. You may be just the right candidate for this or other roles.

Sedgwick is the world’s leading risk and claims administration partner, which helps clients thrive by navigating the unexpected. The company’s expertise, combined with the most advanced AI-enabled technology available, sets the standard for solutions in claims administration, loss adjusting, benefits administration, and product recall. With over 33,000 colleagues and 10,000 clients across 80 countries, Sedgwick provides unmatched perspective, caring that counts, and solutions for the rapidly changing and complex risk landscape. For more, see sedgwick.com
