
Job Information

Thermo Fisher Scientific Data Steward in Tijuana, Mexico

Work Schedule

Standard (Mon-Fri)

Environmental Conditions

Office

Job Description

Job Summary

We are seeking an experienced and detail-oriented Data Steward to drive the execution of enterprise data governance and data quality initiatives. This role operates as an independent contributor, responsible for ensuring that data assets are trusted, well-governed, and accessible across the organization.

The Data Steward will own the governance, lineage, and quality of data assets across the full data lifecycle—from ERP source systems through RAW, consumable, and KPI layers. Leveraging data.world as the enterprise data catalog, this role will ensure strong metadata management, lineage transparency, and data discoverability.

A key focus of this role is to establish a consistent, trusted semantic layer that supports analytics and AI use cases, ensuring data is clearly defined, standardized, and ready for downstream consumption. The role requires a balance of governance expertise and hands-on technical capability (SQL and Python) to validate data, enforce quality, and support metadata and lineage automation.

Key Responsibilities

Data Governance, Lineage & Stewardship

  • Own and manage end-to-end data lineage from ERP → RAW → consumable → KPI layers, ensuring traceability, transparency, and alignment with governance standards.

  • Maintain and curate data assets within data.world, ensuring datasets are accurately classified, documented, and contextually enriched.

  • Define, implement, and enforce metadata standards, including business definitions, lineage, and transformation logic across the data pipeline.

  • Partner with business and technical stakeholders to identify and steward critical data elements (CDEs) across all layers.

  • Establish and standardize business definitions, metrics, and KPIs, contributing to a governed semantic layer for analytics and AI.

  • Improve data discoverability, lineage visibility, and contextual clarity to enable trusted data usage.

Data Quality Management (End-to-End Pipeline)

  • Own data quality across the full data pipeline (ERP → RAW → consumable → KPI), ensuring consistency, accuracy, and completeness at each stage.

  • Develop and maintain data quality rules, validations, controls, and scorecards, aligned to transformation layers.

  • Utilize SQL and Python scripting to perform data profiling, validation, reconciliation, and anomaly detection.

  • Identify, analyze, and lead resolution of data quality issues, performing root cause analysis across upstream and downstream systems.

  • Collaborate with engineering and business teams to validate transformation logic and ensure reliability of KPI outputs and AI datasets.

  • Establish proactive monitoring and automated checks to detect and prevent data defects early in the pipeline.

Data Catalog & Metadata Enablement (data.world)

  • Serve as a primary steward of the data.world platform, ensuring high-quality metadata, lineage mapping, and usability of cataloged assets.

  • Document and maintain end-to-end lineage relationships within data.world, connecting ERP sources to downstream datasets and KPI layers.

  • Leverage data.world APIs and integrations to support automation of metadata ingestion, lineage updates, and catalog curation.

  • Enable semantic consistency within the data catalog, ensuring alignment between technical data and business meaning.

  • Drive adoption of data.world by enabling self-service data discovery and trusted data usage.

  • Provide training, guidance, and support to stakeholders on catalog usage, lineage interpretation, and governance best practices.

Cross-Functional Collaboration & Influence

  • Collaborate with business, analytics, data engineering, and AI/ML teams to align data definitions, transformations, and KPI logic.

  • Translate business requirements into governed data models, semantic definitions, and quality controls.

  • Work independently while influencing stakeholders to adopt standardized definitions, governance practices, and trusted data sources.

  • Apply analytical thinking and domain expertise to resolve inconsistencies and improve data processes.

  • Maintain comprehensive documentation of data flows, lineage, semantic definitions, and governance controls.

  • Contribute to the continuous improvement and maturity of data governance practices, particularly in support of AI and advanced analytics.

Preferred Experience

  • Hands-on experience with data.world or similar modern data catalog platforms.

  • Strong understanding of data governance frameworks (e.g., DAMA-DMBOK).

  • Experience managing data lineage and quality across multi-layered architectures (ERP, data lakes, transformation layers, KPI/reporting).

  • Experience supporting or building semantic layers for BI and/or AI use cases.

  • Proficiency in SQL and Python for data analysis, profiling, and automation of data quality checks.

  • Experience working with APIs or programmatic interfaces for metadata and catalog automation.

  • Familiarity with modern data platforms (Databricks, Redshift, Athena).

  • Experience with data visualization tools (e.g., Power BI).

  • Knowledge of data privacy and regulatory standards (e.g., GDPR, CCPA).

Qualifications

  • Bachelor’s degree in Computer Science, Information Systems, Data Science, or a related field.

  • 3+ years of experience in data stewardship, data governance, or data quality roles.

  • Demonstrated experience working with data catalogs, metadata management, and lineage.

  • Strong problem-solving skills with the ability to work independently and manage moderately complex data challenges.

  • Excellent communication and stakeholder management skills, with the ability to influence and drive adoption of governance practices.

  • Comfortable working hands-on with data using SQL and Python to validate data, enforce quality rules, and support governance processes.

What Success Looks Like

  • Trusted, well-documented data assets across the ERP → KPI pipeline

  • High adoption and effective use of data.world

  • Consistent and governed business definitions and semantic layer

  • Measurable improvements in data quality KPIs

  • Reliable, AI-ready datasets supporting analytics and decision-making

Thermo Fisher Scientific is an EEO/Affirmative Action Employer and does not discriminate on the basis of race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability or any other legally protected status.
