Job Information
The Hartford IND - Senior Staff Engineer, Reliability in Hyderabad, India
IND - Senior Staff Engineer, Reliability - GCC071
We’re determined to make a difference and are proud to be an insurance company that goes well beyond coverages and policies. Working here means having every opportunity to achieve your goals – and to help others accomplish theirs, too. Join our team as we help shape the future.
Key Responsibilities
Data Reliability & Quality: Establish and enforce Data Service Level Objectives (SLOs) focused on data freshness, completeness, and accuracy across critical data products.
Data Observability: Implement advanced data observability tools to monitor the entire data journey, from ingestion to consumption, detecting data quality anomalies, schema drift, and pipeline delays in real time.
Pipeline Resiliency & Automation: Collaborate with Data Engineering to embed reliability patterns into data pipelines built with Informatica and Python/PySpark, running on platforms such as Amazon EMR/Hadoop and cloud-native services.
Toil Elimination in Data Operations: Automate data validation, reprocessing, backfilling, and other manual operational tasks within the data lifecycle to reduce toil and improve operational efficiency.
Incident and Problem Management (Data Focus): Lead the response and resolution for data-related incidents (e.g., corrupt data, delayed reporting), ensuring fast recovery and effective post-incident reviews (blameless post-mortems).
Runbook Creation & Automation (Data Focus): Develop and automate sophisticated, data-aware runbooks for common data pipeline failures, data quality issues, and data recovery scenarios.
Required Skills & Experience
10+ years' overall experience in an Infrastructure, Data, or related technology organization, with increasing responsibilities as a hands-on technologist.
4-5+ years' experience in Data Engineering, Data Quality, or a specialized SRE role within an enterprise data environment.
Expert-level, hands-on experience with data warehousing and data lake technologies, including Snowflake, and cloud environments (AWS/GCP).
Expert-level experience in pipeline development and support using technologies such as Informatica, Python/PySpark, and distributed compute (EMR/Hadoop).
Experience in designing and implementing data quality checks, data validation frameworks, and data governance standards.
Hands-on experience in software or cloud engineering, including familiarity with cloud service providers and their core capabilities (compute, containers, databases, APIs, etc.).
In-depth, hands-on experience with data observability concepts and tools for monitoring data in motion and at rest (e.g., Monte Carlo, Bigeye, Astro Observe, Datafold, custom solutions).
A strong understanding of the "data journey" and the impact of data issues on business outcomes.
Expertise implementing AIOps to monitor, manage, and self-heal data pipelines, using machine learning principles for anomaly detection.
Experience with prompt engineering, implementing AWS or Google AI services, and AI-enabled automation for data quality, reliability, and pipeline performance management.
Expertise implementing and managing Apache Airflow workflows.
Expertise defining and implementing DataOps practices.