Job Information

IBM Senior Apache Iceberg Developer in Bangalore, India

Introduction

At IBM Software, we transform client challenges into solutions, building the world’s leading AI-powered, cloud-native products that shape the future of business and society. Our legacy of innovation creates endless opportunities for IBMers to learn, grow, and make an impact on a global scale. Working in Software means joining a team fueled by curiosity and collaboration. You’ll work with diverse technologies, partners, and industries to design, develop, and deliver solutions that power digital transformation. With a culture that values innovation, growth, and continuous learning, IBM Software places you at the heart of IBM’s product and technology landscape. Here, you’ll have the tools and opportunities to advance your career while creating software that changes the world.

Your role and responsibilities

  • Design and enhance Apache Iceberg table implementations and integrations.

  • Architect scalable lakehouse solutions on object stores (S3, ADLS, GCS, HDFS).

  • Implement and optimize metadata handling, schema/partition evolution, and time travel.

  • Improve query performance across engines (Spark, Flink, Trino, Presto).

  • Contribute to Apache Iceberg open-source development.

  • Analyze query execution plans and optimize read/write paths.

  • Collaborate on best practices for ingestion, governance, and lifecycle management.

  • Lead technical design discussions and resolve complex production issues.

  • Mentor engineers and conduct code reviews.
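To give a flavor of the day-to-day work described above, here is a minimal Scala sketch of Iceberg schema evolution, partition evolution, and time travel through Spark SQL. It assumes the Iceberg Spark runtime and SQL extensions are on the classpath and that a Hadoop catalog named `demo` is configured; the table, column, and snapshot names are illustrative only.

```scala
import org.apache.spark.sql.SparkSession

// Sketch only: assumes iceberg-spark-runtime is available and that the
// "demo" catalog, the demo.db.events table, and its columns exist.
object IcebergSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("iceberg-sketch")
      .config("spark.sql.extensions",
        "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
      .config("spark.sql.catalog.demo", "org.apache.iceberg.spark.SparkCatalog")
      .config("spark.sql.catalog.demo.type", "hadoop")
      .config("spark.sql.catalog.demo.warehouse", "/tmp/iceberg-warehouse")
      .getOrCreate()

    // Schema evolution: add a column as a metadata-only change,
    // without rewriting existing data files.
    spark.sql("ALTER TABLE demo.db.events ADD COLUMN region STRING")

    // Partition evolution: change the partition spec; it applies to
    // future writes while old data keeps its original layout.
    spark.sql("ALTER TABLE demo.db.events ADD PARTITION FIELD days(event_ts)")

    // Time travel: query the table as of an earlier snapshot id
    // (snapshot ids are listed in the table's metadata/snapshots view).
    spark.sql("SELECT * FROM demo.db.events VERSION AS OF 123456789").show()

    spark.stop()
  }
}
```

Because all three operations are metadata changes in Iceberg, none of them requires rewriting table data, which is a large part of what makes the lakehouse architectures in this role scale.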

Required technical and professional expertise

  • Bachelor’s or Master’s in Computer Science or related field.

  • 12+ years of software development experience.

  • 5+ years of hands-on experience with Apache Spark and Scala.

  • Strong knowledge of distributed computing and cluster frameworks.

  • Proficiency in Scala and functional programming principles.

  • Expertise in Spark tuning, partitions, joins, and optimization techniques.

  • Experience with cloud platforms (AWS, Azure, GCP) and tools such as EMR, Databricks, and HDInsight.

  • Familiarity with Kafka, Hive, HBase, NoSQL databases, and data lake architectures.

  • Knowledge of CI/CD, Git, Jenkins, and automated testing.

  • Strong problem-solving and collaboration skills.
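As a small illustration of the Spark tuning expertise listed above, the sketch below shows two common levers: right-sizing shuffle parallelism and broadcasting a small table to avoid a shuffle join. Paths, table shapes, and the partition count are hypothetical placeholders.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.broadcast

// Sketch only: input paths and column names are illustrative.
object TuningSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("tuning-sketch").getOrCreate()

    // Right-size shuffle parallelism for the data volume instead of
    // relying on the default of 200 partitions.
    spark.conf.set("spark.sql.shuffle.partitions", "64")

    val facts = spark.read.parquet("/data/facts")
    val dims  = spark.read.parquet("/data/dims")

    // Hint Spark to broadcast the small dimension table, turning a
    // shuffle join into a map-side broadcast hash join.
    val joined = facts.join(broadcast(dims), Seq("dim_id"))

    joined.write.mode("overwrite").parquet("/data/joined")
    spark.stop()
  }
}
```

In practice these choices would be validated by inspecting the physical plan (`joined.explain()`) and the Spark UI, which is the kind of query-plan analysis the responsibilities section calls for.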

Preferred technical and professional experience

  • Experience with Databricks, Delta Lake, or Apache Iceberg.

  • Exposure to machine learning pipelines using Spark MLlib or integration with ML frameworks.

  • Open-source contributions in big data projects.

  • Excellent communication and leadership abilities.

IBM is committed to creating a diverse environment and is proud to be an equal-opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, gender, gender identity or expression, sexual orientation, national origin, caste, genetics, pregnancy, disability, neurodivergence, age, veteran status, or other characteristics. IBM is also committed to compliance with all fair employment practices regarding citizenship and immigration status.