Job Information
IBM Data Engineer (Python/SQL) in Heredia, Costa Rica
Introduction
A career in IBM Consulting is built on long-term client relationships and close collaboration worldwide. You’ll work with leading companies across industries, helping them shape their hybrid cloud and AI journeys. With support from our strategic partners, robust IBM technology, and Red Hat, you’ll have the tools to drive meaningful change and accelerate client impact. At IBM Consulting, curiosity fuels success. You’ll be encouraged to challenge the norm, explore new ideas, and create innovative solutions that deliver real results. Our culture of growth and empathy focuses on your long-term career development while valuing your unique skills and experiences.
Your role and responsibilities
What we are looking for:
We are looking for a skilled Consultant Data Engineer to join our expanding team of experts. This role is pivotal in designing and developing Snowflake Data Cloud solutions, including building data ingestion pipelines, establishing sound data architecture, and implementing rigorous data governance and security protocols.
Key Responsibilities
Data Pipeline & Engineering Execution
Develop and maintain scalable data pipelines and ingestion processes using SQL, Python, dbt, and cloud-native tools.
Implement ELT/ETL patterns across batch, incremental, and CDC pipelines.
Build data models using best practices in Dimensional modeling, Data Vault, or Lakehouse approaches.
Support the development of data transformations, data quality validations, and business logic implementations.
Platform & Cloud Engineering
Work with cloud platforms (Snowflake, Databricks, AWS, Azure, or GCP) to build high-performance, cost-efficient data solutions.
Implement warehouse/lakehouse structures, storage standards, schema management, and governance rules.
Assist with orchestration using Airflow, dbt Cloud, or similar tools.
Client Delivery & Consulting Support
Collaborate with client stakeholders to clarify requirements and translate them into actionable engineering tasks.
Communicate progress, technical decisions, and blockers clearly and professionally.
Participate in client demos, stand-ups, and architecture discussions, representing Hakkoda’s engineering excellence.
Document technical solutions and contribute to handover materials.
Data Quality, Governance & Security
Implement data validation and monitoring logic to ensure accuracy and reliability.
Follow security best practices including RBAC, data masking, encryption, and compliance requirements.
Contribute to documentation, standards, and engineering templates.
Collaboration & Team Growth
Work closely with senior engineers and architects to design and refine solutions.
Participate in code reviews and adopt Hakkoda’s engineering standards.
Mentor junior engineers as needed and support internal knowledge-sharing sessions.
Contribute to internal accelerators, playbooks, and reusable assets.
Continuous Learning & Innovation
Stay current on cloud, data engineering, Snowflake, Databricks, and modern data stack technologies.
Explore new tools, patterns, and methodologies that improve engineering efficiency.
Bring new ideas to the team and participate in innovation initiatives at Hakkoda.
Required technical and professional expertise
4+ years of experience in Data Engineering or similar roles.
Strong SQL skills and hands-on experience building data pipelines.
Proficiency with Python or a similar programming language.
Experience with at least one cloud data platform (Snowflake, Databricks, BigQuery, Redshift, or Synapse).
Familiarity with data modeling concepts (Dimensional, Data Vault, Lakehouse).
Hands-on experience with version control (GitHub, Bitbucket) and CI/CD workflows.
Experience working in agile, collaborative environments.
Preferred technical and professional experience
• Cloud Integration Knowledge: Exposure to integrating cloud computing concepts and technologies with the Snowflake platform to support data and AI use cases.
• Advanced Data Engineering: Experience applying data engineering principles and practices to deliver high-quality solutions on Snowflake.
• Technical Solution Optimization: Experience optimizing Snowflake-based solutions for seamless integration and strong performance across data and AI use cases.
IBM is committed to creating a diverse environment and is proud to be an equal-opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, gender, gender identity or expression, sexual orientation, national origin, caste, genetics, pregnancy, disability, neurodivergence, age, veteran status, or other characteristics. IBM is also committed to compliance with all fair employment practices regarding citizenship and immigration status.