3-4 years of hands-on experience in data engineering, data science, or ML engineering.
Proven leadership or mentorship experience, guiding technical teams to deliver successful projects.
Expert proficiency with Python (for data science & development) and strong SQL skills.
Strong experience with the modern data stack: workflow orchestration (e.g. Airflow), event streaming (e.g. Kafka), and analytical databases (e.g. ClickHouse).
Solid track record in developing, deploying, and maintaining ML models in production using MLOps tools (e.g. MLFlow).
Strong grasp of software engineering fundamentals: version control (Git), CI/CD, and automated testing.
Nice-to-Have
Experience with Java + Spring Boot, especially for REST APIs or integrating data services.
Familiarity with Retrieval-Augmented Generation (RAG) and large language models (LLMs).
Knowledge of data governance/catalog tools such as OpenMetadata.
Experience in cloud environments (GCP, AWS, Azure).
Background in the telecommunications (Telco) domain.
Responsibilities
Lead and mentor data engineers and data scientists, fostering a culture of technical excellence and innovation.
Oversee the full lifecycle of data & AI projects, from requirements gathering through data engineering, model building, and machine learning applications.
Architect scalable data pipelines and AI-driven features, ensuring they align with the data strategy.
Enforce code quality, peer review, and QA standards for dependable data products.
Produce reusable artifacts; catalog and document them via Cakra Data Fabric and OpenMetadata.
Act as the technical bridge between the Data & AI and backend teams, integrating Python models and data services with Java Spring Boot applications.
Translate business needs into technical/data solutions in collaboration with stakeholders.
Sync weekly with Shared Services to share knowledge, refine roadmaps, and evolve core data platforms and tools (e.g. Kafka, Airflow, ClickHouse, MLFlow).
Please click APPLY to submit your CV.
The interview sessions will be held during the ITB Career Days on October 31 – November 1, 2025.