Da Nang | Senior Data Engineer

Location: Da Nang, Vietnam
Category: Information Technology
Working Model: Hybrid

Overview

Role Purpose

 

The Senior Engineer is part of the Data Engineering team within the Data Platform Group. They will be responsible for building and operating the core systems that move data from source to warehouse, ensuring our platforms remain reliable, scalable, and secure. The Senior Engineer will work in a DevOps-style environment, where we deploy, support, and run everything we build.

 

The Data Platform group includes two sub-teams:

  • Data Engineering – responsible for infrastructure, ingestion, orchestration, and foundational data systems (this is where you’ll be)
  • Analytics Engineering – focused on data modeling, transformation pipelines, and enabling reporting and analysis

Together, we provide a stable and scalable foundation for data across the business, and this role is critical to making that possible. The focus of this role is the platform side of data engineering: infrastructure, orchestration, and ingestion. You will collaborate with Analytics Engineers and other stakeholders, but you will not be responsible for business-facing data modeling or analytics work. This role does not require deep SQL or reporting expertise; it is a great fit for someone who enjoys building the systems and pipelines that enable others to work with data.

 

The Senior Engineer works across a modern data stack (AWS, Terraform, Airflow, Redshift/Snowflake, and dbt Cloud/Core) to ensure data flows reliably and securely from a variety of internal and external sources into our platform.

Responsibilities

Software Development

  • Write high-quality, well-tested, and well-documented infrastructure code using Terraform
  • Build and maintain internal tooling, Docker images, and shared utilities for ingestion and orchestration workflows (an illustrative sketch follows this list)
  • Use Git-based version control and CI/CD tooling (e.g. GitHub Actions, Terraform Cloud, Buildkite) to deliver platform changes safely
  • Provide constructive code reviews and contribute to team standards and documentation
  • Work across Python and SQL as needed to support data workflows and platform tooling
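
For a flavour of the day-to-day work described above, here is a minimal Python sketch of the kind of shared ingestion utility and accompanying test the team maintains. It is illustrative only, not part of the application; the function name and behaviour are hypothetical.

    # Illustrative sketch only; chunk_records is a hypothetical shared utility.
    def chunk_records(records, batch_size=500):
        """Split records into fixed-size batches for bulk loading into the warehouse."""
        if batch_size <= 0:
            raise ValueError("batch_size must be positive")
        return [records[i:i + batch_size] for i in range(0, len(records), batch_size)]


    def test_chunk_records_splits_into_batches():
        # pytest-style test kept alongside the utility, so tooling stays well tested.
        rows = [{"id": i} for i in range(10)]
        assert [len(batch) for batch in chunk_records(rows, batch_size=4)] == [4, 4, 2]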

Data Integration and Processing

  • Own and develop the ingestion layer end-to-end, from source systems to structured data layers, using Fivetran, Airbyte, and custom Python pipelines
  • Configure and maintain Airflow DAGs, ensuring reliable scheduling, observability, and error handling (see the sketch after this list)
  • Contribute to the deployment and operation of cloud data warehouses like Redshift and Snowflake
  • Manage ingestion logic and staging models in dbt, supporting reliable downstream consumption (without owning business-facing transformations)
  • Collaborate with Analytics Engineers on schema changes and pipeline interfaces to ensure seamless integration
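
As a rough illustration of the orchestration work above (not a definitive implementation), the sketch below shows a small Airflow DAG with scheduling, retries, and a custom Python ingestion task. It assumes Airflow 2.x; the DAG, task, and owner names are hypothetical.

    # Illustrative sketch only (hypothetical pipeline and task names), assuming Airflow 2.x.
    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.python import PythonOperator


    def extract_orders(**context):
        # Hypothetical custom ingestion step: pull one day of records from a source
        # system and land them in the warehouse staging area.
        print(f"Extracting orders for logical date {context['ds']}")


    with DAG(
        dag_id="ingest_orders_daily",      # hypothetical DAG name
        start_date=datetime(2024, 1, 1),
        schedule="0 2 * * *",              # nightly at 02:00 (Airflow 2.4+ `schedule` argument)
        catchup=False,
        default_args={
            "owner": "data-platform",
            "retries": 3,                  # basic error handling: retry transient failures
            "retry_delay": timedelta(minutes=5),
        },
        tags=["ingestion", "platform"],
    ) as dag:
        PythonOperator(task_id="extract_orders", python_callable=extract_orders)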

Delivery

  • Translate stakeholder needs into actionable infrastructure and ingestion solutions
  • Communicate trade-offs, constraints, and delivery plans clearly across technical and non-technical stakeholders
  • Participate in kickoffs, planning sessions, and retrospectives to align on scope, risks, and delivery priorities
  • Identify and address blockers, inefficiencies, or technical debt early
  • Maintain up-to-date documentation for platform components and engineering workflows

Problem Solving

  • Debug and resolve issues across infrastructure, orchestration, and ingestion systems
  • Evaluate solutions thoughtfully, articulating trade-offs and long-term maintainability
  • Handle ambiguity methodically and work across systems to uncover root causes
  • Raise risks early and propose sustainable fixes
  • Participate in incident response and contribute to post-incident reviews
  • Continuously improve the performance and cost-effectiveness of platform components and data workflows

Team / Collaboration

  • Collaborate closely with Analytics Engineering, Product, Security, and other engineering teams
  • Share knowledge through RFCs, docs, pairing, and informal mentorship
  • Review peer code and promote platform best practices
  • Support team health, growth, and clarity through thoughtful communication and feedback
  • Be aware of and accountable for your responsibilities in relation to workplace health and safety obligations, both as an employee and as a manager

Qualifications

We expect senior-level capability across both hands-on engineering and platform operations:

  • 5+ years of experience in data or software engineering, with a focus on infrastructure, ingestion, and orchestration
  • Proven experience working in a modern data stack, including Terraform, Airflow, AWS, Redshift/Snowflake, and dbt Cloud/Core
  • Experience operating production-grade data pipelines, including both managed connectors and custom code
  • Comfortable owning platform components end-to-end — including deployment, monitoring, debugging, and cost management
  • Experience contributing to technical designs (e.g. RFCs, ADRs) and evaluating trade-offs in architecture or tooling decisions
  • Demonstrated ability to work across teams and engage constructively with technical and non-technical stakeholders
  • A thoughtful, growth-oriented mindset with a commitment to system quality, knowledge sharing, and team improvement
  • Strong English communication skills, both verbal and written, especially in a global software development environment
  • Proficiency with Infrastructure as Code (Terraform) and cloud-native environments (AWS)
  • Hands-on experience with Airflow for orchestration and Python for ingestion tooling and automation
  • Comfort working in SQL within data warehouses like Redshift or Snowflake
  • Familiarity with dbt (Core or Cloud) and how it fits into structured data workflows
  • A deep understanding of data engineering fundamentals — including ingestion, orchestration, platform reliability, and system observability
  • An approach that prioritises automation, maintainability, and operational excellence
  • Strong communication and collaboration skills, with experience working across engineering, analytics, and product
  • A desire to contribute to shared documentation, team practices, and engineering culture
  • An ability to balance short-term delivery with long-term sustainability in infrastructure and platform design

