
Data Engineer

Sydney

IT & Telecomms | DBA / Database Developer

  • Cutting-edge AWS data platform
  • Own critical systems
  • Your ETL code keeps global revenue flowing

An innovative, rapidly growing SaaS company in the travel industry is looking for a talented Data Engineer to join their agile data team. This role offers an excellent opportunity to work on a modern, cloud-based data platform used by global enterprise clients. The ideal candidate will have experience with AWS, Python, SQL, and Spark, and a strong interest in building reliable, scalable data pipelines.
Key Responsibilities

  • Maintain, monitor, and enhance existing AWS Glue-based ETL pipelines.
  • Develop scalable data ingestion, transformation, and validation workflows using PySpark and SQL.
  • Work closely with product, analytics, and engineering teams to deliver clean, validated datasets.
  • Build APIs and workflows using AWS API Gateway and Lambda for triggering ETL processes.
  • Perform root cause analysis and resolve data issues in production environments.
  • Support continuous improvement of data platform reliability, performance, and maintainability.

Tech Stack You’ll Work With

  • Cloud: AWS (Glue, S3, Lambda, API Gateway, IAM)
  • Programming: Python, SQL, PySpark
  • Data Processing: AWS Glue, Apache Spark
  • Monitoring & CI/CD: CloudWatch, GitHub Actions, Terraform (desirable)
  • Other Tools: Athena, RDS, REST APIs, JSON

You’ll Have

  • 2–3 years of experience as a Data Engineer or in a similar role.
  • Proficiency in Python and SQL with hands-on experience using Spark (preferably PySpark).
  • Strong experience building and maintaining data pipelines on AWS (Glue, S3, API Gateway).
  • Familiarity with CI/CD tools and cloud-based monitoring/logging solutions.
  • A problem-solving mindset, strong attention to detail, and clear communication skills.