Job Title: Data Engineer
Role Overview
We are looking for an experienced Data Engineer with 5+ years of experience to design, build, and maintain scalable data systems that power data-driven decision-making across the organization.
In this role, you will develop robust data pipelines, optimize data architectures, and ensure high data quality and availability. You will collaborate closely with data scientists, analysts, and cross-functional teams to deliver reliable and efficient data solutions. The ideal candidate is detail-oriented, highly analytical, and passionate about working with large-scale data systems.
Key Responsibilities
- Pipeline & ETL: Design, develop, and maintain scalable data pipelines and ETL workflows
- Data Storage: Build, manage, and optimize data warehouses and data lakes
- Collaboration: Partner with data scientists and analysts to support data access and analytics requirements
- Data Integrity: Ensure accuracy, consistency, and reliability of data across systems
- Data Modeling: Design and maintain efficient data models and schema structures
- Security: Implement data governance, security controls, and access management
- Performance Optimization: Tune SQL queries and optimize data processing and retrieval performance
- Innovation: Research and integrate new tools, technologies, and best practices in data engineering
- Leadership: Mentor junior engineers and contribute to technical direction and standards
- Business Support: Work with cross-functional teams to enable data-driven insights and decision-making
Technical Qualifications
- Education: Bachelor's degree or higher in Computer Science or a related field
- Experience: 5+ years of professional experience in data engineering
- Programming: Proficiency in Python, Java, or Scala
- Big Data Technologies: Hands-on experience with tools such as Hadoop, Spark, and Hive
- Databases: Strong expertise in SQL, with experience in both relational and NoSQL databases
- Cloud Platforms: Experience with AWS or Azure and their data services
- Best Practices: Solid understanding of data modeling, data warehousing, and ETL design principles
Preferred Qualifications
- Experience with stream processing frameworks (e.g., Kafka, Flink, Delta Live Tables)
- Familiarity with data governance and regulatory compliance standards
- Experience with containerization tools such as Docker and Kubernetes
- Relevant certifications or contributions to open-source projects
- Experience with Tencent Big Data platform
- Proficiency in Power BI for data visualization and reporting
Professional & Soft Skills
- Strong analytical thinking and problem-solving skills
- High attention to detail and commitment to data accuracy
- Excellent communication and collaboration skills
- Ability to manage multiple priorities in a fast-paced environment
- Team-oriented mindset with leadership and mentoring capabilities
Benefits & Perks
- Allowance: ₱5,000 (combination of taxable and non-taxable components)
- Paid Leaves:
  - Year 1: 12 Vacation Leaves (VL) and 10 Sick Leaves (SL)
  - Year 2 onwards: 20 VL and 10 SL
- Leave Conversion: Up to 5 unused vacation leaves may be converted to cash or carried over to the following year
- HMO Coverage: Up to ₱250,000 per illness per year, with 2 free dependents
- Referral Program: Incentives for successful talent referrals
- Pantry Perks: Complimentary coffee, milk, chocolate, and snacks
- Birthday Gift: Token gift (item or food)
- Company Engagement: Quarterly company-sponsored meals (Happy Hour)
- Marriage Gift: ₱3,000