About the Role
We are seeking a skilled Data Engineer to design, build, and maintain scalable data pipelines and infrastructure. The ideal candidate has strong expertise in data modeling, ETL processes, and cloud-based data platforms, and will enable the organization to leverage data for analytics, reporting, and advanced machine learning initiatives.
Key Responsibilities
- Design, develop, and maintain data pipelines for ingestion, transformation, and storage (illustrated in the sketch after this list).
- Build and optimize ETL/ELT workflows to support analytics and reporting needs.
- Collaborate with data scientists, analysts, and business stakeholders to deliver reliable datasets.
- Implement data models, schemas, and warehouse/lake architectures.
- Ensure data quality, consistency, and governance across systems.
- Monitor and optimize pipeline performance and scalability.
- Work with cloud platforms (AWS, Azure, GCP) for data storage and processing.
- Automate workflows and support CI/CD for data engineering solutions.
- Troubleshoot and resolve data-related issues in production environments.
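To give a concrete flavor of the pipeline work above, here is a minimal sketch of a daily ingest-transform-load job using Apache Airflow's TaskFlow API (assuming Airflow 2.4+; the pipeline, table, and field names are hypothetical, not part of this role's actual stack):

```python
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def daily_sales_pipeline():
    @task
    def extract() -> list[dict]:
        # In practice: pull from a source API, object store, or database.
        return [{"order_id": 1, "amount": 125.00}, {"order_id": 2, "amount": 0.0}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Apply a business rule: drop zero-value orders.
        return [r for r in rows if r["amount"] > 0]

    @task
    def load(rows: list[dict]) -> None:
        # In practice: upsert into the warehouse (Redshift, Synapse, BigQuery).
        print(f"loading {len(rows)} rows")

    load(transform(extract()))


daily_sales_pipeline()
```

The structure, not the toy logic, is the point: extract, transform, and load run as separate, independently retryable tasks that Airflow schedules, monitors, and backfills.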
Required Skills & Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 4+ years of experience in data engineering or related roles.
- Strong proficiency in SQL and relational databases.
- Hands-on experience with Python/Scala/Java for data processing.
- Expertise in orchestration and ETL tools (e.g., Apache Airflow, Talend, Informatica).
- Experience with big data technologies (Hadoop, Spark, Kafka); see the Spark example after this list.
- Familiarity with cloud data warehouses (Amazon Redshift, Azure Synapse, Google BigQuery).
- Knowledge of data modeling, warehousing, and governance practices.
- Strong problem-solving and communication skills.
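As a flavor of the Spark and SQL proficiency listed above, here is a small PySpark sketch of a typical curation job (the bucket paths, column names, and aggregation are hypothetical examples, not a description of this team's actual datasets):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_rollup").getOrCreate()

# Read raw events, e.g. as landed by an ingestion pipeline.
orders = spark.read.parquet("s3://example-bucket/raw/orders/")

# Aggregate daily revenue per customer -- the DataFrame-API equivalent
# of a GROUP BY query an analyst might write in SQL.
daily_revenue = (
    orders
    .filter(F.col("status") == "completed")
    .groupBy("customer_id", F.to_date("created_at").alias("order_date"))
    .agg(F.sum("amount").alias("revenue"))
)

# Write back partitioned by date for efficient downstream analytics.
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/daily_revenue/"
)
```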