Key Responsibilities:
- Design and implement robust ETL/ELT pipelines using AWS services such as Glue, Lambda, and Step Functions (an illustrative sketch follows this list)
- Develop and maintain data lakes and data warehouses (e.g., Amazon S3, Redshift)
- Optimize data workflows for performance and cost-efficiency
- Ensure data quality, governance, and security across platforms
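For illustration only, a minimal sketch of the kind of Glue ETL job this role would own: read from the Glue Data Catalog, clean the data, and write curated Parquet to S3. The database, table, and bucket names (sales_raw, orders, s3://example-curated-bucket/orders/) are hypothetical placeholders, not references to a real environment.

```python
# Minimal AWS Glue (PySpark) ETL sketch: catalog source -> curated Parquet in S3.
# All database, table, and bucket names are hypothetical placeholders.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])

glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Extract: read the raw orders table registered in the Glue Data Catalog.
orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales_raw",   # hypothetical catalog database
    table_name="orders",    # hypothetical table
)

# Transform: drop rows missing an order id, keep needed columns, derive a date partition.
cleaned = (
    orders.toDF()
    .dropna(subset=["order_id"])
    .select("order_id", "customer_id", "order_ts", "total_amount")
    .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write partitioned Parquet to the curated zone of the data lake.
(
    cleaned.write.mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-curated-bucket/orders/")  # hypothetical bucket
)

job.commit()
```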
Technical Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field
- 3-5 years of experience in data engineering with AWS
- Proficiency in Python, SQL, and Spark
- Hands-on experience with AWS services: Glue, S3, Redshift, Lambda, CloudWatch, and IAM (see the sketch after this list)
- Experience with CI/CD tools and infrastructure-as-code (e.g., Terraform, CloudFormation)
- Experience with Git for version control and collaborative development
- Familiarity with data modeling and warehousing concepts
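As a hedged illustration of how several of these services fit together, the sketch below shows a Lambda handler that starts a Glue job run with boto3 and logs the run id to CloudWatch Logs. The job name, GLUE_JOB_NAME environment variable, and run_date argument are hypothetical; in practice they would come from the pipeline's own configuration (e.g. set by Terraform or CloudFormation).

```python
# Sketch of a Lambda handler that starts a Glue job run with boto3.
# The job name, environment variable, and run_date argument are hypothetical.
import json
import logging
import os

import boto3

logger = logging.getLogger()
logger.setLevel(logging.INFO)

glue = boto3.client("glue")


def handler(event, context):
    # The job name would normally be injected by IaC (Terraform/CloudFormation) via an env var.
    job_name = os.environ.get("GLUE_JOB_NAME", "example-orders-etl")  # hypothetical

    response = glue.start_job_run(
        JobName=job_name,
        Arguments={"--run_date": event.get("run_date", "")},  # hypothetical job argument
    )

    # Lambda writes stdout/stderr to CloudWatch Logs, so the run id is traceable there.
    logger.info("Started Glue job %s (run id %s)", job_name, response["JobRunId"])
    return {"statusCode": 200, "body": json.dumps({"jobRunId": response["JobRunId"]})}
```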
Certifications:
Required: AWS Certified Data Analytics - Specialty (or willingness to obtain it within 6 months)