Job Summary
We are seeking a mid-level Data Engineer with hands-on Databricks experience to support the design, development, and optimization of scalable data solutions. The ideal candidate will have a strong grounding in data engineering principles, experience working with modern data platforms, and a track record of collaborating within cross-functional teams in a fast-paced environment.
Job Responsibilities
- Design, develop, and maintain data pipelines using Databricks and related technologies
- Build and optimize ETL/ELT processes for large-scale data processing
- Work with structured and unstructured data across multiple data sources
- Develop and manage data models to support analytics and reporting requirements
- Collaborate with stakeholders including data analysts, developers, and business teams to understand data needs
- Ensure data quality, integrity, and governance across data platforms
- Monitor and troubleshoot data workflows and performance issues
- Contribute to continuous improvement of data engineering practices and standards
Job Qualifications
- 2+ years of experience in data engineering
- Hands-on experience with Databricks
- Strong proficiency in Python and/or SQL
- Experience with data pipeline development and data transformation frameworks
- Familiarity with cloud platforms (e.g., Azure, AWS, or GCP)
- Understanding of data warehousing concepts and modern data architectures
- Experience with version control tools (e.g., Git)
- Strong problem-solving skills and attention to detail