Roles & responsibilities
- Participate in the design, development, and delivery of Azure data engineering solutions for large-scale data platforms
- Work directly with clients to gather requirements for data ingestion, integration, transformation, storage, and processing
- Develop and maintain ETL/ELT pipelines using Azure Data Factory, Synapse, Databricks, and other Azure-native services
- Implement highly scalable data lake and data warehouse architectures aligned with best practices
- Contribute to data modernization, cloud migration, and integration programs across industries
- Ensure data platform performance, reliability, security, and governance in alignment with organizational and compliance standards
- Collaborate closely with cloud architects, BI teams, and data scientists to support end-to-end data lifecycle needs
- Continuously learn new Azure data services and engineering frameworks, adopting industry best practices
- Optimize Azure resource consumption through cost management and performance tuning
- Support documentation, design reviews, sprint planning, and technical delivery discussions
This role is for you if you have the following
Educational qualifications
- A bachelor's or master's degree in Engineering, Computer Science, or equivalent
- Microsoft Certified: Fabric Data Engineer Associate (DP-700)
Work experience
- Hands-on experience (6–8 years) in Azure data engineering, data integration, and data platform development
- Experience working on ETL/ELT pipelines, data lakes, relational and NoSQL databases, and real-time/batch ingestion
- Exposure to data migration, modernization, governance, and cloud-native data architectures
Mandatory technical & functional skills
- Strong hands-on expertise in Azure Data Factory, Azure Synapse Analytics, Azure Data Lake Storage Gen2, Azure SQL/T-SQL, and Microsoft Fabric
- Experience with Azure Databricks, PySpark, and Python for large-scale data processing
- Good understanding of data modeling, star/snowflake schema, and modern analytical design patterns
- Strong proficiency in SQL, query tuning, and working with structured and semi-structured data
- Experience with stream processing using Event Hubs, Kafka, or similar technologies
- Familiarity with REST APIs and integration patterns
- Understanding of data security, RBAC, IAM, encryption, and compliance in cloud environments
- Experience working with DevOps and CI/CD pipelines for data engineering workloads
- Practical understanding of Agile methodologies and data engineering best practices
- Strong communication skills with the ability to collaborate across teams and contribute to client discussions
Preferred technical & functional skills
- Experience with serverless components such as Azure Functions or Logic Apps
- Knowledge of Microsoft Purview for data governance, lineage, and cataloging
- Familiarity with implementing data quality frameworks
- Exposure to advanced analytics, machine learning enablement, or Power BI dataflows
- Understanding of Generative AI concepts, data preparation for LLMs, and integration with Azure OpenAI services