Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Key Responsibilities:
- Develop high-quality, scalable ETL/ELT pipelines using Databricks technologies including Delta Lake, Auto Loader, and DLT
- Excellent programming and debugging skills in Python
- Strong hands-on experience with PySpark to build efficient data transformation and validation logic
- Must be proficient in at least one cloud platform: AWS, GCP, or Azure
- Create modular dbx functions for transformation, PII masking, and validation logic reusable across DLT and notebook pipelines (see the PII-masking sketch after this list)
- Implement ingestion patterns using Auto Loader with checkpointing and schema evolution for structured and semi-structured data (see the Auto Loader sketch after this list)
- Build secure and observable DLT pipelines with DLT Expectations, supporting Bronze/Silver/Gold medallion layering (see the DLT sketch after this list)
- Configure Unity Catalog: set up catalogs, schemas, user/group access, enable audit logging, and define masking for PII fields (see the Unity Catalog sketch after this list)
- Enable secure data access across domains and workspaces via Unity Catalog External Locations, Volumes, and lineage tracking
- Access and utilize data assets from the Databricks Marketplace to support enrichment, model training, or benchmarking
- Collaborate with data sharing stakeholders to implement Delta Sharing both internally and externally
- Integrate Power BI/Tableau/Looker with Databricks using optimized connectors (ODBC/JDBC) and Unity Catalog security controls
- Build stakeholder-facing SQL dashboards within Databricks to monitor KPIs, data pipeline health, and operational SLAs
- Prepare GenAI-compatible datasets: manage vector embeddings, index with Databricks Vector Search, and use Feature Store with MLflow
- Package and deploy pipelines using Databricks Asset Bundles through CI/CD pipelines in GitHub or GitLab
- Troubleshoot, tune, and optimize jobs using the Photon engine and serverless compute, ensuring cost efficiency and SLA reliability
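
To make the reusable-transformation responsibility concrete, here is a minimal PySpark sketch of a shared PII-masking helper of the kind described above; the function name and column names are hypothetical illustrations, not part of the role description.

```python
from pyspark.sql import DataFrame, functions as F

def mask_pii(df: DataFrame, pii_columns: list) -> DataFrame:
    """Replace PII columns with a SHA-256 digest so values stay joinable
    but unreadable. Intended for reuse from DLT and notebook pipelines."""
    for col_name in pii_columns:
        df = df.withColumn(col_name, F.sha2(F.col(col_name).cast("string"), 256))
    return df

# Illustrative usage inside a pipeline step:
# silver_df = mask_pii(bronze_df, ["email", "phone_number"])
```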
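A minimal Auto Loader ingestion sketch follows, illustrating the checkpointing and schema-evolution options named in the list; the landing path and table name are assumptions for the example.

```python
# Runs on Databricks, where a `spark` session is provided.
# All paths and table names below are hypothetical.
raw_path = "/Volumes/main/landing/events"

df = (
    spark.readStream
    .format("cloudFiles")                                        # Auto Loader source
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", raw_path + "/_schema")  # inferred schema store
    .option("cloudFiles.schemaEvolutionMode", "addNewColumns")   # evolve on new fields
    .load(raw_path)
)

(
    df.writeStream
    .option("checkpointLocation", raw_path + "/_checkpoint")  # incremental progress tracking
    .trigger(availableNow=True)                               # process available files, then stop
    .toTable("main.bronze.events")                            # Bronze Delta table
)
```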
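For the DLT responsibility, a minimal Delta Live Tables sketch with DLT Expectations on a Silver table; the table names and quality rules are illustrative assumptions.

```python
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Silver orders with data-quality gates")
@dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")  # drop failing rows
@dlt.expect("positive_amount", "amount > 0")                   # record violations, keep rows
def silver_orders():
    # `bronze_orders` is a hypothetical upstream table in the same pipeline.
    return (
        dlt.read_stream("bronze_orders")
        .withColumn("processed_at", F.current_timestamp())
    )
```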
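And a Unity Catalog configuration sketch issued as SQL from a notebook, covering grants and a PII column mask; every catalog, schema, table, and group name here is a hypothetical placeholder.

```python
# Catalog/schema setup and access grants (names are hypothetical).
spark.sql("CREATE CATALOG IF NOT EXISTS analytics")
spark.sql("CREATE SCHEMA IF NOT EXISTS analytics.sales")
spark.sql("GRANT USE CATALOG ON CATALOG analytics TO `data_engineers`")
spark.sql("GRANT SELECT ON SCHEMA analytics.sales TO `analysts`")

# Column mask: only members of `pii_readers` see the raw email value.
spark.sql("""
CREATE OR REPLACE FUNCTION analytics.sales.mask_email(email STRING)
RETURN CASE WHEN is_account_group_member('pii_readers') THEN email
            ELSE '***REDACTED***' END
""")
spark.sql("""
ALTER TABLE analytics.sales.customers
ALTER COLUMN email SET MASK analytics.sales.mask_email
""")
```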
Qualifications:
- At least 2 years of relevant work experience with cloud-based services for data engineering, data storage, data processing, data warehousing, real-time streaming, and serverless computing
- Hands-on experience applying performance optimization techniques (see the sketch after this list)
- An understanding of data modeling and data warehousing principles is essential
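
As one concrete illustration of the performance-optimization techniques the role calls for, a short sketch combining a broadcast join hint with Delta OPTIMIZE/ZORDER; the table names are hypothetical.

```python
from pyspark.sql import functions as F

# Broadcast the small dimension table to avoid a shuffle-heavy join.
facts = spark.table("main.silver.orders")
dims = spark.table("main.silver.customers")
joined = facts.join(F.broadcast(dims), "customer_id")

# Compact small files and co-locate rows by a commonly filtered column.
spark.sql("OPTIMIZE main.silver.orders ZORDER BY (order_date)")
```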
Good to have:
- Certifications: Databricks Certified Professional or similar certifications
- Machine Learning: Knowledge of machine learning concepts and experience with popular ML libraries
- Knowledge of big data processing frameworks (e.g., Spark, Hadoop, Hive, Kafka)
- Data Orchestration: Apache Airflow (see the sketch after this list)
- Knowledge of CI/CD pipelines and DevOps practices in a cloud environment
- Experience with ETL tools like Informatica, Talend, Matillion, or Fivetran
- Familiarity with dbt (Data Build Tool)
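
For the Airflow item, a minimal DAG sketch that triggers an existing Databricks job via the Databricks provider package; the job ID and connection ID are placeholders for illustration.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.databricks.operators.databricks import (
    DatabricksRunNowOperator,
)

# Triggers a pre-existing Databricks job once per day.
with DAG(
    dag_id="daily_databricks_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_pipeline = DatabricksRunNowOperator(
        task_id="run_etl_job",
        databricks_conn_id="databricks_default",  # assumed Airflow connection
        job_id=12345,                             # hypothetical Databricks job ID
    )
```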