We are seeking a Data Engineer with strong Databricks experience to join our Support Ops technology organization. Support Ops is a consolidated technology department within KBS, bringing together project management, the product office, enterprise architecture, software engineering, data analysis, data engineering, and client platform engineering.
What We're Looking For:
- Proven experience with Databricks (PySpark, Delta Lake, MLflow, Unity Catalog, or similar).
- Strong SQL skills (T-SQL, PL/pgSQL) for data transformation and analytics.
- Hands-on experience designing and orchestrating ETL/ELT pipelines (Airflow, Azure Data Factory, or Databricks Workflows).
- Familiarity with data governance and security in enterprise cloud environments (Azure, AWS).
- Prior work in client-facing or production support environments where timeliness and quality are critical.
- Working knowledge of Python, .NET, or Java for data engineering workflows.
- Bonus: background in data warehousing, BI/reporting tools, or ML pipeline integration.
Required Experience:
- 3–5 years of professional experience working in data operations, data analysis, or data engineering roles.
- Proven experience handling large data imports, cleansing, and standardization in a production environment.
- Hands-on SQL experience (required), with the ability to write, troubleshoot, and optimize complex queries.
- Python scripting experience for data transformation and automation (preferred).
- Experience with ETL tools (such as SSIS, Talend, Alteryx, or equivalent).
- Exposure to Databricks or other cloud-based data processing platforms (preferred; strong SQL users will be trained if needed).
- Prior experience working in a fast-paced operations or ticket-based setting with tight turnaround expectations.