Design and implement scalable, resilient database and data movement solutions using AWS services, supporting PostgreSQL, MySQL, and Oracle workloads.
Build and maintain reliable data replication pipelines from source databases to Amazon S3, ensuring data is delivered in a timely, consistent, and ingestion-ready format for downstream consumption by the Central Data Engineering team.
Create and manage database replication tasks, and diagnose and resolve replication failures or data quality issues as they arise.
Apply and enforce data security and compliance controls across databases and data pipelines, including encryption at rest and in transit, access controls, secrets management, and data masking where required.
Automate operational and administrative tasks through scripting and infrastructure-as-code to reduce manual effort, improve repeatability, and increase platform stability.
Implement and maintain monitoring, alerting, and logging for databases, replication tools, and supporting AWS resources to proactively detect performance, availability, and data integrity issues.
Act as the Databricks platform administrator for the organization, responsible for workspace configuration, cluster policies, access management, compute governance, and cost controls.
Manage and optimize Databricks compute resources to ensure secure, cost-efficient, and reliable platform usage in alignment with organizational standards and workload requirements.
Collaborate closely with Central Data Engineering and other stakeholders to align on data delivery expectations, platform standards, operational handoffs, and continuous improvements.
DISPLAYED SKILL MASTERY
Foundational AWS Cloud knowledge with hands-on experience supporting database workloads, and a background in Database Administration or Cloud Infrastructure operations.
Working experience with relational databases, including PostgreSQL, MySQL, and/or Oracle, covering day-to-day operations, basic performance tuning, and troubleshooting.
Practical experience with ETL tools and data replication techniques, supporting reliable movement of data from source databases to Amazon S3 for downstream ingestion.
Ability to analyze database structures and data behavior to identify common replication, ingestion, and data consistency issues, and resolve them with guidance when needed.
Strong problem isolation and root cause analysis skills, particularly for database availability, replication delays, and data quality incidents.
Familiarity with automation and scripting tools (e.g., shell scripting or Python) to streamline routine operational and administrative tasks.
Understanding of data security fundamentals, including encryption, access controls, and basic data masking concepts, and the ability to apply established standards and patterns.
Ability to collaborate effectively within cross‑functional teams, particularly with Central Data Engineering, while also working independently on assigned tasks and operational responsibilities.
Demonstrates initiative and ownership by proactively addressing issues, following through on assigned responsibilities, and continuously building technical skills within the role's scope.
EXPECTED RESULTS
Fulfill access, configuration, and resource requests in AWS and Databricks within the agreed service‑level targets, ensuring requests are completed accurately and securely.
Deliver reliable data replication outputs by moving data from database or S3 sources to target S3 buckets within a standard timeframe of 1–3 days, meeting data quality and consistency expectations.
Contribute effectively to Data Engineering and DBA initiatives, completing assigned project and operational tasks while continuously improving technical skills relevant to database, AWS, and data platform operations.
Support and enforce security and compliance requirements by implementing approved controls, remediating identified gaps, and participating in initiatives that align systems with organizational and regulatory standards.
Monitor, manage, and optimize AWS and Databricks resource usage to prevent unnecessary spend, support cost‑efficiency goals, and operate within allocated budgets.
Maintain high availability and performance of databases, replication pipelines, and managed platforms by proactively monitoring systems, responding to incidents, and resolving issues within defined operational expectations.
REQUIRED QUALIFICATIONS
At least 3 years of experience in a role relevant to Database Engineering, Database Administration, or DevOps.
Intermediate to advanced proficiency in SQL, with the ability to write, analyze, and troubleshoot queries for operational and data replication use cases.
Hands‑on experience with scripting languages such as Shell and/or Python to support automation, monitoring, and routine administrative tasks.
Experience administering services running in AWS, including foundational knowledge of cloud infrastructure, security controls, and operational best practices.
Working knowledge of Data Engineering or Data Warehousing concepts, including data pipelines, ingestion patterns, and analytical data platforms.
Experience with Databricks is a strong advantage, particularly in supporting platform administration, access management, or compute usage.