Description:
Duties and Responsibilities:
- Create an optimal data pipeline architecture and recommend ways to improve data reliability, efficiency, and quality.
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Work with data and analytics experts to strive for greater functionality in our data systems.
Qualifications:
- 1-2 years of working experience.
- Knowledgeable in the Linux operating system.
- Knowledgeable in Microsoft Power BI or Amazon QuickSight.
- (Good to have) Knowledge of automation.
- (Good to have) Knowledge of Amazon data analytics and visualization services.
- Advanced working knowledge of SQL, including query authoring, and experience working with a variety of relational databases.
- Experience building and optimizing big data pipelines, architectures, and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills related to working with unstructured datasets.
- Ability to work to deadlines and under time constraints.
- Good communication skills for presenting information to stakeholders.
Work Setup: Hybrid