Perform data exploration, data cleaning, data imputation, and feature engineering on unstructured and structured data
Design, develop, and maintain reports used for daily operations, management, and analytics
Build the infrastructure for optimal extraction, transformation, and loading (ETL) of data from a wide variety of data sources
Develop and maintain data flows, prepare ETL processes according to business requirements, and incorporate those requirements into design specifications
Identify, design, and implement internal process improvements, including re-designing infrastructure for greater scalability, optimizing data delivery, and automating manual processes
Use analytics tools that utilize the data pipeline to provide actionable insights
Document all test procedures for systems and processes
Requirements
Bachelor's degree in computer science or a related field, or equivalent software engineering experience
Strong SQL knowledge and experience working with relational databases and query authoring, preferably on MS SQL Server
Experience in monitoring, operating, and developing with tools such as the ELK Stack, Grafana, Confluence, CDSW, SSIS, and SSMS
Understanding of service management processes in the service operations area, such as Release, Inventory, Service Request, Event, Incident, and Problem Management
Knowledge of ETL / data pipeline applications, preferably Microsoft SSIS
Experience managing databases such as MS SQL Server and MySQL; knowledge of other databases (PostgreSQL, MongoDB, IBM DB2) is a plus
Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
Understanding of and experience with reporting dashboards and data warehouse concepts
Troubleshooting capability: able to read system logs, analyse error messages, and perform minor changes or hotfixes as workarounds
Ability to compile and manage the SOP Platform