Job Description:
- Understand business processes and applications, and how data is gathered and stored.
- Develop and manage streaming data pipelines at enterprise scale.
- Build expertise in the data and own data quality across data flows.
- Design, build and manage data marts to satisfy our growing data needs.
- Support data marts to provide intuitive analytics for internal customers.
- Design and build new frameworks and automation tools that enable teams to consume and understand data faster.
- Use expert coding skills in languages such as SQL, Python and Java to support data scientists.
- Interface with internal customers to understand data needs.
- Collaborate with multiple teams and own the solution end-to-end.
- Maintain infrastructure for our data pipelines.
- Perform other ad-hoc duties as assigned.
Key skills/Experience:
- BS degree in Computer Science or a related technical field. MS or PhD degree is a plus.
- More than 2 years of advanced Python or Java development is necessary. Scala or Kotlin experience is a plus.
- More than 2 years of SQL (such as PostgreSQL, Oracle, AWS Redshift, or Hive) experience is required. NoSQL experience is a plus.
- More than 2 years working with Linux OS. Knowledge of networks and cybersecurity is a plus.
- Experience with modern MapReduce/workflow distributed systems, especially Apache Spark. Experience with Apache Kafka is a plus (a short illustrative sketch follows this list).
- Experience working with infrastructure-as-code systems like AWS CloudFormation. DevOps experience is a plus.
- Experience in custom ETL pipeline design, implementation and maintenance.
- Experience working with visualization tools like Tableau or Apache Superset.
- Ability to analyze data and identify deliverables, gaps and inconsistencies.
- Ability to manage and communicate data mart plans to internal customers.
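Purely for illustration (not part of the requirements), the sketch below shows the kind of streaming pipeline work this role describes: a Spark Structured Streaming job reading from a Kafka topic and writing Parquet output. The topic name, schema, broker address and paths are all assumptions, not details from this posting.

```python
# Illustrative sketch only: a minimal Spark Structured Streaming job that reads
# from a hypothetical Kafka topic and writes Parquet files. Requires the
# spark-sql-kafka connector package on the Spark classpath.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("events-pipeline-sketch").getOrCreate()

# Hypothetical event schema; a real pipeline would derive this from the source system.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_time", TimestampType()),
])

# Read raw Kafka records and parse the JSON payload into typed columns.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # placeholder broker
    .option("subscribe", "events")                         # hypothetical topic
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Write parsed events to Parquet; a real job would target a data mart or lake table.
query = (
    events.writeStream
    .format("parquet")
    .option("path", "/tmp/events_parquet")            # placeholder sink
    .option("checkpointLocation", "/tmp/events_ckpt")  # placeholder checkpoint dir
    .start()
)
query.awaitTermination()
```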
Job Category: Analyst
Job Type: Full Time
Job Location: Singapore