
Cloud Data Engineer Job at Vodacom Tanzania

Job Overview

Cloud Data Engineer 

Mwanza
Vodacom Tanzania Plc
Vodacom Tanzania Plc is Tanzania’s leading cellular network company.

Purpose

The Data Engineer delivers through self and others to:

  • Load data from local and group sources onto the shared platforms needed for insight, analysis and commercial actions;
  • Build applications that make use of large volumes of data and generate outputs that enable commercial actions with incremental value;
  • Support local markets and group functions in obtaining business value from the data.

Key accountabilities and decision ownership:

  • Design and develop highly performant, scalable and stable Big Data cloud-native applications
  • Source data from a variety of sources, in the correct format, meeting data quality standards and assuring timely access to data and analytical insights
  • Build batch and real-time data pipelines, using automated testing and deployment (see the batch sketch after this list)
  • Build transformations to produce enriched data insights
  • Integrate applications with business systems to enable value from analytic models and support decision making
  • Define and implement best practices relating to cloud economics, software engineering and data engineering, ensuring a well-architected cloud framework with data management practices built into each design
  • Work with the architecture team to evolve the Big Data capabilities (reusable assets/patterns) and components that support the business requirements and objectives
  • Research, investigate and evaluate new technologies and methods to improve the delivery and sustainability of data applications and services
  • Contribute to defining best practice for the agile development of applications that run on the Big Data Platform
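
By way of illustration, a minimal PySpark batch-enrichment sketch of the kind this role builds. The bucket paths, table layout, "customer_id" join key and "revenue"/"event_ts" columns are hypothetical placeholders, not Vodacom systems:

    # A minimal sketch, assuming hypothetical S3 paths and column names.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("daily-enrichment").getOrCreate()

    # Load a day of raw usage events and a customer dimension table.
    events = spark.read.parquet("s3://example-bucket/raw/events/dt=2024-01-01/")
    customers = spark.read.parquet("s3://example-bucket/dims/customers/")

    # Enrich events with customer attributes, then aggregate per segment and day.
    enriched = (
        events.join(customers, on="customer_id", how="left")
              .withColumn("event_date", F.to_date("event_ts"))
              .groupBy("segment", "event_date")
              .agg(F.count("*").alias("event_count"),
                   F.sum("revenue").alias("revenue"))
    )

    # Write a partitioned, query-ready output for downstream analysis.
    (enriched.write.mode("overwrite")
             .partitionBy("event_date")
             .parquet("s3://example-bucket/curated/segment_daily/"))

In practice a job like this would be scheduled by an orchestrator such as Airflow, with automated tests around the transformation logic, in line with the accountabilities above.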

Core competencies, knowledge and experience:

  • Experience managing the development life-cycle for agile software development projects
  • Expert-level experience in designing, building and managing data pipelines for batch and streaming applications
  • Experience with performance tuning for batch-based applications such as Hadoop, including working knowledge of NiFi, YARN, Hive, Airflow and Spark
  • Experience with performance tuning streaming applications for real-time data processing using Kafka, Confluent Kafka, AWS Kinesis, GCP Pub/Sub or similar (a minimal consumer sketch follows this list)
  • Experience working with serverless services such as AWS Lambda
  • Practical development experience with AWS (or GCP) big data analytics services, including AWS Glue, AWS Athena and AWS Step Functions
  • Working experience with other distributed technologies such as Cassandra, MongoDB and Elasticsearch
  • Java and Python programming ability would be an advantage
  • Experience in metadata management, data modelling and schema management would be an added benefit.
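
As a pointer to the streaming side of the role, a minimal consumer sketch using the confluent-kafka Python client; the broker address, consumer group and topic name are hypothetical placeholders:

    # A minimal sketch, assuming a JSON-encoded "usage-events" topic.
    import json
    from confluent_kafka import Consumer

    consumer = Consumer({
        "bootstrap.servers": "broker:9092",   # hypothetical broker address
        "group.id": "usage-enrichment",       # hypothetical consumer group
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe(["usage-events"])      # hypothetical topic

    try:
        while True:
            msg = consumer.poll(timeout=1.0)  # wait up to 1s for a record
            if msg is None:
                continue
            if msg.error():
                # Surface partition/broker errors rather than dropping them.
                print(f"Consumer error: {msg.error()}")
                continue
            event = json.loads(msg.value())
            # Real-time enrichment and a sink (e.g. a serving store) go here.
            print(event)
    finally:
        consumer.close()

The same consume-enrich-sink loop applies whichever managed service (Confluent, Kinesis, Pub/Sub) carries the stream.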

Must have technical / professional qualifications:

  • A 3-year degree in Computer Science, IT, Information Systems or a related field is essential
  • Relevant cloud certification at professional level
  • 5+ years of BI or related software development experience
  • Agile exposure (Kanban or Scrum)

CLICK HERE TO APPLY
