Data Engineer Intern at ENGIE Energy Access

ENGIE Energy Access is the leading Pay-As-You-Go (PAYGo) and mini-grids solutions provider in Africa. The company develops innovative, off-grid solar solutions for homes, public services and businesses, enabling customers and distribution partners access to clean, affordable energy.

The PAYGo solar home systems are financed through affordable instalments starting from $0.19 per day, and the mini-grids foster economic development by enabling productive use of electricity and triggering business opportunities for entrepreneurs in rural communities.

With over 1,800 employees, operations in nine countries across Africa (Benin, Côte d’Ivoire, Kenya, Mozambique, Nigeria, Rwanda, Tanzania, Uganda and Zambia), over 1.9 million customers and more than 9 million lives impacted so far, ENGIE Energy Access aims to impact 20 million lives across Africa by 2025.

Job Purpose

This position will be part of the Global Data team. This is an incredible opportunity to join a high-performing team that is passionate about pioneering expanded financial services to off-grid customers at the base of the pyramid.

Key responsibilities will include supporting the data team's functions, which include:

  • Building and maintaining data pipelines between our main transactional and analytics databases, IoT data delivered from devices, PBX, and our in-house ticketing system.
  • Building pipelines that deliver data in real time to our field team mobile application, enabling data-informed decisions in the field, and working with members of the data team to ensure high code quality and sound database design.
  • Making a meaningful impact by enabling ENGIE to continuously innovate on how we support our customers in their repayment journey.

Location: Kampala, Uganda or Lagos, Nigeria

Job Type: Full-time

Responsibilities 

Data Modelling and Extract, Transform and Load (ETL):

  • Implement robust data models to support analytics and reporting requirements.
  • Maintain scalable ETL processes from various sources, including multiple ERP systems, into a data warehouse.
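To give a flavour of this kind of work, a minimal ETL step can be sketched as below. This is an illustrative sketch only; the source records, field names, and the SQLite target are hypothetical stand-ins, not ENGIE's actual systems or schemas:

```python
import sqlite3

def extract():
    # Hypothetical payment records pulled from a source system.
    return [
        {"customer_id": "C1", "amount_usd": "0.19", "paid_on": "2024-01-05"},
        {"customer_id": "C2", "amount_usd": "0.38", "paid_on": "2024-01-06"},
    ]

def transform(rows):
    # Cast amounts to float and keep only positive payments.
    return [
        (r["customer_id"], float(r["amount_usd"]), r["paid_on"])
        for r in rows
        if float(r["amount_usd"]) > 0
    ]

def load(rows, conn):
    # Load the cleaned rows into a warehouse table (SQLite as a stand-in).
    conn.execute(
        "CREATE TABLE IF NOT EXISTS payments "
        "(customer_id TEXT, amount_usd REAL, paid_on TEXT)"
    )
    conn.executemany("INSERT INTO payments VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
count = conn.execute("SELECT COUNT(*) FROM payments").fetchone()[0]
```

In production, the extract step would read from an ERP or transactional database and the load step would target a warehouse such as Amazon Redshift, but the extract-transform-load shape is the same.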

Data Ingestion and Automated Data Pipelines:

  • Implement data validation and quality checks to ensure accuracy and completeness.
  • Maintain automated data pipelines to streamline data processing and transformation.
  • Utilize orchestration tools to schedule and monitor pipeline workflows.
  • Collaborate with data analysts to understand data requirements and support their analysis needs.
  • Optimize data structures and queries to enhance performance and usability for analytical purposes.
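The validation and quality checks mentioned above can be sketched as simple rule functions applied to each record before it enters the pipeline. The field names here are illustrative assumptions, not an actual ENGIE schema:

```python
def check_record(record):
    """Return a list of quality issues found in one hypothetical payment record."""
    issues = []
    # Completeness check: required fields must be present and non-empty.
    for field in ("customer_id", "amount_usd", "paid_on"):
        if record.get(field) in (None, ""):
            issues.append(f"missing {field}")
    # Accuracy check: payment amounts should never be negative.
    amount = record.get("amount_usd")
    if isinstance(amount, (int, float)) and amount < 0:
        issues.append("negative amount")
    return issues

good = {"customer_id": "C1", "amount_usd": 0.19, "paid_on": "2024-01-05"}
bad = {"customer_id": "", "amount_usd": -1.0, "paid_on": "2024-01-05"}
```

In a real pipeline, checks like these would typically run as validation tasks inside an orchestrator such as Apache Airflow, with failing records quarantined for review rather than silently dropped.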

Data Warehousing:

  • Optimize data warehousing solutions to support business intelligence and analytics needs.
  • Implement data modelling techniques to organize and structure data for optimal querying and analysis.
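One common data modelling technique for the warehousing work described above is a star schema: a central fact table of events surrounded by dimension tables. The sketch below uses SQLite and invented table names purely for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A hypothetical star schema: one fact table referencing two dimensions.
conn.executescript("""
CREATE TABLE dim_customer (customer_id TEXT PRIMARY KEY, country TEXT);
CREATE TABLE dim_date (date_id TEXT PRIMARY KEY, month TEXT);
CREATE TABLE fact_payment (
    customer_id TEXT REFERENCES dim_customer(customer_id),
    date_id TEXT REFERENCES dim_date(date_id),
    amount_usd REAL
);
""")
conn.execute("INSERT INTO dim_customer VALUES ('C1', 'Uganda')")
conn.execute("INSERT INTO dim_date VALUES ('2024-01-05', '2024-01')")
conn.execute("INSERT INTO fact_payment VALUES ('C1', '2024-01-05', 0.19)")

# Analytical queries join the fact table to the dimensions it references.
row = conn.execute("""
    SELECT c.country, SUM(f.amount_usd)
    FROM fact_payment f
    JOIN dim_customer c USING (customer_id)
    GROUP BY c.country
""").fetchone()
```

Structuring data this way keeps analytical joins shallow and predictable, which is what makes dashboards and reports fast on large datasets.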

Optimization and Performance Tuning of Data Dashboards:

  • Troubleshoot and fix issues on existing reports/dashboards while continuously building improvements.
  • Optimize dashboard performance and ensure responsiveness for large datasets and complex queries.
  • Design data visualizations and dashboards.

Experience

  • 6+ months of industry experience in data engineering, with a focus on data ingestion, data warehousing, pipeline automation, and ETL development
  • Programming experience in Python/Scala/Java
  • Experience with SQL in addition to one or more of Spark/Hadoop/Hive/HDFS
  • Working knowledge of databases, data systems, and analytics solutions, including proficiency in SQL, NoSQL, Java, Spark, and Amazon Redshift for reporting and dashboard building.
  • Experience with implementing unit and integration testing.
  • Ability to gather requirements and communicate with stakeholders across data, software, and platform teams.
  • Sense of adventure and willingness to dive in, think big, and execute with a team

Qualifications

  • Bachelor's or Master's degree in computer science, mathematical sciences, machine learning, or a related field

Language(s):

  • English
  • French is a plus.

Technology:

  • Python, Java, SQL, NoSQL, Amazon Redshift, Kafka, Apache Beam, Apache Airflow, Apache Spark.

Apply Here


Dixcover Hub

Dixcover Hub is a non-governmental, open-access hub that publishes verified opportunities from both local and international organizations worldwide. The website is built to empower young people in Africa with the tools, information, skills, and leverage to help them find and pursue their dreams diligently.