Specialist Data Engineer - Absa Bank









With over 100 years of rich history and strongly positioned as a local bank with regional and international expertise, a career with our family offers the opportunity to be part of this exciting growth journey, to reset our future, and shape our destiny as a proudly African group.

Job Summary

The purpose of the role is to work embedded as a member of a squad, or across multiple squads, to produce, test, document, and review algorithms & data-specific source code that supports the deployment & optimization of data retrieval, processing, storage, and distribution for a business area.

Job Description

Key Responsibilities

Accountability Data Architecture & Data Engineering

  • Understand the technical landscape and bank-wide architecture connected to or dependent on the business area supported, in order to effectively design & deliver data solutions (architecture, pipeline, etc.)

  • Translate/interpret the data architecture direction and associated business requirements & leverage expertise in analytical & creative problem solving to synthesize data solution designs (build a solution from its components) beyond the analysis of the problem

  • Participate in design thinking processes to successfully deliver data solution blueprints

  • Leverage state-of-the-art relational and No-SQL databases as well as integration and streaming platforms to deliver sustainable business-specific data solutions.

  • Design data retrieval, storage & distribution solutions (and/or components thereof), including contributing to all phases of the development lifecycle, e.g. the design process

  • Develop high quality data processing, retrieval, storage & distribution design in a test-driven & domain-driven / cross-domain environment

  • Build analytics tools that utilize the data pipeline by quickly producing well-organized, optimized, and documented source code & algorithms to deliver technical data solutions


  • Create & maintain sophisticated CI/CD pipelines (authoring & supporting CI/CD pipelines in Jenkins or similar tools and deploying to multi-site environments – supporting and managing your applications all the way to production)

  • Automate tasks through appropriate tools and scripting technologies e.g. Ansible, Chef

  • Debug existing source code and polish feature sets.

  • Assemble large, complex data sets that meet business requirements & manage the data pipeline

  • Build infrastructure to automate extremely high volumes of data delivery

  • Create data tools for analytics and data science teams that assist them in building and optimizing data sets for the benefit of the business

  • Ensure designs & solutions support the technical organization principles of self-service, repeatability, testability, scalability & resilience

  • Apply general design patterns and paradigms to deliver technical solutions

  • Inform & support the infrastructure build required for optimal extraction, transformation, and loading of data from a wide variety of data sources

  • Support the continuous optimization, improvement & automation of data processing, retrieval, storage & distribution processes

  • Ensure the quality assurance and testing of all data solutions aligned to the QA Engineering & broader architectural guidelines and standards of the organization

  • Implement & align to the Group Security standards and practices to ensure the indisputable separation, security & quality of the organization’s data

  • Meaningfully contribute to & ensure solutions align to the design & direction of the Group Architecture & in particular data standards, principles, preferences & practices. The short-term deployment must align to strategic long-term delivery.

  • Meaningfully contribute to & ensure solutions align to the design and direction of the Group Infrastructure standards and practices e.g. OLAs, IaaS, PaaS, SaaS, containerization, etc.

  • Monitor the performance of data solutions designs & ensure ongoing optimization of data solutions

  • Stay ahead of the curve on data processing, retrieval, storage & distribution technologies & processes (global best practices & trends) to ensure best practice

 

People

  • Coach & mentor other engineers

  • Conduct peer reviews, testing, and problem-solving within and across the broader team

  • Build data science team capability in the use of data solutions


Risk & Governance

  • Identify technical risks and mitigate these (pre, during & post-deployment)

  • Update / design all application documentation aligned to the organization’s technical standards and risk governance frameworks

  • Create business cases & solution specifications for various governance processes (e.g. CTO approvals)

  • Participate in incident management & DR activity – applying critical thinking, problem-solving & technical expertise to find the underlying cause of major incidents

  • Deliver on time & on budget (always)

 

Education and Experience required

  • Relevant NQF level 7 qualification in computer science, engineering, physics, mathematics or equivalent

  • Development and deployment of data applications

  • Design & Implementation of infrastructure tooling and work on horizontal frameworks and libraries

  • Creation of data ingestion pipelines between legacy data warehouses and the big data stack

  • Automation of application back-end workflows

  • Building and maintaining backend services built with multiple service frameworks

  • Maintaining and enhancing applications backed by big data computation platforms

  • Be eager to learn new approaches and technologies

  • Strong problem-solving skills

  • Strong programming skills

  • Experience working on big data platforms (vanilla Hadoop, Cloudera, or Hortonworks)

  • Preferred: Experience with Scala or other functional languages (Haskell, Clojure, Kotlin, Clean)

  • Preferred: Experience with some of the following: Apache Hadoop, Spark, Hive, Pig, Oozie, ZooKeeper, MongoDB, CouchbaseDB, Impala, Kudu, Linux, Bash, version control tools, continuous integration tools, SAS and SQL skills

  • At least three (3) years’ experience working in a big data environment (advantageous for all roles, a must for high-volume environments) – optimizing and building big data pipelines, architectures, and data sets

Education

Bachelor's Degree: Information Technology

Absa Bank Limited is an equal opportunity, affirmative action employer. In compliance with the Employment Equity Act 55 of 1998, preference will be given to suitable candidates from designated groups whose appointments will contribute towards the achievement of equitable demographic representation of our workforce profile and add to the diversity of the Bank.

Absa Bank Limited reserves the right not to make an appointment to the post as advertised.



How To Apply