Intermediate Data Engineer - Contract - Remote

Job Location

Sandton, South Africa

Job Description

We are looking for a highly experienced Data Engineer to design and implement robust data solutions across cloud and hybrid environments. This role involves building scalable ETL pipelines, integrating diverse data sources, and ensuring data quality and governance. The ideal candidate will have strong expertise in Azure technologies, data modelling, and enterprise data integration, with a proven ability to collaborate across technical and business teams.

Responsibilities:

- Create and/or extend existing data models to include the data for consumption by the Analytics teams.
- Apply the relevant business and technical rules in the ETL jobs to move data correctly.
- Follow the defined SDLC, including testing and alignment with release management.
- Produce design documents that can be reviewed by the Design Authority.
- Ensure builds align with standards defined by Enterprise Architecture; includes KT, Hypercare, and PGLS of work delivered.
- Design, develop, and maintain ETL pipelines using Azure Data Factory and Databricks.
- Implement data movement and transformation across cloud, on-premises, and hybrid systems.
- Ensure seamless data exchange and integration using Azure Synapse Analytics, Azure Data Lake, and SQL Server.
- Develop and consume RESTful and SOAP APIs for real-time and batch data integration.
- Work with API gateways and secure authentication methods (OAuth, JWT, API keys, certificates).
- Apply data validation, cleansing, and enrichment techniques.
- Execute reconciliation processes to ensure data accuracy and completeness.
- Adhere to data governance and security compliance standards.
- Troubleshoot ETL failures and optimise SQL queries and stored procedures.
- Provide operational support and enhancements for existing data pipelines.
- Partner with data analysts, business analysts, and stakeholders to understand data needs.
- Document data workflows, mappings, and ETL processes for maintainability.
- Share best practices and mentor junior engineers.
Experience:

- Matric and a tertiary qualification.
- Experience in large-scale enterprise data integration projects.
- 5-7 years in data engineering, ETL development, and SQL scripting.
- Strong expertise in Azure Data Factory, Databricks, Synapse, and Pentaho.
- Proficiency in SQL, Python, PySpark, and performance tuning.
- Experience with Git, Azure DevOps, and CI/CD pipelines.
- Solid understanding of data modelling, warehousing, and governance.

Posted Date: 8/14/2025

Contact Information

Contact Human Resources
UID: 5349316693