33 Data Engineer jobs in Kenya

Data Engineer

Nairobi, Nairobi | KES1,200,000 - KES2,400,000 | I&M Bank Uganda

Posted today

Job Description

Job Purpose

  • The Data Engineer will design, develop, and maintain robust, scalable, and secure data pipelines and infrastructure that support data-driven decision-making across the bank.
  • The Data Engineer will ensure efficient data ingestion, transformation, and storage from multiple banking systems, enabling timely access to accurate data for analytics, regulatory compliance, and operational reporting.

Key Responsibilities:

Strategic

  • Enable Data-Driven Decision Making - Ensure reliable, scalable infrastructure to support analytics, machine learning, and business intelligence across the company.
  • Modernize Data Infrastructure - Transition to scalable, real-time platforms.
  • Ensure Data Governance and Security - Design systems that comply with data privacy laws and internal governance policies.
  • Support Scalability and Future Growth - Build systems that support exponential growth in data volume, variety, and velocity.

Initiatives

  • Data Architecture - Build and maintain data models, data lakes, and data warehouses tailored to banking use cases.
  • Implement ETL/ELT Pipelines - Design, develop, and optimize robust ETL/ELT pipelines to ingest data from internal banking systems (core banking, CRM, transactions, etc.) and external sources.
  • Establish Data Quality Frameworks - Introduce data validation, lineage, and anomaly detection to improve trust in data.

Operational

  • Maintain and Monitor Data Pipelines - Ensure pipelines are running reliably, on schedule, and are monitored for failures.
  • Optimize Query and Pipeline Performance - Tune SQL queries, job runtimes, and storage formats to reduce costs and latency.
  • Handle Data Issues and Incidents - Respond to pipeline failures, data discrepancies, and outages swiftly.
  • Document Data Architecture and Flows - Maintain updated documentation to assist new engineers, analysts, and stakeholders.
  • Collaborate with Stakeholders - Work closely with data analysts, data scientists, product teams, and DevOps for cross-functional initiatives.

Key Responsibilities

  • Data Pipeline Development - Design, build, and maintain ETL pipelines for ingesting and processing data, and automate data workflows using orchestration tools (see the sketch after this list).
  • Data Architecture & Modeling - Design scalable data architectures and define and enforce data standards and naming conventions.
  • Data Integration - Integrate data from multiple sources and build connectors to internal and external systems.
  • Security & Compliance - Ensure data security, encryption and compliance with data governance and regulations.
  • Support for Analytics - Prepare and expose data to analysts and BI tools; optimize queries and storage formats for analytical performance; build data marts and materialized views tailored for business reporting.
  • Monitoring & Optimization - Monitor pipeline performance, latency, and failures; troubleshoot and resolve data flow issues quickly.
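
As an illustration of the orchestration work named in the first bullet, here is a minimal sketch of a daily extract-transform-load DAG in Apache Airflow (2.4+ syntax). The DAG id, task bodies, and sample rows are hypothetical, not details from the posting.

```python
# Minimal Airflow sketch of a daily extract -> transform -> load pipeline.
# All names and data below are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # In production this would query the core-banking source system;
    # sample rows keep the sketch self-contained.
    return [{"account": "A1", "amount": 100.0}, {"account": "A2", "amount": 250.5}]


def transform(ti):
    # Pull the extract's return value from XCom and enrich it.
    rows = ti.xcom_pull(task_ids="extract")
    return [dict(r, amount_abs=abs(r["amount"])) for r in rows]


def load(ti):
    rows = ti.xcom_pull(task_ids="transform")
    # A real task would write to the warehouse; we just log the row count.
    print(f"loaded {len(rows)} rows")


with DAG(
    dag_id="daily_banking_etl",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_t = PythonOperator(task_id="extract", python_callable=extract)
    transform_t = PythonOperator(task_id="transform", python_callable=transform)
    load_t = PythonOperator(task_id="load", python_callable=load)

    extract_t >> transform_t >> load_t
```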

Academic Qualifications

  • BS/BA degree, preferably in Computer Science, Information Systems, or a related field.
  • Professional qualifications, membership in professional bodies, or publications.
  • (Desirable) AWS Certified Data Analytics.
  • (Desirable) Azure Data Engineer Associate.

Work Experience Required

  • 5+ years of experience as a data engineer or in a similar role, preferably in financial services or banking.
  • Strong experience in database development and data model design.

Competencies

  • Strong in Python and SQL (core languages for data manipulation).
  • Familiarity with data warehousing solutions (star/snowflake schemas, normalization).
  • Hands-on with modern data warehouses.
  • Experience with both relational and NoSQL databases.
  • Familiarity with big data tools and platforms.
  • Strong understanding of data security, compliance, and banking regulations.
  • Strong analytical and problem-solving skills.
  • Solid business and collaboration skills, and responsive to service needs and operational demands.
  • Attitude to thrive in a fun, fast-paced environment.

Data Engineer

KES900,000 - KES1,200,000 | AFRICAWORK

Posted today

Job Description

We are recruiting a freelance Data Engineer on behalf of our client, a company operating in the custom software development and artificial intelligence services sector. The position is fully remote.

Missions:

  • Participating in the product development life cycle, from brainstorming ideas to designing quality interactive data visualizations that tell a story with elegance
  • Engaging in every step of the development life cycle, from collecting requirements and communicating to users, to designing, developing, and testing the products
  • Working with data solutions and data modeling, and understanding ETL pipelines and dashboard tools
  • Interacting with other team members to develop technical specifications, including documentation to support production code
  • Exploring and promoting strategies to improve our data modeling, quality, and architecture

Primary Technical Proficiencies:

  • SQL Server Product Suite: Strong expertise in SQL Server Management Studio (SSMS), SQL Server Integration Services (SSIS), SQL Server Analysis Services (SSAS), and SQL Server Reporting Services (SSRS)
  • Microsoft Azure Cloud Resources: Solid experience with Azure Data Factory (ADF), Logic Apps, and CI/CD practices for smooth data and process integration
  • Python and PySpark: Proficient in Python and PySpark for complex data processing and analysis (a brief sketch follows this list)
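
A minimal PySpark sketch of the kind of data processing the last bullet names, run against a tiny in-memory dataset; the column names and aggregation are illustrative assumptions.

```python
# PySpark sketch: aggregate a small orders dataset per customer.
# All column names and sample rows are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("pyspark-demo").getOrCreate()

orders = spark.createDataFrame(
    [
        ("c1", "2024-01-01", 120.0),
        ("c1", "2024-01-02", 80.0),
        ("c2", "2024-01-01", 40.0),
    ],
    ["customer_id", "order_date", "amount"],
)

# Per customer: total spend and number of distinct active days.
summary = (
    orders.withColumn("order_date", F.to_date("order_date"))
    .groupBy("customer_id")
    .agg(
        F.sum("amount").alias("total_amount"),
        F.countDistinct("order_date").alias("active_days"),
    )
)
summary.show()
spark.stop()
```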

Main Technologies:

  • Azure Databricks and Spark Ecosystem: Strong knowledge of Azure Databricks, Spark, Spark SQL, PySpark, Delta Tables, and Lakehouse architecture
  • Data Integration with Azure Data Factory: Familiarity with data orchestration and transformation pipelines

Nice to Have:

  • Data Lake and Lakehouse Architecture: Experience with Azure Data Lake and a foundational understanding of the Lakehouse paradigm is a plus, with an eagerness to learn more
  • DAX and Power BI for Data Visualization: Background in DAX for data modeling and Power BI for enhanced data insights
  • Azure Fabric and ADF Dataflows: Experience with Azure Fabric for integration and ADF dataflows to enable streamlined data transformations

Technical Environment:

  • Main Technologies or languages: SSIS, DAX, Azure SQL, T-SQL
  • Side Technologies: Azure Logic Apps, Azure Power Automate
  • Main skills: Data modeling (incl. DW modeling), ETL best practices (incremental patterns, optimizing for huge data loads, coding patterns, DevOps), and comfort connecting to multiple data sources (Oracle, SAP, REST APIs, Data Lake); a small incremental-load sketch follows this list
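
Since the skills above call out incremental ETL patterns, here is a small sketch of the common high-watermark approach, written with Python's stdlib sqlite3 so it runs anywhere; the table, columns, and watermark values are hypothetical.

```python
# Incremental-load sketch: extract only rows modified since the last
# successful run, tracked via a high watermark on updated_at.
# Table and column names are illustrative, not from the posting.
import sqlite3

src = sqlite3.connect(":memory:")
src.executescript(
    """
    CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL, updated_at TEXT);
    INSERT INTO orders VALUES (1, 10.0, '2024-01-01T00:00:00'),
                              (2, 25.5, '2024-01-02T09:30:00');
    """
)


def incremental_extract(conn, last_watermark):
    # Pull only rows modified since the last successful load.
    rows = conn.execute(
        "SELECT id, amount, updated_at FROM orders "
        "WHERE updated_at > ? ORDER BY updated_at",
        (last_watermark,),
    ).fetchall()
    # Advance the watermark to the newest timestamp we saw.
    new_watermark = rows[-1][2] if rows else last_watermark
    return rows, new_watermark


rows, wm = incremental_extract(src, "2024-01-01T12:00:00")
print(rows)  # only order 2 qualifies
print(wm)    # '2024-01-02T09:30:00'
```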

Experience: 5+ years

Education Level Required: Bachelor's Degree

Languages required: English – Advanced level


Data Engineer

KES900,000 - KES1,200,000 | Gozem - Africa's Super App

Posted today

Job Description

Founded in 2018, Gozem is today a technological group of more than 400 people, agile and ambitious, which offers via its mobile application a very wide range of services, including transport, delivery, financing, mobile money and many others, to African users. Present in Benin, Togo, Gabon and Cameroon, our group's ambition is to become "Africa's Super App" by establishing itself throughout French-speaking Africa. Our solution is 100% focused on the African market, serving drivers, customers, traders and, soon, SMEs in our areas of activity.

We are a multinational group with a strong presence in several cities, and we pride ourselves on helping to improve the lives of local communities through the digitalisation of the market and the introduction of technology-based efficiencies.

Are you looking to grow in a challenging and friendly environment? Do you want to evolve and progress in dynamic and disruptive digital contexts?

Join us in building a new African digital ecosystem that improves the quality of life.
Together, let's make Africa smile.

What is a Data Engineer at Gozem?
Our Data Engineer will act as a bridge between our backend product & engineering teams and the wider global analytics organization. He/she will focus on building and maintaining scalable, secure, and high-performing data infrastructure and pipelines across multiple business lines (transport, ecommerce, fintech, and financial services).

This role requires strong expertise in Google Cloud's data and analytics ecosystem, complemented with hands-on skills in open-source data engineering tools to ensure interoperability, flexibility, and cost-effective solutions.

Your tasks

  • Own the end-to-end data lifecycle: ingestion, pipelining (collection, storage, access), transformation (cleansing, enrichment, feature creation/selection), and governance (documentation, monitoring, observability).
  • Design and implement data pipelines (batch and streaming) using BigQuery, Dataform, Dataflow, Pub/Sub, Cloud Storage, and open-source frameworks (e.g., Spark, Kafka, Airflow).
  • Partner with analytics squad leads and data scientists to build reusable, production-ready datasets and features that enable advanced modeling and reporting.
  • Support the development and optimization of Gozem's data lakehouse and data marts, ensuring scalability and performance across multiple verticals.
  • Evaluate user requests for analytics and reporting solutions, determining feasibility, time requirements, and architectural fit.
  • Establish and maintain data quality frameworks (validation, monitoring, alerting) to ensure trust in data assets across the organization (see the sketch after this list).
  • Interact with stakeholders (Business Leads, Product Managers…) to clarify data availability, reliability, and solution requirements, ensuring insights are actionable and accurate.
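
A hedged sketch of the kind of data-quality check the tasks above describe, using the google-cloud-bigquery client library. The project, dataset, table, and column names are placeholders, and the query assumes an authenticated environment.

```python
# Simple BigQuery data-quality gate: fail loudly if today's partition is
# empty or contains null business keys. All names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")  # assumed project id

CHECK_SQL = """
SELECT
  COUNT(*) AS row_count,
  COUNTIF(ride_id IS NULL) AS null_ride_ids
FROM `my-gcp-project.analytics.rides`   -- hypothetical table
WHERE DATE(created_at) = CURRENT_DATE()
"""

result = list(client.query(CHECK_SQL).result())[0]
if result.row_count == 0 or result.null_ride_ids > 0:
    # In production this would emit an alert; here we just raise.
    raise ValueError(
        f"Data-quality check failed: rows={result.row_count}, "
        f"null ride_ids={result.null_ride_ids}"
    )
```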

You are the right person for this job if you have:

  • Strong data profiling and engineering skills, with hands-on experience in Google Cloud Platform (BigQuery, Dataform, Dataflow, Pub/Sub, Composer, Cloud Functions, Cloud Run) and open-source data processing frameworks (Spark, Kafka, Airflow, etc.).
  • Solid understanding of infrastructure and deployment (Linux OS, Docker, containerized environments, CI/CD workflows).
  • Experience with relational and NoSQL databases (PostgreSQL, MongoDB, ClickHouse, etc.).
  • Demonstrated ability to design and implement ETL/ELT pipelines, real-time data processing, and transformations of structured & unstructured data.
  • Proven experience in environments where documentation, testing, reproducibility, and data governance are valued.
  • Desire for clean, well-organized, and maintainable solutions that scale across teams and markets.

You have

  • Education: Master's degree recommended (but not required) in Computer Science, Data Engineering, or related fields.
  • Experience: Minimum 3 years of professional experience in data engineering or analytics engineering roles.
  • Proficiency in Python (mandatory) and SQL (advanced), with working knowledge of to support API integrations and backend data workflows.
  • Familiarity with workflow orchestration (Airflow, Cron, etc.) and version control (Git).
  • Experience with modern data stack practices: modular pipelines, CI/CD for data, data observability, cost optimization.
  • Tools to master: Asana, Google Suite, GitLab (mandatory), familiarity with Jira.
  • Languages: French & English (professional proficiency required).

What We Like Most

  • Self-motivated, independent learner, and eager to share knowledge with teammates.
  • Detail-oriented and efficient time manager in a dynamic, high-growth environment.
  • Strong communication skills with both technical and non-technical stakeholders.
  • A builder's mindset: excited to design systems that scale across Africa's Super App ecosystem and deliver impact.

Working Conditions & Benefits
We offer our employees a fair, friendly and intercultural working environment, in which we strive to develop the talents of each individual. To achieve this, we offer:

Flexible work location: Hybrid (on-site in our markets / remote days possibility).

Open-space offices and teleworking time possible (to be arranged with your manager)

A gross monthly remuneration defined according to our internal salary grid as well as the relevance of your past experiences for the position.

An option to buy shares in Gozem

An annual bonus allowing you to receive between 0 and 4.5 months of additional salary the following year

Benefits on our Super App for your travel and deliveries

A health and IT insurance package

And above all, the opportunity to join a young, dynamic team that has a real social impact in French-speaking Africa.

Interview process

  • An initial introductory meeting with the recruitment manager (15 minutes)
  • A business case to be completed within 7 days
  • A presentation of your business case with the head of the Data team (your future direct manager) and a discussion of your professional experience, followed by a reference request (you give us 4 professional contacts to whom we send a questionnaire)

Data Engineer

Nairobi, Nairobi | KES90,000 - KES120,000 | Living Goods

Posted today

Job Description

Reports to: Senior Manager - Data Engineering & Architecture.
Location(s): Nairobi, Kenya.
About Living Goods
Living Goods endeavours to improve access to essential healthcare services in underserved regions, particularly in sub-Saharan Africa. We empower Community Health Workers (CHWs) with digital tools that enable them to deliver door-to-door care. CHWs use a mobile app to track pregnancies, diagnose and treat common infectious diseases like malaria and pneumonia, monitor disease outbreaks, and follow up with families. Real-time data also supports performance monitoring and impact assessment. By integrating tech-based solutions into community health systems, Living Goods fosters better health outcomes, demonstrating the power of digital health in transforming healthcare delivery in resource-constrained settings.

Purpose Of The Role
We are looking for a Data Engineer to participate in the design, development, and management of our data warehousing infrastructure. This role involves building and maintaining data pipelines, managing both relational and non-relational databases, optimizing queries, and transforming raw data into structured data for analysis and reporting.

The successful candidate will collaborate closely with the program, MLE, product management, and software engineering teams to understand data requirements and oversee the implementation of suitable solutions to ensure stakeholders' needs are satisfied.

Embedded within the Digital Health Team, this role will report to the Senior Manager, Data Engineering & Architecture. The ideal candidate will bring extensive expertise in Transactional Databases, Data Warehouses, and BI reporting systems.

Roles And Responsibilities

  • Participate in the design, implementation, and maintenance of data pipelines by performing extraction, transformation, and loading activities from structured and unstructured data sources into a data warehouse.
  • Design and build data models (star schema, snowflake); understand common analytical data models such as Kimball's; build physical data models aligned with best practice and requirements (a star-schema sketch follows this list).
  • Design, develop, and optimize complex SQL queries to support data discovery, analysis, and reporting. Leverage SQL to ensure accurate data staging and transformation processes that align with business requirements.
  • Conduct thorough data discovery to identify and address data quality issues, ensuring that the data is accurate, complete, and accessible for business needs.
  • Monitor system logs for errors and performance issues; troubleshoot and resolve issues as they arise. Conduct day-to-day system checks and maintenance tasks to ensure system availability.
  • Develop BI technical documentation – data dictionaries, definitions, data flows, database schemas, data model diagrams, Entity Relationship Diagrams (ERDs), etc.
  • Collaborate with BI developers and users to understand business rules, capture requirements, develop user stories and write technical/functional specifications based on conceptual design and stated business requirements.
  • Assist across internal teams to define excellence in data governance, privacy, and security.
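
To make the star-schema responsibility concrete, here is a minimal sketch of one fact table with two dimensions, using Python's stdlib sqlite3; the table and column names are hypothetical, loosely echoing the community-health domain described above.

```python
# Star-schema sketch: a central fact table keyed to two dimensions.
# Names are illustrative placeholders, not Living Goods' actual model.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE dim_chw (
        chw_key INTEGER PRIMARY KEY,
        chw_name TEXT,
        region TEXT
    );
    CREATE TABLE dim_date (
        date_key INTEGER PRIMARY KEY,   -- e.g. 20240115
        full_date TEXT,
        month TEXT
    );
    CREATE TABLE fact_visit (
        visit_id INTEGER PRIMARY KEY,
        chw_key INTEGER REFERENCES dim_chw(chw_key),
        date_key INTEGER REFERENCES dim_date(date_key),
        households_visited INTEGER
    );
    """
)

# Analytical queries join the fact table out to its dimensions:
rows = conn.execute(
    """
    SELECT d.month, c.region, SUM(f.households_visited)
    FROM fact_visit f
    JOIN dim_chw c ON c.chw_key = f.chw_key
    JOIN dim_date d ON d.date_key = f.date_key
    GROUP BY d.month, c.region
    """
).fetchall()
print(rows)  # empty here; real loads would populate the tables first
```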

Skills & Competencies:

  • 3+ years of experience as a Data Engineer.
  • Demonstrated experience in implementing data pipelines/ETL into data warehouses and data querying and analysis using cloud-based solutions such as AWS Redshift or Snowflake.
  • Proficiency in SQL & Python for advanced querying, data manipulation, and performance optimization.
  • Experience with ETL tools like AWS Glue, Airbyte or Talend for building automated data pipelines.
  • Strong knowledge of data modeling and transformation using tools like dbt.
  • Hands-on experience with workflow automation tools such as Airflow.
  • Expertise in managing CouchDB or PostgreSQL databases, including schema management, performance tuning, and complex queries.
  • Extensive data warehouse experience, with skills in performance tuning, query optimization, indexing, and data integrity management.
  • Proficiency in using BI tools such as Tableau, Power BI, or Superset for creating reports and dashboards.
  • Strong understanding of data governance, security, and compliance best practices in cloud environments.
  • Soft skills: Teamwork, collaboration, problem-solving, and strong communication and presentation skills to effectively translate technical concepts to business stakeholders.

Minimum Qualifications:

  • A Bachelor's degree in Computer Science, Data Science, Statistics, Mathematics, or a related discipline.
  • Professional certifications or equivalent experience in data management, ETL processes, data warehousing, data visualization, and managing large and complex datasets.

Compensation
A competitive salary and benefits package commensurate with experience including health insurance and bonus opportunity. The opportunity to be your best while making lives better for those in need.

Living Goods is an equal opportunity employer and will consider every qualified applicant for employment. Living Goods does not discriminate based on race, ethnicity, national origin, ancestry, religion, gender, sexual orientation or disability.
Our current job openings are displayed on our website, where you can search for open positions and apply directly. Living Goods does not offer any positions without an interview and never asks candidates for money. If you are asked for money, we strongly recommend that you do not respond and do not send money or personal information.


Senior Data Engineer

Nairobi, Nairobi | KES900,000 - KES1,200,000 | Jumia Group

Posted today

Job Description

What you will be doing

  • Design and develop pipelines using Python, PySpark, and SQL
  • Use GitLab as the versioning control system
  • Utilize S3 buckets for storing large volumes of raw and processed data
  • Implement and manage complex data workflows using Apache Airflow (MWAA) to orchestrate tasks
  • Utilize Apache Iceberg (or similar) for managing and organizing data in the data lake
  • Create and maintain data catalogs using AWS Glue Catalog to organize metadata
  • Use AWS Athena for interactive querying
  • Apply data modeling techniques to support analytics and reporting requirements, along with knowledge of the data journey stages within a data lake (Medallion Architecture); a brief sketch follows this list
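
A hedged sketch of the bronze-to-silver hop in a Medallion-style lake using PySpark. The sample records, field names, and local parquet path are assumptions; in the stack above, the silver table would live in S3 as Iceberg rather than local parquet.

```python
# Medallion sketch: raw (bronze) events are typed, deduplicated, and
# validated into a silver table. All names and rows are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("medallion-demo").getOrCreate()

# Bronze: raw events landed as-is (duplicates and bad rows included).
bronze = spark.createDataFrame(
    [
        ("o1", "2024-01-01T10:00:00", 35.0),
        ("o1", "2024-01-01T10:00:00", 35.0),  # duplicate delivery
        ("o2", "2024-01-01T11:30:00", -1.0),  # invalid amount
    ],
    ["order_id", "order_ts", "amount"],
)

# Silver: typed, deduplicated, validated records, partitioned by date.
silver = (
    bronze.dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("order_date", F.to_date("order_ts"))
    .filter(F.col("amount") > 0)
)
silver.write.mode("overwrite").partitionBy("order_date").parquet(
    "/tmp/silver/orders"  # stand-in for an S3/Iceberg target
)
spark.stop()
```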

What we are looking for

  • Ideally, a degree in Information Technology, Computer Science, or a related field
  • Ideally, +5 years of experience within the Data Engineering landscape
  • Strong expertise in Python, PySpark, SQL, and the overall AWS data ecosystem
  • Strong problem-solving and analytical skills
  • Ability to explain technical concepts to non-technical users
  • Proficiency with GitHub
  • Terraform and CI/CD pipelines are a great nice-to-have

Senior Data Engineer

KES1,200,000 - KES2,400,000 | plusoperator

Posted today

Job Description

Company Description

Plusoperator is a purpose-driven IT consultancy firm committed to making a sustainable impact. We bridge the gap between European companies and African IT talent, highlighting the critical role of data, technology, and diversity in achieving business success. Through our innovative approach, we help clients reach digital excellence while empowering individuals and communities for a brighter future.

Role Description

We're looking for a Senior Data Consultant with a passion for the latest data technologies to join our team on a contract basis. In this key role, you will design, implement, and optimize data pipelines and innovative data structures for our clients. You'll use your deep knowledge of Azure and Databricks, along with a minimum of 5 years of proven experience in the data domain, to drive digital transformations and help our clients achieve their business goals.

Your responsibilities will include:

  • Analyzing complex datasets and performing advanced data modeling.
  • Conducting data science and analytics tasks to provide valuable insights.
  • Designing and implementing efficient ETL (Extract, Transform, Load) processes.
  • Translating complex business and technical requirements into practical and effective data structures.
  • Working closely with clients to deliver solutions that drive business success.

Qualifications

  • A passion for and expertise in data processing, including storage, transformation, and pipeline development.
  • Proven experience with database design and structuring.
  • Strong knowledge of Python and SQL.
  • Proficiency in Azure and Databricks/Snowflake.
  • Experience with CI/CD concepts and tools (e.g., Git, Azure DevOps, GitLab).
  • Experience with cloud platforms (Azure, AWS, GCP).
  • Experience with infrastructure as code (Terraform).
  • Strong communication and collaboration skills to work effectively with international clients and team members.
  • Ability to work independently in a hybrid work environment.
  • A Bachelor's or Master's degree in Data Science, Computer Science, or a related field.
  • Experience in IT consultancy or relevant industries is a plus.

If you're an expert in the data domain and want to be part of a company that values technology, diversity, and social impact, we'd love to hear from you.


Lead Data Engineer

20100 Mwembe | KES170,000 Annually | WhatJobs

Posted 2 days ago

Job Description

Full-time

Our client, a data-driven organization at the forefront of analytics and business intelligence, is seeking an experienced Lead Data Engineer to spearhead their data infrastructure initiatives. This is a fully remote position, allowing you to contribute from any location. As the Lead Data Engineer, you will be responsible for designing, building, and maintaining scalable and efficient data pipelines, data warehouses, and data lakes. You will play a pivotal role in transforming raw data into actionable insights that drive strategic decision-making across the organization.

Your responsibilities will include collaborating with data scientists, analysts, and business stakeholders to understand their data needs and translate them into robust data solutions. You will lead the development and implementation of ETL/ELT processes, ensuring data quality, integrity, and accessibility. We are looking for expertise in cloud-based data platforms such as AWS (Redshift, S3, Glue), GCP (BigQuery, Dataflow), or Azure (Data Lake, Synapse). Proficiency in SQL, Python, and distributed data processing frameworks (e.g., Spark, Hadoop) is a must. Experience with data modeling, database design, and performance tuning is also critical. The ideal candidate will possess strong leadership qualities, with the ability to mentor a team of data engineers and guide technical direction.

We require a Bachelor's or Master's degree in Computer Science, Data Science, or a related quantitative field, or equivalent professional experience. A minimum of 7 years of experience in data engineering, with at least 2 years in a lead or senior role, is expected. Proven experience in designing and implementing large-scale data architectures and pipelines is essential. Expertise in big data technologies and a solid understanding of data governance and security best practices are required. Excellent communication and collaboration skills are necessary to effectively work with diverse teams in a remote environment. If you are passionate about building state-of-the-art data solutions and leading a talented team, we encourage you to apply for this exciting remote opportunity.

Full Stack Data Engineer

KES1,200,000 - KES2,400,000 | PayConstruct

Posted today

Job Description

What is our mission?
Orbital is on an exciting mission to revolutionise global cross-border payments by innovatively combining traditional fiat banking rails with stablecoins over blockchain rails for a variety of use cases. Our class-leading B2B payments platform offers multi-currency e-money accounts (corporate IBANs) combined with a suite of digital asset services. Our company sits at the frontier of payments & fintech, intersecting blockchain and traditional (fiat) financial services, and is leading the way in bridging those two worlds for corporate enterprises globally.

We believe blockchain technology is firmly here to stay, and we want to be the first to bring a combined offering of fiat & crypto payment services under one exciting platform. Learn more about our team and company story here.

What is the purpose of this role in the delivery of our mission?
We're looking for a Full-Stack Data Engineer who can design, build, and optimize modern data systems from the ground up. You'll own the full data lifecycle—from architecting databases to building ETL pipelines, writing advanced queries, and enabling data-driven decision-making through powerful insights.

This role blends traditional data engineering with a strong analytics mindset. You'll collaborate closely with engineering, product, and compliance teams to ensure clean, accessible, and scalable data flows across our platform.

What are the key responsibilities of the role?

  • Design and develop scalable, reliable data architectures and storage solutions (SQL, NoSQL, etc.)
  • Build and maintain robust ETL/ELT pipelines to ingest, transform, and enrich data from multiple sources
  • Write performant SQL queries for reporting, dashboards, and ad-hoc analysis
  • Develop and optimize data models for both operational and analytical use
  • Collaborate with analysts and stakeholders to define metrics, KPIs, and data definitions
  • Implement data validation, monitoring, and observability across pipelines (see the sketch after this list)
  • Support data visualization efforts via BI tools (Metabase, Power BI or custom dashboards)
  • Ensure data security, governance, and compliance across all systems
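
A minimal sketch of the batch-level validation named above, assuming records arrive as lists of dicts; the required fields and thresholds are illustrative, not Orbital's actual rules.

```python
# Pipeline-level data validation sketch: return human-readable issues
# for a batch of records; an empty list means the batch passes.
from datetime import datetime, timezone


def validate_batch(rows, required_fields=("tx_id", "amount", "currency")):
    issues = []
    if not rows:
        issues.append("empty batch")
    for i, row in enumerate(rows):
        missing = [f for f in required_fields if row.get(f) is None]
        if missing:
            issues.append(f"row {i}: missing {missing}")
        amount = row.get("amount")
        if isinstance(amount, (int, float)) and amount < 0:
            issues.append(f"row {i}: negative amount {amount}")
    return issues


batch = [
    {"tx_id": "t1", "amount": 250.0, "currency": "USD"},
    {"tx_id": None, "amount": -5.0, "currency": "EUR"},
]
problems = validate_batch(batch)
if problems:
    # A real pipeline would emit metrics and alerts; here we just log.
    print(datetime.now(timezone.utc).isoformat(), "validation failed:", problems)
```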

What is the scope of accountability for the role?

  • Design, develop, deploy and maintain mission critical data applications
  • Delivery of various data driven applications
  • Develop and own data models, dashboards, and reporting
  • Business analysis and database querying

What are the essential skills, qualifications and experience required for the role?

  • 3+ years of experience in data engineering or similar roles
  • Strong SQL skills and experience with relational databases (e.g., PostgreSQL, MySQL, SQL Server)
  • Experience with cloud data platforms (AWS Redshift, Stitch, Airbyte, Athena, Glue, S3)
  • Proficient in Python or another data scripting language
  • Experience with orchestration tools (e.g., Airflow, Prefect, Dagster)
  • Familiarity with data warehousing, data lakes, and stream processing (Kafka, Spark, etc.)
  • Understanding of data modelling techniques (e.g., star/snowflake schema, normalization)
  • Ability to communicate complex data concepts to non-technical stakeholders
  • You have strong analytical, organisational, and prioritisation skills, and a belief in writing documentation as part of writing code

What are the desirable skills, qualifications and experience that would be beneficial for the role?

  • Data Ingestion: AWS Kinesis/Firehose
  • Data Transformation: DBT (Data Build Tool)
  • Familiarity with DevOps/data infrastructure tools (Git/Bitbucket, AWS CloudFormation, AWS ECS)
  • Exposure to analytics or dashboard tools (Metabase and/or PowerBI)
  • Prior work in a startup, SaaS, or data-intensive environment

Senior Data Engineer - Remote

90100 Abothuguchi West | KES9,000,000 Annually | WhatJobs

Posted 3 days ago

Job Description

Full-time

Our client is seeking a highly skilled and experienced Senior Data Engineer to architect, build, and maintain robust data pipelines and infrastructure. This is a fully remote position, offering a unique opportunity to work on challenging data projects from the convenience of your home. You will be responsible for designing scalable data solutions, optimizing data warehousing, and ensuring the availability and integrity of data for analytics and machine learning initiatives. The ideal candidate will have a deep understanding of distributed systems, big data technologies, and cloud platforms.

Key Responsibilities:
  • Design, develop, and deploy scalable and reliable data pipelines and ETL/ELT processes.
  • Build and optimize data warehouses and data lakes for analytical and operational workloads.
  • Implement and manage data ingestion processes from various sources, including APIs, databases, and streaming data.
  • Develop and maintain data models that support business intelligence and data science needs.
  • Ensure data quality, consistency, and accuracy through robust validation and monitoring mechanisms.
  • Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver solutions.
  • Optimize query performance and data processing efficiency.
  • Implement and manage data security and governance best practices.
  • Explore and integrate new data technologies and tools to enhance our data platform.
  • Mentor junior data engineers and contribute to the development of best practices within the data engineering team.

Qualifications:
  • Bachelor's or Master's degree in Computer Science, Engineering, or a related quantitative field.
  • 5+ years of professional experience in data engineering or a similar role.
  • Strong proficiency in SQL and experience with relational and NoSQL databases.
  • Hands-on experience with big data technologies such as Spark, Hadoop, Kafka, Flink.
  • Expertise in cloud data platforms (e.g., AWS Redshift, S3, Glue; Azure Data Factory, Synapse; GCP BigQuery, Dataflow).
  • Proficiency in programming languages commonly used in data engineering (e.g., Python, Scala, Java).
  • Experience with data modeling and data warehousing concepts.
  • Familiarity with containerization technologies like Docker and Kubernetes is a plus.
  • Strong understanding of data security, privacy, and compliance regulations.
  • Excellent problem-solving, analytical, and communication skills.
  • Proven ability to work independently and manage projects effectively in a remote setting.
This role is critical to enabling data-driven decision-making across the organization. If you are passionate about building cutting-edge data infrastructure and thrive in a remote work environment, we encourage you to apply.

Remote Senior Data Engineer (Big Data)

20102 Embu, Eastern | KES300,000 Annually | WhatJobs

Posted 3 days ago

Job Description

Full-time

Our client is a rapidly growing technology firm seeking a highly skilled and experienced Remote Senior Data Engineer to join their dynamic team. This is a fully remote position, offering flexibility and the opportunity to work with cutting-edge big data technologies from anywhere. You will be instrumental in designing, building, and maintaining scalable data pipelines, ensuring data quality and integrity for critical business analytics and machine learning initiatives. This role involves collaborating closely with data scientists, analysts, and software engineers to understand data needs, translate them into robust technical solutions, and optimize data processing workflows. You will architect solutions for data ingestion, transformation, and storage, leveraging cloud platforms and distributed computing frameworks.

Key Responsibilities:
  • Design, develop, and implement highly scalable and reliable data pipelines for batch and real-time data processing (a streaming sketch follows this list).
  • Build and maintain robust ETL/ELT processes using modern data warehousing techniques.
  • Optimize data storage and retrieval for performance and cost-efficiency on cloud platforms (AWS, GCP, or Azure).
  • Collaborate with data scientists and analysts to ensure data availability and quality for analytical models and reporting.
  • Develop and manage data infrastructure, ensuring security, compliance, and reliability.
  • Implement data governance policies and procedures.
  • Troubleshoot and resolve issues related to data pipelines and infrastructure.
  • Stay abreast of emerging technologies in big data and data engineering, and recommend their adoption.
  • Mentor junior data engineers and contribute to a culture of technical excellence.
  • Participate in code reviews and contribute to architectural design discussions.
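
A hedged sketch of a real-time pipeline using Spark Structured Streaming over Kafka, as named in the first responsibility. The broker address, topic, schema, and output paths are assumptions, and the spark-sql-kafka connector must be on the classpath.

```python
# Structured Streaming sketch: read JSON events from Kafka, apply a
# watermarked windowed aggregation, and sink to parquet.
# Broker, topic, schema, and paths are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import (DoubleType, StringType, StructField,
                               StructType, TimestampType)

spark = SparkSession.builder.appName("streaming-demo").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("city", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # assumed broker
    .option("subscribe", "events")                     # assumed topic
    .load()
)

# Kafka values arrive as bytes; parse the JSON payload into typed columns.
events = raw.select(
    F.from_json(F.col("value").cast("string"), event_schema).alias("e")
).select("e.*")

# Windowed aggregation with a watermark so late data is bounded.
per_city = (
    events.withWatermark("event_time", "10 minutes")
    .groupBy(F.window("event_time", "5 minutes"), "city")
    .agg(F.sum("amount").alias("total_amount"))
)

query = (
    per_city.writeStream.outputMode("append")
    .format("parquet")
    .option("path", "/tmp/stream/per_city")
    .option("checkpointLocation", "/tmp/stream/checkpoints")
    .start()
)
query.awaitTermination()
```
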
Qualifications:
  • Bachelor's or Master's degree in Computer Science, Engineering, or a related quantitative field.
  • 5+ years of experience in data engineering, with a strong focus on big data technologies.
  • Proficiency in programming languages such as Python, Scala, or Java.
  • Extensive experience with distributed data processing frameworks like Apache Spark, Hadoop, Flink.
  • Deep understanding of SQL and NoSQL databases.
  • Experience with cloud data services (e.g., AWS S3, Redshift, EMR; Google Cloud Storage, BigQuery, Dataproc; Azure Data Lake, Synapse Analytics).
  • Familiarity with workflow management tools like Airflow.
  • Experience with data warehousing concepts and best practices.
  • Excellent problem-solving and analytical skills.
  • Strong communication and collaboration abilities, particularly in a remote team setting.
This remote role is ideal for a seasoned data professional looking to make a significant impact on data-driven decision-making.
 
