Data Engineer - Senior

Posted 2 weeks ago by Lumenalta on LinkedIn

Job Description: The role of Senior-Level Data Engineer involves designing, building, and maintaining ETL pipelines for global enterprises, focusing on large datasets and real-world impact. The position requires collaboration with various stakeholders to ensure data quality and performance. Candidates should have extensive experience in data engineering, particularly with Python or Java and SQL. The role is fully remote, allowing for flexible work arrangements.

Key Responsibilities:

  • Design, build, and maintain reliable ETL pipelines from the ground up
  • Work with large, complex datasets using Python or Java and raw SQL
  • Build scalable, efficient data flows and transformations
  • Collaborate with data analysts, product managers, and developers to deliver actionable data to stakeholders
  • Ensure data quality, consistency, and performance across systems

Skills Required:

  • 7+ years of experience as a Data Engineer
  • Strong skills in Python or Java for data processing
  • Proficient in SQL, especially for querying large datasets
  • Experience with batch and/or stream data processing pipelines
  • Familiarity with cloud-based storage and compute (e.g., AWS S3, EC2, Lambda, GCP Cloud Storage)
  • Knowledge of data modeling, normalization, and performance optimization
  • Comfortable working in agile, collaborative, and fully remote environments
  • Fluent in English (spoken and written)

Salary (Rate): undetermined

City: undetermined

Country: United Kingdom

Working Arrangements: remote

IR35 Status: undetermined

Seniority Level: Senior

Industry: IT

Detailed Description From Employer:

What We're Working On

We help global enterprises launch digital products that reach millions of users. Our projects involve massive datasets, complex pipelines, and real-world impact across industries.

What You’ll Do

Join the team as a Senior-Level Data Engineer

  • Design, build, and maintain reliable ETL pipelines from the ground up
  • Work with large, complex datasets using Python or Java and raw SQL
  • Build scalable, efficient data flows and transformations
  • Collaborate with data analysts, product managers, and developers to deliver actionable data to stakeholders
  • Ensure data quality, consistency, and performance across systems
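
Purely for illustration (not part of the posting's requirements): a minimal sketch of the kind of batch ETL step the bullets above describe, in Python with raw SQL. The connection string, table, bucket, and function names are hypothetical placeholders, not details of the role.

```python
# Illustrative sketch only: a single batch ETL step with a hypothetical
# Postgres source and S3 target (all names are placeholders).
import io

import boto3
import pandas as pd
import sqlalchemy


def run_daily_orders_pipeline(run_date: str) -> None:
    # Extract: pull one day of rows with raw SQL.
    engine = sqlalchemy.create_engine(
        "postgresql+psycopg2://user:pass@host:5432/analytics"
    )
    query = sqlalchemy.text(
        "SELECT order_id, customer_id, amount, created_at "
        "FROM orders WHERE created_at::date = :run_date"
    )
    df = pd.read_sql(query, engine, params={"run_date": run_date})

    # Transform: drop incomplete rows and aggregate spend per customer.
    df = df.dropna(subset=["customer_id"])
    daily = df.groupby("customer_id", as_index=False)["amount"].sum()

    # Load: write the result to object storage as Parquet.
    buf = io.BytesIO()
    daily.to_parquet(buf, index=False)  # requires pyarrow or fastparquet
    boto3.client("s3").put_object(
        Bucket="example-data-lake",
        Key=f"daily_orders/{run_date}.parquet",
        Body=buf.getvalue(),
    )
```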

What We’re Looking For

  • 7+ years of experience as a Data Engineer
  • Strong skills in Python or Java for data processing
  • Proficient in SQL, especially for querying large datasets
  • Experience with batch and/or stream data processing pipelines
  • Familiarity with cloud-based storage and compute (e.g., AWS S3, EC2, Lambda, GCP Cloud Storage)
  • Knowledge of data modeling, normalization, and performance optimization
  • Comfortable working in agile, collaborative, and fully remote environments
  • Fluent in English (spoken and written)

Nice to Have (Not Required)

  • Experience with Airflow, Kafka, or similar orchestration/messaging tools
  • Exposure to basic data governance or privacy standards
  • Unit testing and CI/CD pipelines for data workflows
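
Again for illustration only: a minimal Airflow DAG (assuming a recent Airflow 2.x release) that could schedule a pipeline function like the hypothetical run_daily_orders_pipeline sketched earlier. The DAG id, schedule, and module path are assumptions, not part of the role.

```python
# Illustrative sketch only: scheduling a hypothetical pipeline function
# with Airflow 2.x (dag_id, schedule, and module path are placeholders).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def _run(ds: str, **_) -> None:
    # Airflow injects 'ds' (the logical date, YYYY-MM-DD) from the task context.
    from my_pipelines.orders import run_daily_orders_pipeline  # hypothetical module

    run_daily_orders_pipeline(ds)


with DAG(
    dag_id="daily_orders",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="run_pipeline", python_callable=_run)
```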

This job is 100% Remote – please ensure you have a comfortable home office setup in your preferred work location.

Ongoing recruitment – no set deadline.

Rate: Negotiable

Location: United Kingdom

IR35 Status: Undetermined

Remote Status: Remote

Industry: IT

Seniority Level: Senior
