Senior Data Engineer, Central Belt Scotland, hybrid

Job Title: Senior Data Engineer
Location: Central Edinburgh (hybrid); excellent package on offer

About the Role

We are seeking a Senior Data Engineer to join our team and play a key role in building and maintaining robust data pipelines and infrastructure. The role requires a strong technical background in data architecture, ETL development, and cloud-based data solutions. The ideal candidate will have experience working with large datasets and a passion for creating scalable, efficient data solutions that support business decision-making.

Key Responsibilities

  • Design, develop, and optimize data pipelines for collecting, processing, and storing large-scale datasets.
  • Build and maintain ETL processes to extract, transform, and load data from multiple sources into a centralized data warehouse.
  • Work with structured and unstructured data to enable real-time and batch processing.
  • Ensure data integrity, security, and governance across all data platforms.
  • Collaborate with Data Scientists, BI Developers, and Analysts to provide reliable and well-structured data for analytics and reporting.
  • Design and implement data models that support business intelligence and predictive analytics needs.
  • Optimize database performance and troubleshoot issues related to data storage, processing, and retrieval.
  • Work with cloud platforms (AWS, Azure, GCP) and leverage big data tools for efficient data management.
  • Automate data workflows and ensure high availability and reliability of data infrastructure.
  • Stay up to date with the latest data engineering best practices, tools, and technologies.

Key Skills & Experience

✅ Proficiency in SQL and database management (PostgreSQL, MySQL, SQL Server, or similar).
✅ Experience with ETL tools and frameworks (Apache Airflow, Talend, dbt, etc.).
✅ Strong knowledge of Python, Scala, or Java for data processing and automation.
✅ Experience working with cloud-based data platforms (AWS Redshift, Google BigQuery, Azure Synapse, Snowflake, etc.).
✅ Familiarity with big data technologies (Apache Spark, Hadoop, Kafka) is a plus.
✅ Knowledge of data modeling, warehousing, and data governance best practices.
✅ Experience with CI/CD pipelines, version control, and infrastructure as code (Git, Jenkins, Terraform).
✅ Strong analytical skills and ability to troubleshoot data performance and reliability issues.
✅ Ability to collaborate with cross-functional teams and communicate technical concepts effectively.

Why Join?

  • Work on cutting-edge data projects that drive business success.
  • Collaborative team with opportunities for growth and learning.
  • Competitive salary, benefits, and flexible working arrangements.

If you’re passionate about building scalable data solutions and want to be part of an innovative team, we’d love to hear from you!