Senior Data Engineer

Hyderabad, Telangana, India
Aug 07, 2024
Aug 07, 2025
Remote
Full-Time
4 Years
Job Description

We are seeking an experienced Senior Data Engineer with a strong background in Databricks and Azure technologies to join our dynamic team. The ideal candidate will have a minimum of 4 years of experience in data engineering, including at least 3 years of hands-on experience with PySpark for data transformations, along with extensive knowledge of Azure Databricks, Azure Data Lake, and related technologies to manage and optimize our data infrastructure.

Key Responsibilities

  1. Data Engineering. Design, develop, and maintain robust data pipelines and ETL processes using Databricks and Azure Data Factory (ADF).
  2. Data Transformation. Utilize PySpark and Scala for complex data transformations and processing.
  3. Big Data Concepts. Apply your knowledge of Hive, Spark Framework, and other big data technologies to handle large-scale data processing.
  4. Database Management. Work with Delta Lake, Azure SQL, Azure Blob Storage, and Azure Synapse to manage and optimize data storage and access.
  5. Integration. Implement data integration and workflows using Azure Logic Apps and Azure Functions.
  6. Data Governance. Leverage Azure Purview for data cataloging and governance.
  7. Complex SQL Queries. Write and optimize complex SQL queries for data analysis and reporting.
  8. Collaboration. Work independently with business stakeholders to understand and address their requirements.
  9. Version Control. Use version control tools such as Git or Bitbucket for source code management.
  10. DevOps and Agile. Apply DevOps principles and Agile methodologies to manage and deploy data engineering solutions.
  11. Monitoring and Configuration. Understand Azure Batch account configuration and the available control and monitoring options to ensure system reliability.

Required Skills and Qualifications

  1. Experience. 4+ years of experience in data engineering with a minimum of 3 years in PySpark and Databricks.
  2. Technical Proficiency. Strong hands-on experience with Python or Scala, Azure Data Factory, Azure Databricks, Delta Lake, Azure SQL, Azure Blob Storage, and Azure Synapse.
  3. Big Data Knowledge. Extensive knowledge of Hive, Spark Framework, and big data processing concepts.
  4. SQL Expertise. Ability to write complex SQL queries and understand data warehousing concepts.
  5. Collaboration Skills. Proven ability to work independently with business stakeholders to gather and implement requirements.
  6. Version Control. Familiarity with Git or Bitbucket.
  7. DevOps and Agile. Experience with DevOps practices and Agile methodologies.
  8. Configuration Knowledge. Basic understanding of Azure Batch account configuration and monitoring.

Join us and leverage your expertise to drive impactful data solutions, optimize data workflows, and support our data-driven decision-making processes.
