We are seeking an experienced Senior Data Engineer with a strong background in Databricks and Azure technologies to join our dynamic team. The ideal candidate will have a minimum of 4 years of experience in data engineering, including at least 3 years of hands-on experience with PySpark for data transformations. You should possess extensive knowledge of Azure Databricks, Azure Data Lake, and related technologies to manage and optimize our data infrastructure.
Key Responsibilities
- Data Engineering. Design, develop, and maintain robust data pipelines and ETL processes using Databricks and Azure Data Factory (ADF).
- Data Transformation. Utilize PySpark and Scala for complex data transformations and processing.
- Big Data Concepts. Apply your knowledge of Hive, the Spark framework, and other big data technologies to handle large-scale data processing.
- Database Management. Work with Delta Lake, Azure SQL, Azure Blob Storage, and Azure Synapse to manage and optimize data storage and access.
- Integration. Implement data integration and workflows using Azure Logic Apps and Azure Functions.
- Data Governance. Leverage Azure Purview for data cataloging and governance.
- Complex SQL Queries. Write and optimize complex SQL queries for data analysis and reporting.
- Collaboration. Coordinate independently with business stakeholders to understand and address business requirements.
- Version Control. Use version control tools such as Git or Bitbucket for source code management.
- DevOps and Agile. Apply DevOps principles and Agile methodologies to manage and deploy data engineering solutions.
- Monitoring and Configuration. Understand Azure Batch account configuration and the available control and monitoring options to ensure system reliability.
Required Skills and Qualifications
- Experience. 4+ years of experience in data engineering with a minimum of 3 years in PySpark and Databricks.
- Technical Proficiency. Strong hands-on experience with Python or Scala, Azure Data Factory, Azure Databricks, Delta Lake, Azure SQL, Azure Blob Storage, and Azure Synapse.
- Big Data Knowledge. Extensive knowledge of Hive, the Spark framework, and big data processing concepts.
- SQL Expertise. Ability to write complex SQL queries and understand data warehousing concepts.
- Collaboration Skills. Proven ability to work independently with business stakeholders to gather and implement requirements.
- Version Control. Familiarity with Git or Bitbucket.
- DevOps and Agile. Experience with DevOps practices and Agile methodologies.
- Configuration Knowledge. Basic understanding of Azure Batch account configuration and monitoring.
Join us and leverage your expertise to drive impactful data solutions, optimize data workflows, and support our data-driven decision-making processes.