As an Azure Data Engineer, you will play a pivotal role in the design, development, and optimization of data pipelines and services using Azure Data Factory (ADF) and Databricks. You will work closely with cross-functional teams to ensure smooth data movement and transformation, enabling efficient and scalable data processing. If you have hands-on experience in these areas, combined with a problem-solving mindset and strong communication skills, we encourage you to apply!
Key Responsibilities
- Build and Maintain Data Pipelines. Leverage Azure Data Factory (ADF) and Databricks to create robust, scalable data pipelines that ensure seamless data movement and transformation across different systems.
- Azure Ecosystem Integration. Integrate and manage workloads within the Microsoft Fabric ecosystem, ensuring high availability and performance. Familiarity with other Azure services such as Azure Storage, Azure SQL, Azure DevOps, and Event Hubs is a key component of the role.
- Data Processing and Transformation. Use Python, SQL, and PySpark to manipulate and process data effectively, delivering high-quality data transformations across multiple platforms.
- Collaboration and Troubleshooting. Collaborate with data architects, business analysts, and other stakeholders to troubleshoot data-related issues and optimize processes, ensuring that the data pipelines meet business requirements and SLAs.
- Documentation & Communication. Maintain comprehensive documentation of data pipeline workflows, transformation processes, and configurations, and communicate effectively with stakeholders to align on requirements, deliverables, and timelines.
What We Are Looking For
- Proven Experience. At least 5 years of data engineering experience, with a strong background in building and maintaining data pipelines using Azure Data Factory and Databricks.
- Technical Expertise. Solid understanding of Azure services, including but not limited to Azure Storage, Azure SQL, Azure DevOps, and Event Hubs. Proficiency in Python, SQL, PySpark, or similar languages for data manipulation is required.
- Hands-On Knowledge. Hands-on experience with Azure Data Factory (ADF) and Databricks is essential, as well as experience in creating and managing ETL pipelines, automating processes, and working with large-scale data environments.
- Additional Skills. Experience with the Microsoft Fabric ecosystem and related integration work will be a distinct advantage. Strong problem-solving and analytical skills to quickly identify and resolve issues will be highly valued.
- Communication & Collaboration. Excellent verbal and written communication skills to collaborate effectively with technical and non-technical teams, ensuring smooth project execution and stakeholder satisfaction.
- Certification. Azure certifications, particularly the Azure Data Engineer Associate certification, are a plus.
Why You Should Apply
- Career Growth. This is an excellent opportunity for professionals looking to take their career to the next level by working with cutting-edge technologies in the Azure cloud ecosystem.
- Challenging Projects. You will be working on a variety of data-driven projects that will allow you to enhance your skills in the rapidly evolving data engineering domain.
- Collaborative Environment. Work alongside talented and experienced professionals who are passionate about technology and delivering top-quality solutions.
Interested? Here’s How to Apply
Please send your updated resume to [email protected] along with the following details:
- Total years of experience
- Relevant experience in Azure Data Factory
- Relevant experience in Databricks
- Relevant experience in Python
- Relevant experience in PySpark
- Notice period
- Current CTC
- Expected CTC
- Current location
- Preferred location
We look forward to reviewing your application and discussing how you can contribute to our exciting projects. Let’s innovate together!