As an AWS Data Engineer, you will play a critical role in designing, building, and maintaining large-scale data pipelines using AWS Glue, PySpark, Python, and SQL. You will be responsible for ensuring data quality and supporting business requirements through effective data processing and storage solutions.
Key Responsibilities
- Design and Development. Architect and implement data pipelines using AWS Glue, PySpark, and Python to extract, transform, and load (ETL) data from various sources.
- Pipeline Management. Develop and maintain robust data pipelines that support business requirements and ensure high data quality.
- Data Processing. Handle data ingestion, processing, and storage solutions to meet organizational needs.
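To give candidates a concrete picture, the extract-transform-load pattern these responsibilities describe can be sketched in plain Python. This is a simplified, hypothetical illustration using only the standard library; an actual Glue job would operate on PySpark DataFrames via `awsglue`'s GlueContext, and the function and field names below are invented for the example.

```python
# Minimal, illustrative ETL sketch. A real AWS Glue job would read from
# sources such as S3 via GlueContext and transform PySpark DataFrames;
# here, plain dicts and lists stand in for those structures.

def extract(rows):
    """Simulate reading raw records from a source system."""
    return list(rows)

def transform(records):
    """Drop incomplete rows and normalize names - a typical data-quality step."""
    return [
        {"id": r["id"], "name": r["name"].strip().title()}
        for r in records
        if r.get("id") is not None and r.get("name")
    ]

def load(records, sink):
    """Simulate writing cleaned records to a target store (e.g. a lake table)."""
    sink.extend(records)
    return len(records)

raw = [
    {"id": 1, "name": "  alice  "},
    {"id": None, "name": "bob"},   # dropped: missing id
    {"id": 3, "name": ""},         # dropped: empty name
]
sink = []
loaded = load(transform(extract(raw)), sink)
```

Here only the first record survives the quality checks, so `loaded` is 1 and `sink` holds the single cleaned row.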
Experience and Skills
- Experience. 5+ years of experience in data engineering, data science, or a related field.
- Technical Expertise. Proficiency with AWS services, especially AWS Glue, PySpark, and Python.
- Data Solutions. Experience with data ingestion, processing, and storage solutions.
Additional Information
- Shift Timings. Regular Shift
- Work Arrangement. Full-time, On-site (5 days a week at ValueLabs Head Office)
- Notice Period. Immediate to 15 days
- CTC. Best in the Market
How to Apply. If you are a motivated AWS Data Engineer ready to make a significant impact, we want to hear from you! Please send your resume to [email protected].
Spread the Word
- Help us find the right candidate by liking, sharing, and commenting on this post. Your referrals are greatly appreciated!
Connect With Us
- Anoop Singh Sengar
- L S Murthy
- Mynampati Rasuri
- Sushma Niraja R.
- Renu Reddy
- Suvarna Budili
Join ValueLabs and contribute to exciting projects while advancing your career in a supportive and innovative environment!