Coforge is seeking skilled Big Data Developers with expertise in Spark, Scala, and AWS to join our dynamic team. The ideal candidate will have a strong background in Big Data technologies, data engineering, and cloud platforms, particularly AWS. You will be responsible for designing, developing, and implementing scalable data solutions, working with large datasets, and contributing to the operationalization of machine learning models.
Key Responsibilities
- Big Data Technologies. Demonstrate hands-on experience with AWS and related Big Data technologies, including DynamoDB, EKS, Kafka, Kinesis, Glue, and EMR.
- Programming. Utilize Scala with Spark for data processing and solution architecting.
- Data Engineering. Work with Hadoop MapReduce, HDFS, Hive, HBase, and NoSQL databases. Experience with data engineering platforms such as Hortonworks, Cloudera, MapR, or AWS is preferred.
- Data Ingestion. Work with data ingestion and orchestration tools such as Apache NiFi, Apache Airflow, Sqoop, and Oozie.
- Data Processing. Process data at scale with event-driven systems, message queues such as Kafka, and stream-processing frameworks such as Flink and Spark Streaming.
- AWS Services. Implement solutions using AWS services including EMR, Kinesis, S3, CloudFormation, Glue, API Gateway, and Lake Formation.
- Data Warehousing. Query data with AWS Athena and work with data lake platforms such as Apache NiFi and Kylo.
- Machine Learning. Operationalize ML models on AWS, including deployment, scheduling, and model monitoring. Engage in feature engineering and data processing for model development.
- Data Pipelines. Build and manage data pipelines for structured and unstructured data, both real-time and batch, using message queues, Kafka, and stream-processing techniques.
- SQL Expertise. Write and optimize SQL queries for data analysis and processing.
- Technical Skills. Demonstrate strong technical, analytical, and problem-solving skills. Analyze source system data and data flows effectively.
- Communication. Exhibit strong organizational skills and effective communication. Be prepared to work UK shift hours.
Qualifications
- 4+ years of experience in Big Data technologies, Spark, Scala, and AWS.
- Proficiency in Hadoop, Hive, HBase, and NoSQL databases.
- Experience with data ingestion and processing tools and platforms.
- Hands-on experience with AWS services and data engineering platforms.
- Strong SQL skills and experience with data pipelines and data processing at scale.
- Ability to work autonomously and in a team-based environment.
- Excellent interpersonal skills and a pleasant personality.
How to Apply
Interested candidates are invited to send their resumes to [email protected].