We are looking for a skilled and proactive Data Engineer with strong expertise in SQL, DBT (Data Build Tool), Python, and Snowflake who can manage and support end-to-end data workflows and Level 1 (L1) monitoring activities. This role is critical to ensuring data reliability, early issue detection, and timely data availability for downstream business operations.
Key Responsibilities
- Monitoring & Alert Management. Take ownership of Level 1 (L1) support activities including continuous monitoring of data pipelines, job failures, system health, and alert notifications. Ensure all alerts are acknowledged, investigated, and escalated appropriately.
- Proactive Issue Handling. Be the first line of defense in identifying and resolving issues related to data failures, latency, or quality discrepancies. Ensure timely communication with relevant teams and minimal business disruption.
- Data Availability & Business Readiness. Ensure that data preparation and loading processes complete before the start of business hours, that pipelines run smoothly, and that datasets are readily available for reporting and analytics teams.
- Exception Handling & Troubleshooting. Manage unexpected data anomalies or process breakdowns by analyzing logs, reviewing SQL queries, and debugging DBT models. Coordinate with relevant stakeholders for resolution when necessary.
- DBT Expertise & Data Model Management. Develop and maintain robust, scalable DBT models. Understand complex data flows and lineage across systems. Collaborate with business and analytics teams to support and enhance data models as needs evolve.
- Data Quality & Validation. Implement and execute data quality checks using DBT and SQL (see the sketch after this list). Conduct unit and integration tests to verify the accuracy and completeness of data pipelines. Identify and flag potential data issues early in the lifecycle.
- SQL & Python Development. Write complex, well-optimized SQL for data transformation and validation, and tune query performance. Use Python scripts for automation, exception handling, and integration with monitoring systems where necessary.
- Snowflake Operations. Use Snowflake for data warehousing tasks: manage data loading, perform transformations in SQL, and integrate with DBT for end-to-end data workflows.
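To make the data quality responsibility concrete, here is a minimal sketch of SQL-based checks run from Python against Snowflake. It assumes the snowflake-connector-python package and hypothetical table and column names (orders, order_id) and credentials read from the environment; in a DBT project, checks of this shape would normally live as DBT tests instead.

```python
# A minimal sketch of L1-style data quality checks, assuming hypothetical
# table/column names and Snowflake credentials in environment variables.
import os
import snowflake.connector

QUALITY_CHECKS = {
    # Primary-key column must never be null.
    "orders_null_ids": "SELECT COUNT(*) FROM orders WHERE order_id IS NULL",
    # No duplicate order_id values.
    "orders_dup_ids": """
        SELECT COUNT(*) FROM (
            SELECT order_id FROM orders GROUP BY order_id HAVING COUNT(*) > 1
        ) AS dups
    """,
}

def run_checks() -> list[str]:
    """Run each check and return the names of the checks that failed."""
    conn = snowflake.connector.connect(
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        warehouse="ANALYTICS_WH",  # hypothetical warehouse name
        database="ANALYTICS",      # hypothetical database name
        schema="PUBLIC",
    )
    failures = []
    try:
        cur = conn.cursor()
        for name, sql in QUALITY_CHECKS.items():
            bad_rows = cur.execute(sql).fetchone()[0]
            if bad_rows:  # any offending row counts as a failure
                failures.append(name)
    finally:
        conn.close()
    return failures

if __name__ == "__main__":
    failed = run_checks()
    if failed:
        # In production this would feed an alerting tool; printing keeps
        # the sketch simple.
        print(f"Data quality checks failed: {', '.join(failed)}")
```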
Required Skills & Experience
- 4+ years of experience in data engineering or a similar role.
- Strong hands-on experience with DBT, including model development, testing, documentation, and deployment.
- Proficiency in SQL, including complex joins, window functions, and performance tuning (a window-function example follows this list).
- Experience in Python scripting for data handling, automation, or exception processing.
- Solid understanding of data quality frameworks and best practices.
- Prior experience working with Snowflake or other cloud-based data platforms.
- Ability to manage and troubleshoot production data pipelines, especially in a support and monitoring context.
- Familiarity with alerting tools and log analysis to proactively monitor data processes.
- Strong problem-solving skills and attention to detail.
- Excellent communication skills and ability to work in a fast-paced environment with minimal supervision.
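As an illustration of the window-function SQL this role calls for, the snippet below deduplicates to the latest row per key, a common pattern in both DBT models and validation queries. The orders table and its columns are hypothetical; the second query uses Snowflake's QUALIFY clause, which filters on window results directly.

```python
# Hypothetical schema: orders(customer_id, order_date, amount).
# Standard form: rank rows per customer, then keep the newest.
LATEST_ORDER_PER_CUSTOMER = """
    SELECT customer_id, order_date, amount
    FROM (
        SELECT
            customer_id,
            order_date,
            amount,
            ROW_NUMBER() OVER (
                PARTITION BY customer_id
                ORDER BY order_date DESC
            ) AS rn
        FROM orders
    ) AS ranked
    WHERE rn = 1
"""

# Snowflake-specific form: QUALIFY avoids the subquery entirely.
LATEST_ORDER_QUALIFY = """
    SELECT customer_id, order_date, amount
    FROM orders
    QUALIFY ROW_NUMBER() OVER (
        PARTITION BY customer_id ORDER BY order_date DESC
    ) = 1
"""
```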
Preferred Qualifications
- Experience in handling large-scale data environments with mission-critical data pipelines.
- Knowledge of DevOps practices and CI/CD pipelines for DBT.
- Exposure to workflow orchestration platforms such as Airflow or Prefect (see the orchestration sketch after this list).
- Background in financial services, retail, or similar industries where timely data availability is critical.
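For context on the orchestration point, here is a minimal sketch of scheduling a DBT build with Airflow so data is ready before business hours. It assumes Airflow 2.4+ (for the schedule argument) and a hypothetical project path; the DAG name, cron expression, and directory are illustrative, not a prescribed setup.

```python
# A minimal Airflow DAG that runs DBT models, then DBT tests, each morning.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

DBT_DIR = "/opt/dbt/analytics"  # hypothetical DBT project location

with DAG(
    dag_id="dbt_daily_build",
    schedule="0 5 * * *",        # finish before the business day starts
    start_date=datetime(2024, 1, 1),
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command=f"cd {DBT_DIR} && dbt run",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command=f"cd {DBT_DIR} && dbt test",
    )
    # Only run tests once the models have built successfully.
    dbt_run >> dbt_test
```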
This opportunity is ideal for someone who is both technically strong and operationally reliable: someone who enjoys working behind the scenes to ensure smooth data delivery and takes pride in making data systems dependable and resilient. If you are passionate about modern data tools and enjoy the challenge of keeping data pipelines healthy and high-quality, we’d love to hear from you.