Company – HCLTech
Job Title – Senior Data Lead, Azure Data Factory (ADF) and Databricks
Experience – 6 to 8 years
Location – Noida, India
OVERVIEW OF THE ROLE
The Senior Data Lead will spearhead the design, development, and optimization of advanced data pipelines and solutions using Azure Data Factory (ADF), Databricks, and Snowflake. The role involves leading technical teams, collaborating with stakeholders, and delivering scalable, reliable, high-performance data solutions that support critical business objectives at HCLTech. The successful candidate will drive data-driven decision-making and ensure seamless integration of data platforms across the organization’s ecosystem.
DETAILED RESPONSIBILITIES
• Lead and mentor technical teams in the successful execution of projects involving Snowflake, Azure Data Factory (ADF), and Databricks.
• Architect, develop, and maintain scalable data pipelines to fulfill evolving business requirements.
• Collaborate with business stakeholders to gather and analyze requirements, identifying opportunities for data analytics and proposing innovative solutions.
• Monitor, troubleshoot, and optimize data pipelines for quality, reliability, and performance.
• Integrate data solutions with existing systems and applications, ensuring consistency and accuracy across platforms.
• Stay abreast of the latest trends and best practices in data management and analytics, and apply them to projects.
• Provide technical expertise and ongoing mentorship, fostering a culture of learning and innovation within the team.
• Apply agile methodologies and project management practices to ensure efficient project execution and delivery.
• Oversee data ingestion and transformation using Databricks (including PySpark), implement Delta Lake, and orchestrate data pipelines across Azure and AWS environments.
SKILL REQUIREMENTS
• Proficiency in Snowflake, including data modeling, querying, and performance optimization.
• Strong expertise in Azure Data Factory (ADF) for data integration and orchestration.
• Hands-on experience with Databricks for data engineering, processing, and machine learning tasks.
• Ability to design, develop, and optimize complex ETL data pipelines.
• Advanced problem-solving skills, with the ability to diagnose and resolve data pipeline issues.
• Excellent communication skills for effective engagement with technical and non-technical stakeholders.
• Proven leadership experience in managing and motivating technical teams.
• Familiarity with agile methodologies and project management principles.
• Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
OTHER REQUIREMENTS (OPTIONAL)
• Relevant certifications in Snowflake, Azure Data Factory (ADF), and Databricks are highly desirable.
• Prior hands-on experience using Databricks on AWS for data ingestion, PySpark-based transformation, Delta Lake implementation, and pipeline orchestration is a plus.
• Knowledge of additional cloud platforms or data engineering tools would be advantageous.
