SBAPL_Data Engineering
Welspun World
Date: 2 days ago
City: Thane, Maharashtra
Contract type: Full time

About Welspun
Welspun World is one of India's fastest growing global conglomerates with businesses in Home Textiles, Flooring Solutions, Advanced Textiles, DI Pipes, Pig Iron, TMT bars, Stainless Steel, Alloy, Line Pipes, Infrastructure & Warehousing.
At Welspun, we strongly believe in our purpose: to delight customers through innovation and technology, and to achieve inclusive & sustainable growth that keeps us eminent in all our businesses. From Homes to Highways, Hi-tech to Heavy Metals, we lead tomorrow together to create a smarter & more sustainable world.
Job Purpose/ Summary
We are seeking a skilled Data Engineer with 4-6 years of experience in ETL development and data transformation using Azure technologies. The ideal candidate will have expertise in Databricks, Azure Data Factory (ADF), and cloud services, along with strong experience in SQL procedures and data transformations. Knowledge of Python and Apache Spark is preferred.
The successful candidate will collaborate with cross-functional teams to support business objectives.
Job Title
SBAPL_Data Engineering
Job Description
Key Responsibilities:
- Develop and maintain data lake architecture using Azure Data Lake Storage Gen2 and Delta Lake.
- Build end-to-end solutions to ingest, transform, and model data from various sources for analytics and reporting.
- Work with stakeholders to gather requirements and translate them into scalable data solutions.
- Optimize data processing workflows and ensure high performance for large-scale data sets.
- Collaborate with data analysts, BI developers, and data scientists to support advanced analytics use cases.
- Implement data quality checks, logging, and monitoring of data pipelines.
- Ensure compliance with data security, privacy, and governance standards.
Collaboration and Communication:
- Collaborate with stakeholders to understand data requirements and deliver solutions.
- Communicate complex technical concepts to non-technical stakeholders effectively.
- Work closely with the product and engineering teams to integrate data solutions into the broader tech ecosystem.
Performance Optimization and Troubleshooting:
- Optimize models for performance and cost-efficiency.
- Continuously monitor and improve the performance, scalability, and reliability of the models.
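As a flavour of the data quality checks mentioned above, here is a minimal, hypothetical sketch in plain Python (library-agnostic; in a real pipeline this logic would typically run inside a Databricks/PySpark job before data lands in Delta Lake, and the field names are illustrative assumptions, not part of this role's actual schema):

```python
# Hypothetical, library-agnostic sketch of a row-level data quality check,
# as might run in an ingestion pipeline before writing to the data lake.

REQUIRED_FIELDS = ("order_id", "amount")  # assumed schema, for illustration only

def validate_row(row: dict) -> list[str]:
    """Return a list of quality issues found in a single record."""
    issues = []
    for field in REQUIRED_FIELDS:
        if row.get(field) is None:
            issues.append(f"missing {field}")
    amount = row.get("amount")
    if isinstance(amount, (int, float)) and amount < 0:
        issues.append("negative amount")
    return issues

def split_valid_invalid(rows):
    """Partition records into (valid, rejected); rejects keep their issue list
    so they can be logged and monitored rather than silently dropped."""
    valid, rejected = [], []
    for row in rows:
        issues = validate_row(row)
        (rejected if issues else valid).append((row, issues))
    return [r for r, _ in valid], rejected

rows = [
    {"order_id": 1, "amount": 9.5},
    {"order_id": None, "amount": 3.0},  # rejected: missing order_id
    {"order_id": 3, "amount": -1.0},    # rejected: negative amount
]
valid, rejected = split_valid_invalid(rows)
print(len(valid), len(rejected))  # → 1 2
```

Keeping rejected records together with the reason they failed is what makes the logging and monitoring side of the responsibility workable in practice.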
Skill And Knowledge You Should Possess
- 4-6 years of strong hands-on experience with Azure Databricks (PySpark, Spark SQL, Delta Lake).
- Solid understanding of Azure Data Factory (ADF): building pipelines, triggers, linked services, and datasets.
- Familiarity with Microsoft Fabric, including OneLake, Dataflows, and Lakehouses.
- Proficiency in SQL, Python, and PySpark.
- Experience working with Azure Synapse Analytics, Azure SQL, and Azure Blob/Data Lake Storage.
- Strong knowledge of data warehousing, data modeling, and performance tuning.
Good To Have
- Experience with additional big data technologies and cloud platforms.
- Knowledge of data governance frameworks and tools.
- Relevant certifications in Azure or Spark.
What We Offer
- Opportunities for professional growth and development.
- A collaborative and inclusive work environment.
- Access to cutting-edge technology and tools.
- The opportunity to make a significant impact on the company's data strategy and operations.
Principal Accountabilities
As described under Key Responsibilities above.
Key Interactions
Mid Management
Experience
5