TheThreeAcross

Data Engineer - PySpark/Azure Databricks

Job Location

India

Job Description

Job Title: Data Engineer
Experience: 5-12 Years
Location: Pune / Bangalore
Notice Period: Immediate to 15 days joiner
Skills Required: ETL processes, understanding of architecture and data modeling, data security, SQL, MS Fabric (basic experience), PySpark, Databricks

We are seeking an experienced and detail-oriented Data Engineer to join our growing data and analytics team. The ideal candidate will have a solid understanding of ETL processes, data architecture, and data modeling, plus hands-on expertise in SQL, PySpark, and Databricks. Experience with Microsoft Fabric and a strong focus on data security will be key to success in this role. You will play a critical role in designing, building, and maintaining the data pipelines and infrastructure that support high-performance analytics and business intelligence systems.

Key Responsibilities:
- Design, develop, and manage robust ETL/ELT pipelines using tools such as PySpark and Databricks.
- Extract, transform, and load data from various structured and unstructured data sources.
- Collaborate with data architects and analysts to build scalable data architectures.
- Develop and optimize data models (star/snowflake schemas) to support reporting and analytics.
- Write advanced SQL queries for data transformation, aggregation, and analysis.
- Optimize SQL queries and performance-tune large datasets.
- Leverage Microsoft Fabric tools and services to support data integration and workflow automation.
- Work with cloud-based data platforms and modern data stack technologies.
- Ensure high standards of data quality, consistency, and security across systems.
- Implement data governance and access control practices in line with organizational policies.
- Work closely with data analysts, data scientists, and business stakeholders to understand data needs.
- Maintain detailed technical documentation of processes, pipelines, and data flows.
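The responsibilities above center on the extract-transform-load pattern and SQL aggregation over a fact table. As a minimal, hypothetical sketch of that pattern (using Python's built-in sqlite3 in place of PySpark/Databricks; the `fact_orders` table and all column names are illustrative, not from this posting):

```python
import sqlite3

# Extract: raw order events as they might arrive from a source system.
raw_orders = [
    ("2025-05-01", "widget", 3, 10.0),
    ("2025-05-01", "gadget", 1, 24.5),
    ("2025-05-02", "widget", 2, 10.0),
]

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE fact_orders ("
    "order_date TEXT, product TEXT, qty INTEGER, unit_price REAL)"
)

# Load: insert the cleaned rows into the fact table.
conn.executemany("INSERT INTO fact_orders VALUES (?, ?, ?, ?)", raw_orders)

# Transform/aggregate: revenue per product, the kind of query
# a reporting or BI layer would run against the model.
rows = conn.execute(
    "SELECT product, SUM(qty * unit_price) AS revenue "
    "FROM fact_orders GROUP BY product ORDER BY product"
).fetchall()
print(rows)  # → [('gadget', 24.5), ('widget', 50.0)]
```

In a real Databricks pipeline the same shape would use a `SparkSession`, DataFrame reads from the source, and a `groupBy(...).agg(...)` over a Delta table; the flow of extract, load, and aggregate stays the same.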
Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or a related field.
- 5-12 years of experience as a Data Engineer or in a similar data-centric role.
- Strong experience with ETL processes, data modeling, and data architecture design.
- Proficiency in SQL and relational databases (e.g., SQL Server, PostgreSQL).
- Hands-on experience with PySpark and Databricks for big data processing.
- Familiarity with Microsoft Fabric (Power BI, Dataflows, Pipelines) is a plus.
- Strong understanding of data security and compliance best practices.
- Experience with cloud platforms (Azure, AWS, or GCP).
- Familiarity with CI/CD pipelines in data engineering workflows.
- Understanding of data lakehouse architecture and Delta Lake.
- Exposure to real-time data processing tools (Kafka, Spark Streaming, etc.).

What We Offer:
- Be part of a fast-paced, data-driven culture.
- Work with modern data tools and cloud platforms.
- A high-impact role with opportunities to influence data strategy.
- A supportive, collaborative, and growth-oriented team environment.

(ref:hirist.tech)

Location: India

Posted Date: 5/8/2025

Contact Information

Contact Human Resources
TheThreeAcross

UID: 5187395062
