As a Senior Associate in the Data Engineering team, you will play a key role in designing, building, and optimizing modern data platforms and pipelines on Azure and Databricks. The role offers opportunities in Cloud Data Platform development and AI Solutions. You will work within cross-functional teams to deliver scalable, secure, and high-performing data solutions that enable advanced analytics, AI, and business insights for enterprise clients. This role requires a strong understanding of cloud data architecture, hands-on experience with Azure data services (including Microsoft Fabric), and deep practical knowledge of Databricks for batch and streaming data engineering.
Key Responsibilities for this Data Engineering role include:
- Design, develop, and maintain end-to-end data pipelines across structured, semi-structured, and unstructured data sources.
- Implement data ingestion, transformation, and orchestration frameworks leveraging Azure Data Factory, Synapse, and/or Microsoft Fabric Data Pipelines.
- Develop and optimize ETL/ELT processes using Databricks (PySpark, SQL, Delta Lake) to ensure high performance and scalability.
- Implement and enforce data quality, lineage, and governance practices.
- Work closely with solution architects to design modern data architectures and ensure compliance with security and privacy standards.
- Participate in client workshops and technical discussions to translate business needs into technical designs.
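The posting repeatedly references the medallion (bronze → silver → gold) lakehouse pattern. As a rough orientation for candidates, here is a toy, dependency-free Python sketch of that flow; in the actual role this logic would be written with PySpark and Delta Lake on Databricks, and the record fields and quality rules below are illustrative assumptions, not part of the posting.

```python
# Toy medallion-pattern sketch: raw "bronze" records are cleaned into
# "silver", then aggregated into a business-level "gold" summary.
# Fields (order_id, region, amount) are hypothetical examples.
from collections import defaultdict

def to_silver(bronze_rows):
    """Clean raw bronze records: drop malformed rows, normalize types."""
    silver = []
    for row in bronze_rows:
        if row.get("order_id") is None or row.get("amount") is None:
            continue  # data-quality rule: reject incomplete records
        silver.append({
            "order_id": str(row["order_id"]),
            "region": (row.get("region") or "unknown").lower(),
            "amount": float(row["amount"]),
        })
    return silver

def to_gold(silver_rows):
    """Aggregate cleaned silver data into a gold summary per region."""
    totals = defaultdict(float)
    for row in silver_rows:
        totals[row["region"]] += row["amount"]
    return dict(totals)

bronze = [
    {"order_id": 1, "region": "EMEA", "amount": "120.5"},
    {"order_id": 2, "region": None, "amount": "40"},
    {"order_id": None, "region": "APAC", "amount": "99"},  # dropped in silver
]

gold = to_gold(to_silver(bronze))
print(gold)  # {'emea': 120.5, 'unknown': 40.0}
```

On Databricks, each stage would typically be a Delta table written by a PySpark job, with the quality rules enforced via expectations rather than inline `if` checks.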
Eligibility / Qualifications Required:
- Experience: 3–6 years of experience in data engineering, preferably in a consulting or enterprise environment.
- Strong hands-on experience with:
  - Azure Data Platform: Data Factory, Synapse Analytics, Azure Data Lake Storage, Microsoft Fabric, Event Hub/IoT Hub, and Azure Functions.
  - Databricks: PySpark, Spark SQL, Delta Lake, Unity Catalog, and Databricks Workflows.
- Proficiency in Python and SQL for large-scale data processing and transformation.
- Solid understanding of data modeling, medallion architecture, and lakehouse principles.
- Familiarity with CI/CD pipelines, DevOps, and version control (e.g., Git, Azure DevOps).
- Knowledge of data governance, lineage, and observability tools.
- Experience with performance optimization, cost control, and cloud best practices.
Education: Degrees/field of study (required or preferred): Not specified.
Certifications: Not specified.
Optional Skills (Highly Valued):
- Accepting Feedback
- Active Listening
- Agile Scalability
- Amazon Web Services (AWS)
- Analytical Thinking
- Apache Airflow
- Apache Hadoop
- Azure Data Factory
- Communication
- Creativity
- Data Anonymization
- Data Architecture
- Database Administration
- Database Management System (DBMS)
- Database Optimization
- Database Security Best Practices
- Databricks Unified Data Analytics Platform
- Data Engineering
- Data Engineering Platforms
- Data Infrastructure
- Data Integration
- Data Lake
- Data Modeling
- Data Pipeline
- (plus 27 additional skills listed in the official posting)
How to Apply: Not specified in the posting text.
General Conditions: Not specified in the posting text.