This is a remote position.
Responsibilities
• Data Pipeline Engineering: Architect, build, and maintain complex, real-time, and batch data pipelines using Azure Data Factory, Python/PySpark, and Databricks.
• Architecture & Modelling: Design and implement modern data warehouse solutions, data models, and data lakes, optimizing for performance and scalability.
• Data Ingestion & Integration: Ingest, cleanse, and transform data from diverse sources into usable data structures for analytics.
• Security & Governance: Implement security features, including role-based access control (RBAC), data encryption, and governance via Azure Purview.
• Performance Optimization & Monitoring: Troubleshoot and tune data systems and SQL queries for efficiency; monitor data workflows.
• Technical Leadership & Mentorship: Lead code reviews, mentor junior engineers, and define technical standards and best practices.
• Collaboration: Work with data scientists, analysts, and stakeholders to deliver actionable business insights.
Requirements
• Azure Services: Azure Data Factory, Databricks, Synapse Analytics, Data Lake Storage.
• Languages & Tools: Python/PySpark, SQL, Scala, CI/CD (DevOps) tools.
• Processes: ETL/ELT, Data Modelling, Data Processing.
Benefits
Diversity & Inclusion:
At Exavalu, we are committed to building a diverse and inclusive workforce. We welcome applications for employment from all qualified candidates, regardless of race, color, gender, national or ethnic origin, age, disability, religion, sexual orientation, gender identity, or any other status protected by applicable law. We nurture a culture that embraces all individuals and promotes diverse perspectives, where you can make an impact and grow your career.
Exavalu also promotes flexibility depending on the needs of employees, customers, and the business. This might mean part-time work, working outside normal 9-5 business hours, or working remotely. We also offer a welcome-back program to help people return to the workforce after a long break for health or family reasons.