We are seeking an experienced Databricks Developer to design, develop, and optimize scalable data solutions on the Databricks platform. The ideal candidate will have strong expertise in big data technologies, data engineering practices, and cloud-based data platforms. This role involves working closely with data architects, analysts, and business stakeholders to build high-performance data pipelines and analytics solutions.
Requirements
5+ years of experience in Data Engineering or Big Data development.
Strong hands-on experience with Databricks and Apache Spark.
Proficiency in Python, SQL, or Scala.
Experience working with Delta Lake and Spark SQL.
Solid understanding of ETL concepts and data warehousing principles.
Experience with cloud platforms (Azure/AWS/GCP).
Knowledge of data modeling techniques such as star and snowflake schemas.
Experience with version control systems such as Git.