
Databricks Data Engineer - Consultant
Deloitte LLP
Onsite Azerbaijan
Posted 11 hours ago

Job Details

Job Title: Consultant - Databricks Data Engineer

Job Summary

As a Databricks Data Engineer, you will design, build, and optimize enterprise-scale data engineering solutions using Databricks on AWS, Azure, or Google Cloud Platform (GCP). You'll help clients modernize data platforms using Lakehouse patterns, deliver reliable and performant pipelines, and translate technical design into measurable business outcomes. You'll contribute to engineering standards, data governance, and delivery excellence while collaborating closely with stakeholders across business and technology teams.

Recruiting for this role ends on 3/13/2026.

Work You'll Do (Key Responsibilities)
  • Data platform delivery: Design, develop, test, and maintain scalable data pipelines and data products on Databricks to support enterprise analytics and reporting needs.
  • Lakehouse engineering: Implement Databricks Lakehouse solutions using Apache Spark and Delta Lake, delivering batch/streaming pipelines with Delta Live Tables, Autoloader, Structured Streaming, Workflows, and orchestration (e.g., Apache Airflow); see the ingestion sketch after this list.
  • Data modeling: Build curated, governed data products by applying metadata-driven ingestion, PySpark incremental loads, and data quality frameworks, plus 3NF/dimensional modeling and Unity Catalog-based security/compliance controls.
  • Governance & security: Support implementation of governance, security, and compliance controls in cloud data ecosystems, including Unity Catalog and fine-grained access controls.
  • Performance optimization: Monitor and tune jobs, code, clusters, and pipeline designs to improve reliability, throughput, and cost efficiency.
  • DevOps & automation: Implement and maintain CI/CD practices for data engineering deployments using tools such as Azure DevOps, AWS CodePipeline, Jenkins, TFS, or PowerShell.
  • Communication: Clearly explain technical tradeoffs, implementation choices, and business value to technical teams and non-technical stakeholders; contribute to project plans, status updates, and client-facing deliverables.
  • Best practices contribution: Contribute to documentation and reusable patterns for data architecture, integration, modeling, and engineering standards.
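
For illustration only, here is a minimal sketch of the Autoloader-based streaming ingestion pattern named in the Lakehouse engineering responsibility above. It is not Deloitte's implementation: the landing path, schema location, and table name main.bronze.orders are assumed placeholders, and the code assumes a Databricks runtime with Unity Catalog enabled.

    # Hedged sketch: incremental file ingestion with Autoloader into a Delta table.
    # All paths and table names below are illustrative placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.getOrCreate()  # preconfigured on Databricks clusters

    raw = (
        spark.readStream.format("cloudFiles")              # Autoloader source
        .option("cloudFiles.format", "json")               # format of incoming files
        .option("cloudFiles.schemaLocation", "/mnt/bronze/_schemas/orders")
        .load("/mnt/landing/orders")                       # cloud-storage landing zone
    )

    (
        raw.withColumn("_ingested_at", F.current_timestamp())
        .writeStream.format("delta")
        .option("checkpointLocation", "/mnt/bronze/_checkpoints/orders")
        .trigger(availableNow=True)                        # drain the backlog, then stop
        .toTable("main.bronze.orders")                     # Unity Catalog three-level name
    )

The same flow can be written declaratively as a Delta Live Tables pipeline; the availableNow trigger turns the stream into an incremental batch job that Databricks Workflows or Apache Airflow can schedule.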

Required Qualifications
  • Bachelor's degree in Computer Science, Engineering, or related field (Master's preferred).
  • 2+ years of hands-on experience in data engineering with a strong focus on Databricks, deployed on any major cloud (AWS, Azure, GCP).
  • Minimum of 2 years of technical proficiency in:
    • Databricks and cloud-native storage/compute and distributed processing platforms
    • Lakehouse architecture, Apache Spark, Delta Lake
    • Data warehousing and implementation experience with 3NF, dimensional modeling, and enterprise data lakes
    • Databricks components: Delta Live Tables, Autoloader, Structured Streaming, Databricks Workflows, and orchestration.
    • Incremental data loads; building metadata-driven ingestion and data quality frameworks using PySpark (see the merge sketch after this list)
    • Unity Catalog and fine-grained security/access control
  • Proven track record deploying solutions through automated CI/CD pipelines.
  • Experience with performance optimization of pipelines, code, and compute resources.
  • Ability to travel up to 50% on average, based on the work you do and the clients and industries/sectors you serve.
  • Limited immigration sponsorship may be available.
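
As context for the incremental-load and metadata-driven-ingestion items above, the following is a hedged sketch of a Delta Lake MERGE upsert driven by a control table. The table main.ops.etl_control and its columns (source_table, target_table, key_col, watermark_col, last_watermark) are assumptions made for illustration, not details from the posting.

    # Hedged sketch: metadata-driven incremental upsert via Delta MERGE.
    # Control-table name and columns are illustrative assumptions.
    from delta.tables import DeltaTable
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    def load_source(cfg: dict) -> None:
        """Incrementally upsert one source described by a metadata row."""
        incoming = (
            spark.read.table(cfg["source_table"])          # full source table
            .where(f"{cfg['watermark_col']} > '{cfg['last_watermark']}'")  # new rows only
        )
        target = DeltaTable.forName(spark, cfg["target_table"])
        (
            target.alias("t")
            .merge(incoming.alias("s"), f"t.{cfg['key_col']} = s.{cfg['key_col']}")
            .whenMatchedUpdateAll()                        # refresh changed rows
            .whenNotMatchedInsertAll()                     # add new rows
            .execute()
        )

    # Drive every load from the control table rather than hard-coding pipelines.
    for row in spark.read.table("main.ops.etl_control").collect():
        load_source(row.asDict())

In production the watermark would also be advanced transactionally after each successful run; this sketch omits that step for brevity.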

Preferred Qualifications
  • Strong knowledge of one or more cloud ecosystems (AWS, Azure, GCP) and the associated big data stacks is strongly preferred.
  • Experience supporting or enabling AI/ML use cases; an illustrative sketch follows this list.
  • Databricks certifications.
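
On the AI/ML enablement point above, a brief sketch of how a governed Lakehouse table might feed a baseline model with Spark MLlib. The table main.gold.churn_features, its columns, and the label are hypothetical assumptions, not details from the posting.

    # Hedged sketch: training a baseline model from a curated Delta table.
    # Table and column names are hypothetical.
    from pyspark.ml.classification import LogisticRegression
    from pyspark.ml.feature import VectorAssembler
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    df = spark.read.table("main.gold.churn_features")      # governed, curated input

    assembler = VectorAssembler(
        inputCols=["tenure_months", "monthly_spend", "support_tickets"],
        outputCol="features",
    )
    train = assembler.transform(df).select("features", "label")

    model = LogisticRegression(labelCol="label").fit(train)  # simple baseline
    print(model.summary.areaUnderROC)                        # quick sanity metric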

The wage range for this role takes into account the wide range of factors that are considered in making compensation decisions including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. The disclosed range estimate has not been adjusted for the applicable geographic differential associated with the location at which the position may be filled. At Deloitte, it is not typical for an individual to be hired at or near the top of the range for their role, and compensation decisions are dependent on the facts and circumstances of each case. A reasonable estimate of the current range is $84,400 - $155,400.

You may also be eligible to participate in a discretionary annual incentive program, subject to the rules governing the program, whereby an award, if any, depends on various factors, including, without limitation, individual and organizational performance.

Information for applicants with a need for accommodation: https://www2.deloitte.com/us/en/pages/careers/articles/join-deloitte-assistance-for-disabled-applicants.html
Company Details

Deloitte LLP
New York City, NY, United States

Don't imagine what's next. Discover it. We provide industry-leading audit & assurance services, consulting, tax and advisory services to many of...