JB061353 - Databricks Platform Engineer - 1680

  • Start Date:
  • Interview Types: Other
  • Skills: Databricks, data gov..
  • Visa Types: Green Card, US Citiz..
Job Title: Databricks Platform Engineer
Location: Houston, Texas 77079 (On-Site)
Duration: 7-Month Contract
Only on W2
 
Job Description:
We are seeking a Platform Engineer with deep expertise in Databricks administration, data governance, and platform-level engineering standards. This role enables multiple analytics and AI teams to build safely, efficiently, and consistently on a shared Databricks platform by enforcing data quality, ingestion standards, security policies, and cost governance.
You will be the technical owner of platform guardrails, operational stability, access patterns, and cost controls, ensuring the platform scales reliably across business teams.
 
Key Responsibilities
Platform Administration & Governance
·         Administer Databricks workspaces, clusters, jobs, Unity Catalog, compute policies, environment configuration, and platform guardrails.
·         Implement and maintain RBAC and ABAC access controls for secure, compliant data access.
·         Define and enforce data ingestion standards, naming conventions, schema rules, Delta Lake design patterns, and data quality expectations.
 
Data Quality & Ingestion Standards
·         Set platform-wide standards for ingestion pipelines, Delta architecture, lineage, versioning, and validation.
·         Review and approve onboarded pipelines for compliance with platform requirements.
·         Partner with data engineering teams to uplift patterns and enforce consistency.
 
Security, Compliance & Access Controls
·         Manage workspace and catalog permissions, row/column-level policies, attribute-based filtering, and workspace isolation.
·         Collaborate with security teams to maintain compliance and enforce global data protection standards.
 
Cost Management & Monitoring
·         Implement cost thresholds, alerts, compute policies, and usage dashboards to prevent overspend.
·         Monitor job and cluster costs, detect anomalies, and recommend optimization actions.
·         Provide visibility into SKU-level spend and workspace cost patterns.
 
Operational Stability & Observability
·         Ensure platform reliability through automated testing, CI/CD templates, and code governance.
·         Build dashboards to track code compliance, data access, pipeline health, schema drift, and cost thresholds.
·         Resolve platform incidents and prevent recurrence by strengthening guardrails and configurations.
 
Enablement & Best Practices
·         Define “handrails” for building on the platform: ingestion, Delta conventions, CI/CD, observability, and AI/ML patterns.
·         Coach data/analytics teams on compliant onboarding and optimal platform usage.
·         Maintain internal documentation, patterns, code templates, and guidance.
 
Required Skills & Experience
·         5+ years in data engineering or platform engineering, with at least 2–3 years in Databricks administration.
·         Expert knowledge of Unity Catalog, cluster policies, Delta Lake, Spark, workspace configuration, and jobs.
·         Strong grounding in data governance, data modeling, ingestion frameworks, schema enforcement, versioning, and lineage.
·         Proven experience implementing RBAC and ABAC in Databricks or similar platforms.
·         Experience with cost optimization, monitoring, billing logs, and compute governance.
·         Strong Python/PySpark and SQL skills; familiarity with DLT, Airflow, or Databricks Workflows.
·         Strong communication skills with the ability to set standards and influence teams diplomatically.
 
Preferred Qualifications
·         Experience in large-scale enterprise data platforms (Azure/AWS/GCP).
·         Familiarity with Trading & Supply or other high-stakes analytical environments.
·         Experience creating dashboards for governance, cost, compliance, and pipeline health.
·         Experience with CI/CD, GitHub Actions, Azure DevOps, or similar tools.
 
Success Indicators
·         Teams consistently follow ingestion and data standards.
·         Platform costs are stabilized and predictable.
·         Strong adoption of platform guardrails, templates, and operational dashboards.
·         Reduced incidents related to access, cost, or ingestion quality.