Data Engineer with Databricks @ KMD Poland

Poland

Ideal candidate:  

  • Has 3+ years of commercial experience in implementing, developing, or maintaining data load systems (ETL/ELT). 
  • Is proficient in Python, with a solid understanding of data processing challenges. 
  • Has experience working with Apache Spark and Databricks. 
  • Is familiar with MSSQL databases or other relational databases. 
  • Has some experience working with distributed systems on a cloud platform. 
  • Has worked on large-scale systems and understands performance optimization. 
  • Is comfortable with Git and CI/CD practices, and can contribute to deployment processes for data pipelines. 
  • Is proactive, eager to learn, and has a strong can-do attitude. 
  • Communicates fluently in English and Polish, both written and spoken. 
  • Is a team player with excellent collaboration and communication skills. 

Nice to Have:

  • Experience with Azure 
  • Experience working with SSIS 
  • Familiarity with Azure PostgreSQL 
  • Knowledge of Docker and Kubernetes 
  • Exposure to Kafka or other message brokers and event-driven architecture 
  • Experience working in Agile/Scrum environments 

Location: Warsaw (Inflancka 4A) or Remote Work (Poland) 

B2B Contract, Targeted Salary: 150–170 PLN net/hour 

#Python #ApacheSpark #Databricks #MSSQL #Git #CI/CD #Docker #Azure #Kubernetes

Are you ready to join our international team as a Data Engineer with Databricks? Let us tell you why you should...

What products do we develop?

KMD Elements is a cloud-based solution tailored to the energy and utility market. It offers a highly efficient way to handle complex data validation and advanced formula-based settlements on time series. Designed for the international market, KMD Elements automates intricate calculation and billing processes. Key features include an advanced configuration engine, robust automation capabilities, multiple integration options, and a customer-centric interface.
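
To give a flavour of what "data validation and formula-based settlements on time series" can look like in practice, here is a minimal PySpark sketch. It is an illustration only: the schema (meter_id, ts, kwh, rate), the paths, and the validation rule are assumptions for this example, not KMD Elements internals.

```python
# Hypothetical sketch: validate a meter-reading time series, then compute a
# simple formula-based settlement. Schema, paths, and rules are assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("settlement-sketch").getOrCreate()

# Assumed input: one row per meter reading (meter_id, ts, kwh, rate).
readings = spark.read.format("delta").load("/data/meter_readings")

# Validation: drop duplicate readings and flag negative or missing values.
validated = (
    readings
    .dropDuplicates(["meter_id", "ts"])
    .withColumn("is_valid", F.col("kwh").isNotNull() & (F.col("kwh") >= 0))
)

# Settlement formula (illustrative): amount due = sum(consumption * rate)
# per meter and per calendar month.
settlement = (
    validated.filter(F.col("is_valid"))
    .groupBy("meter_id", F.date_trunc("month", "ts").alias("period"))
    .agg(F.sum(F.col("kwh") * F.col("rate")).alias("amount_due"))
)

settlement.write.format("delta").mode("overwrite").save("/data/settlements")
```

A real settlement engine would of course drive the formulas and validation rules from configuration rather than hard-coding them; this only shows the shape of the time-series processing involved.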

How do we work?

#Agile #Scrum #Teamwork #CleanCode #CodeReview #E2Eresponsibility #ConstantImprovement


What will you do?

  • Develop and maintain data delivery pipelines for a leading IT solution in the energy market, leveraging Apache Spark, Databricks, Delta Lake, and Python (see the sketch after this list). 
  • Have end-to-end responsibility for the full lifecycle of the features you develop. 
  • Design technical solutions for business requirements from the product roadmap. 
  • Ensure optimal performance. 
  • Refactor existing code and enhance system architecture to improve maintainability and scalability. 
  • Design and evolve the test automation strategy, including the technology stack and solution architecture. 
  • Prepare reviews, participate in retrospectives, estimate user stories, and refine features, ensuring their readiness for development. 

Requirements: Apache Spark, Databricks, Python, User stories, ETL, Spark, MSSQL, Relational database, Cloud, Git, Azure, SSIS, PostgreSQL, Docker, Kubernetes, Kafka

Additionally: Remote work, Private healthcare, Flat structure, International projects, Sport subscription, Free coffee, Playroom, Free snacks, Free beverages, Modern office, No dress code.
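
For a concrete flavour of the Delta Lake pipeline work listed in the responsibilities above, here is a minimal incremental-load (upsert) sketch using the delta-spark Python API. The table paths and merge keys are hypothetical; it only illustrates the common pattern of idempotent loads into a Delta table.

```python
# Hypothetical sketch of an incremental load into Delta Lake.
# Paths and keys (meter_id, ts) are assumptions for illustration.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.appName("incremental-load").getOrCreate()

# New batch of records from an assumed staging/landing area.
updates = spark.read.parquet("/landing/readings")

target = DeltaTable.forPath(spark, "/warehouse/readings")

# Upsert: update rows that already exist for the key, insert the rest.
# Re-running the same batch leaves the table unchanged (idempotent load).
(
    target.alias("t")
    .merge(updates.alias("u"), "t.meter_id = u.meter_id AND t.ts = u.ts")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```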

Category

data

  • Detailed information about the job offer
    Company: KMD Poland
    Location: Work in Poland
    Industry: data
    Position: Data Engineer with Databricks @ KMD Poland
    Working hours: full-time, 40 hours per week
    Start date: immediately
    Offered salary: not specified
    Offer posted: 29 October 2025
    Position active

Data Engineer with Databricks @ KMD Poland: Frequently asked questions

👉 In which city is the Data Engineer with Databricks @ KMD Poland position offered?

The position is offered in Warsaw or remotely (Poland).

👉 Which company is hiring for this position?

This job offer is from KMD Poland.
