Databricks Data Engineer (SQL, pySpark, Python) @ Upvanta
We are looking for an experienced Data Engineer to join our team and help us build and optimize cutting-edge big data, cloud, and advanced analytics solutions.
Requirements:
- Minimum 5 years of experience in the Data Engineering field
- At least 2 years of hands-on experience with Databricks
- Strong proficiency in SQL, PySpark, and Python
- Solid background in data warehousing, ETL, distributed data processing, and data modeling concepts
- Strong analytical and problem-solving skills, especially in a big data environment
- Experience working with structured, semi-structured, and unstructured data
- Hands-on experience with at least one public cloud platform (Azure, AWS, or GCP)
- Knowledge of relational database design and non-relational storage
- Familiarity with concepts such as Data Marts, Data Warehouses, Data Lakes, and Data Mesh
- Very good command of English (spoken and written)
- Experience with Agile methodologies (Scrum, Kanban) and with DevOps and CI/CD principles
Job offer details:
- Company: Upvanta
- Location: Remote
- Category: data
- Position: Databricks Data Engineer (SQL, pySpark, Python) @ Upvanta
- Working hours: full-time, 40 hours per week
- Start date: immediately
- Salary: not specified
- Posted: 4 October 2025
The position is currently open.
Be the first to apply for this job offer!
Databricks Data Engineer (SQL, pySpark, Python) @ Upvanta: Frequently asked questions
👉 In which city is the Databricks Data Engineer (SQL, pySpark, Python) @ Upvanta position offered?
The position is offered remotely (location: Remote).
👉 Which company is hiring for this position?
This job offer is with Upvanta.