📝 Job offer details
- Programming: Minimum of 1-2 years as a data/software engineer, or in a relevant field
- Python working knowledge: Coding experience in Python, particularly in delivering/maintaining data pipelines and troubleshooting code-based bugs. Experience working with large codebases, an IDE, and Git (a minimal pipeline sketch follows this list).
- Data Skills: Structured approach to data insights and diagnostic skills for data-related issues
- Cloud: Familiarity with cloud platforms (preferably Azure)
- Data Platforms: Knowledge of Databricks, Snowflake, or similar data platforms
- Database Skills: Knowledge of relational databases and working experience with SQL.
- Big Data: Experience using Apache Spark/PySpark is a plus
- Documentation: Experience in creating and maintaining structured documentation.
- Testing: Proficiency in using testing frameworks (pytest) to ensure code reliability and maintainability (see the test in the sketch after this list)
- Version Control: Experience with Git and Gitlab or equivalent
- English Proficiency: B2 level or higher
- Interpersonal Skills: Strong collaboration abilities, willingness to learn new skills and tools, and an adaptive, exploratory mindset. We're looking for candidates who aren't afraid to reach out to others.
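To make the expected skill set concrete, here is a minimal, purely illustrative sketch of a small pipeline step with a pytest check. Every name in it (load_orders, clean_orders, orders.csv) is hypothetical and not part of the client's actual codebase.

```python
# Illustrative sketch only; all names and the CSV schema are hypothetical.
import csv
from pathlib import Path

def load_orders(path: Path) -> list[dict]:
    """Read raw order records from a CSV file into dictionaries."""
    with path.open(newline="") as f:
        return list(csv.DictReader(f))

def clean_orders(rows: list[dict]) -> list[dict]:
    """Drop rows without an order id and cast the amount column to float."""
    cleaned = []
    for row in rows:
        if not row.get("order_id"):
            continue  # unkeyed records cannot be joined downstream
        cleaned.append({**row, "amount": float(row["amount"])})
    return cleaned

def test_clean_orders_drops_unkeyed_rows():
    """pytest collects this automatically: run with `pytest <file>`."""
    rows = [{"order_id": "A1", "amount": "10.5"},
            {"order_id": "", "amount": "3.0"}]
    cleaned = clean_orders(rows)
    assert len(cleaned) == 1 and cleaned[0]["amount"] == 10.5
```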
Join our team to work on enhancing a robust data pipeline that powers our SaaS product, ensuring seamless contextualization, validation, and ingestion of customer data. Collaborate with product teams to unlock new user experiences by leveraging data insights. Engage with domain experts to analyze real-world engineering data and build data quality solutions that inspire customer confidence. Additionally, identify opportunities to develop self-service tools that streamline data onboarding and make it more accessible for our users.
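As one hedged illustration of the validation step mentioned above, the snippet below fails fast on records that would break ingestion. The SensorReading schema and its field names are assumptions chosen for illustration, not the client's actual data model.

```python
# Hedged illustration: the schema and field names are assumptions.
from dataclasses import dataclass

@dataclass
class SensorReading:
    tag: str      # equipment tag the reading is contextualized to
    value: float
    unit: str

def validate_reading(raw: dict) -> SensorReading:
    """Reject records that would break downstream ingestion."""
    missing = {"tag", "value", "unit"} - raw.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return SensorReading(tag=str(raw["tag"]),
                         value=float(raw["value"]),
                         unit=str(raw["unit"]))
```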
Our Client was established with the mission to fundamentally transform the execution of capital projects and operations. Designed by industry experts for industry experts, our Client’s platform empowers users to digitally search, visualize, navigate, and collaborate on assets. Drawing on 30 years of software expertise and 180 years of industrial legacy as part of the renowned Scandinavian business group, our Client plays an active role in advancing the global energy transition. The company operates from Norway, the UK, and the U.S.
Responsibilities:
- Design, build, and maintain data pipelines using Python
- Collaborate with an international team to develop scalable data solutions
- Conduct in-depth analysis and debugging of system bugs (Tier 2)
- Develop and maintain smart documentation for process consistency, including the creation and refinement of checklists and workflows
- Set up and configure new tenants, collaborating closely with team members to ensure smooth onboarding
- Write integration tests to ensure the quality and reliability of data services
- Work with GitLab to manage code and collaborate with team members
- Utilize Databricks for data processing and management (a Spark sketch follows below)
Requirements: Python, Data pipelines, Snowflake, Relational database, Git, Cloud platform, Azure Data, Databricks, SQL, Big Data, Apache Spark, Testing, pytest, GitLab
Additionally: Flexible working hours and remote work possibility, English classes, Active tech community, Training budget, International team, Mentoring program, Compensation of certifications, Free coffee, Kitchen, modern office building.
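Since the responsibilities mention Databricks and the requirements list Apache Spark, here is a hedged PySpark sketch of a simple aggregation job. The paths, column names, and table layout are assumptions for illustration only.

```python
# Hedged PySpark sketch; paths and schema are assumptions, not the
# client's actual data model.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-aggregation").getOrCreate()

readings = spark.read.parquet("/data/raw/readings")  # hypothetical input path
daily = (readings
         .withColumn("day", F.to_date("timestamp"))  # assumes a timestamp column
         .groupBy("tag", "day")
         .agg(F.avg("value").alias("avg_value")))
daily.write.mode("overwrite").parquet("/data/curated/daily_readings")
```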
- Category: data
- Location: Remote, Krakow, Wrocław, Warsaw
- Work schedule: full-time - 40 hours per week
- Start date: immediate
❓ Everything you need to know about this job
👉 Where is this job based?
The job is based in: Remote, Krakow, Wrocław, Warsaw.
👉 Who is hiring for this position?
The offer was posted by N-iX.
👉 What is the work schedule?
Work schedule: full-time - 40 hours per week.
👉 When is the start date?
The start date is immediate.