Data Engineer (Pyspark or Scala) @ Antal

Poland

Must have Requirements

  • PySpark or Scala development and design.
  • Experience using scheduling tools such as Airflow.
  • Experience with most of the following technologies (Apache Hadoop, Pyspark, Apache Spark, YARN, Hive, Python, ETL frameworks, Map Reduce, SQL, RESTful services).
  • Sound knowledge of working on the Unix/Linux platform.
  • Hands-on experience building data pipelines using Hadoop components - Hive, Spark, Spark SQL.
  • Experience with industry-standard version control tools (Git, GitHub), automated deployment tools (Ansible & Jenkins) and requirement management in JIRA.
  • Understanding of big data modelling using relational and non-relational techniques.
  • Experience debugging code issues and communicating the findings to the development team.

Good to have Requirements


  • Experience with Elasticsearch.
  • Experience developing Java APIs.
  • Experience with data ingestion.
  • Understanding of or experience with cloud design patterns.
  • Exposure to DevOps and Agile project methodologies such as Scrum and Kanban.

Are you looking to make a real impact on a global financial project? We are looking for a Data Engineer to join our Client's team.

Company Description


Our Client's team is dedicated to fostering a data-driven culture across the organization. Their strategy focuses on protecting data through robust management policies, engaging colleagues with enhanced training opportunities, and building sustainable capabilities to unlock value for customers.

What We Offer


  • B2B contract and support of the Contractor Care Team
  • Private Medical Care
  • Cafeteria system
  • Life insurance
Your Role

As a key member of the technical team alongside Engineers, Data Analysts and Business Analysts, you will be expected to define and contribute at a high level to many aspects of our collaborative Agile development process:

  • 5+ years' total experience with software design, PySpark development, and automated testing of new and existing components in an Agile, DevOps and dynamic environment.
  • Promoting development standards, code reviews, mentoring and knowledge sharing.
  • Production support and troubleshooting.
  • Implementing tools and processes, handling performance, scale, availability, accuracy and monitoring.
  • Liaising with BAs to ensure that requirements are correctly interpreted and implemented.
  • Participating in regular planning and status meetings, providing input to the development process through involvement in Sprint reviews and retrospectives, and input into system architecture and design.

Requirements: Scala, Airflow, Apache Hadoop, Apache Spark, YARN, Hive, Python, ETL, SQL, REST API, Unix, Linux, Data pipelines, Hadoop, Spark, Git, GitHub, Ansible, PySpark, frameworks, Jenkins, JIRA, Java, Cloud, Design Patterns, DevOps, Kanban

Category

data

  • Job offer details
    Company: Antal
    Location: Work in Poland
    Industry: data
    Position: Data Engineer (Pyspark or Scala) @ Antal
    Working hours: full-time - 40 hours per week
    Start date: immediately
    Offered salary: not specified
    Offer added: 18 Apr 2025
    Position active

Data Engineer (Pyspark or Scala) @ Antal: Frequently Asked Questions

👉 In which city is the Data Engineer (Pyspark or Scala) @ Antal position offered?

The job is offered in Kraków.

👉 Which company is hiring for this position?

This job offer is from the company Antal.
