#10439 | 2019-01-15 Malmö area, Sweden

Big Data Operations Engineer (Hadoop, Spark)

Job Summary:
We are seeking an experienced Big Data Operations Engineer to administer and scale our multi-petabyte Hadoop clusters and their related services. The role focuses primarily on provisioning, ongoing capacity planning, monitoring, and management of the Hadoop platform and the applications/middleware that run on it. This is an onsite role in Malmö.

Job Description:
  • Hands-on experience managing production clusters (Hadoop, Kafka, Spark, and more).
  • Strong development/automation skills; must be very comfortable reading and writing Python and Java code.
  • 10+ years of overall experience, including at least 5 years running Hadoop in production on medium to large clusters.
  • Tools-first mindset: you build tools for yourself and others to increase efficiency and to make hard or repetitive tasks quick and easy.
  • Experience with configuration management and automation.
  • Organized; focused on building, improving, resolving, and delivering.
  • Good communicator within and across teams, willing to take the lead.
Education:
Bachelor's or Master's degree in Computer Science or a similar technical field.

  • Responsible for maintaining and scaling production Hadoop, HBase, Kafka, and Spark clusters.
  • Responsible for the implementation and ongoing administration of Hadoop infrastructure including monitoring, tuning and troubleshooting.
  • Provide hardware architecture guidance, plan and estimate cluster capacity, and handle Hadoop cluster deployment.
  • Improve scalability, service reliability, capacity, and performance.
  • Triage production issues with other operational teams as they occur.
  • Conduct ongoing maintenance across our large scale deployments.
  • Write automation code for managing large Big Data clusters.
  • Work with development and QA teams to design ingestion pipelines and integration APIs, and provide Hadoop ecosystem services.
  • Participate in the occasional on-call rotation supporting the infrastructure.
  • Troubleshoot incidents hands-on: formulate theories, test hypotheses, and narrow down possibilities to find the root cause.
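As an illustration of the kind of automation tooling these responsibilities involve, here is a minimal Python sketch that parses `hdfs dfsadmin -report` style output to compute cluster capacity utilization. The sample report text is abbreviated and illustrative (a real report also contains per-datanode sections), and the field selection is an assumption, not part of this posting.

```python
import re

def parse_dfsadmin_report(report: str) -> dict:
    """Extract byte counts from `hdfs dfsadmin -report` style output."""
    fields = {}
    for key in ("Configured Capacity", "DFS Used", "DFS Remaining"):
        # Each summary line looks like: "DFS Used: 5368709120 (5 GB)"
        m = re.search(rf"^{re.escape(key)}: (\d+)", report, re.MULTILINE)
        if m:
            fields[key] = int(m.group(1))
    return fields

def utilization_pct(fields: dict) -> float:
    """Percentage of configured capacity currently used."""
    return 100.0 * fields["DFS Used"] / fields["Configured Capacity"]

# Abbreviated, illustrative sample of the report's summary section.
sample = """\
Configured Capacity: 53687091200 (50 GB)
Present Capacity: 53687091200 (50 GB)
DFS Remaining: 48318382080 (45 GB)
DFS Used: 5368709120 (5 GB)
"""

fields = parse_dfsadmin_report(sample)
print(f"Cluster utilization: {utilization_pct(fields):.1f}%")  # → Cluster utilization: 10.0%
```

In practice a tool like this would feed a monitoring or capacity-planning pipeline rather than print to stdout, but the parsing pattern is the same.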


Start: as soon as a suitable candidate is found
Duration: long-term assignment
Work location: Malmö area, Sweden
Requirements: Min. 5 years of professional IT experience.
Job type: Freelance

Project closed

We are sorry; we are no longer looking for consultants for this project.
