#10439 | 2018-05-09 Malmö area, Sweden

Hadoop Operations Engineer

Job Summary:
We are seeking a solid Hadoop engineer focused on operations to administer and scale our multi-petabyte Hadoop clusters and their related services. This role focuses primarily on provisioning, ongoing capacity planning, monitoring, and management of the Hadoop platform and the applications/middleware that run on it.

Job Description:
  • Hands-on experience managing production clusters (Hadoop, Kafka, Spark, and more).
  • Strong development/automation skills; must be very comfortable reading and writing Python and Java code.
  • 10+ years of overall experience, with at least 5 years of production Hadoop experience on medium to large clusters.
  • Tools-first mindset: you build tools for yourself and others to increase efficiency and to make hard or repetitive tasks quick and easy.
  • Experience with configuration management and automation.
  • Organized; focused on building, improving, resolving, and delivering.
  • A good communicator in and across teams, able to take the lead.
Education:
Bachelor's or Master's degree in Computer Science or a similar technical field.

Responsibilities:
  • Maintain and scale production Hadoop, HBase, Kafka, and Spark clusters.
  • Implement and administer the Hadoop infrastructure, including monitoring, tuning, and troubleshooting.
  • Provide hardware architecture guidance, plan and estimate cluster capacity, and deploy Hadoop clusters.
  • Improve scalability, service reliability, capacity, and performance.
  • Triage production issues as they occur, together with other operational teams.
  • Conduct ongoing maintenance across our large-scale deployments.
  • Write automation code for managing large Big Data clusters.
  • Work with development and QA teams to design ingestion pipelines and integration APIs, and to provide Hadoop ecosystem services.
  • Participate in the occasional on-call rotation supporting the infrastructure.
  • Troubleshoot incidents hands-on: formulate theories, test hypotheses, and narrow down possibilities to find the root cause.



Start: as soon as a suitable candidate is found
Duration: long-term assignment
Work location: Malmö area, Sweden
Requirements: min. 5 years of professional IT experience
Job type: Freelance


Important: If we consider you a suitable candidate for this position, we will try to contact you as soon as possible. Your data and CV will not be forwarded to the client without your consent.

If you have any questions about this project, please contact us:

Olga Saibel
Sourcing Specialist

Email
Mobile: +46 76 843 39 34