Senior Big Data Engineer (900810)

Location
Utrecht
Start date
19/03/2020
End date
31/12/2020
Hours
36
Client
Hiringdesk ATOS Nederland
Request number
SRQ139619

Location: Utrecht
Start: ASAP
End: 31/12/2020
Commitment: 36 hours per week
Language: Dutch
Rate: €80 / €85

ZZP (freelance): Not allowed
Deadline: 20/3 at 13:00

Senior Big Data Engineer

Imagine…
That you, as a software developer, are responsible for delivering data to reporting, to operational systems, and to the Rabobank app for all departments within Rabobank. This data is key in our communication with clients, in our marketing communication, and for management information. You develop complete and scalable solutions in Spark on a Cloudera platform, using Azure DevOps pipelines.

As a developer you can make a difference
The Data and Content department within the organization is responsible for helping teams with data for all IT systems and applications. The goal is to deliver data in a secure, compliant, and automated way, as fast as possible, following Security DevOps principles and working together with all data teams.
These teams are responsible for data storage, data processing, data flows, and data provisioning. We work in DevOps teams on data modelling, data logistics, data quality, the data lake, and data warehousing, adjusting and maintaining the data flows and data services we provide for systems and departments within Rabobank.

For one of our teams working on the data lake, we are looking for a senior big data engineer.

You will be responsible for:
• Development and maintenance of data pipelines, mainly in Spark (a minimal sketch follows this list).
• Setting up and maintaining CI/CD pipelines in Azure DevOps.
• Setting up monitoring.
• Development of scalable and performant solutions based on open-source technologies (Spark, Sqoop, Hive, HBase, Oozie).
• Streaming flow development with HDF (Hortonworks) NiFi and Spark Streaming.
• Advising on and involvement in the realization of the data lake.
• Working in a multi-disciplinary Scrum team and contributing to T-shaping within the team.
• Providing input and contributing to experiments.
• Keeping informed of developments in the market.
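
For illustration only, below is a minimal sketch of the kind of Spark batch pipeline the responsibilities above describe: reading a Hive table, applying a simple transformation, and writing partitioned Parquet back to the data lake. The table, column, and path names are hypothetical, and a Cloudera/Hive-backed metastore is assumed.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object CustomerDailyLoad {
  def main(args: Array[String]): Unit = {
    // Assumes a Hive-backed metastore on the Cloudera platform
    val spark = SparkSession.builder()
      .appName("customer-daily-load")
      .enableHiveSupport()
      .getOrCreate()

    // Hypothetical source table in the staging area of the data lake
    val raw = spark.table("staging.customer_events")

    // Example transformation: deduplicate events and count them per customer per day
    val daily = raw
      .dropDuplicates("event_id")
      .groupBy(col("customer_id"), to_date(col("event_ts")).as("event_date"))
      .agg(count("*").as("event_count"))

    // Write partitioned Parquet back to the data lake for downstream reporting
    daily.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("/data/curated/customer_daily")

    spark.stop()
  }
}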

The competences/expertise we would like you to have:
• Willingness to learn and being open to new techniques.
• Willingness and capability to teach and share knowledge.
• Good communication skills.
• Strong in collaboration.
• Showing entrepreneurship.
• Flexibility.
• Knowledge of financial products and business processes.
• Being able to work in a structured manner.
• Able to advise and communicate on new solutions.

The knowledge we would like you to have:
• Knowledge of (big) data environments and tooling (Hadoop, Cloudera, Hortonworks, Microsoft Azure, NiFi)
• Relevant HBO or WO education (Computer Science, Information Science, or a business-related field)
• 4-6 years of experience in an ICT data environment
• Extensive experience in software development in an agile environment
• Cloudera 5.12.1
• Streaming (Kafka); a streaming sketch follows this list
• Spark
• Scala
• Java
• Cloud platform (preferably Azure)
• CI/CD pipelines (preferably Azure DevOps)
• SQL + NoSQL (at least 2 years of experience)
• Scripting languages (Python/Bash)
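
As with the batch sketch above, the following is only an illustrative sketch of a Spark Structured Streaming job consuming from Kafka and landing data in the data lake, in line with the streaming, Spark, and Scala items listed. Broker addresses, topic names, and paths are hypothetical placeholders, and the spark-sql-kafka connector is assumed to be on the classpath.

import org.apache.spark.sql.SparkSession

object EventStreamIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("event-stream-ingest")
      .getOrCreate()

    // Read raw events from a Kafka topic (hypothetical broker and topic names)
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker-1:9092")
      .option("subscribe", "customer-events")
      .load()
      .selectExpr("CAST(key AS STRING) AS key", "CAST(value AS STRING) AS value")

    // Land the stream as Parquet in the data lake, with checkpointing for recovery
    val query = events.writeStream
      .format("parquet")
      .option("path", "/data/raw/customer_events")
      .option("checkpointLocation", "/checkpoints/customer_events")
      .start()

    query.awaitTermination()
  }
}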
