•Develop processes using Big Data tools such as Spark Streaming, Sqoop, Storm, Zeppelin, Kafka, Cloudera, Impala, Camus, etc.
•Take accountability for end-to-end data implementations, working with BI designers on one side and with R&D on the other.
•Implement wrapper processes for Metadata Management and Quality Assurance.
•Take part in building the company's new data lake.
•Provide training to other team members.
•B.Sc. in Computer Science/Engineering – must.
•3+ years' experience in Java programming – must.
•Experience with data flow processes – must.
•Familiarity with the Hadoop ecosystem – advantage.