Additional details:
• Create new data solutions, maintain existing ones, and serve as the focal point for all technical aspects of our data activity. You will develop advanced data and analytics capabilities to support our analysts and production systems with validated, reliable data. The ideal candidate is a hands-on professional with strong knowledge of data pipelines and the ability to translate business needs into reliable data flows.
• Create ELT/streaming processes and SQL queries to move data to and from the data warehouse and other data sources.
• Own the data lake: its pipelines, maintenance, improvements, and schema.
• Create new features from scratch, enhance existing ones, and optimize existing functionality.
• Collaborate with stakeholders across the company, such as data developers, analysts, and data scientists, to deliver team tasks. Work closely with all business units and engineering teams to develop a strategy for long-term data platform architecture.
• Implement new tools and development approaches.
• Ensure adherence to coding best practices and development of reusable code
• Continuously monitor the data platform and make recommendations to enhance the system architecture.
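To give a flavor of the ELT responsibilities above, here is a minimal sketch of an extract-load-transform flow. It uses Python's built-in sqlite3 purely as a stand-in for a warehouse such as Redshift or Postgres; the table and column names are hypothetical, not part of this role's actual stack:

```python
import sqlite3

def run_elt(raw_rows):
    """Minimal ELT sketch: land raw rows unchanged (the "EL"),
    then transform inside the warehouse with SQL (the "T")."""
    con = sqlite3.connect(":memory:")  # stand-in for a real warehouse
    con.execute("CREATE TABLE raw_events (user_id INTEGER, amount REAL)")
    # Load: raw data is landed first, before any transformation
    con.executemany("INSERT INTO raw_events VALUES (?, ?)", raw_rows)
    # Transform: aggregation happens inside the warehouse, in SQL
    con.execute("""
        CREATE TABLE user_totals AS
        SELECT user_id, SUM(amount) AS total
        FROM raw_events
        GROUP BY user_id
    """)
    return dict(con.execute("SELECT user_id, total FROM user_totals"))

# Example: three events across two hypothetical users
totals = run_elt([(1, 10.0), (1, 5.0), (2, 7.5)])
```

Landing raw data before transforming it (ELT rather than ETL) is what lets warehouse-side tools like dbt own the transformation step.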
• 4+ years of experience as a Data Engineer
• 4+ years of experience with Spark, including in-depth knowledge of Apache Spark and the broader data engineering ecosystem
• 4+ years of direct experience with SQL (e.g., Redshift/Postgres/MySQL), data modeling, data warehousing, and building ELT/ETL pipelines – MUST
• 2+ years of Python/Node.js experience
• 3+ years of experience in scalable data architecture, fault-tolerant ETL, and monitoring of data quality in the cloud
• Experience working with cloud environments (AWS preferred) and big data technologies (EMR, EC2, S3, Snowflake, Spark Streaming, Hive, dbt)
• Exceptional troubleshooting and problem-solving abilities, including debugging and root-causing defects in large-scale systems
• Deep understanding of distributed data processing architectures and tools such as Kafka, Spark, and Airflow
• Experience with design patterns and coding best practices; solid understanding of data modeling concepts and techniques
• Proficiency with modern source control systems, especially Git
• Basic Linux/Unix system administration skills
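The data-quality monitoring called out in the requirements above can be illustrated with a minimal row-level quality gate. This is only a sketch under assumed rules; the field names (`user_id`, `amount`) and checks are hypothetical, not a description of this team's actual pipeline:

```python
def check_quality(rows, required_fields):
    """Minimal data-quality gate: split rows into passed/failed,
    flagging missing required fields or negative amounts."""
    passed, failed = [], []
    for row in rows:
        missing = [f for f in required_fields if row.get(f) is None]
        amount = row.get("amount")
        bad_amount = amount is not None and amount < 0
        (failed if missing or bad_amount else passed).append(row)
    return passed, failed

# Example: one clean row, one with a missing key, one with a bad value
good, bad = check_quality(
    [{"user_id": 1, "amount": 10.0},
     {"user_id": None, "amount": 5.0},
     {"user_id": 2, "amount": -3.0}],
    required_fields=["user_id"],
)
```

In a production pipeline the same idea would typically run as scheduled checks (e.g. in Airflow or dbt tests), with failed rows routed to a quarantine table and alerting instead of being silently dropped.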
Nice to have
• BS or MS degree in Computer Science or a related technical field
• Experience with data warehouses
• NoSQL and large-scale databases
• Understanding of fintech business processes
• DevOps experience on AWS
• Microservices
• Experience with dbt
What Else
• Energetic and enthusiastic about data
• Analytical
• Self-motivated; works well both independently and as part of a team
• A team player with excellent verbal/written communication and data presentation skills, including experience communicating with both business and technical teams
• Loves exploring new technologies; a fast self-learner who can quickly master new concepts, disciplines, and methods