Designs, modifies, develops, and implements solutions for ingesting, provisioning, and securing data in our Lake (S3), Warehouse (Redshift), and Marts (Redshift and MySQL) using Scala, Apache Spark, Hive, Apache Pig, MapReduce, AWS Data Pipeline, AWS Glue, AWS EMR, and Apache Kafka; experience with Informatica and Talend is also expected. Participates in the testing process through test review and analysis, test witnessing, and certification of software. Requires 0-2 years of experience in the field or in a related area. Has knowledge of commonly used concepts, practices, and procedures within the field. Relies on instructions and pre-established guidelines to perform the functions of the job.
- Hands-on experience in Java, Scala, and Spark
- Hands-on experience with Kafka, NiFi, AWS, Maven, Stash, and Bamboo
- Hands-on experience writing MapReduce jobs
- Experience with Informatica/Talend is a plus
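To illustrate the MapReduce experience the role asks for, below is a minimal sketch of the map and reduce phases of a word count, written in plain Java collections rather than the Hadoop API so it is self-contained; the class and method names are illustrative, not part of any real framework.

```java
import java.util.*;
import java.util.stream.*;

// Illustrative MapReduce-style word count (no Hadoop dependency).
public class WordCount {
    // Map phase: emit a (word, 1) pair for each token in a line.
    static Stream<Map.Entry<String, Integer>> mapPhase(String line) {
        return Arrays.stream(line.toLowerCase().split("\\s+"))
                     .filter(w -> !w.isEmpty())
                     .map(w -> Map.entry(w, 1));
    }

    // Shuffle + reduce phase: group pairs by key and sum the counts.
    static Map<String, Integer> run(List<String> lines) {
        return lines.stream()
                    .flatMap(WordCount::mapPhase)
                    .collect(Collectors.groupingBy(Map.Entry::getKey,
                             Collectors.summingInt(Map.Entry::getValue)));
    }
}
```

For example, `WordCount.run(List.of("data lake data"))` yields `{data=2, lake=1}`; a production MapReduce or Spark job follows the same map/shuffle/reduce structure, distributed across a cluster.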