Data Engineer: Big Data
* Minimum of 6 years of experience in Big Data technologies.
* Minimum of 4 years of experience in Python and Scala programming.
* Experience developing applications on Big Data and Cognitive technologies, including API development.
* Application development background, along with knowledge of analytics libraries, open-source Natural Language Processing, statistical, and Big Data computing libraries.
* Expertise in Spark, Scala and Kafka technologies.
* Ability to demonstrate micro/macro designing; familiarity with Unix commands and basic work experience in Unix shell scripting.
* Demonstrated ability in solutioning covering data ingestion, data cleansing, ETL, data mart creation, and exposing data to consumers.
Roles & Responsibilities
* As a Data Engineer, you will develop, maintain, evaluate, and test Big Data solutions. You will be involved in the design of data solutions using Hadoop-based technologies along with Python and Spark programming.
* Responsible for ingesting data from files, streams, and databases.
* Process the data with Spark, Scala, Kafka, Hive, and Sqoop.
* Develop Hadoop applications using Hortonworks or another Hadoop distribution.
* Experience pulling data from various database systems and network elements, as well as unstructured text from the web, social media sites, and other domain-specific files.
* Develop efficient software code leveraging Python and Big Data technologies for the various use cases built on the platform.
Selection Process: Aptitude Tests, Technical Tests, Interviews, Medical Health Checkup.
Compensation: Best in Industry
Location: Remote (Work From Home)