The Big Data Architect will work within the Capital Markets / Commodities / Financial space to
architect, develop, and implement high-end software products and services involving large-scale data processing, such as data ingestion, in-stream analytics, and batch analytics.
5+ years of hands-on experience building various elements of Big Data solutions, such as data ingestion, data modeling (HDFS and NoSQL), and batch and stream data processing and analytics, leveraging technologies including but not limited to Hadoop, HDFS, MapReduce, YARN, Hive, Spark, Spark Streaming, Storm, Kafka, ELK, Avro, Parquet, HBase, Sqoop, Flume, NiFi, and Oozie
10+ years of industry experience with a Java/JEE background and command of programming languages such as Java and Scala
Solid experience designing and architecting large-scale Big Data applications on on-premise and/or cloud platforms, using Big Data architecture patterns such as data lakes, Lambda, and Kappa
Solid experience applying agile methodology and quality practices in a Big Data environment
Active involvement in thought leadership: development of best practices and authorship of white papers and/or points of view (POVs) on Big Data technology
Self-starter and self-learner with a keen interest in growing in the analytics space
Excellent consulting, oral and written communication, presentation, and analytical skills
Bachelor’s or Master’s degree in Computer Science or a related field
Primary Locations: New Jersey area; New York City, New York area
Travel: Openness to frequent overnight travel and work at client offices is required