Lead Software Development Engineer

Date Posted: Mar 7, 2018

Location: Cary, NC, US, 27513

Company: MetLife


Role Value Proposition: 

We are looking for a Big Data Engineer to collect, store, process, and analyze very large data sets, transforming the data with Big Data tools for analytics and BI needs. The primary focus will be on choosing optimal solutions for these purposes, then implementing, maintaining, and monitoring them. You will also be responsible for integrating these solutions with the architecture used across the company.


Key Responsibilities:  

• Ingesting huge volumes of data from various platforms for analytics needs.

• Building and implementing ETL processes using Big Data tools such as Spark (Scala/Python) and NiFi.

• Monitoring performance and advising on any necessary infrastructure changes.

• Defining data security principles and policies using Ranger and Kerberos.

• Working with IT and business customers to develop and design requirements and formulate technical designs.


Supervisory Responsibilities:  Leads and motivates project team members who are not direct reports, and provides work direction to lower-level staff members.



Essential Business Experience and Technical Skills:


  • 10+ years of solutions development experience
  • Proficient understanding of distributed computing principles
  • Management of Hadoop clusters with all included services, preferably Hortonworks
  • Proficiency with Hadoop v2, MapReduce, and HDFS
  • Proficiency in Unix scripting
  • Experience building stream-processing systems using solutions such as Storm or Spark Streaming
  • Good knowledge of Big Data querying tools, such as Pig, Hive, and Impala
  • Extensive experience with Spark and Scala
  • Experience with Java/MapReduce, Storm, Kafka, Flume, and data security using Ranger
  • Experience integrating data from multiple data sources, e.g., with Sqoop
  • Experience with NoSQL databases, such as HBase, Cassandra, and MongoDB



  • Proficiency in HDFS, Hive, Spark, Scala, Python, HBase, Pig, Flume, etc.
  • Unix and Python scripting
  • SQL tuning and data warehousing concepts



  • Experience with Big Data ML toolkits, such as Mahout, Spark ML, or H2O
  • Any experience building RESTful APIs
  • Exposure to analytical tools such as SAS and SPSS is a plus
  • Experience with Informatica PC 10, including implementing pushdown processing into the Hadoop platform, is a huge plus



At MetLife, we’re leading the global transformation of an industry we’ve long defined. United in purpose, diverse in perspective, we’re dedicated to making a difference in the lives of our customers.



MetLife is a proud equal opportunity/affirmative action employer committed to attracting, retaining, and maximizing the performance of a diverse and inclusive workforce. It is MetLife's policy to ensure equal employment opportunity without discrimination or harassment based on race, color, religion, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender identity or expression, age, disability, national origin, marital or domestic/civil partnership status, genetic information, citizenship status, uniformed service member or veteran status, or any other characteristic protected by law.

MetLife maintains a drug-free workplace.

For immediate consideration, click the Apply Now button. You will be directed to complete an online profile. Upon completion, you will receive an automated confirmation email verifying that you have successfully applied to the job.

Requisition #: 90301 

Nearest Major Market: Raleigh