Lead Software Development Engineer

Date Posted: Dec 10, 2017

Location: Cary, NC, US, 27513

Company: MetLife


Role Value Proposition: 

We are looking for a Big Data Engineer to collect, store, process, and analyze very large data sets, transforming data with Big Data tools or ETL tools such as Informatica and Alteryx to support analytics and BI needs. The primary focus will be on choosing optimal solutions for these purposes, then implementing, maintaining, and monitoring them. You will also be responsible for integrating those solutions with the architecture used across the company.

    

Key Responsibilities:  

  • Selecting and integrating the Big Data tools and frameworks required to provide requested capabilities
  • Building and implementing ETL processes using Big Data tools or ETL tools such as Informatica
  • Monitoring performance and advising on any necessary infrastructure changes
  • Defining data retention policies
  • Working with IT and business customers to develop and design requirements and formulate technical designs

 

Supervisory Responsibilities:  Leads and motivates project team members who are not direct reports, and provides work direction to lower-level staff members.

 

 

Essential Business Experience and Technical Skills:

 

  • 10+ years of solutions development experience
  • Proficient understanding of distributed computing principles
  • Demonstrated expertise utilizing ETL tools (Informatica PC version 9 or above) and RDBMS platforms such as SQL Server, Oracle, and DB2
  • Demonstrated expertise in Spark; proficiency in Scala and Python is a must
  • Management of a Hadoop cluster, with all included services (Hortonworks preferred)
  • Proficiency with Hadoop v2, MapReduce, and HDFS
  • Proficiency in Unix scripting
  • Experience with Informatica PC 10 and implementing pushdown processing into the Hadoop platform is a strong plus
  • Experience building stream-processing systems using solutions such as Storm or Spark Streaming
  • Good knowledge of Big Data querying tools, such as Pig, Hive, and Impala
  • Experience integrating data from multiple sources using tools such as Sqoop and Kafka
  • Experience with NoSQL databases, such as HBase, Cassandra, and MongoDB
  • Knowledge of data ingestion and ETL frameworks, such as Flume

 

Required:

  • Proficiency in HDFS, Hive, Spark, Pig, Flume, Kafka, Scala, etc.
  • Informatica PC, IDQ version 9 or above
  • Unix scripting
  • SQL tuning and DW concepts

 

Preferred:

  • Experience with Big Data ML toolkits, such as Mahout, SparkML, or H2O
  • Exposure to analytical tools such as SAS or SPSS is a plus
     

 

At MetLife, we’re leading the global transformation of an industry we’ve long defined. United in purpose, diverse in perspective, we’re dedicated to making a difference in the lives of our customers.

 

 

MetLife is a proud equal opportunity/affirmative action employer committed to attracting, retaining, and maximizing the performance of a diverse and inclusive workforce. It is MetLife's policy to ensure equal employment opportunity without discrimination or harassment based on race, color, religion, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender identity or expression, age, disability, national origin, marital or domestic/civil partnership status, genetic information, citizenship status, uniformed service member or veteran status, or any other characteristic protected by law.

MetLife maintains a drug-free workplace.

For immediate consideration, click the Apply Now button. You will be directed to complete an on-line profile. Upon completion, you will receive an automated confirmation email verifying you have successfully applied to the job.

Requisition #: 90301 


Nearest Major Market: Raleigh