Progressive Staffing - Careers

Hadoop Service Operations - Chicago, Illinois


Date Posted:

07-24-20 (02:21 AM)

Location:

Chicago, Illinois, United States

Salary:

Openings:

1


Description:

JOB DESCRIPTION:

  • This is not a database administration role.
  • The role is in Hadoop Operations.
  • Must be ready for one-week on-call rotations with 24x7 phone availability.
  • 2+ years of demonstrable Hadoop experience required.
  • Willingness to learn continuously.
  • Solid UNIX skills - debugging, scripting, and automation in a Hadoop environment are a must.
  • Hortonworks, HDFS, Cloudera, and Hive must appear as experience on the resume.
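The "UNIX debugging, scripting and automation in a Hadoop environment" requirement above is the kind of task a candidate should be able to script on demand. As a minimal sketch (the log path, format, and sample entries here are illustrative assumptions, not site-specific values), a triage script might summarize ERROR/WARN lines in a Hadoop daemon log:

```shell
#!/bin/sh
# Minimal log-triage sketch: count ERROR and WARN lines in a Hadoop
# daemon log and print the most recent errors. The sample log below is
# fabricated for illustration; a real NameNode log path would look like
# /var/log/hadoop-hdfs/hadoop-hdfs-namenode-<host>.log (assumption).

LOG="${1:-/tmp/sample-namenode.log}"

# Create a small sample log if none exists, so the sketch is runnable
# outside a cluster.
if [ ! -f "$LOG" ]; then
  cat > "$LOG" <<'EOF'
2020-07-24 02:01:10,100 INFO  namenode.NameNode: STARTUP_MSG
2020-07-24 02:02:11,200 WARN  hdfs.DFSClient: Slow ReadProcessor
2020-07-24 02:03:12,300 ERROR datanode.DataNode: IOException on xceiver
2020-07-24 02:04:13,400 ERROR namenode.FSNamesystem: Checkpoint failed
EOF
fi

# Summarize severity counts, then show the two most recent errors.
errors=$(grep -c ' ERROR ' "$LOG")
warns=$(grep -c ' WARN ' "$LOG")
echo "errors=$errors warns=$warns"
grep ' ERROR ' "$LOG" | tail -n 2
```

In practice a script like this would be cron-scheduled against the real daemon logs and wired to alerting rather than stdout.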

Keywords:

  • BS in Engineering, Math, or Physics
  • Hadoop Operations
  • Unix - debugging, scripting and automation
  • Azure - Hadoop

 Job Purpose: 

This position is responsible for 24x7 operations infrastructure support of all production and non-production environments in its area of responsibility; using analytics to improve service, availability, and customer interactions; resolving incidents and issues escalated by vendors; transitioning knowledge from design/build teams; and managing operational vendor performance.

Required Job Qualifications: 

  • Bachelor's degree in Computer Science, Information Systems, or a related field, or 7 years of equivalent work experience
  • Knowledge of ITIL v3 framework
  • Knowledge of ITSM systems
  • Knowledge of all Infrastructure technologies, including monitoring and event management technologies
  • Supplier management
  • Infrastructure domain knowledge
  • Continuous improvement
  • Incident management
  • Knowledge of required technologies (including 3rd-party solutions)
  • Problem Management / RCA
  • Customer service oriented
  • Adaptability and ability to introduce/manage change
  • Drive conflict management in high-pressure situations
  • Performance / metrics-driven decision making

Preferred Job Qualifications:

  • Bachelor's degree in a relevant field (IT, Engineering) or equivalent career experience
  • Responsible for implementation and ongoing administration of Hadoop infrastructure
  • Aligning with the systems engineering team to propose and deploy new hardware and software environments required for Hadoop and to expand existing environments
  • Working with data delivery teams to set up new Hadoop users, including creating Linux users, setting up Kerberos principals, and testing HDFS, Hive, HBase, and YARN access for the new users
  • Cluster maintenance as well as creation and removal of nodes using tools like Cloudera Manager Enterprise, etc.
  • Performance tuning of Hadoop clusters and Hadoop MapReduce routines
  • Screen Hadoop cluster job performances and capacity planning
  • Monitor Hadoop cluster connectivity and security
  • Manage and review Hadoop log files
  • File system management and monitoring
  • HDFS support and maintenance
  • Diligently teaming with the infrastructure, network, database, application, and business intelligence teams to guarantee high data quality and availability
  • Collaborating with application teams to perform Hadoop updates, patches, version upgrades when required
  • Work with Vendor support teams on support tasks
  • General operational expertise such as good troubleshooting skills, understanding of system's capacity, bottlenecks, basics of memory, CPU, OS, storage, and networks
  • Most essential: the ability to deploy a Hadoop cluster, add and remove nodes, keep track of jobs, monitor critical parts of the cluster, configure NameNode high availability, and schedule, configure, and take backups
  • Solid understanding of on-premise and cloud network architectures
  • Additional Hadoop skills such as Sentry, Spark, Kafka, Oozie, etc.
  • Oral/written communication skills.
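The new-user onboarding duty listed above (Linux user, Kerberos principal, HDFS home directory, access test) can be sketched as a dry-run script that only prints the commands an operator would review and run. The realm, group, keytab path, quota, and HiveServer2 address are illustrative assumptions, not site values:

```shell
#!/bin/sh
# Dry-run sketch of Hadoop user onboarding. run() PRINTS each command
# instead of executing it, so the plan can be reviewed first.
# EXAMPLE.COM, the hadoop group, the keytab path, the 500g quota, and
# the hiveserver host are assumptions for illustration.

USER_NAME="${1:-jdoe}"
REALM="EXAMPLE.COM"          # assumption: site Kerberos realm

run() { echo "+ $*"; }       # print instead of execute (dry run)

# 1. Local Linux account on the edge/gateway node
run useradd -m -g hadoop "$USER_NAME"

# 2. Kerberos principal and keytab
run kadmin -q "addprinc -randkey ${USER_NAME}@${REALM}"
run kadmin -q "ktadd -k /etc/security/keytabs/${USER_NAME}.keytab ${USER_NAME}@${REALM}"

# 3. HDFS home directory, ownership, and a space quota (illustrative)
run hdfs dfs -mkdir -p "/user/${USER_NAME}"
run hdfs dfs -chown "${USER_NAME}:hadoop" "/user/${USER_NAME}"
run hdfs dfsadmin -setSpaceQuota 500g "/user/${USER_NAME}"

# 4. Smoke-test access as the new user (HDFS, Hive, YARN)
run hdfs dfs -ls "/user/${USER_NAME}"
run beeline -u "jdbc:hive2://hiveserver:10000" -e "show databases;"
run yarn application -list
```

Dropping the `run` wrapper (or making it conditionally execute) turns the same script into the real onboarding automation once the printed plan has been reviewed.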