Progressive Staffing - Careers

Infrastructure Engineer Consultant – Hadoop - Dallas, Texas, USA

Date Posted:

07-20-20 (02:24 AM)


Dallas, Texas, United States

This position is responsible for collaborating with the Solutions Engineering, Infrastructure Operations, and Infrastructure Service Management teams in the design and build of infrastructure solutions/blueprints for the area of responsibility; participating in the design and build of repeatable patterns (build-kits) to improve deployment times for non-prod and prod environments; and transitioning knowledge to Infrastructure Operations.

Required Job Qualifications:

  • Bachelor's Degree and 5 years of Information Technology or relevant experience, OR Technical Certification and/or College Courses and 7 years of Information Technology experience, OR 9 years of Information Technology experience.
  • Operations Management.
  • Experience with or advanced knowledge of HDFS, Spark, MapReduce, Hive, HBase, ZooKeeper, Impala, Solr, Oozie, NiFi, Flink, Sqoop, Pig, MongoDB, Kafka, and Flume.
  • Ability to simplify and standardize complex concepts and processes.
  • Understanding of business priorities (e.g., vision), trends (e.g., industry knowledge), and markets (e.g., existing/planned).
  • Oral and written communication skills.
  • Problem-solving/analytical skills, tools, and techniques.
  • Supplier management.
  • Ability to prioritize and make trade-off decisions.
  • Ability to drive cross-functional execution.
  • Adaptability and ability to introduce/manage change.
  • Teamwork and collaboration.
  • Organized and detail-oriented.

Preferred Job Qualifications:

  • Bachelor's Degree (Computer Science, MIS, or related degrees).
  • 8+ years of experience with Big Data solutions and techniques.
  • Willingness to provide on-call support as needed.
  • 4+ years of Hadoop application infrastructure engineering and development methodology background.
  • Experience with Ambari, Hortonworks, HDInsight, Cloudera distribution (CDH), and Cloudera Manager.
  • Experience with cloud (Azure/AWS) big data solutions using EMR, HDInsight, Kinesis, Azure Event Hubs, etc.
  • Experience with evaluating COTS applications.
  • Strong understanding of and experience with different cloud models (SaaS, PaaS, IaaS, DBaaS) and Infrastructure as Code (IaC).
  • Strong knowledge of cloud data warehouses such as Synapse, CDP, SQL Data Warehouse, etc.
  • Experience with multi-tenant platforms, considering data segregation, resource management, access control, etc.
  • Strong programming experience with Red Hat Linux, UNIX shell scripting, Java, Python, Scala, RDBMS, NoSQL, and ETL solutions.
  • Experience with Kerberos, TLS encryption, SAML, LDAP.
  • Experience with full Hadoop SDLC deployments with associated administration and maintenance functions.
  • Experience developing Hadoop integrations for data ingestion, data mapping, and data processing capabilities.
  • Experience with designing application solutions that make use of enterprise infrastructure components such as storage, load-balancers, 3-DNS, LAN/WAN, and DNS.
  • Experience with concepts such as high-availability, redundant system design, disaster recovery and seamless failover.
  • Expertise in common Hadoop file-formats including Avro, Parquet, and ORC.
  • Experience with automation using Ansible, CloudFormation, ARM, PowerShell, etc.
  • Experience with distributed systems/data lakes and MPP databases capable of efficiently processing terabytes of data, such as Teradata, Hadoop, and Netezza.
  • Ability to develop proofs-of-concept that benchmark key metrics for tool and architecture evaluation of big data and cloud technologies.
  • Experience planning and deploying new Hadoop infrastructure, including upgrades, cluster maintenance, troubleshooting, capacity planning, and resource optimization as part of Infrastructure Operations.
  • Experience with data ingestion from Kafka and NiFi, and the use of such data for streaming analytics.
  • Overall knowledge of Big Data technology trends, vendors, and products.