Job Type
Full Time
Job Details
Job Description
The Big Data team plays a critical and strategic role in ensuring that ServiceNow can exceed the availability and performance SLAs of the customer instances powered by the ServiceNow Platform, deployed across the ServiceNow cloud and the Azure cloud. Our mission is to deliver state-of-the-art Monitoring, Analytics, and Actionable Business Insights by employing new tools, Big Data systems, an Enterprise Data Lake, AI, and Machine Learning methodologies that improve efficiency across a variety of functions in the company (Cloud Operations, Customer Support, Product Usage Analytics, and Product Upsell Opportunities), enabling a significant impact on both top-line and bottom-line growth. The Big Data team is responsible for:
- Collecting, storing, and providing real-time access to large amounts of data
- Providing real-time analytics tools and reporting capabilities for various functions, including:
- Monitoring, alerting, and troubleshooting
- Machine Learning, anomaly detection, and prediction of P1 incidents
- Capacity planning
- Data analytics and deriving Actionable Business Insights
- Deploying, production-monitoring, maintaining, and supporting Big Data infrastructure and applications on ServiceNow Cloud and Azure environments.
- Driving Big Data deployment automation from vision to delivery: automating Big Data foundational modules (Cloudera CDP), prerequisite components, and applications, leveraging Ansible, Puppet, Terraform, Jenkins, Docker, and Kubernetes to deliver end-to-end deployment automation across all ServiceNow environments.
- Automating Continuous Integration / Continuous Deployment (CI/CD) data pipelines for applications, leveraging tools such as Jenkins, Ansible, and Docker.
- Performance tuning and troubleshooting of various Hadoop components and other data analytics tools in the environment: HDFS, YARN, Hive, HBase, Spark, Kafka, RabbitMQ, Impala, Kudu, Redis, Hue, Kerberos, Tableau, Grafana, MariaDB, and Prometheus.
- Providing production support to resolve critical Big Data pipeline and application issues, mitigating or minimizing any impact on Big Data applications. Collaborating closely with Site Reliability Engineers (SRE), Customer Support (CS), Developers, QA, and Systems Engineering teams to replicate complex issues, leveraging broad experience with UI, SQL, full-stack, and Big Data technologies.
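The monitoring and anomaly-detection work described above can be sketched in miniature. The following is an illustrative, hypothetical example (not ServiceNow's actual pipeline) of flagging anomalous metric readings with a rolling z-score; the function name, window size, and sample data are all assumptions for demonstration:

```python
from statistics import mean, stdev

def zscore_anomalies(readings, window=10, threshold=3.0):
    """Flag indices whose value deviates more than `threshold` standard
    deviations from the preceding `window` readings.

    Illustrative only; a production pipeline would stream metrics from
    sources such as Kafka or Prometheus rather than take a plain list.
    """
    anomalies = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Steady latency around 100 ms, with one spike at index 10
latencies = [100, 101, 99, 100, 102, 98, 100, 101, 99, 100, 250, 100]
print(zscore_anomalies(latencies))  # -> [10]
```

In practice an alerting rule like this would feed the P1-prediction and troubleshooting workflows listed above, with the threshold tuned per metric.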
Qualifications
- 3+ years of overall experience as a Big Data DevOps / Deployment Engineer
- Demonstrated expert-level experience delivering end-to-end deployment automation leveraging Puppet/Ansible/Terraform, Jenkins, Docker, Kubernetes, or similar technologies.
- Good understanding of the Hadoop/Big Data ecosystem. Good knowledge of querying and analyzing large amounts of data on Hadoop HDFS using Hive and Spark Streaming, and of working with systems such as HDFS, YARN, Hive, HBase, Spark, Kafka, RabbitMQ, Impala, Kudu, Redis, Hue, Tableau, Grafana, MariaDB, and Prometheus.
- Experience securing the Hadoop stack with SSL, Ranger, LDAP, and Kerberos KDC
- Experience supporting CI/CD pipelines on Cloudera in native cloud and Azure/AWS environments
- In-depth knowledge of Linux internals (RHEL 7.x/8.x) and shell scripting
- Good knowledge of CI/CD and Git
- Good knowledge of Python, Bash, Groovy, or other scripting languages
- Ability to learn quickly in a fast-paced, dynamic team environment
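The scripting skills listed above are typically exercised on tasks like the capacity planning mentioned earlier. As a hypothetical sketch (the data format and threshold are assumptions, not ServiceNow's), a small Python helper might flag clusters nearing storage capacity:

```python
def clusters_near_capacity(usage, threshold=0.80):
    """Return cluster names whose used/total ratio meets or exceeds
    `threshold`. `usage` maps cluster name -> (used_tb, total_tb).

    Hypothetical helper for illustration; real figures would come from
    tooling such as `hdfs dfsadmin -report` or Prometheus exporters.
    """
    return sorted(
        name for name, (used, total) in usage.items()
        if total > 0 and used / total >= threshold
    )

usage = {
    "cdp-prod-1": (850.0, 1000.0),  # 85% used
    "cdp-prod-2": (400.0, 1000.0),  # 40% used
    "cdp-stage":  (790.0, 1000.0),  # 79% used
}
print(clusters_near_capacity(usage))  # -> ['cdp-prod-1']
```

A one-off report like this is the kind of glue scripting the role calls for; anything recurring would normally be folded into the monitoring stack instead.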
About the Company
ServiceNow
Santa Clara, CA, United States
At ServiceNow, our technology makes the world work for everyone, and our people make it possible. We deliver digital workflows that create great...