Netskope is a fast-growing cloud security company in the Cloud Access Security Broker (CASB) market that provides discovery, visibility, monitoring, granular control, and security for both sanctioned and unsanctioned cloud apps. Many enterprises believe they have a low rate of cloud app adoption or that a "block all cloud apps" posture protects them; the most recent Netskope Cloud Report, however, found an average of 1,031 actively used cloud apps per enterprise. Netskope enables IT organizations to discover apps, direct usage, protect sensitive data, defend against threats, and ensure compliance in real time, on any device, including native apps on mobile devices, whether on premises ("in the enterprise network") or remote ("out of network").
Netskope has been featured in The Wall Street Journal, Forbes, and TechCrunch and was recently named a SINET 16 2015 Innovator. The company's technology has been recognized by leading publications including SC Magazine, Security Products Magazine, and CIO. Netskope is headquartered in Santa Clara, California. Visit us at www.netskope.com and follow us on Twitter @Netskope and Facebook.
Job Duties:
- Develop, create, and modify general computer applications software;
- Analyze user needs and develop software solutions;
- Design and customize software for client use with the aim of optimizing operational efficiency;
- Analyze and design databases within an application area;
- Serve as a key member in building data transport, collection, and storage, and create a robust and scalable data platform;
- Take ownership of the company's core data pipeline that powers Netskope's Cloud Scale metrics;
- Leverage data expertise to help evolve data models in various components of the data stack;
- Lead in architecting, building, and launching highly scalable and reliable data pipelines to support Netskope's growing data processing and analytics needs;
- Utilize distributed systems technologies such as Hadoop, Spark, Presto, Kafka, Hive, and Flink;
- Spot bad schemas and design good ones, and build, break, and fix production data pipelines;
- Utilize SQL skills and data stores such as Elasticsearch, Druid, Postgres, and Teradata, as well as programming languages such as Java and Python;
- Mentor junior engineers in designing and building big data workflows;
- Serve as a key member in establishing best practices and code hygiene in the data pipeline;
- Work closely with the Data SRE(s) to effectively monitor and troubleshoot jobs;
- Leverage the cloud to optimize cost and computing efficiency.
Minimum Requirements: Master's degree in Computer Science, Engineering, or a related field and 5 years of experience in the job offered or in a software engineering-related occupation.
Position requires at least 4 years of experience in each of the following skills:
- Utilize knowledge of Java/JVM technologies to design, build, modify, test, debug and deploy software at large scale;
- Utilize knowledge of Python or other scripting languages to build tools and infrastructure to configure, monitor and measure the performance of software systems;
- Utilize knowledge of large-scale distributed database technology such as HBase to design and build a distributed database system for storing and querying data;
- Utilize knowledge of distributed coordination software such as Apache ZooKeeper to design and build distributed workflows for scheduling and running various software services.
Position requires at least 3 years of experience in each of the following skills:
- Utilize knowledge of Object Oriented Programming (OOP) to build software using sound design;
- Utilize knowledge of development on the Unix platform to build, test, and deploy software to a production environment.
TO APPLY: Please e-mail your resume to email@example.com and indicate job code KJB022 on the resume. Proof of authorization to work in the U.S. is required if hired. The company is an Equal Opportunity Employer and fully supports affirmative action practices.