The Position: As a system engineer at Kargo, you will be responsible for building the very foundation on which our engineering department and its products run. You will be engineering our cloud infrastructure to be durable, scalable, and easy to manage. Some of the tasks required to achieve this goal include:
Take ownership of the monitoring and logging tools that the rest of the department will use to gain insight into our environments
Improve the scalability and durability of our systems via infrastructure updates. We're cloud-first and hosted on Amazon Web Services
Help the data engineering team deploy Apache Spark-based machine learning systems
Requirements:
Has worked with a cloud provider for at least 3 years. Bonus points for AWS experience.
Very strong in a Linux environment, including scripting experience.
Programming skills in at least one of the major scripting languages (Bash, Python, Ruby, Node, etc.)
Has worked with continuous integration tools such as Jenkins, Travis, Circle, Bamboo, TeamCity, etc.
An understanding of the major standard web protocols, web servers (Apache/Nginx), and load balancing.
Experience running Docker in a production environment with a clustering product such as Mesos, ECS, Kubernetes, Swarm, etc.
Experience designing a continuous integration/deployment process from scratch
Experience managing large-scale data storage and processing systems such as Apache Hadoop, Apache Spark, Apache Kafka, or Amazon Redshift