Convoy is transforming the $800 billion trucking industry. Our mission is to transport the world with endless capacity and zero waste. The industry is huge, and so is the opportunity to fundamentally change the way freight moves across America and beyond for the better.
We are passionate about thinking big and solving really complex problems to make the lives of truck drivers, shippers, and other people in the freight industry easier through our innovation and technology. There will always be a better, more efficient way to transport goods, and we won’t ever stop inventing it.
Founded in 2015, we’re a mission-driven, well-funded, fast-growing startup, backed by world-class investors, including CapitalG, the growth equity investment fund of Google, and leading industry disruptors, including the founders and CEOs of Amazon, Salesforce, eBay, LinkedIn, Expedia, Dropbox, KKR, Starbucks, and others. We were named one of Washington State’s top places to work and a LinkedIn Top Startup, and we received the GeekWire Next Tech Titan award in 2018.
Who we’re looking for:
Data and our data infrastructure are core to Convoy’s mission to automate the logistics industry. Today, we use machine learning to set freight prices, rank shipment relevance for carriers, drive auction bidding strategy, and automate other internal processes. However, our models are only as good as the data on which they are built. Convoy’s Data Science team is looking for a talented Data Engineer with a strong background in ETL and data warehousing, and an interest in collaborating with Data Scientists to evolve the data foundation for our business. In this role you will own the following:
- Architecture design and implementation of our next-generation data platform and model deployment solutions.
- Building robust and scalable data integration (ETL) pipelines using SQL, EMR, Python, and Spark.
- Mentoring and developing other Data Engineers and Data Analysts.
- Building and delivering high-quality data architecture to support data analysts, data scientists, and customer reporting needs.
- Interfacing with other technology teams to extract, transform, and load data from a wide variety of data sources.
- Continually improving reporting and analysis processes, and automating or simplifying self-service support for customers.
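To give a concrete (and highly simplified) flavor of the ETL work described above, here is a minimal extract-transform-load sketch in plain Python. All names and data here are illustrative assumptions, not Convoy's actual pipeline; a production version would run on Spark/EMR against sources like S3 or RDS rather than in-memory SQLite:

```python
import sqlite3

# Illustrative raw shipment records; a real pipeline would extract
# these from upstream systems (S3, RDS, application databases, etc.).
RAW_SHIPMENTS = [
    {"id": "s1", "origin": " Seattle, WA", "miles": "812", "price_usd": "1950.00"},
    {"id": "s2", "origin": "Portland, OR ", "miles": "640", "price_usd": "1400.00"},
]

def transform(row):
    """Clean fields, cast types, and derive a rate-per-mile feature."""
    miles = float(row["miles"])
    price = float(row["price_usd"])
    return {
        "id": row["id"],
        "origin": row["origin"].strip(),         # trim stray whitespace
        "miles": miles,
        "price_usd": price,
        "usd_per_mile": round(price / miles, 2), # derived feature
    }

def load(rows, conn):
    """Load transformed rows into a warehouse-style table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS shipments "
        "(id TEXT PRIMARY KEY, origin TEXT, miles REAL, "
        "price_usd REAL, usd_per_mile REAL)"
    )
    conn.executemany(
        "INSERT INTO shipments VALUES "
        "(:id, :origin, :miles, :price_usd, :usd_per_mile)",
        rows,
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
load([transform(r) for r in RAW_SHIPMENTS], conn)
```

The same extract/transform/load shape carries over to the SQL- and Spark-based pipelines named in the bullets; only the engines and data volumes change.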
What you’ll need:
- Bachelor's degree or higher in a technical field such as Computer Science, Physics, Mathematics, Statistics, or Engineering.
- 5+ years of relevant experience in one of the following areas: data engineering, database engineering, business intelligence or business analytics.
- 5+ years of hands-on experience in writing SQL queries across data sets.
- 2+ years of experience in scripting languages like Python.
- Experience with data modeling, ETL development, and data warehousing.
- Experience with databases such as Redshift, Oracle, etc.
- Experience with AWS services including S3, Redshift, EMR, and RDS.
- Experience with big data technologies (Hadoop, Hive, HBase, Pig, Spark, etc.).
- Experience working on and delivering end-to-end projects independently.
- Knowledge of distributed systems as they pertain to data storage and computing.