The Boston Dynamics Central Software (CSW) team is seeking a creative and passionate Data Infrastructure Software Engineer to join our dynamic and collaborative team. If you enjoy learning new technologies and are driven to improve the efficiency and effectiveness of data operations, this role offers an exciting opportunity to make a significant impact.
As a Data Infrastructure Software Engineer, you will be instrumental in developing and maintaining robust cloud-based data pipelines and other big data solutions for use across our company, including integration with our robots. The solutions you develop will help expand the reach and capabilities of our advanced mobile robots.
Boston Dynamics is at the forefront of mobile robotics, tackling some of the most challenging problems in the field. Having captivated YouTube audiences with the remarkable abilities of our robots for years, we are now rapidly emerging as a leader in providing automation solutions for industrial applications and warehouse logistics.
Day-to-Day Activities:
Design, develop, and maintain scalable and robust data pipelines using Apache Airflow and other big data technologies (see the sketch after this list).
Optimize existing data systems for performance, reliability, and cost-effectiveness.
Collaborate with machine learning engineers and other software engineers to understand their data needs and solve problems with data.
Troubleshoot and resolve issues related to data availability, performance, and accuracy.
Monitor data quality and integrity, implementing processes for data validation and error handling.
Participate in code reviews, contributing to a high standard of code quality and best practices.
Research and evaluate new technologies and tools to improve our data platform.
Contribute to the overall architecture and strategy for data infrastructure.
Participate in our agile development process, coordinating work with others, identifying challenges, and communicating progress regularly.
Mentor and upskill peers and other contributors across the organization.
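To give a concrete, purely illustrative flavor of the pipeline work described above, here is a minimal Apache Airflow sketch. The DAG name, tasks, and data are hypothetical examples for this posting, not an actual Boston Dynamics pipeline:

    # Hypothetical sketch of a daily ETL pipeline using Airflow's TaskFlow API.
    # All names (robot_telemetry_etl, extract/transform/load) are illustrative.
    from datetime import datetime

    from airflow.decorators import dag, task


    @dag(
        dag_id="robot_telemetry_etl",   # hypothetical pipeline name
        schedule="@daily",              # run once per day
        start_date=datetime(2024, 1, 1),
        catchup=False,                  # skip backfilling past runs
    )
    def robot_telemetry_etl():
        @task
        def extract() -> list[dict]:
            # In practice this might pull from cloud storage or a stream.
            return [{"robot_id": "spot-001", "battery_pct": 87}]

        @task
        def transform(records: list[dict]) -> list[dict]:
            # Example validation step: drop malformed records.
            return [r for r in records if "robot_id" in r]

        @task
        def load(records: list[dict]) -> None:
            # In practice this might write to a warehouse or database.
            print(f"loaded {len(records)} records")

        load(transform(extract()))


    robot_telemetry_etl()

In a real deployment, the extract and load steps would integrate with services like the ones named in the skills below, such as Kafka for streaming ingestion or PostgreSQL for storage.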
Desired skills:
5+ years of professional experience delivering data infrastructure solutions to end users.
Proven ability to design, develop, and optimize efficient ETL/ELT pipelines for large-scale data ingestion and transformation, using tools such as Apache Airflow.
In-depth knowledge and hands-on experience with big data technologies such as Apache Spark, Hadoop, Kafka, Flink, or similar distributed systems.
Expertise in relational databases (e.g., PostgreSQL, MySQL).
Experience with major cloud providers like AWS, Google Cloud Platform (GCP), or Microsoft Azure, including services related to data storage, processing, and analytics.
Proficiency in Python.
Familiarity with Git version control and comfort working in a Linux development environment.
Bachelor's degree in Engineering, Computer Science, or another technical field.
Additional skills:
Experience with C++ or Rust.
Familiarity with containerization and orchestration (Docker, Kubernetes).