Join Cape Ann

Our team is growing.
Check out our current job opening.

Cape Ann Enterprises is looking for a smart, creative, and energetic DevOps Engineer to take our Life Science Department to the next level!

DevOps is a key role supporting the software development and data engineering needs of the department. By combining one of today's most in-demand IT careers with Life Sciences, you will gain a high-value, marketable skill set.

At Cape Ann, you will have the opportunity to work with some of the smartest companies in the world and with top PhDs in Bioinformatics and Genomics who support our strong software development team. Our clients in the United States are developing groundbreaking technologies that come out of research at Harvard and MIT. With these clients, we get to work on exciting projects in Genetics, Medical Diagnostics, Drug Discovery, Artificial Intelligence, and other areas of technology that have the power to make the world a better place. We are proud of our contributions to this important work.

If you are ready to challenge the world, join our team and take advantage of a unique opportunity to work with our clients as a DevOps Engineer for the Scientific Computing function.

Responsibilities

The Life Science Department at Cape Ann Enterprises is seeking a DevOps Engineer for the Scientific Computing function.

Primary responsibilities of the DevOps Engineer include:

  • With the support of our senior team, work closely with genetics, bioinformatics, and machine learning scientists to convert and deploy development-grade scripts, bioinformatics pipelines, and machine learning models into production-ready software.
  • Linux systems administration and VM management
  • Manage containerized workloads using at least two container technologies such as Docker, Kubernetes, Docker Swarm, or Rancher
  • In collaboration with the data science team and IT, implement a cloud-based infrastructure for software development, data storage/management, and bioinformatics pipeline design. Present practical input for strategic data science technology decisions.
  • Day-to-day operation and support of the Data Science cloud infrastructure (e.g., AWS)

Requirements

  • Experience with DevOps, infrastructure management
  • Experience with Linux environments
  • Work experience in biotech/pharma is preferred

Competencies

  • Deep understanding of Linux/RHEL environments and VM management best practices
  • Strong coding skills in one or more modern programming languages such as Python or Go
  • Experience with cloud infrastructure (e.g., AWS, Azure) to provide IaaS/IaC, including compute, storage, and networking (EC2, EBS, S3, Glacier, Gateway, VPC)
  • Experience with building solutions for high-dimensional datasets such as genomic data using Big Data technologies (e.g., HDFS, Impala, Spark, etc.)
  • Strong comprehension of Software Development Life Cycle, Source Control systems, and CI/CD pipelines
  • Ability to create CI/CD pipelines (Jenkins or similar)
  • Proficiency in English is a must

Preferred but not required

  • Proven ability to work independently, learn and develop new methodologies, manage multiple projects simultaneously, keep accurate records, follow instructions, and comply with company policies
  • Ability to ramp up quickly and learn tools and technologies
  • Strong knowledge of database admin fundamentals and best practices (Postgres preferred)
  • Experience with infrastructure-as-code (IaC) and deployment automation tools (e.g., Terraform, Ansible)
  • Experience using a variety of API and web services technologies such as GraphQL and REST