DevOps Engineer
Spirit AI is building a tool to empower community managers in novel ways – an Ally to assist in nourishing communities to their fullest. Ally uses the latest advancements in machine learning to distill the millions of lines of text communication in a community down to the interactions that help or hurt its growth.
As the DevOps Engineer, you will work within the Ally engineering team, both managing services and developing core functionality. The role is responsible for ensuring live services run as planned, developing tools and dashboards to support the running of those services, and building out new areas of the solution as the product grows and adapts to client requirements, including defect fixing. You should be an effective team player, able to work as part of a diverse company, with a good understanding of our client SLAs so that you can prioritise tasks and provide the best service possible.
Responsible for leading the continued development of Ally, including new features and enhancements driven by client requests
Overseeing the whole solution, ensuring the codebase is written and maintained to a high standard
Responsible for the software architecture design and overseeing its development
Onboarding and training new team members
Documentation, estimates, design and functional specifications for use by internal teams
Ensure all code is written to a high quality; ensure the team adheres to coding standards and policies
Oversee and maintain the interface between the Ally and Data Science teams
Fix internal and client-submitted defects
Minimum 3 years' DevOps Engineer experience
Must have G-Cloud experience for DevOps, desired for Engineer
Must have Solr experience for DevOps, desired for Engineer
Experience in Artificial Intelligence, TensorFlow and/or AWS is desired
Work effectively within an Agile framework, adhering to agile best practices and processes including TDD and BDD
Experience with Docker, Git, Kafka, Postgres or another relational database, Linux, and Kubernetes