On 13th December 2022, CatSci Ltd hosted the second instalment of their Digital Webinar Series, in collaboration with LabLinks, entitled: “Voice Activated Digital Assistant In The Lab”. The free webinar featured a line-up of expert speakers:
- Dr Rebecca Green – Senior Principal Scientist, Bristol Myers Squibb (BMS)
- Ian Kerman – Director of Customer Success, LabVoice
- Steve Soccorso-McCoy – Head of Sales, LabVoice
- Dr Elizabeth Yuill – Principal Scientist, Bristol Myers Squibb (BMS)
The event was chaired by Dr Sam Whitmarsh, Director of Digital Transformation at CatSci and Co-Founder of LabLinks. Steve and Ian from LabVoice discussed a range of case studies and the development of the underpinning AI tools. Rebecca and Elizabeth of BMS, users of this tool, then shared their first-hand experience of applying this technology in the lab.
Steve Soccorso-McCoy opened the proceedings with a presentation entitled “Voice In The Lab: Use Cases”. Steve discussed the challenges of digital transformation in the laboratory, highlighting reliance on paper-based processes and manual data entry as time-consuming, error-prone, and difficult to manage. The increasing complexity of data generated by modern research compounds these problems for traditional methods. Steve shared how the LabVoice Digital Lab Assistant aims to address these challenges by providing tools to capture audio data and optimise existing lab processes; it also acts as an interface that lets users execute processes via audio commands. He grouped use cases into three categories: hands-free data capture, automation, and compliance. Steve shared case studies in which the tool was used in isolation, as well as connected to existing infrastructure and informatics systems; integrations have already been developed for many common platforms, such as Office 365 and Slack. The case studies showed users capturing information about a process, such as weights, lot numbers and substance volumes. Steve shared an example of an audio SOP (Standard Operating Procedure), demonstrating how the tool offered a step-by-step user guide, told the user exactly what to do, and captured data along the way.
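The step-by-step pattern Steve described can be pictured as an ordered list of spoken prompts, each capturing a typed value. The sketch below is a minimal, hypothetical Python illustration of that idea; the step names, fields, and capture logic are our own assumptions, not LabVoice's actual API.

```python
# Hypothetical sketch of a voice-guided SOP: each step would speak a prompt,
# then capture and parse the user's spoken reply. Illustrative only.

SOP_STEPS = [
    {"prompt": "State the lot number.", "field": "lot_number", "parse": str},
    {"prompt": "Place the sample and read the weight in grams.", "field": "weight_g", "parse": float},
    {"prompt": "State the solvent volume in millilitres.", "field": "volume_ml", "parse": float},
]

def run_sop(replies):
    """Walk the SOP, pairing each step's prompt with a (simulated) voice reply."""
    record = {}
    for step, reply in zip(SOP_STEPS, replies):
        # In a real assistant the prompt would be spoken aloud and the reply
        # would come from speech-to-text; here replies are passed in directly.
        record[step["field"]] = step["parse"](reply)
    return record

# Simulated session: the transcribed replies a scientist might give.
data = run_sop(["LOT-4812", "2.05", "10.0"])
print(data)  # {'lot_number': 'LOT-4812', 'weight_g': 2.05, 'volume_ml': 10.0}
```

The point of the pattern is that the scientist never touches a keyboard: each field is dictated in sequence, parsed, and stored against a named key for downstream systems.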
In his presentation, Ian Kerman from LabVoice showed how the associated software tools are used. The application runs on mobile devices or laptops, so users can access it on the go. The app includes a designer tool that allows customisation of several components: the type of data to be collected, any actions or instructions to be spoken to the user, what calculations need to be performed, and where the data needs to be written or stored. The voice system itself can then be deployed as a mobile app, as a Docker container, or as a physical hub powered by a Raspberry Pi, allowing on-site integration with devices such as balances. The app supports guided process execution, meaning it can walk scientists through the steps of their workflow, from simple tasks like calibrating a balance to more complex procedures. The mobile app can also collect rich data, such as videos, images, and barcodes, as well as audio. Ian shared examples of the additional benefits of digitalising workflows: users can identify bottlenecks in their processes or equipment, better understand common compliance failure points, and predict consumable and inventory usage. As with many digital technologies, the metadata collected can be just as valuable as the primary data itself.
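The four designer components Ian listed (data to collect, spoken instructions, calculations, and a storage destination) can be imagined as a single workflow definition. The sketch below is a hypothetical Python rendering of that idea under assumed field names; it is not the actual designer format.

```python
# Hypothetical workflow definition covering the four designer components:
# spoken instructions, data to collect, derived calculations, and a
# destination for the results. Names and structure are assumptions.

workflow = {
    "instructions": "Tare the balance, weigh the sample, then add solvent.",
    "collect": ["sample_mass_g", "solvent_volume_ml"],  # fields captured by voice
    "calculations": {                                   # derived values
        "concentration_g_per_ml": lambda d: d["sample_mass_g"] / d["solvent_volume_ml"],
    },
    "destination": "results.csv",                       # where data would be written
}

def execute(workflow, captured):
    """Apply the workflow's calculations to values captured by voice."""
    result = dict(captured)
    for name, formula in workflow["calculations"].items():
        result[name] = formula(captured)
    return result

out = execute(workflow, {"sample_mass_g": 2.0, "solvent_volume_ml": 10.0})
print(out["concentration_g_per_ml"])  # 0.2
```

Separating the definition from its execution is what lets the same workflow be deployed unchanged to a phone, a Docker container, or a Raspberry Pi hub.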
Dr Rebecca Green and Dr Elizabeth Yuill of BMS then shared real examples of the use of the LabVoice technology in the BMS labs, including its application to balance calibration and data recording, before moving to a round-table discussion.
The first discussion compared adoption rates of this technology, specifically in scientific fields. One of the main concerns was how well these technologies would handle technical and scientific vocabulary; LabVoice have spent significant time optimising their own voice-recognition algorithms to ensure technical language is understood. Another issue discussed was the lack of industry-specific or application-specific tools for voice-enabled tasks. The panellists suggested that the best way to overcome these hurdles was continued use of the technology: with use, the AI begins to learn and adapt to specific vocabulary and tasks. They also suggested that finding the right applications and tasks where voice-enablement is particularly useful, such as in the car or in isolated environments, would lead to broader adoption. They gave the examples of high-potency isolators and gloveboxes, where the cost of pausing to touch a screen or keyboard is significant. The cost of adoption was not seen as a significant barrier, but the time and resources needed to understand and implement the technology were a concern. This demonstrates a need for those who already use the technology to continue to showcase case studies and their own experience of the benefits and productivity improvements that AI can bring.
Increased efficiency and improved quality when recording data and taking notes were highlighted as two of the main benefits of this technology. Practical application at BMS showed benefits in additional experimental insight, resulting in improved tech transfer, as the transcription captures more subjective language and real-time thoughts. Unexpected benefits were also realised through the ability to refer back to real-time notes afterwards, and through a reduction in the frustration factor, as de-gloving, pausing a task, or losing a train of thought could be avoided by recording voice data. A major attraction of the BMS trials was the environmental benefit of reduced glove use and repeated work, cutting the carbon footprint of achieving results.
The CatSci and LabLinks teams would like to thank our speakers: Steve Soccorso-McCoy and Ian Kerman from LabVoice, and Dr Rebecca Green and Dr Elizabeth Yuill from BMS. We would also like to thank our attendees for their views and insights around this topic. Voice activated digital lab assistants are emerging as useful tools that allow scientists to automate and streamline tasks, enabling them to focus on the more complex challenges at hand. They have intuitive user interfaces and voice-controlled commands that can record data in real-time, guide manual workflows, and initiate automated equipment. Adoption of these assistants is expected to bring more benefits in the future as they work to free up the hands of scientists and expedite laboratory work.
Discover our Computer Aided Retrosynthesis summary here: