These are decisive days for the global food industry. Leaders stand to gain lasting commercial advantage by being first to market with nutritious, sustainable products that are as good for business as they are for the planet. At the heart of this opportunity lie biotechnology and novel bioprocesses, ranging from personalised products based on your microbiome to precision fermentation and cellular agriculture. It is now well known that investing in bioprocesses is challenging, due to the poor predictability of outcomes at commercial scale combined with significant capital exposure.
Digital twins have emerged as a transformative tool: a digital representation of a bioprocess that offers improved predictability at scale and thus improved invest-ability. They are trained on real data from lab experiments together with the known physics of the bioreactor, and can then be used to explore and understand the viability of the process before any capital investment. (See my previous articles on the subject for a more detailed explanation of the benefits of digital twins for bioprocesses and how the wider bioeconomy will benefit.)
However, there is no such thing as a free lunch. The cost and time required to acquire high quality data and build effective digital twins pose significant challenges for small enterprises and large multinationals alike. Emerging deep tech innovation can drive down these costs and timescales, making the predictive benefits of digital twins more accessible and commercially viable.
Digital twins: invest-ability and predictability
The merits of digital twins as a transformative tool have gained traction because they offer the ‘invest-ability’ of improved predictability at scale. But all organisations, from start-ups and small enterprises to ambitious multinationals, currently face the significant cost barriers I’ve already alluded to.
Why is it all so challenging and expensive? It’s been shown that the physical behaviour of bioreactors can be predicted using computational fluid dynamics (CFD). It’s tough, but it can be achieved with the right expertise, a powerful computer and knowledge of vessel geometry and operating parameters.
But to build a powerful and predictive digital twin, we also need to predict the biological behaviour – that is, the cells' response to that physical environment. Behaviour – for example the growth rate of cells in culture – is influenced significantly by at least ten process variables, most of which interact in complex, dynamic and non-linear ways. It can't be described in the same mechanistic way that the flow of fluids and gases can be in CFD.
This means digital twins need to combine CFD with data-driven AI models of the biology. As with all AI, these models need extensive and high-quality data to accurately predict complex biological responses within commercial bioreactors. This can come at considerable cost and take significant time – potentially to the extent it negates the value of the model.
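To make the coupling concrete, here is a minimal sketch of one way a CFD-informed, data-driven biological model could be wired together. The variable choices, units and training values are illustrative assumptions for the example, not a real dataset or CC's implementation: physical descriptors summarised from a CFD run are combined with media measurements and fed into a learned model of the biological response.

```python
# Minimal sketch (assumptions only): coupling CFD-derived physical quantities
# with a data-driven model of the biological response.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical training data: each row is one experiment's physical environment
# summarised from CFD (mean shear rate [1/s], kLa [1/h], dissolved O2 [%])
# plus measured media state (glucose [g/L], pH).
X_train = np.array([
    [45.0, 12.0, 60.0, 4.5, 7.2],
    [80.0,  9.0, 40.0, 3.0, 7.0],
    [30.0, 15.0, 75.0, 5.0, 7.3],
])
# Observed specific growth rate [1/h] for each experiment (illustrative values).
y_train = np.array([0.031, 0.018, 0.035])

# Data-driven surrogate for the biology: learns growth rate as a function of
# the physical/chemical environment, with an uncertainty estimate.
model = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
model.fit(X_train, y_train)

# Query the twin at a candidate commercial-scale operating point, whose
# physical descriptors would come from a CFD run of the large vessel.
candidate = np.array([[60.0, 10.5, 55.0, 4.0, 7.1]])
mu, sigma = model.predict(candidate, return_std=True)
print(f"predicted growth rate: {mu[0]:.3f} 1/h (±{sigma[0]:.3f})")
```

The point of the sketch is the division of labour: the physics stays mechanistic, while the biology is learned from data – which is exactly where the data cost arises.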
In fact, training a general-purpose model with the capacity to capture the necessary complexity needs datapoints in the high hundreds to low thousands – potentially costing several million dollars and several years of experiments. I'm going to be bold here and say that using deep tech innovation to propel a predictive digital twin build could at least halve both metrics. At that point, reliable digital twins become viable.
AI with real intelligence
The approach begins by supplementing AI with ‘real intelligence’. What I mean here is combining AI with domain expertise to cut the number of experiments – from the low thousands to the high tens, say. This can be achieved by using hybrid approaches, where the digital twin is seeded with things we know to be true to reduce the number of experiments we need.
For instance, we can embed known relationships between variables (e.g., if CO2 goes up, the pH goes down) – something that might otherwise have taken tens of data points for the model to learn. We can also constrain or 'punish' the model during training for deviating too far from known physical relationships (e.g., chemical equations) – again reducing the number of experiments needed for it to learn them.
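As a rough illustration of that second idea, the sketch below adds a soft penalty whenever a model predicts pH rising as CO2 rises – one way of encoding a known relationship so it doesn't have to be learned from extra experiments. The model architecture, column layout and weighting are arbitrary choices for the example, not a prescribed recipe.

```python
# Minimal sketch (assumptions only): a hybrid training loss that adds a physics
# penalty to the usual data-fit term, 'punishing' the model for violating a
# known relationship such as the CO2/pH link.
import torch
import torch.nn as nn

# Toy model: predicts pH from three inputs; column 0 is assumed to be dissolved CO2.
model = nn.Sequential(nn.Linear(3, 16), nn.Tanh(), nn.Linear(16, 1))

def hybrid_loss(inputs, targets, lambda_physics=0.1):
    """Data-fit loss plus a soft constraint encoding 'if CO2 rises, pH falls'.

    inputs: tensor of shape (N, 3); targets: tensor of shape (N, 1).
    """
    pred_ph = model(inputs)
    data_loss = nn.functional.mse_loss(pred_ph, targets)

    # Physics penalty: the derivative of predicted pH with respect to CO2
    # should be negative, so penalise any positive slope.
    inputs_grad = inputs.clone().requires_grad_(True)
    ph = model(inputs_grad)
    dph_dco2 = torch.autograd.grad(ph.sum(), inputs_grad, create_graph=True)[0][:, 0]
    physics_loss = torch.relu(dph_dco2).mean()

    return data_loss + lambda_physics * physics_loss
```

Dropped into a standard training loop, a constraint like this lets the model absorb the CO2/pH relationship from the penalty itself rather than from tens of additional data points.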
Automation for real-time data collection
Now let’s turn to automated inline sensors and dosing mechanisms. We can further cut cost and time by implementing automated technologies for real-time data collection. Real-time measurement and modulation of experiments can reduce labour costs, increase the information gained per experiment and push processes closer to the edge (and even over it).
To illustrate the point, lactate, ammonia, glucose and growth factors are typically measured daily, if at all. This is viable in the lab because, when these variables drift close to dangerous levels, it’s simply a case of changing the media and building in plenty of contingency. But the approach taken in the lab is not feasible at scale – if we are to optimise and explore, we need to measure and modify these variables dynamically, more frequently than daily, and in a way that is practical and cost effective.
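A minimal sketch of what such a monitor-and-dose loop might look like is below. The sensor and pump interfaces are hypothetical placeholders for whatever inline hardware a given rig exposes, and the thresholds are purely illustrative.

```python
# Minimal sketch (assumptions only) of an automated monitor-and-dose loop.
# read_lactate, dose_base and exchange_media stand in for real hardware drivers.
import time

LACTATE_WARN = 20.0   # mmol/L – assumed level at which corrective action starts
LACTATE_LIMIT = 35.0  # mmol/L – assumed level considered unsafe

def control_loop(read_lactate, dose_base, exchange_media, log, period_s=60):
    """Poll an inline lactate sensor and respond in real time, not daily."""
    while True:
        lactate = read_lactate()
        log(time.time(), lactate)          # every reading becomes twin training data
        if lactate >= LACTATE_LIMIT:
            exchange_media()               # last-resort intervention
        elif lactate >= LACTATE_WARN:
            dose_base(volume_ml=0.5)       # small corrective action, also logged
        time.sleep(period_s)
```

The side benefit for the digital twin is that every sensor reading and every intervention is logged automatically, multiplying the information harvested from each experiment.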
Development of low-cost parallel bioreactors
By creating lower-volume, multi-well bioreactors with real-time sensing of key parameters, we can conduct numerous experiments in parallel, reducing both time and cost of the overall experimental programme.
Up till now, microtiter plates have been used to carry out a high number of experiments in parallel. But in the context of gathering data to train a digital twin, this presents problems. Performance in microtiter plates correlates poorly with performance in stirred tank bioreactors. So, any data gathered is not likely to be useful.
Also, the technologies for obtaining measurements in real time are limited. To get representative data in real time, the status quo is reaching for systems such as the ambr250 or maybe the ambr15. But the cost of obtaining the quantity of data needed to build a quality digital twin on these systems may be too high for some. By using microfluidics, electronics and sensor miniaturisation, it’s possible to bridge the gap between traditional bioreactors and microtiter plates. And that offers a much more scalable solution for data collection.
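To illustrate the data-collection side, here is a minimal sketch of pooling real-time readings from many miniature reactors into a single training dataset. The well count and the per-well read_sensors interface are assumptions made for the example.

```python
# Minimal sketch (assumptions only): harvesting readings from many parallel
# mini-bioreactors and appending them to one shared training dataset.
from concurrent.futures import ThreadPoolExecutor
import csv
import os
import time

N_WELLS = 24  # assumed number of parallel mini-bioreactors

def sample_well(well_id, read_sensors):
    # read_sensors is a hypothetical driver returning e.g. {"pH": 7.1, "DO": 55.0, "glucose": 4.2}
    reading = read_sensors(well_id)
    reading.update(well=well_id, t=time.time())
    return reading

def sample_all(read_sensors, path="twin_training_data.csv"):
    """Sample every well concurrently and append the results to a CSV file."""
    with ThreadPoolExecutor(max_workers=N_WELLS) as pool:
        rows = list(pool.map(lambda w: sample_well(w, read_sensors), range(N_WELLS)))
    write_header = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=sorted(rows[0]))
        if write_header:
            writer.writeheader()
        writer.writerows(rows)
```

Run on a schedule, a loop like this turns dozens of small parallel vessels into a steady stream of twin-ready data, rather than a daily handful of offline samples.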
What next for food bioprocessing efficiency?
To sum up then, developing cost-effective digital twins is imperative as the cellular agriculture industry continues to mature. By leveraging deep tech innovation to propel new approaches to bioreactor design and data collection, ambitious companies can overcome the financial barriers to building effective digital twins.
The team here at CC continues its work to drive further innovation towards better predictability in cellular agriculture and bioprocess optimisation. If you’d like to discover more about our expertise in deep tech and bioprocess engineering – and ways to scale up operations and reduce risk on the road to commercial success – please get in touch. I’d love to continue the conversation around bioprocessing.
Expert author
James is responsible for growing ground-breaking industrial biotech capabilities and leading multi-disciplinary teams to invent radical innovations for ambitious clients.