How AI Is Making High-Throughput Screening More Efficient
Artificial Intelligence and Machine Learning (AI/ML) are transforming drug discovery paradigms. The latest wave of AI advances is set to have a major impact on the cost and complexity of high-throughput screening assays. Finding new ways to maximize assay information-content and biological relevance, while reducing reagent costs and materials consumed per well, has never been more critical.
AI-driven drug discovery: time to get more data bang for your high-throughput screening buck
Artificial intelligence and machine learning (AI/ML) are transforming the way we work across nearly every industry, and drug discovery is no exception. The latest wave of advances, including the emergence of powerful foundation models and iterative screening applications, is set to have a big impact on high-throughput screening (HTS) paradigms.
AI-based drug discovery is changing high-throughput screening forever
AI/ML technology is undergoing a tectonic shift, and the ripples of change are reaching every corner of the pharmaceutical industry. Analysts estimate that the global market for AI-based drug discovery[1] is growing by 45.7% annually. At that rate, it could reach USD 4.0 billion by 2027. Many industry experts predict that by 2030, AI-driven decision-making will play at least some part in the design of virtually every new drug.
One transformational development is AI-driven iterative screening, which has the power to dramatically increase the efficiency of hit-finding in drug discovery. In iterative screening, the compound library is screened in batches, and the most promising compounds for each successive round are cherry-picked based on results from the previous batch. While this concept is not new, it is impractical and costly to implement using manual methods. With machine learning models doing the cherry-picking, however, iterative high-throughput screening becomes a viable and very powerful alternative to brute-force screening. A recent study showed that AI-driven iterative screening can more than double the hit return rate of conventional high-throughput screening[2], recovering nearly 80% of active compounds after screening just 35% of the library over a few iterations.
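To make the idea concrete, here is a minimal sketch of an iterative screening loop. It is not the method used in the cited study: the library, descriptor features, batch sizes and the run_assay placeholder are all hypothetical, and a simple random forest stands in for whatever model a real campaign would use.

```python
# Minimal sketch of AI-driven iterative screening (illustrative only).
# The library, features, batch size and run_assay() are hypothetical stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in compound library: 10,000 compounds x 128 descriptor features,
# with a hidden "activity" label that a real assay would reveal.
library = rng.normal(size=(10_000, 128))
true_active = (library[:, 0] + 0.5 * library[:, 1] > 2.0)   # hypothetical ground truth

def run_assay(indices):
    """Placeholder for the wet-lab screen: returns active/inactive per compound."""
    return true_active[indices]

screened = rng.choice(len(library), size=500, replace=False)  # random first batch
labels = run_assay(screened)

for _ in range(4):                                            # a few iterations
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(library[screened], labels)                      # learn from screened wells

    remaining = np.setdiff1d(np.arange(len(library)), screened)
    scores = model.predict_proba(library[remaining])[:, 1]    # predicted activity
    next_batch = remaining[np.argsort(scores)[-500:]]         # cherry-pick top 500

    screened = np.concatenate([screened, next_batch])
    labels = np.concatenate([labels, run_assay(next_batch)])

recovered = labels.sum() / true_active.sum()
print(f"Screened {len(screened)/len(library):.0%} of the library, "
      f"recovered {recovered:.0%} of the actives")
```

The key design point is the loop: each round's assay results retrain the model, which then ranks the unscreened compounds for the next round, so the screen concentrates wells on the most promising part of the library.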
Use of AI foundation models is another emerging trend with profound potential to alter drug discovery paradigms. As the name suggests, a foundation model is a versatile platform for the rapid construction of AI systems. Rather than being trained for a specific task, a foundation model is trained on broad, unlabeled data sets, usually through self-supervised learning. Once trained, the model can be quickly adapted to tackle new tasks with minimal modifications. In this way, one foundation model can be re-used as the seed for many new AI-powered applications.
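The "minimal modifications" typically mean keeping the pretrained model frozen and training only a small task-specific head on top of it. The sketch below illustrates the pattern in PyTorch; the encoder here is just a random stand-in for a model that would normally be pretrained with self-supervised learning on broad data, and all sizes and data are hypothetical.

```python
# Illustrative sketch: adapting one "foundation" encoder to a new task by
# freezing it and training only a small task head. Encoder and data are stand-ins.
import torch
import torch.nn as nn

encoder = nn.Sequential(            # stand-in for a large pretrained encoder
    nn.Linear(1024, 256), nn.ReLU(),
    nn.Linear(256, 64),
)
for p in encoder.parameters():      # keep the foundation model frozen
    p.requires_grad = False

task_head = nn.Linear(64, 1)        # small task-specific head, trained from scratch

opt = torch.optim.Adam(task_head.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Hypothetical labeled data for the new downstream task (e.g. active vs. inactive)
x = torch.randn(128, 1024)
y = torch.randint(0, 2, (128, 1)).float()

for _ in range(100):                # quick adaptation loop: only the head is updated
    opt.zero_grad()
    loss = loss_fn(task_head(encoder(x)), y)
    loss.backward()
    opt.step()
```

Because only the small head is trained, the same frozen encoder can be re-used for many different downstream applications at low cost.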
AI systems are hungry for data
While foundation models are awe-inspiring in their potential, building them consumes petabytes of data: very large, high-quality data sets are needed for initial training and validation.
Although a wealth of existing ‘omics information and “big data” has already been gathered from biological, chemical, pharmacological and clinical domains, its quality, coverage, and suitability for certain AI applications are limited. As a result, we are seeing a surge in large-scale initiatives to generate massive biological data sets, using high-throughput methods designed specifically with AI applications in mind.
A growing number of companies are securing large amounts of funding for AI/ML initiatives in drug discovery. As part of their strategy, they are generating vast stores of well-curated cellular imaging data as potential fuel for AI-driven applications in target identification and deconvolution, hit screening, lead optimization, and early toxicity testing.
High-throughput screening assays are getting more cost-intensive, not less
When it comes to putting iterative AI approaches into practice, the quality and information-content of high-throughput screening assay data are crucial. An AI/ML engine will only make high-quality, reliable decisions if it is fed high-quality, reliable assay data. This means there is likely to be a shift away from the simpler high-throughput screening assays typically selected for primary screening, towards more information-rich and biologically relevant ones.
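As a concrete illustration of what "reliable assay data" can mean in practice, one widely used quality metric in HTS is the Z'-factor (Zhang et al., 1999), computed from positive- and negative-control wells on each plate. The control values below are hypothetical; the point is only to show the arithmetic of such a quality gate before data are handed to an ML model.

```python
# Z'-factor as a simple plate-level quality check (hypothetical control values).
import numpy as np

pos = np.array([95.0, 97.2, 93.8, 96.1, 94.5])   # positive-control signals
neg = np.array([10.2, 12.1, 9.8, 11.4, 10.9])    # negative-control signals

# Z' = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|
z_prime = 1 - 3 * (pos.std(ddof=1) + neg.std(ddof=1)) / abs(pos.mean() - neg.mean())
print(f"Z' = {z_prime:.2f}")   # values above ~0.5 are generally considered robust
```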
Fortunately, with iterative approaches delivering higher hit returns from smaller compound sets, the resource savings can be redirected towards more cost-intensive assays and model systems, such as iPSC-derived and 3D cell models. Downstream of primary screening, we may also see more hits progressing into complex phenotypic assays for secondary screening, lead optimization and predictive toxicity testing.
Squeezing more data out of every well
For all the above reasons, as more companies tap into the power of AI-driven high-throughput screening paradigms, we are likely to see a trend towards more cost-intensive assays. At the same time, high-throughput screening applications are expanding to include rare diseases and studies with limited sample volumes and sample sizes. This further adds to the pressure to find new ways to improve cost-efficiency in high-throughput screening.
The central challenge is how to maximize information-content and biological relevance of high-throughput screening assays, while reducing reagent costs and materials consumed per well. One way to do this is through assay miniaturization.
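As a back-of-the-envelope sketch of why miniaturization matters, consider how reagent cost per well scales with assay volume. All volumes and the reagent price below are hypothetical and serve only to illustrate the arithmetic.

```python
# Illustrative only: reagent cost per well at different (hypothetical) assay volumes.
assay_volumes_ul = {
    "384-well, standard": 20.0,
    "384-well, miniaturized": 5.0,
    "1536-well, miniaturized": 2.0,
}
reagent_cost_per_ml = 50.0   # hypothetical reagent cost in USD per mL

for fmt, vol_ul in assay_volumes_ul.items():
    cost_per_well = vol_ul / 1000 * reagent_cost_per_ml
    print(f"{fmt}: {vol_ul:>5.1f} µL/well  ->  ${cost_per_well:.3f} reagent per well")
```

Under these assumed numbers, shrinking the assay from 20 µL to 2 µL per well cuts reagent cost per well tenfold, before any savings from consumables or precious samples are counted.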
In upcoming articles we’ll see how miniaturization dramatically reduces assay cost per well, and we’ll take a closer look at the technology advances that are making this possible.
Are you interested in how to run cost-intensive assays more efficiently? Follow this series by subscribing to our blog.
References
[1] https://www.marketsandmarkets.com/Market-Reports/ai-in-drug-discovery-market-151193446.html
[2] https://www.sciencedirect.com/science/article/pii/S2472555222066692