The inaugural Gen AI Global Council roundtable meeting brought together leaders who are spearheading Generative AI development mandates at top pharma companies to discuss the latest trends, challenges, and opportunities in this rapidly evolving space. The insights shared were invaluable, shedding light on the evolving landscape of large language models (LLMs) and the strategic decisions surrounding their development. Below, we summarize eight key takeaways from the gathering.
1. Evolving Trends in the World of LLMs
The large language models (LLMs) market is expanding rapidly, with hyperscalers leading the charge. However, it’s not just the big players making waves. Open-source models are gaining ground, offering capabilities that increasingly match those of their more expensive, commercial counterparts, at least in certain use cases. Additionally, we’re seeing a surge in domain-specific models, as well as a growing ecosystem of instruction-tuned models. Meanwhile, AI agents, and the frameworks for building them, are proliferating and advancing quickly, bringing greater specificity and intricacy to AI workflows, particularly where AI is used to provide feedback on LLM-generated output.
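To make the last point concrete, here is a minimal sketch of a generate-critique-revise loop, one common shape that AI feedback on LLM-generated output can take. The generate_fn and critique_fn callables stand in for hypothetical model calls, and the "APPROVED" stopping rule is purely illustrative rather than any particular framework's convention.

```python
# Minimal sketch of an AI-feedback loop over LLM output.
# generate_fn and critique_fn are placeholders for hypothetical model calls;
# the "APPROVED" stopping rule is illustrative, not a specific framework's API.
def refine_with_ai_feedback(task: str, generate_fn, critique_fn, max_rounds: int = 3) -> str:
    """Draft a response, ask a critic model for feedback, and revise
    until the critic approves or the round limit is reached."""
    draft = generate_fn(task)
    for _ in range(max_rounds):
        feedback = critique_fn(task, draft)
        if feedback.strip().upper().startswith("APPROVED"):
            break  # critic is satisfied with the current draft
        draft = generate_fn(
            f"{task}\n\nRevise the draft below using this feedback:\n"
            f"{feedback}\n\nDraft:\n{draft}"
        )
    return draft
```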
2. Beyond the Binary: Addressing Both Disruptive and Incremental Gen AI Use Cases
There’s a growing debate within pharma organizations about whether to prioritize disruptive or incremental use cases for Gen AI. One camp advocates for transformative solutions that can radically alter processes and user experiences, delivering maximum impact. The other favors a more conservative approach, focusing on point solutions that optimize existing processes, fit within existing workflows, and address specific bottlenecks without overwhelming users with change.
Both approaches have merit, and pharma companies may find success in pursuing both tracks in parallel, provided they maintain clear distinctions between the two efforts and resource them with separate groups of people. They also need to establish a well-thought-out prioritization framework to avoid getting sidetracked by competing demands from business groups. While it’s important to prioritize use cases that solve function-specific pain points, companies should also consider addressing pain points that are common across functions. This broader approach can lead to more cohesive solutions that benefit the organization as a whole, ensuring that the most impactful opportunities are not overlooked.
3. Centralized Technology Architecture: A Unified Approach to Compliance and Governance
As pharma organizations increasingly deploy Gen AI solutions across their enterprises, there is still no standardized approach to addressing concerns related to data privacy, bias, hallucination, model explainability, and other compliance and governance issues. Many organizations find themselves in a fragmented landscape of Gen AI solutions, each addressing these issues in its own way. A centralized technology architecture at the enterprise level could provide a solution by offering a consistent set of controls and standards as guardrails, applicable across use cases. Such an architecture would allow for a unified governance layer, enabling audits to be conducted in one central place rather than across disparate solutions.
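As a rough illustration of the pattern, the sketch below shows a single governed entry point that every use case could route its model calls through. The call_llm parameter, the email-masking check, and the JSONL audit file are all hypothetical stand-ins; real controls and policies would be defined by the enterprise.

```python
# Sketch of a centralized guardrail and audit layer for Gen AI calls.
# call_llm, the email-masking rule, and the JSONL audit file are hypothetical
# stand-ins; actual controls and policies would be enterprise-defined.
import datetime
import json
import re


def redact_pii(text: str) -> str:
    """Illustrative privacy control: mask email addresses before they reach the model."""
    return re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[REDACTED_EMAIL]", text)


def audit_log(record: dict, path: str = "genai_audit.jsonl") -> None:
    """Append every request/response pair to a single audit trail for central review."""
    record["timestamp"] = datetime.datetime.now(datetime.timezone.utc).isoformat()
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


def governed_completion(use_case: str, prompt: str, call_llm) -> str:
    """Single entry point that applies the same guardrails and audit trail
    to every Gen AI use case, regardless of which team built the solution."""
    safe_prompt = redact_pii(prompt)
    response = call_llm(safe_prompt)  # hypothetical model call supplied by the caller
    audit_log({"use_case": use_case, "prompt": safe_prompt, "response": response})
    return response
```

Because every solution funnels through the same layer, policy changes and audits happen in one place rather than being repeated inside each application.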
4. Centralized vs. Decentralized Development Model: Finding the Right Fit
In a centralized model, where solution development functions are managed centrally across business units, the advantages include better visibility within the enterprise and, as a result, a greater potential to attract top-tier talent. However, this approach also comes with risks, such as lower adoption rates by business units. Centralized teams might build high-tech solutions with a technology-first mindset, producing solutions that are not always fit for purpose.
Conversely, a decentralized model, in which solution development teams are embedded within individual business units, often gives those teams a clear focus on developing solutions with definitive value, but they may struggle with visibility and with attracting top talent. Additionally, the decentralized model might limit opportunities for experimentation with cutting-edge technologies.
Both models have their benefits and risks. The choice between them depends on the organization’s decision-making structure. A centralized development model aligns well with a centralized decision-making structure, while a decentralized development model is better suited to a decentralized decision-making structure.
5. Generative AI Development: Retaining Core Foundations
Despite its novelty, Gen AI solution development shares many foundational similarities with traditional application development. Aspects such as the development lifecycle, front-end development, and some testing and validation processes remain largely the same. While it might seem that Gen AI development requires a completely new approach, the fundamental processes and structures of traditional development still apply. Where Gen AI development does differ is in bringing domain experts into the process: unlike in traditional development, they play a central role throughout.
6. Domain Experts: From Supporting Roles to Protagonists
Gen AI development shifts the focus significantly from technology expert personas to domain expert personas. Whereas in traditional development domain experts’ roles might be limited to providing initial requirements, in Gen AI development they are principal contributors across the entire lifecycle, crucial not only during development but also in post-production. Their ongoing involvement is essential for monitoring model performance and making the adjustments needed to keep the output accurate.
7. Unlocking Precision: The Role of Knowledge Engineering in Advanced Gen AI
For advanced use cases like document authoring and content creation, where Gen AI systems need to interact with enterprise data and sources of truth, the way data is made available to and understood by these systems becomes critical. Traditional approaches like Retrieval-Augmented Generation (RAG) often fall short when Gen AI systems need to connect and interpret data from multiple disparate sources. This is where knowledge engineering comes in. A more sophisticated knowledge base is required, one in which knowledge is codified into domain-specific ontologies and taxonomies and represented as knowledge graphs. These knowledge graphs then feed AI agents in downstream workflows, enabling more accurate outputs. While creating these ontologies is a human-driven task, it can be augmented by general-purpose Gen AI systems, and the process of ingesting data from various sources and converting it into a knowledge graph format can be largely automated by Gen AI workflows.
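As a rough sketch of this pipeline, the example below assumes a hypothetical extract_triples step (standing in for an LLM extraction call aligned to a domain ontology) and uses networkx to hold the resulting graph; the entity names are invented, and a production system would typically rely on a curated ontology and a dedicated graph store.

```python
# Sketch of turning source documents into a queryable knowledge graph.
# extract_triples is a hypothetical stand-in for an LLM extraction step, and
# networkx stands in for a dedicated graph database; entity names are invented.
import networkx as nx


def extract_triples(document: str) -> list:
    """Placeholder for a Gen AI step that returns (subject, relation, object)
    triples aligned to a domain-specific ontology."""
    return [
        ("Drug-X", "treats", "Condition-Y"),          # invented example facts
        ("Drug-X", "studied_in", "Trial-123"),
        ("Trial-123", "reported_in", "CSR-2024-001"),
    ]


def build_graph(documents: list) -> nx.MultiDiGraph:
    """Ingest documents and codify the extracted facts as a knowledge graph."""
    graph = nx.MultiDiGraph()
    for doc in documents:
        for subj, rel, obj in extract_triples(doc):
            graph.add_edge(subj, obj, relation=rel)
    return graph


def facts_about(graph: nx.MultiDiGraph, entity: str) -> list:
    """Return grounded statements a downstream AI agent can cite while drafting."""
    return [
        f"{entity} {data['relation']} {neighbor}"
        for _, neighbor, data in graph.out_edges(entity, data=True)
    ]


kg = build_graph(["...source document text..."])
print(facts_about(kg, "Drug-X"))
# ['Drug-X treats Condition-Y', 'Drug-X studied_in Trial-123']
```

The value of the graph representation is that relationships spanning multiple documents become explicit and traversable, which is precisely where flat RAG retrieval tends to break down.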
8. Unlocking Transformation: Gen AI is Just One Piece of the Puzzle
While Gen AI often takes center stage, it’s important to recognize that truly transformative use cases usually involve a fusion of technologies. Gen AI might be the most visible and talked-about component, but successful solutions also need to draw on knowledge engineering, traditional AI and ML, data science and engineering, and more. By framing these transformative use cases within a Gen AI development agenda, development functions can leverage the momentum to secure better resources, not only for technical development but also for governance areas such as legal, ethics, and compliance.
As the biopharma industry continues to explore the vast potential of generative AI, it’s clear that its success depends on a nuanced approach that integrates both new and established technologies. The insights from the Gen AI Global Council meeting highlight the importance of a strategic approach to drive meaningful progress.