Welcome to the May 2025 edition of the 360 Clinical Research Consultancy Insights! In this issue, AI Hype Meets Site Reality: Why Adoption Still Looks Uneven in Clinical Research
May 2025 was a useful corrective for the clinical research industry. After months of relentless talk about AI, digital transformation, and patient-centric innovation, the operating reality became harder to ignore. Adoption remained uneven. Sites continued to push back on fragmented technology stacks. And the most credible examples of participant-centric innovation were not the most ambitious tools, but the ones that removed practical friction from participation.
For sponsor companies, that is the real story of the month. The question is no longer whether technology matters in clinical development. It does. The more important question is whether the technology being introduced is improving execution, reducing burden, and strengthening quality, or simply adding another layer of complexity to already fragile workflows.
From a quality leadership perspective, May reinforced a principle that should now guide every technology decision in clinical research: the value of a tool is determined less by what it promises in a demo and more by what it changes in day-to-day trial operations.
By May 2025, few serious organizations were questioning whether AI would play a long-term role in clinical research. The more relevant issue was where that role was becoming practical and where it was still immature. Across the industry, AI was being explored in protocol design, document drafting, feasibility, patient identification, signal detection, query support, and data review. In theory, the use cases were compelling. In practice, adoption was still highly uneven.
That unevenness should not be misread as resistance to innovation. In most cases, it reflects a more basic truth: technology is adopted when it fits into accountable workflows, not simply when it appears useful in principle.
Clinical research remains a controlled environment with multiple points of accountability. Sites need clarity on what a tool is doing, what remains human-reviewed, how outputs are verified, and what happens when the tool gets something wrong. Sponsors need to know whether the system is fit for purpose, how it is governed, how its performance is monitored, and whether its outputs can be defended in inspection or submission contexts. When those questions remain unresolved, adoption slows for good reason.
That is why the market still looks patchy. Some organizations are scaling narrow, well-governed AI use cases. Others are still running pilots. Many sites are interacting with AI-enabled functionality without considering it a meaningful improvement at all, because the surrounding workflow has not changed enough to matter.
One of the clearest lessons from May was that site adoption follows practical value, not strategic enthusiasm. A site does not care that a sponsor has an AI roadmap. A site cares whether the technology reduces manual effort, speeds up common tasks, lowers the volume of avoidable queries, or makes trial conduct easier without creating new documentation or training burden.
This is where many digital strategies still fail. Sponsors often evaluate tools by feature set, while sites evaluate them by operational drag. If the tool introduces another login, another interface, another set of alerts, or another exception pathway that staff must learn and remember, it is unlikely to be perceived as progress even if the underlying algorithm is impressive.
The same principle applies to trust. Site teams are more likely to use automated or AI-enabled functionality when they can understand its role, verify its outputs, and remain clearly in control of the final action. They are less likely to rely on it when it behaves like a black box layered onto an already crowded process.
Adoption, in other words, is not just a technology decision. It is a workflow decision.
Quality teams should view this uneven adoption pattern as a sign of governance maturity, not market hesitation. In a regulated environment, uneven adoption is often what responsible implementation looks like in the early stages. Tools should not scale simply because they are available. They should scale because the organization has defined the use case, assessed the risk, assigned accountability, documented review expectations, and built the controls needed to use them responsibly.
That includes version control, access control, validation where appropriate, traceability of outputs, change management, training, and clear human oversight. Credibility, and the sponsor's ability to demonstrate it, are key. It also includes a willingness to limit use when the workflow is not mature enough to support reliable execution.
The strongest sponsor organizations in 2025 are not necessarily the ones using the most AI. They are the ones making disciplined decisions about where AI actually improves quality, speed, or consistency without eroding control.
If AI exposed one side of the industry’s digital maturity problem, integration exposed the other. By May 2025, site fatigue with fragmented platforms was no longer a minor usability complaint. It had become a structural issue affecting efficiency, data flow, training burden, and site willingness to engage with sponsor technology stacks.
Most sites are still navigating too many portals across the study lifecycle. They move between startup tools, training systems, EDC, ePRO dashboards, safety portals, device platforms, recruitment tools, document repositories, communication systems, and vendor-specific interfaces. Each system may be defensible in isolation. In combination, they create friction that is felt immediately by site staff.
That friction has consequences. It consumes time, increases the chance of missed tasks, encourages workarounds, and weakens consistency. It also affects morale. Sites do not experience portal sprawl as digital innovation. They experience it as fragmented work.
For sponsors, this should now be treated as a quality issue as much as an operational one. When teams are forced to swivel between systems that do not communicate well, the risk is not just inconvenience. The risk is delay, inconsistency, duplicate entry, oversight gaps, and preventable error.
The integration problem is often misunderstood as a technology procurement issue. It is not simply about choosing better software. It is about choosing a more coherent operating model.
Sites increasingly want fewer systems, clearer workflows, and stronger connectivity between the systems that remain. That means interoperability matters more than marketing language. Single sign-on matters more than another dashboard. Structured data flow matters more than decorative automation. Integration between sponsor systems, site systems, and participant-facing tools is now one of the clearest indicators of whether a digital strategy is helping or hurting execution.
This is particularly important in studies with lean site staffing, hybrid visit schedules, decentralized components, or multiple specialist vendors. Every handoff point becomes more fragile when systems are disconnected. Every additional re-entry step increases the likelihood that the process will slow or break.
Sponsor companies should take this seriously. Integration is no longer a technical enhancement to be considered after implementation. It is central to whether implementation succeeds.
Many organizations still speak about site burden in abstract terms. That is no longer enough. If digital fragmentation is affecting trial conduct, it should be measured and governed like any other operational risk.
That means looking beyond anecdotal complaints and assessing the burden created by the technology ecosystem itself. How many separate systems does a site team actually need to access? How much duplicate data entry is required? How often do helpdesk issues delay site activity? How much training is tied to platform-specific navigation rather than trial-critical tasks? How many avoidable queries, missed notifications, or process deviations are linked to disconnected workflows?
Once sponsors begin measuring burden this way, the integration problem becomes easier to see and harder to dismiss. More importantly, it becomes easier to prioritize. Not every system can be replaced immediately, but every sponsor can start designing toward simplification.
May also helped clarify what participant-centric technology really means in practice. The best examples were not necessarily the most advanced or the most visible. They were the tools that made trial participation easier in specific, practical ways.
That includes technologies and processes that reduce unnecessary travel, simplify communication, make scheduling easier, improve visibility into next steps, support reimbursement more smoothly, enable remote completion of low-burden activities, or help participants stay engaged without feeling overwhelmed. In most cases, the value came from reducing friction at predictable points in the participant journey.
This is an important distinction. Participant-centric technology is often framed as a category of innovation. In reality, it is a design discipline. A tool is participant-centric only if it improves the experience of participating in the study without creating new confusion, burden, or risk elsewhere in the process.
That is why some relatively simple tools continue to outperform more ambitious platforms. A clear reminder system can be more valuable than a sophisticated app that participants do not understand or trust. A well-executed tele-visit model can outperform a more complex digital workflow if it removes time and travel without compromising data quality. A reimbursement process that works predictably can matter more to retention than a branded engagement portal with limited practical value.
There is a tendency in clinical research to define participant-centricity through intent rather than execution. That is a mistake. A technology can be designed to help participants and still fail if it is unreliable, confusing, poorly supported, or not aligned with the protocol.
For participant-facing tools, reliability is not a secondary consideration. It is the condition that determines whether convenience is real. Missed reminders, unstable devices, confusing interfaces, weak multilingual support, poor escalation pathways, or inconsistent coordination between site staff and participant-facing vendors can quickly turn a convenience feature into a source of dropout risk or protocol deviation.
This is why quality teams should remain closely involved in participant-tech decisions. The question is not just whether the tool appears helpful. The question is whether it can be deployed in a way that is operationally stable, understandable to the intended population, and consistent with trial requirements.
That includes looking at usability, training, identity verification where needed, privacy handling, exception management, and how participant-generated data is reviewed and acted upon. Technology does not become participant-centric simply because it sits closer to the participant. It becomes participant-centric when it works reliably in the real trial environment.
The most durable participant-centric solutions in 2025 are the ones that reduce friction on both sides of the study. If a tool is easier for participants but significantly harder for sites to support, its impact will usually be limited. If it improves site efficiency but creates confusion for participants, it will not sustain engagement. The strongest solutions improve the interaction between the two.
That is an important lesson for sponsors. Too many digital tools are still evaluated in silos. A participant-facing tool should not be assessed only through a patient-engagement lens. It should also be assessed through site workload, oversight requirements, data quality implications, and escalation readiness. The same is true in reverse. Site tools that support smoother participant coordination often have more strategic value than is initially apparent.
Participant-centric design, at its best, is not about choosing one stakeholder over another. It is about reducing friction across the system.
The first leadership implication from May is that digital strategy should now be centered on workflow design rather than tool accumulation. Sponsors should be asking where friction actually sits, which tasks create the most avoidable burden, and which technology decisions simplify the end-to-end process rather than just improving one isolated step.
That is a different discipline from buying promising platforms. It requires cross-functional design, operational humility, and a willingness to retire or avoid tools that create more fragmentation than value.
The second implication is that adoption should be governed through use-case clarity. Not every AI-enabled feature deserves broad rollout. Not every participant technology belongs in every protocol. Not every portal should survive procurement simply because it offers a niche capability.
Quality leaders should push for decisions that are grounded in risk, usability, and accountability. Where is the tool used? Who reviews the output? What burden does it remove? What burden does it create? How is performance monitored? What happens when it fails? These are now strategic questions, not implementation details.
The third implication is cultural. Sponsors should stop treating digital maturity as a measure of how many platforms, automations, or participant-facing features they can deploy. In clinical research, maturity is better measured by how much friction the organization has removed without losing control.
A sponsor with fewer systems, cleaner integration, stronger oversight, and better site and participant usability is often more advanced than a sponsor running a larger but less coherent digital ecosystem.
May 2025 made that point difficult to ignore. AI is advancing, but adoption remains uneven because workflow and governance still determine what is practical. Sites are asking for fewer portals because connectivity now matters more than feature count. Participant-centric technology is creating value when it removes real-world burden, not when it adds surface-level innovation.
For sponsor companies, the conclusion is clear. The next phase of digital transformation in clinical research will not be defined by who adopts the most technology. It will be defined by who applies technology with the most discipline. The organizations that win will be the ones that simplify operations, protect quality, and make trial participation easier for the people actually doing the work.
Talk to 360 CRC today about how 360 Clinical Research Consultancy can help your organisation achieve and maintain regulatory compliance.