
Few today remember PayPal’s early competitors. And for good reason: PayPal was a pioneer of automation strategy, with innovations that enabled it to scale and eventually dominate its market. But that future wasn’t always a given.
Shortly after surviving the dot‑com bubble burst, PayPal and its competitors were bleeding tens of millions of dollars each month to credit card fraud. With thousands of transactions flowing through these platforms every minute, the problem was far too vast for any human team to solve in real time. Engineers at these companies built systems to automatically identify and reject fraudulent activity, but criminals were adapting faster than their models could.
Then PayPal found the winning strategy: instead of trying to cut humans out of the loop, it modified its algorithms to surface suspicious transactions to expert analysts, providing key data points to support more flexible, nuanced judgment calls. This approach, using automation to augment rather than replace humans, is what kept PayPal in business.
This story predates the advent of large language models (LLMs) and generative AI, but the tension at its heart remains relevant today, especially in healthcare.
Robots vs. Mech-Suits
In the race to innovate, many healthcare decision-makers are tempted by the promise of blanket automation, seeking to replace expensive or scarce human labor entirely with machines. This might be called the “robot” approach, and to be sure, it has its place: it shines in many repetitive, high-volume tasks that have long strained healthcare operations, or where qualified people are in short supply. But applied bluntly, it becomes the proverbial hammer swung at many things only superficially nail-like, failing acutely where fuzzy human judgment, emotional connection, and physical presence are valued.
In such scenarios, another approach to automation may prevail: the “mech-suit.” In this framing, AI tools extend a clinician’s or coordinator’s reach by placing rich context at their fingertips, supporting decision-making with key insights, and taking over menial tasks so that clinicians can focus on work at the “top of their licensure.”
In time, AI will be everywhere, pervading every facet of healthcare. But knowing where and how to deploy it takes wisdom: when to send a robot and when to gear up in a mech-suit. Ultimately, it comes down to using AI to do what humans can’t, so that humans can do what AI can’t.
AI and the Spectrum of “Automated Intelligence”
To be sure, today’s AI models are more accessible, capable, and integrated than the technologies of any previous automation wave. But, lest one be swept away by the ebb and flow of hype cycles, the success of these models shouldn’t be measured by their benchmarks or sophistication but by the value they deliver.
Even as they make Clarke’s Third Law manifest, that “any sufficiently advanced technology is indistinguishable from magic,” these AI models must be recognized for what they are: tools. And like any tools, they have their strengths and weaknesses. While it’s trendy to compare generative AI’s rise to the automobile’s relegation of horse travel to the pages of history, in many healthcare contexts a better analogy might be the microwave’s relationship to the oven. Yes, microwaves have, through their ease, cost, and speed, changed the way we cook, and even spawned entirely new industries (think frozen meals). But when you need muffins for your child’s bake sale or a juicy turkey roast on Thanksgiving Day, ovens still prove their worth. Likewise, while LLMs represent an inflection point for technological progress, they will always perform poorly relative to simulation-based approaches in large-scale optimization problems, and well-tuned machine learning (ML) models still outclass LLMs in narrow, focused tasks. Together, these techniques make up a broader spectrum of “automated intelligence.”
Rather than applying AI tools indiscriminately, healthcare leaders can frame the opportunities for such “automated intelligence” around three broad classes: unlocking the impossible, automating the prohibitively expensive, and accelerating evolution.
At its most transformative, automated intelligence can enable things that no human or legacy software can accomplish alone. This is unlocking the impossible—it’s real-time copilots that surface relevant patient chart data in response to ambient audio, or a routing engine for in-home nurses that explores millions of permutations to replace time spent behind a windshield with time in front of patients. In such cases, technology can expand the realm of what is possible.
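To make the routing example concrete, here is a minimal sketch of the underlying search problem. Everything in it is hypothetical: the coordinates, the straight-line distance model, and the brute-force search. A production engine would use real drive times, appointment windows, and clinical constraints with a heuristic solver, but the combinatorics are the same.

```python
import itertools
import math

# Hypothetical coordinates (in projected miles) for a nurse's start point and
# the day's patient visits; real systems would use a drive-time matrix instead.
HOME = (0.0, 0.0)
VISITS = {
    "patient_a": (2.0, 3.0),
    "patient_b": (5.0, 1.0),
    "patient_c": (1.0, 7.0),
    "patient_d": (6.0, 6.0),
    "patient_e": (3.0, 5.0),
}

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def route_cost(order):
    """Total travel for home -> each visit in order -> home."""
    stops = [HOME] + [VISITS[name] for name in order] + [HOME]
    return sum(distance(a, b) for a, b in zip(stops, stops[1:]))

# Five visits means only 120 orderings, so scoring every permutation is trivial
# here. At 10 visits per nurse that becomes 3.6 million orderings, which is the
# scale of search no human scheduler can scan.
best = min(itertools.permutations(VISITS), key=route_cost)
print(best, round(route_cost(best), 2))
```

Even this toy version shows why the problem is out of human reach at scale: the search space grows factorially with each added visit, while every reclaimed mile compounds across an entire fleet of clinicians.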
AI can also excel at automating tasks that are possible for humans to do, but prohibitively expensive or labor-intensive at scale. The most popular example would be ambient scribes, which trim documentation time for doctors, returning those precious minutes to face-to-face patient interactions while ultimately keeping humans in the loop. For a more “robot”-like use case, we can look to high-volume data ingestion, cleaning, and quality control: work that would otherwise consume resources at an enormous rate, and for which the human touch adds less value.
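As a sketch of what that “robot”-like pass might look like, the snippet below uses pandas to deduplicate records, coerce bad values, and route rows that fail simple checks to a human review queue. The schema and rules are invented for illustration.

```python
import pandas as pd

# Hypothetical raw intake records; the columns and checks are illustrative only.
raw = pd.DataFrame({
    "member_id": ["A1", "A1", "B2", "C3"],
    "dob": ["1980-02-30", "1980-02-28", "1975-06-01", "not provided"],
    "zip": ["94040", "94040", "1234", "30301"],
})

cleaned = (
    raw.drop_duplicates(subset="member_id", keep="last")  # collapse repeated records
       .assign(dob=lambda df: pd.to_datetime(df["dob"], errors="coerce"))  # bad dates -> NaT
)

# Rows that fail basic checks go to a human review queue instead of being
# silently dropped: the machine does the bulk pass, people handle edge cases.
needs_review = cleaned[cleaned["dob"].isna() | ~cleaned["zip"].str.fullmatch(r"\d{5}")]
print(needs_review)
```

The pattern scales because the machine’s share of the work grows with volume, while the human queue stays proportional to the genuinely ambiguous cases.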
Finally, generative AI in particular can accelerate the evolution of care and operations by compressing the cycle from idea to execution. Rather than spending a year building specialist expertise, gathering test data, and tuning ML models for some automated task, teams can in many cases reach 80+% of that accuracy simply by instructing LLMs on what to look for, allowing them to test hypotheses, evaluate results, and iterate quickly. In this way, AI models aren’t being used simply to streamline workflows beyond what was already possible with ML, but to enable exploration and innovation far faster than would otherwise be possible.
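As an illustration, here is roughly what such a quick hypothesis test might look like, assuming the OpenAI Python SDK. The task, model name, and prompt are hypothetical, and a real team would score output like this against a labeled sample before trusting it.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def flags_fall_risk(note: str) -> bool:
    """Zero-shot check for fall risk in a clinical note; no training data needed."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice, not a recommendation
        messages=[
            {"role": "system",
             "content": "Answer YES or NO: does this clinical note describe a fall risk?"},
            {"role": "user", "content": note},
        ],
    )
    return response.choices[0].message.content.strip().upper().startswith("YES")

print(flags_fall_risk("Patient reports dizziness on standing; uses a walker at home."))
```

The point is not that this snippet is production-ready; it is that a testable prototype now takes an afternoon instead of a quarter, which changes which hypotheses teams are willing to explore at all.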
While AI is evolving at a rapid clip, the fundamentals of good strategy haven’t changed, and they probably never will. In an age of commoditized intelligence, defensibility still depends on the same moats it always has: proprietary data, operational infrastructure, trusted relationships, and business models that are hard to replicate.
AI can help strengthen those moats, but it certainly can’t dig them.
For healthcare leaders, the best AI strategy will start by identifying where people create unique value and where they reach their limits, then choose the best tool for the job, whether a robot or a mech-suit.
As PayPal found, success often depends less on technology itself and more on how wisely it is used.
About Cameron Behar
Cameron Behar is the Co-founder and Chief Technology Officer of Sprinter Health, a company that combines the best of AI and a W-2 clinical workforce to serve patients across the country, even in rural areas.