Learning to Fail
In tech and in healthcare, people are taught to fear the wrong kind of failure.
Traditional education treats failure like a final judgment. You get a grade, it goes in a file, and that label follows you. There is no iteration, no chance to test revisions, no second submission. That mindset trains people to protect their ego instead of improving their work. It also directly conflicts with how innovation in robotics, clinical safety, and engineering actually happens.
In real product teams, including healthcare robotics, failure is not a verdict. It is data.
I teach my team to separate Failure with a capital F from failure with a lowercase f.
Capital F Failure is the kind that takes you offline. The contract that collapses after six months of work. A critical safety issue in a clinical workflow. A funding decision that changes the roadmap. Those are the events you plan carefully to avoid.
Lowercase f failure is different. A prototype does not behave the way we expected in a hospital corridor. A workflow test with clinicians reveals friction we did not anticipate. A pricing story does not land in a budget meeting. None of those are disasters. They are fast feedback loops. Iteration cycles like this are the reason teams learn and systems get safer. This is consistent with what we see in high-performing teams in healthcare and engineering: teams that treat early-stage failure as shared learning perform better and catch problems sooner, especially when there is psychological safety to admit uncertainty [1].
At Rovex Robotics, we design for a regulated, high-stakes environment. That means we do not get to experiment recklessly. What we do instead is design to fail small, fail early, and fail safely.
That mindset shows up in a few concrete habits. We treat every early demo as an experiment, not a pitch. The goal is not to impress. The goal is to find which part of the story actually matters to the clinician or the administrator: safety, reliability, efficiency, cost savings. Each conversation is a data point, not a win-or-lose moment.
At UF, innovating at the edge of our current knowledge in HCI and VR means most of the ideas we generate won't pan out. So we hypothesize in order to fail early, and fail in order to learn.
In research, you do not try to prove yourself right. You try to find out quickly if you are wrong, because being wrong fast is cheaper than being wrong late. (Has anyone tried endlessly resubmitting marginal study results in the hope that some set of reviewers will finally judge the work sufficient for publication? How did that work out?)
We budget for intelligent failure. In my VERG lab, I give students a small "failure budget" to test risky ideas ($500 from overhead goes to Prolific or Amazon Mechanical Turk). The expectation is not "make this work"; it is "learn something we did not know before by testing your idea with a small (n=10) set of users." At first, students are horrified if the initial data looks unpromising: "Uh oh, I've wasted money!" By the third cycle, they stop being afraid to be wrong. They start being afraid not to learn. The same applies in robotics. You cannot just reward outcomes. You have to reward signal discovery.
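For the curious, here is a rough sketch of the failure-budget math. The per-participant payout and platform fee are assumptions for illustration; real Prolific and Mechanical Turk rates vary with study length and screening criteria:

```python
# Back-of-envelope failure-budget math (illustrative assumptions only).
BUDGET_USD = 500.00          # the lab's per-student "failure budget"
N_PER_PILOT = 10             # participants per risky-idea pilot
PAY_PER_PARTICIPANT = 3.00   # assumed payout for a short study (varies)
PLATFORM_FEE = 0.33          # assumed platform service fee (~33%, varies)

cost_per_pilot = N_PER_PILOT * PAY_PER_PARTICIPANT * (1 + PLATFORM_FEE)
pilots_funded = int(BUDGET_USD // cost_per_pilot)
print(f"~${cost_per_pilot:.2f} per pilot -> about {pilots_funded} risky ideas tested")
```

Under those assumptions, one small budget buys roughly a dozen chances to be wrong cheaply. That is the whole mechanism.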
Here is the uncomfortable truth: most first ideas are not good. When I think back on my last ten ideas, maybe two are worth pursuing. That ratio is normal. The problem is not bad ideas. The problem is when a team burns six months protecting a bad idea instead of letting it die in week two.
This is why culture matters more than slogans.
If an engineer or student is afraid to surface a weak result, that weak result goes silent. Silence is dangerous in healthcare robotics, because silence can hide edge cases, misuse, or safety blind spots. Work on surgical robotics and simulation training has shown that structured, feedback-rich environments improve technical performance, reduce error rates, and enhance operator safety over time, because errors are treated as training signals rather than personal flaws [2]. You cannot get that effect in a blame culture.
So here is the real leadership question for any CTO: in your org, does failure kill momentum, or create it?
At Rovex, we deliberately build rituals that normalize lowercase f failure:
- Retrospectives that celebrate "What surprised us, and what broke in a useful way?"
- Demo reviews that include what did not work and why, not just polished clips.
- Honest investor and partner conversations. Experienced stakeholders do not lose trust when you say, "This test did not work." They lose trust when you pretend everything is flawless, because everyone in healthcare knows nothing is flawless.
Psychological safety is not about being nice. It is about telling the truth early enough to act on it [1].
I believe we also need to rethink how we train new engineers and new researchers to think about failure. Look at software. Your phone updates constantly. That is the same assignment being resubmitted over and over, each version informed by real-world feedback. Imagine education built on that model. Imagine clinical training built on that model. Imagine robotics validation built on that model. You do not "ship once and hope." You iterate, observe, correct, repeat.
This is not softness. This is discipline.
Closing thought for the team, the partners, and the industry: If you respect time, truth, and safety, then you have to respect failure. Lowercase f failure is not carelessness. It is curiosity. It is how you get safer systems, smarter robots, and more trusted technology in real hospitals.
Because in healthcare, the cost of pretending you are right is always higher than the cost of quickly learning you were wrong.
References
1. Edmondson AC. Psychological safety and learning behavior in work teams. Adm Sci Q. 1999;44(2):350-83.
2. Gallagher AG, Satava RM. Virtual reality as a surgical training tool: a review of current status and future directions. Br J Surg. 2002;89(7):857-68.
#HealthcareRobotics #ResearchExploration #FailFastLearnFaster