By Col. Frederick “Trey” Coleman (USAF, ret.), Chief Product Officer, Raft | March 4, 2026
At 7:02 a.m. on December 7, 1941, two radar operators at Hawaii’s Opana Point detected a massive formation of incoming aircraft: the first wave of the Japanese attack on Pearl Harbor.
They reported the contact up the chain, but it was dismissed. Defensive assets were not scrambled, and history took its course.
Radar was new technology in 1941. In many ways, it was the “artificial intelligence” of its day: experimental, powerful, promising, but not yet fully trusted. It’s impossible to know how much might have changed had the radar reports been taken seriously. But we do know this: the technology worked. The failure wasn’t in the system. It was in the trust.
Today, we find ourselves at a similar “Opana Point” moment. Artificial intelligence is no longer a science-fiction concept or a laboratory curiosity; it’s the definitive future of warfighting. The battlespace, whether in the air, at sea, in space, or in cyberspace, is saturated with data.
The side that can sense, make sense, and act on that data with the greatest velocity and clarity will be the side that wins. If we fail to integrate AI into our decision-making cycles, we are not simply falling behind; we are accepting defeat before the first shot is fired.
The Pentagon understands these stakes and is racing toward adoption. That urgency is real and necessary, but there remains a critical gap between technological capability and operational trust. That gap is dangerous.
We cannot wait for a flawless, “perfect” version of AI to emerge from a pristine laboratory environment. War does not wait for perfect, and combat rarely presents ideal conditions. Instead, we must put experimental tools into the hands of operators now. We must stress them in real-world environments. We must learn from their failures as well as their successes.
In many ways, we are building the plane while it is flying. That’s nothing new to the American military. We have always adapted under pressure, and trust is not built in a vacuum. It’s built through use, through repetition, refinement, and accountability. Operators need to see AI tools improve their situational awareness. Commanders need to witness how AI shortens decision timelines without sacrificing judgment. Analysts need to experience how these systems enhance, not replace, their expertise.
At the same time, adoption cannot be purely bottom-up. Leadership must set the expectation that AI is not optional. It cannot remain an “experimental add-on” or a boutique capability reserved for specialized units. It must become embedded in the commander’s intent, planning processes, and operational doctrine. The message should be clear: integrating AI into our force is not a pilot program. It is a strategic imperative.
As we accelerate this integration, however, we must avoid a second mistake – one that is less visible but equally consequential.
AI is too powerful and too malleable to be confined within a single proprietary ecosystem. The temptation to lock ourselves into one vendor’s “black box” solution is understandable. It promises simplicity. It promises speed. But this tactical expediency risks strategic failure.
The U.S. military requires open, data-agnostic architectures that allow us to pivot as technology evolves. Breakthroughs will not come from a single company or consortium. They will emerge across a dynamic, competitive ecosystem. If we shackle ourselves to restrictive licensing or incompatible systems, we surrender flexibility – the very quality that has defined American military success.
Innovation thrives on interoperability and competition. It withers under monopolies.
The stakes of getting this right are not theoretical. They are geopolitical.
China has been explicit about its ambition to unify with Taiwan. A cross-strait conflict would not be a regional event. It would threaten global shipping lanes, disrupt the world economy, and place control of critical semiconductor production in Beijing’s hands. Such a shift would alter the balance of global power for decades.
Deterrence in the Pacific will depend not only on ships and aircraft, but on information superiority. AI will likely be the first to detect subtle “left-of-launch” indicators – logistical shifts, unusual maritime patterns, cyber probes, satellite movements. It will surface patterns that human analysts alone may miss.
And then a decision will be required.
When that alert flashes across a commander’s screen, will it be trusted? Will defensive forces posture in time? Or will we hesitate, debate, and second-guess – repeating the quiet dismissal of December 1941?
Technology rarely fails us outright. More often, we fail to adapt to it in time.
The lesson of Opana Point is not about machines. It is about mindset. It is about the courage to trust new tools when the evidence demands it. It is about building systems and cultures that move at the speed of the threat.
Artificial intelligence will not replace American warfighters, but it will define the tempo at which they operate.
The question before us is simple: when the next warning appears on the screen, will we be ready to believe it?