View of the United Nations Office in Geneva adorned with flags of various countries.

Global Security at the UN: Space, Military AI & Cyber Challenges

Hello friends,

I’ve just wrapped Session 3 of the Disarmament meetings (Friday), and I’m still buzzing. It was a 165-minute sprint through outer space security, military AI, Northeast Asia’s rapidly shifting dynamics, and ICT/cyber—guided by some of the sharpest minds in the UN system. As the person helping to wrangle the flood of Q&A and chat, I had a front-row seat to what people really want to know, and what actually matters next.

I’ve just finished tidying up my notes from the meeting and turned them into this post, keeping whatever isn’t confidential and whatever I think is worth your attention. I wish I could share some of the slides, but I was told I can’t do that (buy me a coffee). Here’s what stood out, why it matters, and where I’m putting my energy in the weeks ahead.

Outer space security is no longer abstract—it’s infrastructure

Michael Spies (UNODA Geneva) delivered a masterclass on the space domain that cut through buzzwords and got specific: LEO, MEO, GEO, and why each orbit matters; the three components of space systems (space segment, ground segment, data links); and the four threat vectors (Earth-to-space, space-to-space, space-to-Earth, and the too-often-overlooked Earth-to-Earth cyber vector).

His debris slide is going to haunt me. A single anti-satellite (ASAT) test can spike the debris population and create cascading, years-long risk for every satellite that keeps your phone navigating, your hospital scheduling, and your country’s early-warning systems humming. The takeaway: space security is supply-chain security for modern life.
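To make the cascade intuition concrete, here’s a back-of-the-envelope toy model (my own illustration, not from the slides, and every number in it is invented): a breakup injects fragments, each fragment slightly raises the odds of further collisions, and each collision spawns more fragments, so the population compounds for years even as the atmosphere slowly drags pieces down.

```python
# Toy model of post-ASAT debris growth. Illustrative only: all parameters
# are invented round numbers, not real orbital-debris statistics.

def debris_after(years, initial_fragments=1500,
                 annual_collision_rate=0.002,   # fraction of fragments that hit something per year
                 fragments_per_collision=100,   # new trackable pieces per collision
                 annual_decay=0.03):            # fraction removed by atmospheric drag per year
    """Rough yearly census of trackable fragments after one breakup event."""
    population = initial_fragments
    for _ in range(years):
        collisions = population * annual_collision_rate
        population += collisions * fragments_per_collision  # cascade growth
        population *= (1 - annual_decay)                    # drag removes some debris
    return round(population)
```

With these made-up parameters the population grows by roughly 16% a year, doubling in four to five years. The point isn’t the numbers; it’s the compounding shape, which is why a single test keeps raising collision risk long after the headline fades.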

Policy-wise, the center of gravity is finally consolidating. The open-ended working group (OEWG) on the prevention of an arms race in outer space (PAROS) runs through 2028 and is intentionally broad in scope: capabilities as well as behaviours, binding as well as non-binding measures. Meanwhile, the Conference on Disarmament still exists (and still struggles), but the energy is with broad-based, inclusive formats. One near-term bright spot: the voluntary national commitments not to conduct destructive direct-ascent ASAT tests—exactly the kind of norm that can and should become binding.

My takeaway: verification and transparency are the currency of enforceability in space. You can’t “police” orbit like a street corner, but you can make cheating hard, costly, and reputationally devastating. That’s the game.

AI in the military domain: the document I can’t stop thinking about

René Holbach walked us through the UN Secretary-General’s new report on AI in the military domain (A/80/78), and it’s hard to overstate how significant this document is. It captures, with clarity and urgency, the opportunities, risks, and the legal spine that must guide everything from battlefield decision-support to lethal autonomy.

And yes, seeing my written contributions reflected in the report was surreal. I’ve been working for years to make sure the lived concerns from training rooms, policy labs, and dialogue sessions make it into the official record: compressed decision timelines (the OODA loop sped up to machine tempo); the risks of automation bias; and the non-negotiable need for human responsibility throughout the AI lifecycle—from design to deployment to decommissioning.

Three things I keep repeating:

  1. International law applies. Full stop. The Charter, IHL, IHRL—across the AI lifecycle.
  2. Decisions on nuclear weapons must remain human. That floor is non-derogable.
  3. We need an inclusive UN process dedicated to military AI governance—and we need it now.

This isn’t abstract. States are already integrating AI into ISR, logistics, cyber defence, and targeting workflows. The question isn’t whether—it’s how. Done right, AI can improve discrimination and reduce civilian harm. Done wrong, it lowers the threshold for force, supercharges miscalculation, and erodes accountability. The SG’s call for a legally binding instrument on lethal autonomous weapons systems (LAWS) by 2026—prohibiting those that cannot comply with IHL and regulating all the rest—is the smartest, most workable path on the table.

Northeast Asia: a region that exports risk—and doctrine

Dr. Soyoung Kim’s regional spotlight was the reality check we needed. A few notes that stuck with me:

  • Russia–North Korea ties aren’t symbolic. They’re transactional and tactical: munitions for tech transfer and battlefield learning. That experience compounds over time.
  • Japan’s shift is real. Higher defence spending, counter-strike capabilities, a deliberate rebuild of its defence industrial base—this is a strategic redefinition, not a blip. If you have ever read the Japanese constitution—Article 9 in particular, which renounces war as a sovereign right—you’ll understand why this shift is so consequential.
  • Korean defence exports are reshaping supply chains in Eastern Europe and Southeast Asia. That changes who depends on whom, fast.
  • The region lacks robust security institutions. No NATO analogue. ASEAN-style habits of dialogue don’t extend north. In a high-velocity, tech-saturated environment, the absence of persistent, institutionalised risk-reduction channels is itself a vulnerability.

The strategic culture is tilting toward transactionalism (short-term relative gains over long-term collective security), and the normative guardrails that once held are eroding. In a world of hypersonics, persistent ISR, AI-enabled C2, and proliferating cyber capabilities, that’s the wrong direction.

ICT and cybersecurity: the most mature multilateral track (and it’s evolving again)

Katherine Prizeman laid out what’s working in the cyber track and what has to come next. The UN’s work on ICT security goes back to 1998, and it shows: norms of responsible state behaviour, confidence-building measures, and capacity building have all become concrete deliverables. The latest OEWG wrapped in July with practical steps like a global points-of-contact directory for handling incidents. Next year, a permanent mechanism kicks in. That’s what institutional learning looks like.

The threat picture is as real as it gets: ransomware, wiperware, BGP hijacks, DDoS against critical services—often by state-affiliated actors. The dividing line between “criminal” and “state-linked” keeps blurring, and jurisdictions still struggle to cooperate at machine speed. The problem will get dramatically worse as quantum computing matures and starts to threaten today’s encryption. The answer won’t be one megatreaty; it’s layered governance, scalable norms, and relentless capacity building, especially for developing countries that didn’t design today’s protocols but live with their consequences.

What I heard in the questions and what I’m doing next

Because I spend these sessions in the flow of chat and Q&A, patterns jump out:

  • People want enforceability, not poetry. The appetite is for norms you can test, commitments you can verify, and consequences that bite.
  • The private sector is now a strategic actor in space and cyber. That demands clearer state obligations to supervise national activities (outer space law 101), but also new models for due diligence, transparency, and liability—especially when commercial systems support military operations.
  • “Precision” isn’t a free pass. Yes, accuracy can support IHL compliance. But precision can also make force tempting, normalise persistent, low-visibility pressure, and widen asymmetries. Tools don’t solve doctrine; humans do.
  • Capacity building is not charity. It’s self-defence. Read that again. Interdependence means the weakest link sets your exposure—on cyber, on space traffic management, on AI safety culture. Closing digital divides is national security.

On my plate after Session 3

  1. I’m coordinating to get written responses to the many sharp questions we couldn’t answer yet, especially on debris mitigation, private actor accountability in space, LAWS accountability chains, and cyber incident cooperation.
  2. I’m compiling a resource pack, linking to the SG reports on AI in the military domain and science/tech developments, UNODA materials on cyber and space, and clear explainers on the OEWGs—so meeting participants can go deeper without getting lost.

If you’ve read this far, here’s my honest ask

Share this with one person who cares about the intersection of technology and peace. Not because “awareness matters,” but because the window for shaping workable rules is open now and the UN is actually moving. The Secretary-General’s AI report isn’t a think piece; it’s a staging document for international AI governance. The outer space OEWG is finally holistic. The cyber track is institutionalising. These are levers we can push—together.

And yes, it’s a little wild to see my fingerprints in a UN General Assembly report. It’s also a reminder: the UN is porous to good ideas, especially when they’re grounded in reality and delivered with persistence. That’s the work I’m here for.

Until next session,
Avi

P.S. If you’ve got a burning question on space debris enforcement, AI accountability, or cyber incident response that didn’t get answered, send it my way. I’ll route it to the right expert and make sure it lands in the post-session write-up.
