Global Security at the UN: Space, Military AI & Cyber Challenges
Hello friends,
I’ve just wrapped Session 3 of the Disarmament meetings (Friday), and I’m still buzzing. It was a 165-minute sprint through outer space security, military AI, Northeast Asia’s rapidly shifting dynamics, and ICT/cyber, guided by some of the sharpest minds in the UN system. As the person helping to wrangle the flood of Q&A and chat, I had a front-row seat to what people really want to know, and what actually matters next.
I’ve just finished tidying up my notes from the meeting and turned them into this blog post: everything that isn’t confidential and that I think is worth people’s attention. I wish I could share some of the slides, but I was told I can’t do that (buy me a coffee). Here’s what stood out, why it matters, and where I’m putting my energy in the weeks ahead.
Outer space security is no longer abstract: it’s infrastructure
Michael Spies (UNODA Geneva) delivered a masterclass on the space domain that cut through buzzwords and got specific: LEO, MEO, GEO, and why each orbit matters; the three components of space systems (space segment, ground segment, data links); and the four threat vectors (Earth-to-space, space-to-space, space-to-Earth, and the too-often-overlooked Earth-to-Earth cyber vector).
His debris slide is going to haunt me. A single anti-satellite (ASAT) test can spike the debris population and create cascading, years-long risk for every satellite that keeps your phone navigating, your hospital scheduling, and your country’s early-warning systems humming. The takeaway: space security is supply-chain security for modern life.
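To see why one test can cast such a long shadow, here is a deliberately crude back-of-the-envelope sketch (my own illustration, not anything from the slides, and nothing like a real orbital-debris model): if each fragment spawns more debris through collisions faster than atmospheric drag removes it, the population compounds year after year.

```python
# Toy model of post-ASAT debris growth. Purely illustrative: the parameter
# names and values below are made up for this sketch, not real orbital data.

def debris_projection(initial_fragments: int,
                      collision_rate: float,
                      decay_rate: float,
                      years: int) -> list[int]:
    """Year-by-year fragment count under a simple compounding model."""
    counts = [initial_fragments]
    for _ in range(years):
        current = counts[-1]
        new = current * collision_rate   # cascade: debris begets debris
        removed = current * decay_rate   # drag slowly cleans low orbits
        counts.append(round(current + new - removed))
    return counts

# If collisions outpace decay even slightly, the population keeps climbing
# for every year of the projection.
print(debris_projection(1500, 0.05, 0.02, 10))
```

The point of the sketch is the asymmetry: the test takes minutes, but the elevated risk persists for as long as the compounding runs, which is exactly why the debris conversation is about years, not events.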
Policy-wise, the center of gravity is finally consolidating. The open-ended working group (OEWG) on the prevention of an arms race in outer space (PAROS) runs through 2028 and is intentionally looking at everything: capabilities and behaviours, binding and non-binding measures. Meanwhile, the Conference on Disarmament still exists (and still struggles), but the energy is with broad-based, inclusive formats. One near-term bright spot: the voluntary national commitments not to conduct destructive direct-ascent ASAT tests, exactly the kind of norm that can and should become binding.
My takeaway: verification and transparency are the currency of enforceability in space. You can’t “police” orbit like a street corner, but you can make cheating hard, costly, and reputationally devastating. That’s the game.
AI in the military domain: the document I can’t stop thinking about
René Holbach walked us through the UN Secretary-General’s new report on AI in the military domain (A/80/78), and it’s hard to overstate how significant this document is. It captures, with clarity and urgency, the opportunities, the risks, and the legal spine that must guide everything from battlefield decision-support to lethal autonomy.
And yes, seeing my written contributions reflected in the report was surreal. I’ve been working for years to make sure the lived concerns from training rooms, policy labs, and dialogue sessions make it into the official record: compressed decision timelines, with the OODA loop sped up to machine tempo; the risks of automation bias; and the non-negotiable need for human responsibility throughout the AI lifecycle, from design to deployment to decommissioning.
Three things I keep repeating:
- International law applies. Full stop. The Charter, IHL, and IHRL, across the AI lifecycle.
- Decisions on nuclear weapons must remain human. That floor is non-derogable.
- We need an inclusive UN process dedicated to military AI governance, and we need it now.
This isn’t abstract. States are already integrating AI into ISR, logistics, cyber defence, and targeting workflows. The question isn’t whether; it’s how. Done right, AI can improve discrimination and reduce civilian harm. Done wrong, it lowers the threshold for force, supercharges miscalculation, and erodes accountability. The SG’s call for a legally binding instrument on lethal autonomous weapons systems (LAWS) by 2026, prohibiting systems that can’t comply with IHL and regulating everything else, is the smartest, most workable path on the table.
Northeast Asia: a region that exports risk, and doctrine
Dr. Soyoung Kim’s regional spotlight was the reality check we needed. A few notes that stuck with me:
- Russia–North Korea ties aren’t symbolic. They’re transactional and tactical: munitions for tech transfer and battlefield learning. That experience compounds over time.
- Japan’s shift is real. Higher defence spending, counter-strike capabilities, a deliberate rebuild of its defence industrial base: this is a strategic redefinition, not a blip. If you’ve ever read the Japanese constitution, you’ll understand why this shift is genuinely concerning. Do you know what I’m referring to?
- Korean defence exports are reshaping supply chains in Eastern Europe and Southeast Asia. That changes who depends on whom, fast.
- The region lacks robust security institutions. There is no NATO analogue, and ASEAN-style habits of dialogue don’t extend north. In a high-velocity, tech-saturated environment, the absence of persistent, institutionalised risk-reduction channels is itself a vulnerability.
The strategic culture is tilting toward transactionalism (short-term relative gains over long-term collective security), and the normative guardrails that once held are eroding. In a world of hypersonics, persistent ISR, AI-enabled C2, and proliferating cyber capabilities, that’s the wrong direction.
ICT and cybersecurity: the most mature multilateral track (and it’s evolving again)
Katherine Prizeman laid out what’s working in the cyber track and what has to come next. The UN’s work on ICT security goes back to 1998, and it shows: norms of responsible state behaviour, confidence-building measures, and capacity building have all become deliverables. The latest OEWG wrapped in July with practical steps like a global points-of-contact directory for handling incidents. Next year, a permanent mechanism kicks in. That’s what institutional learning looks like.
The threat picture is as real as it gets: ransomware, wiperware, BGP hijacks, DDoS against critical services, often by state-affiliated actors. The dividing line between “criminal” and “state-linked” keeps blurring, and jurisdictions still struggle to cooperate at machine speed. And the problem will only deepen as quantum computing matures and puts today’s cryptographic protocols at risk. The answer won’t be one megatreaty; it’s layered governance, scalable norms, and relentless capacity building, especially for developing countries that didn’t design today’s protocols but live with their consequences.
What I heard in the questions and what I’m doing next
Because I spend these sessions in the flow of chat and Q&A, patterns jump out:
- People want enforceability, not poetry. The appetite is for norms you can test, commitments you can verify, and consequences that bite.
- The private sector is now a strategic actor in space and cyber. That demands clearer state obligations to supervise national activities (outer space law 101), but also new models for due diligence, transparency, and liability, especially when commercial systems support military operations.
- “Precision” isn’t a free pass. Yes, accuracy can support IHL compliance. But precision can also make force tempting, normalise persistent, low-visibility pressure, and widen asymmetries. Tools don’t solve doctrine; humans do.
- Capacity building is not charity. It’s self-defence. Read that again. Interdependence means the weakest link sets your exposure: on cyber, on space traffic management, on AI safety culture. Closing digital divides is national security.
On my plate after Session 3
- I’m coordinating written responses to the many sharp questions we couldn’t answer live, especially on debris mitigation, private-actor accountability in space, LAWS accountability chains, and cyber incident cooperation.
- I’m compiling a resource pack linking to the SG reports on AI in the military domain and science/tech developments, UNODA materials on cyber and space, and clear explainers on the OEWGs, so meeting participants can go deeper without getting lost.
If you’ve read this far, here’s my honest ask
Share this with one person who cares about the intersection of technology and peace. Not because “awareness matters,” but because the window for shaping workable rules is open now and the UN is actually moving. The Secretary-General’s AI report isn’t a think piece; it’s a staging document for international AI governance. The outer space OEWG is finally holistic. The cyber track is institutionalising. These are levers we can push, together.
And yes, it’s a little wild to see my fingerprints in a UN General Assembly report. It’s also a reminder: the UN is porous to good ideas, especially when they’re grounded in reality and delivered with persistence. That’s the work I’m here for.
Until next session,
Avi
P.S. If you’ve got a burning question on space debris enforcement, AI accountability, or cyber incident response that didn’t get answered, send it my way. I’ll route it to the right expert and make sure it lands in the post-session write-up.
Avi is a researcher educated at the University of Cambridge, specialising in the intersection of AI Ethics and International Law. Recognised by the United Nations for his work on autonomous systems, he translates technical complexity into actionable global policy. His research provides a strategic bridge between machine learning architecture and international governance.