This is a special edition of ASPI's Daily Cyber & Tech Digest, a newsletter that focuses on the topics we work on, including cybersecurity, critical technologies, foreign interference & disinformation. Sign up here.
Welcome to another edition of The Daily Cyber & Tech Digest Monthly Roundup! Each month, an ASPI expert shares their top news picks and provides their take on a key story. This month, James Corera, ASPI Cyber, Technology & Security Program Director, shares his perspective.
Looking back on 2025, one thing stands out clearly: the lines between cyber risk, information manipulation and geostrategic pressure have blurred. What were once discrete challenges now reinforce each other. The threats facing open societies aren’t just multiplying — they’re converging, accelerating faster than our defensive architecture can adapt.
Technology is no longer a neutral enabler. It has become a primary lever of power, shaping geostrategic competition, national resilience and the integrity of public debate.
Australia’s Director-General of Security, Mike Burgess, has captured this shift clearly: our security environment is becoming “more dynamic, more diverse and more degraded.” This year showed what that means in practice.
Across 2025, governments and industry confronted a sharp rise in state-backed cyber operations, ransomware campaigns against critical infrastructure, unprecedented exploitation of zero-day vulnerabilities, and increasingly synchronised information operations across the Indo-Pacific. Disinformation campaigns blended generative AI with behavioural targeting in ways that are harder to detect, attribute and counter. In one notable case, a Chinese state-backed group misused commercial AI tools to help automate cyber operations targeting government agencies and technology firms.
Offensive capabilities are proliferating, barriers to entry are collapsing, and grey-zone tactics are exploiting the seams between digital, physical and cognitive domains. The weaponisation of commercial AI was not an anomaly — it was a warning that trust in core systems, technical and institutional alike, is eroding.
Against this backdrop, ASPI’s Cyber, Technology and Security (CTS) program upgraded two of its core public-interest data platforms in 2025: the Critical Technology Tracker and the China Defence Universities Tracker. Together they form one of the clearest lenses into where trust is being built, where it is being undermined and where strategic dependencies are sharpening.
The Critical Technology Tracker now covers 74 technologies, including 10 newly added fields across advanced computing, communications, AI and emerging neurotechnologies. China leads in high-impact research output in eight of the ten new areas. Four — cloud and edge computing, computer vision, generative AI and grid-integration technologies — are rated high for Technology Monopoly Risk. These are not abstract academic signals; they are early indicators of where future trust gaps will emerge unless mitigated.
The expanded China Defence Universities Tracker, now covering more than 180 civilian and military institutions, deepens that visibility. New ASPI research shows rising collaboration between leading Chinese universities and Russian institutes in dual-use fields, raising the prospect that international partnerships with China could indirectly support Russia’s war in Ukraine. Understanding these linkages matters precisely because trust—and the lack of it—now shapes strategic choices as much as capability does.
This logic underpins ASPI’s In Whose Tech We Trust?, released in November 2025. The report documents a shift by governments away from vendor-by-vendor risk assessments towards system-level evaluations of ownership, control and influence across cloud services, telecommunications, subsea cables, IoT, AI and operational-technology environments.
The policy question is no longer simply who to avoid. It is how to enable trusted participation over time, and how to operationalise trust through enforceable frameworks that reflect how modern systems actually function.
This also connects directly to CTS’s work in 2025 on cyber-regulatory fragmentation, which reinforced a core insight: rising complexity cannot be managed with narrow or simplistic rules. Trust must be built across entire technology stacks, not assumed at isolated layers. That means grappling with systemic exposure, capital flows, vendor lineage, and the cumulative effects of design choices that shape control, access and influence.
Meeting this reality requires a shift from a threat-based mindset to a risk-based one.
Threat thinking catalogues hostile actors, capabilities and intentions. Risk thinking asks a harder question: how do we respond, adapt and endure? Threat is about exposure; risk is about capacity — our ability to act with clarity, reduce the likelihood of harm, and limit consequences when harm occurs.
These themes were front and centre at The Sydney Dialogue (TSD) earlier this month. Leaders from government, industry and research across nearly 20 countries converged on a shared conclusion: trust and risk navigation are now the organising principles for technology, security and prosperity.
Yet the information environment that enables us to wrestle with this complexity is splintering. We increasingly rely on AI assistants for basic knowledge, including news. Error rates, driven in part by AI slop and hallucinations, remain high, yet users increasingly treat the outputs as authoritative without validation.
Every response about the Chinese Communist Party, AUKUS or Australian defence spending becomes a potential vector for persuasion, polarisation or disinformation. Burgess has already warned that AI could take online radicalisation and disinformation to new levels, citing recently uncovered links between pro-Russian influencers in Australia and an offshore media organisation almost certainly directed by Russian intelligence.
We have more information than ever, but diminishing clarity about what to trust. That erosion of confidence is itself a strategic risk.
This is why 2026 will be pivotal.
CTS’s focus next year is to help rebuild the capacity to trust — not rhetorically, but operationally. That means moving decades of validated datasets and analytic IP from static reference material to actionable insight: integrated environments that link infrastructure data, vendor lineage, supply-chain exposure, geostrategic signals and emerging risk across entire technology stacks.
In a world where threats are ambient and trust is contested, resilience depends on visibility, coherence and the ability to act before uncertainty hardens into vulnerability.
The concepts CTS has been developing — around high-risk capital, systemic dependencies and stack-level assurance — show how these strands can converge to give governments and operators visibility grounded in real indicators of control, access and influence. Without that visibility, neither speed nor coherence is possible.
The shift in 2026 is about making this visibility practical and shared. CTS will broaden collaboration with leading data aggregators, explore commercial pathways to scale its analytic IP, and co-create tools that can be used across government, industry and regional partners.
Trust is not built by any one institution. It is created through aligned frameworks, shared evidence and interoperable approaches to risk.
The path forward is partnership: building trusted ecosystems with allies and like-minded partners, embedding trust frameworks across regions, and closing critical insight gaps.
The aim is not just sharper research, but deployable capability — continuously updated models of systemic risk, early-warning signals that cut across sectors, and assurance frameworks that integrate directly into procurement, regulation and planning.
As Australia’s outgoing Director-General of National Intelligence Andrew Shearer observed in conversation with ASPI Executive Director Justin Bassi at TSD:
“For Australia the key is to bring a clear realism to the challenges we face and a clear realism to the strengths we have as a country. Finding that mature, realistic position that reflects our strengths is how we exercise agency — and ultimately how we uphold our sovereignty.”
In that shift, clarity becomes an asset. Insight becomes resilience. And in 2026, both will be essential to building an Indo-Pacific that is open, stable — and trusted.
ASPI
A foundational analysis of how governments could reframe trust from a vendor-level question to a system-level requirement. It explores ownership, control, influence and the governance of critical technology ecosystems across cloud, telecoms, subsea cables and operational technology. It grounds the roundup’s central argument that trust is now a strategic capability and that enforceable trust frameworks—not ad hoc decisions—are needed for 2026.
Carnegie Endowment
A clear-eyed assessment of how U.S. alliances contribute to—and complicate—strategic competition with China. The report details each ally’s capacity, political will and entanglement risk, and highlights how trust, predictability and shared risk perception determine whether alliances strengthen or weaken collective strategy. It reinforces the core themes that trust is a strategic capability, that partnerships require realism and continuous adaptation, and that allied alignment—especially with Australia—is essential to navigating systemic risk in the Indo-Pacific.
Department of Industry, Science and Resources
A newly released roadmap that sets out how Australia intends to build an “AI-enabled economy” that is inclusive, competitive and trusted. The Plan emphasises three core goals: capturing the economic opportunities of AI, spreading its benefits broadly (across regions, communities and industries), and keeping Australians safe as the technology evolves. It reinforces that Australia is wrestling constructively with the challenge of balancing these potentially competing imperatives. Whether the Plan delivers is not simply a task for government; it is a shared endeavour for government, industry and society.
For more on China's pressure campaign against Taiwan, including military threats, interference and cyber warfare, check out ASPI’s State of the Strait Weekly Digest.