Pentagon–Anthropic clash exposes unresolved rules for military AI use
Plus, SpaceX cuts Russian Starlink access, disrupting frontline drone operations in Ukraine.
Welcome to the latest edition of ASPI’s Cyber & Tech Digest.
Each week, ASPI curates and contextualises the most important developments in cyber, technology, and geopolitics — highlighting what matters and why.
This edition covers the period: 14 February 2026 to 20 February 2026.
Follow the Australian Strategic Policy Institute on Bluesky, LinkedIn, and X.
A quick note to readers:
We’re working to make the Digest as comprehensive and readable as possible, though this edition is on the longer side.
If you’d prefer shorter updates more frequently (for example, two or three editions across the week), we’d like to hear that. As outlined in our recent format update, we’ll soon introduce a Substack pledge option so readers can signal whether they’d support a more frequent release schedule. If there’s sufficient interest, we would look at moving to up to three editions per week, delivering key developments faster and in shorter bursts.
We also want to be transparent about our workflow: we use AI tools to assist with research and drafting, but every edition is reviewed, edited and curated by our team. When the Digest includes our own analysis or commentary, it is written by us.
Feedback is always welcome. You can reply directly to this email, leave a comment on the post, or contact us at aspicts@substack.com
— The ASPI Cyber, Technology & Security Program
What We’re Tracking
Pentagon threatens to cut Anthropic contract over AI guardrails
What happened: The Pentagon is considering severing or scaling back its relationship with Anthropic after months of tense negotiations over how the military can use its Claude model, according to reporting from Axios and Semafor.
The dispute intensified following a U.S. raid targeting former Venezuelan President Nicolás Maduro, in which Claude was used via Palantir Technologies’ AI platform, as first reported by the Wall Street Journal. After the operation, an Anthropic employee contacted a counterpart at Palantir, prompting concern inside the Defense Department that the company might object to certain military uses.
At issue is the Pentagon’s demand that AI labs permit use of their models for “all lawful purposes,” including weapons development and intelligence operations. Anthropic has not agreed, seeking carve-outs around mass domestic surveillance and fully autonomous weapons. Other labs, including OpenAI, Google, and xAI, are reported to have lifted ordinary guardrails for Pentagon work.
The Pentagon’s contract with Anthropic, valued at up to $200 million, is now under review.
Why we’re tracking this: Anthropic was the first frontier AI lab to place a model on classified U.S. networks. A rupture would signal that access to classified systems may hinge not just on technical capability, but on willingness to accept open-ended operational use.
The standoff also reveals how quickly general-purpose models are being embedded in military workflows, and how unresolved questions about autonomous systems are being negotiated through procurement rather than legislation.
What people are saying:
“Everything’s on the table,” a senior administration official told Axios, including replacing Anthropic if necessary.
“We will not employ AI models that won’t allow you to fight wars,” Defense Secretary Pete Hegseth said in remarks reported by Semafor.
Anthropic said it is “committed to using frontier AI in support of US national security,” according to statements quoted by the Wall Street Journal.
My view: Both positions here are reasonable: Anthropic maintaining limits around autonomous weapons and mass surveillance, and the Pentagon seeking operational certainty from its suppliers. But their clash exposes something more fundamental: there is no democratically settled boundary on how frontier AI models should be used in lethal contexts. In the absence of legislation, those boundaries are being drawn through procurement contracts and vendor relationships, an arrangement that is inherently unstable. Until Congress acts, these conflicts will recur, and each time, the pressure on firms to quietly concede will grow.
— Fergus Ryan, CTS
What We’re Watching
A weekly scan of notable developments we’re tracking across technology, policy, and geopolitics.
🚀 Strategic competition
China has spent more than $150 billion to build a domestic semiconductor industry, according to The New York Times, but Chinese firms still produce far fewer and less advanced chips than foreign competitors due largely to U.S.-led export controls on key equipment. Analysts estimate Chinese companies may produce only about 2% as many AI chips as foreign firms this year, with memory chip output lagging far behind global leaders. Huawei and other companies are developing workarounds such as linking weaker chips and building state-backed computing clusters, but supply, efficiency and cost constraints persist; Chinese AI companies remain heavily dependent on foreign chips and cloud services despite recent policy shifts allowing limited Nvidia sales to China.
The Trump administration has paused several proposed tech-security measures targeting China ahead of a planned summit with Xi Jinping, including moves affecting China Telecom’s U.S. operations, Chinese equipment in U.S. data centres, TP-Link routers and Chinese electric vehicles. Sources described the pause as following a trade truce and aiming to stabilise relations. Officials indicated the measures could be revived if relations deteriorate.
The Pentagon briefly posted and then withdrew an updated list of Chinese firms it alleges support China’s military, adding companies including Alibaba, Baidu, BYD and WuXi AppTec while removing memory chipmakers CXMT and YMTC. The Pentagon did not explain the withdrawal, which came amid a softer U.S. policy tone toward China following a trade truce and ahead of a potential Trump–Xi meeting. Inclusion on the list does not impose sanctions but restricts future Pentagon contracting and signals national security concerns.
Chinese AI entrepreneurs featured in this Bloomberg piece have amassed a combined $100.5 billion in wealth as Beijing’s push for technological self-reliance accelerates. Founders from firms including MiniMax, DeepSeek, Unitree Robotics and Moore Threads have benefited from government support, IPOs and buy-local mandates. Many are former U.S. tech employees and maintain low public profiles to avoid sanctions and domestic scrutiny.
ByteDance has launched Doubao 2.0, an upgraded version of China’s most-used AI chatbot, positioning it for an “agent era” focused on complex multi-step task execution. The company said the model offers advanced reasoning capabilities at significantly lower cost. Meanwhile, Alibaba launched its Qwen 3.5 AI model, claiming major performance gains and a 60% cost reduction while enabling agentic capabilities that can execute tasks across apps; Alibaba said the model outperforms several leading U.S. models on selected benchmarks. Both releases come as competition intensifies among domestic rivals, including DeepSeek.
Meanwhile, ByteDance is recruiting for nearly 100 U.S.-based roles in its AI division, Seed, as it expands research and product development across labs in the U.S., Singapore and China, according to Bloomberg. The report said roles include work on large language models, image and video generation tools, human-like AI systems and science models for drug discovery. It said the hiring push follows ByteDance’s deal to sell parts of its U.S. TikTok business and comes amid renewed scrutiny of its AI video model Seedance 2.0.
Baidu will integrate the OpenClaw AI agent into its main search app, giving users the option to automate tasks such as scheduling, coding and file organisation. The rollout targets Baidu’s 700 million monthly users and will extend to e-commerce and other services. Rivals including Alibaba have also embedded their AI tools, such as Qwen, into shopping and travel platforms. (OpenClaw is the open-source, self-hosted assistant formerly known as Clawdbot, and briefly Moltbot, built to run an AI “agent” that can execute actions via plug-in skills rather than just chat.)
India is using the India A.I. Impact Summit in New Delhi to position itself as a champion of a low-cost, localised AI model for the developing world. It is pairing subsidies for compute, tools such as Adalat AI and a newly approved $1.1 billion state-backed venture fund with a push for large-scale data-centre investment and a proposed “global AI commons” agenda covering shared datasets, standards and safety norms. Prime Minister Narendra Modi convened foreign leaders, Silicon Valley firms and Indian conglomerates with a focus on investment and commercial deals: Anthropic partnered with Infosys, while OpenAI announced a collaboration with Tata Consultancy Services and plans for its first India office. Adani, Reliance and Tata pledged major data-centre spending, including Adani’s $100 billion commitment by 2035. The summit also highlighted industry rivalries, with Sam Altman and Dario Amodei drawing attention during a symbolic photo moment and outlining differing emphases on AI safety, while Bill Gates withdrew from delivering a keynote hours before the event amid renewed scrutiny over past ties to Jeffrey Epstein, with a foundation official speaking in his place.
China is accelerating development of brain-computer interface technology as part of a national strategy to create world-class companies by 2030, with Beijing backing efforts to expand the sector. Shanghai start-up NeuroXess said a paralysed patient controlled a computer cursor days after implantation and that the company aims to move toward human trials, supported by streamlined regulation and investment. The sector has seen rising funding and multiple clinical trials as China seeks to compete with companies such as Elon Musk’s Neuralink.
Saudi Arabia’s AI company Humain has invested $3 billion in Elon Musk’s xAI during its Series E funding round, becoming a significant minority shareholder. Humain said its stake was converted into SpaceX shares after SpaceX acquired xAI. The investment follows a partnership to build 500 megawatts of AI data centre infrastructure.
🛡 Cyber posture
Palo Alto Networks reportedly removed references to China from a cyberespionage report despite internal confidence in the attribution, citing concerns about retaliation after China banned its software. The published report instead described the hackers as a state-aligned group operating out of Asia; the hacking operation targeted governments and critical infrastructure in 37 countries. In The Strategist this week, James Corera, director of ASPI’s Cyber, Technology and Security program, argued that openly attributing state-linked cyber activity is becoming a commercial advantage and urged governments to reinforce incentives for transparency through procurement and trusted supplier frameworks. ASPI executive director Justin Bassi warned that failing to publicly name China’s cyberattacks risks eroding trust, weakening deterrence and misaligning government and industry responses.
Former Sony Pictures executive Michael Lynton recounted fast-tracking approval of the 2014 film “The Interview” outside normal processes, ahead of the cyberattack that crippled Sony’s IT systems and exposed confidential emails, employee data and unreleased films. He wrote that the hack disrupted operations for months and that major theatre chains refused to screen the film after threats, prompting Sony to release it online with support from Google and Stripe. The piece notes the attack was later attributed by the FBI to North Korea.
Texas Attorney General Ken Paxton has sued TP-Link Systems, alleging the company misled consumers about security and allowed Chinese state-sponsored hackers to exploit router vulnerabilities. The lawsuit cited prior research linking TP-Link firmware flaws to Chinese hacking campaigns and followed earlier Texas lawsuits against Chinese TV manufacturers. TP-Link denied the allegations and said its U.S. operations and data storage are based in the U.S.
Australia’s Department of Parliamentary Services said a Chinese YisouSpider crawler bot caused a temporary outage of the Australian Parliament website last month by overloading it while indexing pages. DPS told Senate estimates there was no broader cyber incident linked to the disruption.
Australia’s Department of Defence has awarded Palantir a one-year $7.6 million limited-tender contract for an ICT system platform for its Cyber Warfare Division. The deal brings Defence spending on Palantir to more than $26 million since 2013, following earlier contracts including a $4.1 million software deal that ran until late last year. Emails obtained via FOI show a prior contract involved Palantir’s Foundry data analytics platform.
Meta-backed Scale AI has filed a lawsuit against the U.S. Department of Defense in a case expected to involve classified documents, after losing a major National Geospatial-Intelligence Agency contract worth up to $708 million. The company previously filed a bid protest that was dismissed before bringing the case to the Court of Federal Claims. The dispute follows tensions over procurement in key U.S. military AI programs.
SpaceX, meanwhile, has cut Russian forces’ access to Starlink terminals in Ukraine from 1 February after Kyiv requested that only devices approved by the defence ministry remain active. Ukrainian officials and soldiers said the shutdown has disrupted Russian drone operations, logistics and frontline coordination, with some units switching to radio or wired communications. A volunteer group said it identified more than 2,400 Russian-linked terminals through a phishing campaign, enabling Ukrainian forces to target some locations.
The Australian Financial Crimes Exchange’s Ben Scott argued in The Strategist this week that Australia needs faster and more systematic intelligence sharing to build a national defensive network against organised scam syndicates. Scott’s analysis cited recent raids in Palau and Timor-Leste as highlighting regional expansion and increasing sophistication, including links to cryptocurrency and online gambling. It said the Scam Prevention Framework will be critical over the next two years in strengthening collaboration across government, industry and law enforcement.
🕵️ Surveillance states
Iranian authorities are using phone location data, facial recognition and SIM registration to identify and detain protesters involved in recent antigovernment demonstrations. Human rights groups said protesters have received warning texts, had SIM cards suspended and faced banking disruptions after being tracked. Researchers reported Iran has expanded surveillance capabilities over the past decade, including spyware, centralised digital identity systems and cooperation with Russian and Chinese companies.
The U.S. Department of Homeland Security has issued hundreds of administrative subpoenas to Google, Meta, Reddit and Discord seeking identifying data behind social media accounts that criticise or track Immigration and Customs Enforcement. Some companies have complied while notifying affected users, and several subpoenas have been challenged or withdrawn in court. Civil liberties advocates said the expanded use of administrative subpoenas is being used to unmask anonymous speech.
The U.S. State Department is developing an online portal, potentially including VPN functionality, to allow users in Europe and elsewhere to access content banned by their governments. The initiative is framed as support for digital freedom but could strain relations with allies; the site has not yet launched, and internal concerns have reportedly been raised. At the same time, the European Parliament has disabled built-in AI features on staff and lawmakers’ devices over cybersecurity and data-protection risks, citing uncertainty about what data is sent to cloud services. Meanwhile, The Guardian reports that U.S. funding for the State Department–backed Internet Freedom program has been largely cut following staffing reductions and grant freezes, with many grants halted in 2025. The program had distributed more than $500 million over the past decade to support censorship-circumvention tools, and observers warn that reduced funding could weaken access to uncensored internet services as authoritarian governments expand digital controls.
⚖️ Platform accountability
ByteDance is facing escalating copyright pressure over its Seedance video generator and Seedream image model, with the Motion Picture Association, Disney and Paramount accusing the tools of producing copyrighted characters and franchises without authorisation. Studios say many outputs are indistinguishable from their intellectual property and have issued cease-and-desist letters demanding removal of infringing content and stronger protections. ByteDance says it will tighten safeguards but has not disclosed details of the measures or the training data behind the models.
European regulators are rapidly tightening scrutiny of AI chatbots and generative tools, driven largely by concerns over harmful and sexualised content. The UK plans to extend the Online Safety Act to cover chatbots, introduce strict takedown rules for nonconsensual intimate imagery and consider faster limits on children’s social media use, with penalties including major fines or service blocks. Ireland, France, the EU and the UK have launched overlapping investigations into xAI’s Grok over data protection and explicit image generation, including raids and formal probes. UK Prime Minister Keir Starmer has said tech companies must remove nonconsensual intimate images within 48 hours of being flagged or face heavy fines or being blocked in the UK, while Spain is seeking criminal investigations into X, Meta and TikTok over alleged AI-generated child sexual abuse material.
West Virginia’s attorney general has sued Apple, alleging it knowingly allowed iCloud to be used to store and share child sexual abuse material by declining to deploy available detection tools. The complaint argues Apple’s privacy practices facilitated illegal content and violated state consumer protection law, seeking damages and changes to detection and product design practices. Apple said it prioritises user safety and privacy and offers parental control features, but has previously declined to adopt tools like Microsoft’s PhotoDNA and abandoned its own NeuralHash detection system after privacy concerns.
The European Commission has opened a formal Digital Services Act investigation into Shein over the sale of illegal products and concerns about addictive design features. Regulators will assess safeguards against illegal items, including products amounting to child sexual abuse material, and examine transparency of recommendation systems and engagement reward mechanisms. Shein said it will cooperate and has invested in compliance and risk mitigation. The case comes as EU member states have delayed empowering national regulators under the Digital Services Act, contributing to slower implementation two years after full enforcement. According to Tech Policy Press, limited national-level regulatory capacity, political shifts and reduced funding for platform governance research have also affected rollout, leaving enforcement concentrated at the European Commission.
Meta is facing mounting scrutiny across regulation, litigation and product development. In Australia, Meta and TikTok executives defended their moderation systems before a Senate committee on climate misinformation, while in the United States Mark Zuckerberg testified in a social media safety trial over claims the company designed addictive products for young users. Court testimony revealed Meta’s own research found parental controls had limited impact on teens’ compulsive use, and Zuckerberg defended features such as beauty filters and engagement metrics. At the same time, internal documents show Meta is preparing to add facial recognition to its smart glasses, with employees raising privacy concerns as the company weighs the timing of the release.
France has stepped up efforts to counter misinformation it says is undermining Western alliance cohesion, write Eric Frécon, a visiting fellow at ASPI, and Fitriani, a senior analyst at ASPI, in The Strategist. The authors said Paris established VIGINUM to monitor and report on foreign digital interference and launched the “French Response” account on X to counter disinformation with humour and factual rebuttals. They also cited legal action and enforcement measures, including raids on X’s Paris offices and support for EU fines over alleged platform misconduct.
Snap co-founder and CEO Evan Spiegel has written that Australia’s new law banning under-16s from selected social media platforms has led Snapchat to lock or disable more than 415,000 Australian accounts believed to belong to minors. Spiegel argued the policy may push teenagers toward unregulated apps, relies on imperfect age-estimation technology, and could remove beneficial forms of online connection. He advocated for app store–level age verification to create a single age signal across platforms rather than platform-by-platform enforcement.
Roblox has confirmed the suspect in a school shooting in Tumbler Ridge, British Columbia, created a game simulating a mall shooting. The company removed the user’s account and related content and said it is cooperating with law enforcement. Roblox said the game could only be accessed via Roblox Studio and had received seven visits. Meanwhile, Australia’s eSafety Commissioner has issued a formal warning to Roblox over in-game chat and child safety controls, according to an explainer by Infinite Lives. The report said Communications Minister Anika Wells requested a meeting and sought advice from the Classification Board regarding the platform’s PG rating. Roblox has introduced safety measures including age verification, AI chat monitoring and proactive avatar moderation, while eSafety is investigating whether protections are adequate, with potential penalties of up to A$49.5 million.
Conservative influencers have been producing viral posts alleging fraud and corruption that researchers say are increasingly shaping political discourse and policy attention. The phenomenon has been labelled “slopulism”, describing content that prioritises emotional engagement and viral reach over evidence and traditional journalism. Researchers said the trend reflects closer alignment between online political content creation and government policy agendas.
Researchers publishing in Nature found X’s “For You” algorithm favours conservative content over liberal posts and traditional news media, based on a seven-week experiment with U.S. users. The study reported conservative posts were about 20% more likely to appear in algorithmic feeds, while traditional news appeared roughly 58% less often. Users who switched from a chronological to an algorithmic feed reported a measurable conservative shift in issue priorities and attitudes on topics including Trump investigations and the Russia–Ukraine war.
A rumour in Silicon Valley alleging a network of influential gay executives and investors exerts outsized influence over venture capital and startup culture has circulated online and at industry events for years, according to WIRED. The article said speculation intensified in 2025 through social media posts, conference gossip and viral photos involving prominent founders and Y Combinator president Garry Tan. Tan denied suggestions of impropriety, saying a viral sauna photo was mischaracterised and stemmed from a private dinner gathering.
A U.S. federal judge has ruled prosecutors can access Claude AI chat transcripts created by finance startup founder Brad Heppner, rejecting claims they were protected by attorney-client privilege. The judge found the chats were disclosed to a third party and were not confidential under the AI provider’s policies. Lawyers said the ruling increases legal risk as AI chats become part of civil and criminal discovery.
💰 Tech business & markets
Binance has fired several compliance investigators after they reportedly uncovered more than $1 billion in transactions linked to Iranian entities that may have violated sanctions between March 2024 and August 2025. The departures occurred while the exchange remained under a U.S. government monitorship following a major 2023 settlement over anti-money-laundering and sanctions violations. Binance said it remains committed to compliance and declined to comment on personnel matters.
The U.S. Federal Trade Commission has accelerated an antitrust probe into Microsoft’s cloud and AI businesses, including Copilot, by issuing civil investigative demands to competing companies. The requests seek information on Microsoft’s licensing and business practices. Regulators are examining whether the company is monopolising enterprise computing markets.
Queensland Premier David Crisafulli has signalled subdued support for the state’s A$470 million investment in PsiQuantum, according to InnovationAus. The report said Crisafulli omitted the project from a National Press Club speech on the economy and only acknowledged the investment in response to a question. He said his government will honour the existing contract inherited from the previous administration.
TikTok’s U.S. daily active users have remained at about 95% of pre-joint-venture levels despite initial fears of a mass exodus after the platform’s U.S. ownership restructuring. The joint venture, created to comply with a Trump executive order, gave Oracle, Silver Lake and MGX stakes while ByteDance retained 19.9%. Analysts said engagement has largely rebounded, while the new ownership introduces expanded data collection and algorithm control over U.S. users. Confused? ASPI’s Fergus Ryan explains.
California tech billionaires and companies are ramping up political spending ahead of the 2026 state elections to oppose proposals such as a billionaire tax and support AI-friendly candidates. As part of this broader push for state-level influence, Meta plans to spend $65 million backing bipartisan super PACs and candidates it sees as supportive of the AI industry, starting in Texas and Illinois, amid concerns about a patchwork of state AI regulations.
The U.S. Commodity Futures Trading Commission plans to file an amicus brief supporting Crypto.com in the Ninth Circuit amid state-level litigation challenging federal oversight of prediction markets. Nearly 50 active cases argue event contracts constitute gambling subject to state law, while the CFTC maintains they fall under its authority as derivatives under the Commodity Exchange Act. The agency said exchanges including Kalshi, Polymarket, Coinbase and Crypto.com are federally regulated and subject to anti-fraud and anti-money-laundering rules.
A U.S. federal appeals court has rejected Kalshi’s request to halt Nevada’s enforcement case seeking to block the prediction-market platform unless it obtains gaming licences. The ruling strengthens Nevada’s position in a wider dispute over whether event-contract trading is gambling or commodity derivatives regulated by the Commodity Futures Trading Commission. The case could force Kalshi to leave Nevada, according to the report.
🌏 Global policy
🇦🇺 Australia
Australia’s Assistant Minister for Science, Technology and the Digital Economy, Andrew Charlton, has said the Albanese government is developing an approach to AI regulation aimed at ensuring the technology delivers broad economic and social benefits. Charlton said governments must intervene to turn technological progress into social progress. He spoke ahead of the AI Impact Summit in India.
The Australian Government Digital Transformation Agency said Australia ranked second among 42 countries in the OECD’s 2025 Digital Government Index with a score of 88%. The ranking reflects performance across governance, shared platforms, user-focused services and responsible use of emerging technologies, including AI adoption in the public sector. The agency said the result builds on a fifth-place debut in 2023 and reflects ongoing investment in digital transformation.
In The Strategist, Good Ancestors’ Emily Grundy and Greg Sadler urged the Australian government to update the Australian Government Crisis Management Framework to include a dedicated AI crisis plan, pointing to China’s 2025 national emergency response plan, which classifies AI security as a national disaster risk and outlines severity levels and a four-phase response process. They argued Australia currently relies on cyber-style crisis arrangements that are poorly suited to AI incidents and called for legal and institutional reforms to bring AI companies, compute providers and international partners into crisis planning and response.
In an opinion piece in the Australian Financial Review, technology investor Rohan Silva argued Australia is missing major AI investment and jobs because copyright laws prevent training AI models locally, discouraging large-scale data centre development. Silva argued this reduces Australia’s potential as a regional AI hub and limits renewable energy and economic benefits.
OpenAI has lobbied Australian officials across multiple departments seeking co-investment in U.S. AI infrastructure and changes to copyright and AI regulation, according to FOI documents reported by Crikey. The company held a two-hour meeting with the Office of National Intelligence and secured small federal contracts and some policy outcomes, including the removal of proposed mandatory guardrails for high-risk AI from Australia's National AI Plan. Government briefings showed internal scepticism about OpenAI's economic claims and highlighted infrastructure resource concerns.
VIQ Solutions has subcontracted court transcription work to India-based e24 Technologies without notifying Australian courts, according to an investigation by the ABC. The report said thousands of sensitive court files were accessed by offshore staff, potentially breaching federal law and contractual obligations. It said the access raised national security concerns and prompted calls for an audit and termination of the contract.
South Australia Police have used drones, high-resolution aerial imaging and AI from Australian companies to analyse a 15-kilometre radius in the search for missing four-year-old Gus Lamont. The system processed over a trillion pixels, identified 13 areas for investigation and detected one human among thousands of animals, with no evidence of the child found. Authorities have declared the disappearance a major crime investigation and identified a suspect living on the property.
🇺🇸 United States
The White House is urging a Utah Republican lawmaker to drop a state AI transparency bill that would require frontier AI companies to publish safety and child-protection plans and offer whistleblower protections. The intervention follows a Trump administration executive order to challenge state laws seen as conflicting with federal policy. The report said the move sets up potential disputes with both Republican and Democratic states pursuing AI guardrails.
The Trump administration is recruiting about 1,000 software engineers for a two-year “Tech Force” program to modernise federal agencies using AI and advanced digital tools, according to the Financial Times. The report said the initiative has partnered with companies including Apple, Meta, Microsoft, Nvidia, OpenAI, xAI and Palantir, whose senior executives will provide talks and training. Recruits will be overseen by managers seconded from the tech industry, with ethics arrangements reportedly under review.
🇪🇺 Europe
Google’s chief legal officer has warned the EU against restricting access to foreign technology in its tech sovereignty push. The comments came as the EU prepares a tech sovereignty package and amid growing transatlantic tensions over regulation and potential tech decoupling. Google urged a model of open digital sovereignty that allows local control while maintaining access to global technologies.
📰 Obituary
David J. Farber has died in Tokyo at age 91, according to a New York Times report that described him as a computer networks researcher and policy adviser often described as a grandfather of the internet. Farber helped shape early internet development by mentoring key figures behind the Internet Protocol and Domain Name System, and by advocating federal support for early academic networking projects. He also served as chief technologist at the U.S. Federal Communications Commission and advised organisations shaping internet policy.
That’s all for this week. For more timely analysis and commentary, check out The Strategist and ASPI’s Stop the World podcast—or our other Substack newsletters: