My research explores the philosophical foundations of AI systems and infrastructure sovereignty.

AI Epistemology (Primary Focus)

Research Question: Can we meaningfully separate epistemic AI (truth-tracking) from instrumental AI (goal-pursuing)?

Core Problem

AI safety research often focuses on aligning AI systems with human values, but this assumes AI must be goal-directed (instrumental). What if we could instead build AI systems whose primary function is knowledge acquisition rather than goal achievement? Can we design systems that are fundamentally epistemic (truth-tracking) rather than instrumental (goal-seeking)?

This distinction matters for safety: instrumental AI systems face alignment problems (deceptive alignment, instrumental convergence, goal misgeneralization). Epistemic AI systems might avoid these risks by not having goals to misalign in the first place.
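
To make the contrast concrete, here is a deliberately minimal Python sketch of my own, offered as an illustration only and not as a design from the paper: an instrumental agent selects whichever action scores best against an objective, while an epistemic system merely maintains and updates credences over hypotheses and reports them. All class names, hypotheses, and scores below are hypothetical.

```python
class InstrumentalAgent:
    """Goal-pursuing: chooses whichever action scores best against an objective."""

    def __init__(self, objective):
        self.objective = objective  # maps an action to a utility score

    def act(self, actions):
        return max(actions, key=self.objective)


class EpistemicSystem:
    """Truth-tracking: maintains credences over hypotheses and updates on evidence.
    It selects no actions; its only output is its current credences."""

    def __init__(self, hypotheses):
        self.credence = {h: 1.0 / len(hypotheses) for h in hypotheses}

    def update(self, likelihood):
        # likelihood(h) = P(observed evidence | hypothesis h); simple Bayesian update
        unnormalised = {h: c * likelihood(h) for h, c in self.credence.items()}
        total = sum(unnormalised.values())
        self.credence = {h: v / total for h, v in unnormalised.items()}

    def report(self):
        return dict(self.credence)


if __name__ == "__main__":
    agent = InstrumentalAgent(objective=lambda a: {"explore": 1, "exploit": 3}[a])
    print(agent.act(["explore", "exploit"]))  # picks the highest-scoring action

    system = EpistemicSystem(hypotheses=["H1", "H2"])
    system.update(likelihood=lambda h: 0.9 if h == "H1" else 0.2)
    print(system.report())  # credences shift toward H1
```

The epistemic system in this toy has no act method at all; the conjecture under investigation is that failure modes like deceptive alignment and instrumental convergence need a goal to attach to, and so may not arise for a system of this shape.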

My Approach

I’m investigating this through philosophical analysis:

  • Defining the distinction: What makes an AI system “epistemic” vs “instrumental”?
  • Conceptual viability: Is this a coherent distinction or a false dichotomy?
  • Practical implications: Can we build systems that are primarily epistemic?
  • Safety properties: Would epistemic AI actually be safer?

Current work: a conference paper for IACAP 2027 developing these definitions and exploring their conceptual foundations.

Strategy: testing the viability of this research direction through the conference paper before applying for a PhD at the University of Birmingham.


Infrastructure Sovereignty & Digital Vassalage (Background)

Research Question: How do nations without economic leverage negotiate fair terms for AI infrastructure in an oligopolistic market?

Core Problem

Middle-power nations like the Philippines face a critical strategic choice: adopt AI built entirely on foreign-owned “rented” infrastructure (AWS, Microsoft, Google), or pursue digital sovereignty. This creates a 21st-century form of Digital Vassalage where national wealth flows continuously to foreign cloud providers as subscription fees for essential services.

The Philippine government negotiates worse terms for compute and AI services than governments with more leverage, such as the UK's. This pattern extends to uncompetitive economies generally: the global AI infrastructure market replicates existing power hierarchies. Wealthy nations with regulatory leverage negotiate favorable terms; smaller economies pay retail prices and receive no strategic benefits in return.

The Philippines' business process outsourcing (BPO) industry employs roughly 1.3 million people in call centers and back-office services. AI threatens these jobs directly, creating a brutal trilemma:

  1. Adopt AI aggressively → unemployment crisis in BPO sector
  2. Resist AI adoption → become economically irrelevant
  3. Become AI-dependent without owning infrastructure → permanent vassalage, with no control over the technology displacing the domestic workforce

My Approach

I’m systematically analyzing the terms of AI infrastructure dependency, not just whether it exists. This involves:

  • Documenting what deals are being struck and quantifying cost differentials between nations (a toy illustration of the comparison follows below)
  • Analyzing why markets alone can’t correct bargaining asymmetries in an oligopolistic infrastructure market
  • Identifying strategic options for uncompetitive economies through hybrid models, regional cooperation, and policy creativity
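
As a toy example of what “quantifying cost differentials” means in practice, the Python sketch below expresses each country's premium over the best negotiated rate for a unit of compute. Every name and figure in it is a hypothetical placeholder, not real pricing data.

```python
# Illustrative sketch with hypothetical placeholder figures, not real pricing data.
# "Cost differential" here: how much more a country pays per unit of compute
# relative to the best negotiated rate in the comparison set.

HYPOTHETICAL_RATES_USD_PER_GPU_HOUR = {
    "Country A (strong regulatory leverage)": 2.10,  # negotiated sovereign/enterprise deal
    "Country B (middle power)": 2.65,
    "Country C (retail list price)": 3.40,
}


def cost_differentials(rates):
    """Return each country's premium over the cheapest negotiated rate."""
    best = min(rates.values())
    return {country: (rate - best) / best for country, rate in rates.items()}


if __name__ == "__main__":
    for country, premium in cost_differentials(HYPOTHETICAL_RATES_USD_PER_GPU_HOUR).items():
        print(f"{country}: {premium:.0%} above best negotiated rate")
```

The real work is in sourcing and normalizing the underlying prices; the comparison itself is deliberately simple so the asymmetry is legible to policymakers.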


For updates and progress, see my research blog.