I’m an AI safety researcher exploring the philosophical foundations of AI systems and the geopolitical dimensions of AI infrastructure.
Research Interests
AI Epistemology (Primary Focus)
Can we meaningfully separate epistemic AI (truth-tracking systems) from instrumental AI (goal-pursuing systems)? I’m investigating the conceptual and practical viability of this distinction: whether AI systems can be designed primarily for knowledge acquisition rather than goal achievement, and what safety implications follow from prioritizing epistemic over instrumental functions.
Infrastructure Sovereignty (Background)
Can nations achieve “Sovereign AI” without owning the physical or cloud infrastructure? I’m examining the power dynamics between nation-states and the cloud provider triopoly (AWS, Microsoft, Google), comparing the UK’s “Tiered Sovereignty” model with the Philippines’ “Client-State” model. The aim is to understand infrastructure as territory and the implications of cloud rent-seeking.
Current Work
I’m preparing a paper for the IACAP 2027 conference exploring the epistemic/instrumental distinction in AI systems. This work tests the viability of the research direction before I pursue PhD applications.
Background
- AI Safety Researcher (Interpretable World Models, Catastrophe Prevention, Infrastructure Sovereignty)
- Teaching Fellow at the University of Warwick
- SFHEA (Senior Fellow of the Higher Education Academy) candidate
Contact
- Email: [email protected]
- GitHub: joealcantara
- Twitter/X: @joealcantara_
This site shares research progress, pilot studies, and mini-articles on AI safety. All work is pre-publication and subject to revision.