Applied AI/ML Practitioner · GenAI & Multimodal AI · GxP Regulated AI · Doctoral Researcher in VLA Models & Embodied AI
I build AI systems in regulated environments — and I've been doing it at GSK. Contributed to LLM evaluation for regulatory document automation, supported RAG pipeline POC design, and deployed production Python tools in live pharmaceutical supplier workflows. Independently building VATSA — a unified five-modality AI architecture — with a published preprint on Zenodo (April 2026). Doctoral research focus: VLA models for safety-critical autonomous systems. Long-term mission: a safe embodied AI that walks amongst humans.
I am an Applied AI/ML Practitioner with 8+ years of experience across IT and GxP-regulated pharmaceutical environments, actively transitioning into hands-on AI/ML engineering. My background spans electronics engineering, enterprise systems, an MBA, and now a doctoral programme in AI & ML.
At GSK, I contributed to AI product development for regulatory document automation — defining evaluation criteria for LLM comparative analysis (Azure Document Intelligence vs GPT-4o), collaborating on RAG pipeline POC design, and deploying production Python utilities used in live regulated supplier workflows.
That experience made one thing clear: I did not want to keep specifying AI systems. I wanted to build them. So I invested deliberately — a DBA in AI & ML delivered via Great Learning (Walsh College, 2025–2028), with Year 1 at Texas McCombs School of Business, alongside hands-on independent project building.
My long-term research focus is AI in robotics, specifically Vision-Language-Action (VLA) models for autonomous systems in safety-critical environments. I am independently building VATSA, a unified five-modality AI architecture (Video, Audio, Text, Sensory, Action), with a published preprint on Zenodo (April 2026) and a proposed novel output routing mechanism, SAMOS (Safety-Aware Multi-Output Selector).
The long-term mission: a safe embodied AI that walks amongst humans — capable but fundamentally correctable, transparent, and subordinate to human intent.
"What I bring that most AI engineers cannot: eight years of understanding how regulated businesses make decisions, how to frame the right problem before touching a line of code, and how to communicate technical trade-offs to a board-level audience — combined with genuine hands-on AI development."
Independently built an agentic AI system that uses LangChain to orchestrate two specialised agents, an Extractor and a Reviewer, for pharmaceutical regulatory document processing. Containerised with Docker and deployed live on Azure Container Apps.
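The Extractor/Reviewer control flow can be sketched in plain Python with the LLM calls injected as callables. Everything here (the function names, the "APPROVE" convention, the stub agents) is illustrative, not the system's actual LangChain wiring:

```python
# Minimal sketch of an Extractor -> Reviewer agent loop. The real system
# routes extract_fn / review_fn through LangChain-orchestrated LLM agents;
# here they are plain callables so the control flow is visible.

def run_pipeline(document, extract_fn, review_fn, max_rounds=2):
    """Extract once, then let the reviewer approve or correct, up to max_rounds."""
    extraction = extract_fn(document)
    result = {"extraction": extraction, "approved": False}
    for _ in range(max_rounds):
        verdict = review_fn(document, extraction)
        if verdict == "APPROVE":
            return {"extraction": extraction, "approved": True}
        # Treat anything else as a corrected extraction and loop again.
        extraction = verdict
        result = {"extraction": extraction, "approved": False}
    return result

# Stub agents illustrating the hand-off (hypothetical, for the sketch only):
extract = lambda doc: "batch_id: 42"
review = lambda doc, ext: "APPROVE" if "batch_id" in ext else "add batch_id"
```

The value of the second agent is the audit trail: every emitted extraction has passed an explicit review step, which is the property a regulated workflow cares about.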
End-to-end NLP classifier for pharmaceutical deviation categorisation (critical/major/minor). TF-IDF feature engineering, Logistic Regression, FAISS cosine retrieval, and Explainable AI layer using SHAP for GxP regulatory audit acceptance.
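The retrieval half of that pipeline reduces to TF-IDF vectors compared by cosine similarity. A toy, pure-Python version (the project itself uses scikit-learn features and FAISS for the index; the example documents are invented):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build TF-IDF vectors (as sparse dicts) for tokenised documents."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))          # document frequency
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}           # smoothed IDF
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: (c / len(doc)) * idf[t] for t, c in tf.items()})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse dict vectors."""
    dot = sum(u[t] * v[t] for t in u if t in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

docs = [
    "temperature excursion during storage".split(),
    "label misprint on carton".split(),
    "storage temperature deviation".split(),
]
vecs = tfidf_vectors(docs)
# Most similar historical deviation to the new one (doc 0):
best = max(range(1, len(docs)), key=lambda i: cosine(vecs[0], vecs[i]))
```

Retrieving the nearest historical deviations this way gives the classifier's prediction precedent cases to cite, which supports the SHAP explanations during audit.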
End-to-end ML project addressing class imbalance in churn prediction. Deployed live on HuggingFace Spaces with Streamlit UI and FastAPI backend. Evaluated using F1 and AUC-ROC on minority class — not accuracy.
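Why accuracy misleads under imbalance is easy to show with a toy example (the labels below are made up for illustration): a classifier that always predicts "stay" scores 90% accuracy yet is useless on the churn class, which F1 exposes.

```python
def f1_score(y_true, y_pred, positive=1):
    """F1 on the positive (minority) class, from precision and recall."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# 9 of 10 customers stay (0), 1 churns (1); the degenerate model predicts all 0:
y_true = [0] * 9 + [1]
y_pred = [0] * 10
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)  # 0.9
minority_f1 = f1_score(y_true, y_pred)                                # 0.0
```

This is the gap the project's evaluation closes: a model is only credited for churners it actually finds.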
Python automation tool built on the Jira API, with sprint analytics, automated email delivery, and visual reporting. Deployed live at GSK, eliminating manual reporting effort for a 25+ member cross-functional team.
Unified five-modality AI architecture for human-level perception and action. Each modality encoder projects into a shared 512-dim latent space for cross-modal fusion. V-Module complete: EfficientNet-B0 fine-tuned to 96.31% accuracy on CIFAR-10, integrated with YOLOv8 for real-time object detection at 22 FPS. Benchmarked at 1,336 embeddings/sec at batch 16. Proposes SAMOS (Safety-Aware Multi-Output Selector) — a novel output routing mechanism using asymmetric safety-weighted sigmoid thresholds per modality, enabling safe parallel multi-modal output generation in physically embodied AI systems. Architecture published as a preprint on Zenodo (April 2026).
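The SAMOS routing idea can be sketched as per-modality sigmoid gating with asymmetric thresholds. The threshold values below are assumptions made up for this sketch, not settings from the preprint: the point is only that an action output, which moves a physical system, must clear a far higher confidence bar than a text output.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Illustrative asymmetric safety-weighted thresholds per modality.
# These numbers are placeholders for the sketch, not VATSA's published values.
THRESHOLDS = {"text": 0.5, "audio": 0.6, "action": 0.95}

def route_outputs(logits):
    """Emit each modality's output in parallel, but only if its sigmoid
    confidence clears that modality's safety threshold."""
    emitted = {}
    for modality, logit in logits.items():
        confidence = sigmoid(logit)
        if confidence >= THRESHOLDS[modality]:
            emitted[modality] = confidence
    return emitted
```

With equal logits across modalities, text and audio are emitted while the action output is suppressed until the model is much more confident, which is the safety asymmetry the selector encodes.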
Presents the VATSA conceptual architecture and four core principles: shared latent space, cross-modal attention, temporal coherence, and closed-loop action. Introduces SAMOS (Safety-Aware Multi-Output Selector) — a novel output routing mechanism using asymmetric safety-weighted sigmoid thresholds for safe parallel multi-modal generation in embodied AI systems. Covers motivating applications in healthcare, pharma, autonomous systems, and adaptive education.
Open to AI/ML Engineer, Applied AI, and GenAI roles across any industry where AI is the core solution. Available with 3 months notice.