# Oshri Naparstek

> Research Scientist at IBM | Principal RSM | Master Inventor | Manager, AI for Knowledge group

## Who is Oshri Naparstek?

Oshri Naparstek is a Research Scientist at IBM Research in Haifa, Israel. He holds a PhD in Electrical Engineering and specializes in multimodal AI, complexity theory, cognitive offloading, and reinforcement learning. He is a Principal Research Staff Member (RSM), an IBM Master Inventor, and manages the "AI for Knowledge" group.

## Key Facts

- 28+ peer-reviewed publications
- 1,200+ total citations
- 2 US patents
- Co-author of IBM Granite Vision (2B-parameter vision-language model)
- Creator of the Real-mm-RAG benchmark (ACL, 21 citations in its first year)
- Paper accepted at CVPR 2026 on the modality gap in vision-language models
- Paper under review at ICML on token maturation
- Most-cited paper: "Deep Multi-User Reinforcement Learning for Dynamic Spectrum Access" (IEEE JSAC, 579 citations)

## Research Areas

- Multimodal AI: vision-language models, CLIP, modality gap analysis, Granite Vision
- Cognitive Offloading: systems where LLM agents learn to replace their own reasoning with verified deterministic code
- Complexity Theory: observer-dependent complexity; complexity as the performance gap between observers
- Token Maturation: delayed token commitment for reducing hallucinations in LLMs
- Reinforcement Learning: deep multi-user RL, distributed spectrum access, real-time defense RL (Rafael)
- Signal Processing: coprime arrays, interference alignment, distributed wireless optimization

## Career Background

- IBM Research, Haifa (2020–present): Research Scientist, Principal RSM, Master Inventor, Group Manager
- Rafael Advanced Defense Systems: reinforcement learning for real-time defense applications
- Washington University in St. Louis: postdoctoral researcher in signal processing and optimization

## Key Ideas and Perspectives

- "The sea squirt principle": AI systems should learn to reduce their own computational role over time, crystallizing repeated reasoning into verified tools
- "AI today is dial-up": current inference speeds constrain AI to chatbot interactions; at 10,000+ tokens/sec, entirely new application categories emerge
- "The interference alignment lesson": beautiful theoretical results can fail at scale when synchronization overhead exceeds the benefits — a lesson applicable to quantum error correction
- "LLMs as formulators, not optimizers": LLMs should generate mathematical formulations and let classical solvers do the optimization
- "Complex means interesting": complexity is the performance gap between observers with different capabilities

## Links

- LinkedIn: https://www.linkedin.com/in/oshri-naparstek-05580154
- Google Scholar: https://scholar.google.com/citations?user=IpR9NiIAAAAJ
- Email: oshri8@gmail.com
- Website: https://oshrinaparstek.com