A promising AI startup has established its base in Tokyo’s Shibuya district, attracting global attention. KanjuTech—founded by neuroscientists from the Russian Academy of Sciences—is pioneering a new frontier called “Physical AI” for robotics and industrial equipment, armed with a distinctive technology called “Spiking Neural Networks” that sets it apart from conventional AI approaches. While generative AI, exemplified by ChatGPT, dominates headlines worldwide, why are they focused on “AI that interacts with the physical world”? And why did they choose Japan?
The three co-founders—Oleg Nikitin (CEO, left in the photo above), Alex Kunin (COO, right), and Olivia Lukyanova (CTO, center)—have spent over a decade researching brain-inspired AI technology. Their approach fundamentally differs from traditional deep learning, featuring high energy efficiency and the ability to learn continuously in real time. The technology they’re developing is designed for “physical systems” that operate in the constantly changing real world: autonomous vehicles, warehouse robots, and industrial IoT devices.
Having expanded into Japan through Shibuya City’s startup visa program, KanjuTech is already advancing Proof of Concept (PoC) projects with multiple Japanese companies. Their applications span a wide range, from optimizing industrial cooling systems to automatically controlling oil loading arms at ports. The maturity of Japan’s robotics industry and manufacturing sector, combined with openness to new technologies, drew them to the Japanese market. This article delves into KanjuTech’s technological uniqueness, their business development in Japan, and their perspective on the future of Japan’s startup ecosystem.
From Russia to Tokyo: Why Neuroscientists Chose the Japanese Market

CC BY-SA 2.0
KanjuTech’s story begins with research at the Russian Academy of Sciences. The three co-founders were engaged in research on Spiking Neural Networks, a fusion of machine learning and neuroscience. This next-generation AI technology mimics how neurons in the human brain communicate through electrical signal spikes.
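To give a flavor of the spiking model the founders describe, here is a minimal leaky integrate-and-fire neuron in Python. This is a textbook illustration of spiking dynamics, not KanjuTech's implementation; all parameter names and values are illustrative.

```python
def lif_neuron(input_current, dt=1.0, tau=20.0, v_rest=0.0,
               v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: the membrane potential leaks toward
    rest, integrates input current, and emits a discrete spike when it
    crosses threshold -- communication via events, not continuous values."""
    v = v_rest
    spikes = []
    for i_t in input_current:
        # Euler step of dv/dt = (-(v - v_rest) + i_t) / tau
        v += dt * (-(v - v_rest) + i_t) / tau
        if v >= v_thresh:
            spikes.append(1)   # fire a spike
            v = v_reset        # reset the membrane after firing
        else:
            spikes.append(0)
    return spikes

# A constant supra-threshold input produces a regular spike train
train = lif_neuron([1.5] * 100)
print(sum(train), "spikes in 100 steps")
```

The key property is visible in the output: information is carried by the timing of sparse binary events rather than by dense floating-point activations, which is where the energy savings discussed later in the article come from.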
We were at the Russian Academy of Sciences building next-generation AI, brain-inspired, combining machine learning and neuroscience. It’s based on how neurons in the brain work with spiking activity—neurons generate spikes of electrical signals and communicate through electrical signals. (Nikitin)
They built these Spiking Neural Networks and, at a certain point in their academic careers, understood that they could transition their research from theory to practical applications. In 2020, the three founded KanjuTech to commercialize their research.
Initially, they aimed to develop Large Language Models (LLMs), but resource constraints forced them to pivot. While participating in an accelerator program at the Okinawa Institute of Science and Technology (OIST), they discovered a new application area: speech recognition. Spiking Neural Networks excelled at identifying multiple independent signals—multiple speakers’ voices—and understanding who was speaking.
However, the speech recognition market was also intensely competitive, crowded with major players. Despite achieving overwhelmingly superior results in benchmarks, they couldn’t secure good clients. Eventually, they found the domain where their technology truly shined—Physical AI, meaning AI for physical systems that interact with the real world: robots, sensors, and industrial systems.
After incorporating in Hong Kong in 2022 and operating from a base in Southeast Asia, KanjuTech made its full-scale expansion into Japan in 2025 by obtaining a startup visa from Shibuya City. Shibuya Startup Support has opened many doors for them, strongly supporting their business development in Japan.
According to Nikitin, Japan is one of the most advanced places in the world for applied robotics and industrial systems. While China is also advanced, compared to Europe and the US, Japan’s technology stack is constantly evolving, and companies are prepared for innovation and application. Particularly when connecting to hardware or building physical things, Japan is the best place, he says.
Another reason is the approach of Japanese large corporations to innovation. They are seeking new technologies from startups and are very open to building PoC products, co-developing products, and introducing products into the supply chain together with startups.
Large corporations in the West tend to prefer building things in-house rather than building with startups, because they’re always concerned about controlling the flow and things like that. But Japan is more of a trust-based society, so it’s better suited for companies to collaborate and build with startups. (Nikitin)
What Generative AI Can’t Do: The Case for Physical Intelligence

Original public domain image from Flickr
While generative AI like ChatGPT and Midjourney captures global attention, KanjuTech’s Physical AI occupies an entirely different domain. Whereas generative AI specializes in processing within the digital world—image generation and text processing—Physical AI is for systems that physically interact with the real world: robots, autonomous vehicles, and industrial equipment.
According to Nikitin, generative AI, LLMs, and image generation technologies are built to interact with the digital world, but KanjuTech builds technology for the physical world. They particularly focus on the field of “lifelong continuous learning” in neural networks. Currently, few companies are working in this area, but considering the rise of robotics and autonomous driving, the AI market for physical systems is predicted to become comparable in scale to the generative AI market.
KanjuTech’s approach fundamentally differs from conventional Transformer-based systems. Transformer-based systems are built for high-level perception and reasoning. For example, making robots understand what is a table, what is an object, and the semantics behind them. However, for teaching robots how to approach a table, how to pick up objects, and actually execute actions, high-level LLMs are too slow.
Indeed, when Nikitin spoke with the founder of an AI robotics company in Tokyo, he learned that systems based on large vision-language-action models take 1,400 milliseconds for each query and response. Since each action execution requires 400 milliseconds, robots using such technology can only move very slowly.
So we need something like System 1 and System 2 in human neurophysiology. System 2 is our brain, high-level reasoning.
And System 1 is execution in our cerebellum and spinal cord—doing actions the right way, executing actions. We’re doing something like System 1 here. (Nikitin)
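The two-tier split Nikitin describes can be sketched as a control architecture: a slow, expensive planner sets the goal occasionally, while a fast, cheap controller corrects the action on every tick. The class names, numbers, and proportional-control rule below are illustrative assumptions, not KanjuTech's API.

```python
class SlowPlanner:
    """'System 2': stands in for a large model that takes ~1,400 ms per query,
    so it is called rarely, only to set or update the high-level goal."""
    def plan(self, scene):
        return {"target": scene["object_position"]}

class FastController:
    """'System 1': a lightweight reflex loop that runs on every control tick."""
    def step(self, state, goal):
        error = goal["target"] - state["position"]
        return 0.5 * error  # simple proportional velocity command

planner, controller = SlowPlanner(), FastController()
state = {"position": 0.0}
goal = planner.plan({"object_position": 10.0})  # infrequent, expensive call

for _ in range(20):  # frequent, cheap control ticks between planner queries
    state["position"] += controller.step(state, goal)

print(round(state["position"], 2))
```

The design point is that the robot's motion quality depends on the fast inner loop, so that loop must run on-device at high frequency even while the slow planner is still thinking.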
Lukyanova further clarifies the uniqueness of KanjuTech’s system.
Our system doesn’t require time. It learns on the spot, like animals do. When an animal is born, it then starts learning about the environment, about itself, about different objects around. Our system works like that. It’s constantly learning.
And it doesn’t require time for pre-training. In classical neural networks, you need pre-training before you start working. You need to learn something beforehand, and then you can start using it in the environment. But with our system, you just put it down and it can learn on the spot. It learns simultaneously while moving. (Lukyanova)
This “learn while moving” capability is the core of Physical AI. The real world is unpredictable and constantly changing. Warehouse layouts change, road conditions change moment by moment, new parts are introduced to production lines—conventional AI cannot cope with these changes. When faced with situations not in pre-trained data, AI “freezes.” KanjuTech’s technology aims to fundamentally solve this problem.
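The contrast Lukyanova draws, a model pre-trained once and frozen versus one that learns on the spot, can be sketched with plain online gradient descent. This is illustrative only and uses a deliberately tiny linear model, not KanjuTech's spiking architecture.

```python
def online_update(w, x, y, lr=0.1):
    """One incremental step: predict, observe the error, adjust immediately."""
    pred = w * x
    return w + lr * (y - pred) * x  # gradient step on squared error

w = 0.0
# Phase 1: the environment follows y = 2x
for _ in range(100):
    for x in (0.2, 0.5, 1.0):
        w = online_update(w, x, 2.0 * x)

# The environment changes mid-operation to y = 3x; the learner simply
# keeps updating from the incoming stream, with no retraining pause
for _ in range(100):
    for x in (0.2, 0.5, 1.0):
        w = online_update(w, x, 3.0 * x)

print(round(w, 2))  # the weight has tracked the new environment
```

A pre-trained frozen model would keep predicting with the phase-1 weight after the environment shifted; the online learner adapts while it operates, which is the behavior the article calls "learning while moving."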
Solving AI’s Two Biggest Problems: Memory Loss and Power Hunger

Photo credit: NVIDIA
KanjuTech’s technical advantages can be summarized in two main points: “Lifelong Learning” capability and overwhelming energy efficiency.
Continuous learning refers to AI’s ability to keep learning new information while operating. Conventional deep learning models are pre-trained on massive amounts of data and then remain essentially “frozen.” To recognize new types of objects or scenarios, the model must be retrained from scratch, and this retraining comes with a serious problem: “Catastrophic Forgetting.”
Our system doesn’t forget previous things. It learns new things but remembers previous things, previous different objects and tasks. But current neural networks require retraining and forget previous knowledge. (Lukyanova)
Introducing new types of objects to the network means retraining the model from scratch, a problem to which conventional neural networks are inherently prone. While companies and labs around the world are researching ways to prevent catastrophic forgetting in general deep learning algorithms, patching general deep learning is not a fundamental solution to the problem, Nikitin says.
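Catastrophic forgetting can be demonstrated in a few lines. In this illustrative sketch (not KanjuTech's code), a tiny linear model is trained on task A, then on task B whose gradients touch a shared weight, and its task-A performance degrades as a result.

```python
def train(w, x, y, steps=200, lr=0.1):
    """Gradient descent on squared error for a single (input, target) pair."""
    for _ in range(steps):
        pred = sum(wi * xi for wi, xi in zip(w, x))
        err = y - pred
        w = [wi + lr * err * xi for wi, xi in zip(w, x)]
    return w

w = [0.0, 0.0]
w = train(w, [1.0, 1.0], 1.0)   # task A: input (1, 1) should map to 1
task_a_before = w[0] + w[1]     # task-A prediction right after learning it

w = train(w, [1.0, 0.0], 0.0)   # task B: input (1, 0) should map to 0
task_a_after = w[0] + w[1]      # task-A prediction after learning task B

print(round(task_a_before, 2), round(task_a_after, 2))
```

Because task B's updates overwrite a weight that task A relied on, the task-A prediction drifts away from its target, with no rehearsal of old data, nothing protects the earlier knowledge. That is the failure mode continual-learning systems aim to avoid.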
This continuous learning capability is particularly important for robotics and industrial equipment. For example, warehouse robots need to adapt to new layouts and new types of cargo. With conventional systems, operations must be stopped for retraining each time. But with KanjuTech’s technology, robots adapt to new environments while operating.
Another major strength is energy efficiency. LLMs and vision models require enormous computational resources and operate on GPU clusters in data centers. But robots and industrial equipment cannot carry such large-scale infrastructure.
General AI, LLMs, require a lot of computation to run, so they run from many GPUs, data centers, clouds. But what we’re doing is orders of magnitude—100-1,000 times—more computationally efficient in execution and training.
So we can deploy it on embedded hardware. It may not be a complete inference model, a model for language understanding, but at least we can deploy and train it on devices like NVIDIA Jetson or media computers, and do continuous adaptation and training. (Nikitin)
KanjuTech’s system can operate in the cloud, on on-premise servers, or on embedded systems mounted on mobile robots. Current robots require weekly retraining and sometimes must halt operations, incurring costs, but KanjuTech’s technology can prevent this.
The most energy-efficient deployment strategy uses FPGAs (Field-Programmable Gate Arrays). An FPGA is reprogrammable hardware whose logic circuits can be customized for a specific workload. While general-purpose processors execute every program on the same fixed circuitry, FPGAs let engineers design the most efficient possible execution path for a given program. By placing the neural network directly on the hardware layer, without software layers or an operating system in between, KanjuTech achieves 1,600 times better energy efficiency than regular neural network deployments.
From Cooling Systems to Oil Rigs: One Technology, Endless Applications

Photo credit: Ken OHYAMA via Flickr
CC BY-SA 2.0
KanjuTech positions its technology as a “Foundational Model.” This means it’s highly versatile technology that can be applied to various industries and applications, rather than being limited to specific uses.
We built a foundational model. It’s a next-generation model that can essentially be applied anywhere—to everything connected to execution, sensor perception, low-level systems, sensors, physical devices.
And obviously, we understand that this can be modified or transitioned to many applications. There are various fields: shift control, failure prediction for CNC (computerized numerical control) machines, failure prediction for generators across power plants. Different fields, but the underlying foundational technology is the same. (Nikitin)
Therefore, KanjuTech has adopted a phased product development strategy. They take one problem from a specific critical industry that can be solved within a feasible timeframe and build a product around it. After the PoC, they develop it into a scalable product that can be packaged for other companies, then move on to the next field. They currently have four or five applications lined up for the future.
Here are the specific projects KanjuTech is currently working on:
First is optimizing “industrial cooling.” This is a project to optimize energy consumption of cooling systems for buildings, commercial facilities, and industrial sites, with a PoC already implemented in northern Thailand.
Next is the “energy generation” field. They’re planning a PoC with a major energy company in Central Asia regarding failure prediction for wind farms and future power plants.
Third is “mobile robotics,” working on improving robot navigation efficiency for warehouse logistics and agriculture. They’re developing products or co-products to enable more efficient navigation without failures like freezing.
And the application they’re tackling first is “automated control and sensor recognition for oil loading arms.” These are crane-like arms used in the marine industry to unload oil at ports, complex pieces of equipment that are very difficult to control.
According to Kunin, as a company providing horizontal products, they need to proceed step by step with each industry vertically. They’ve already scheduled a PoC with a Japanese company in Q1 2026 and have corporate partners in Japan. The Japanese market serves as an important foothold for demonstrating and scaling up KanjuTech’s technology.
The $30 Million Question: Growing Without Losing the Edge

Photo credit: KanjuTech
Funding and team expansion are essential for startup growth. KanjuTech is no exception. Currently, they’ve raised $260,000 USD from Antler and another investor and aim to close a $1 million USD pre-seed round by the end of 2025. After that, they plan to raise $20-30 million USD in 2026 for scaling the technology. Since each product requires about 10 people, they plan to have six, seven, or eight products over the next few years.
They’re also taking a cautious approach to team expansion. While many AI companies hire large numbers of engineers, KanjuTech maintains a small team.
Building foundational AI is an interesting field. For example, if you look at early companies built with Transformer-based architecture, LLM architecture—companies like OpenAI, Google Brain, Google DeepMind—they initially started with fairly small teams.
Because when you’re building a new foundational model, you actually don’t need that many people. You just need the best people in the field who know what to do. You need to first build the core technology, then spread it as products and applications. (Nikitin)
According to Nikitin, the three-person team has already spent about 10 years on this technology before starting the company, plus the 5 years since starting the company—15 years in total spent on the codebase, approaches, and so on. In human-hours, they’ve invested enormous time building this technology.
But of course, expansion is necessary. Currently, since they have their own unique stack that’s very different from industry standards, they want to finalize the first showcase with a small team. Rather than spending time bringing in more people, they’re prioritizing creating a track record first.
After the first showcase, they plan to hire electrical and embedded engineers, computational neuroscientists and mathematicians, and spread their approach. Many technical staff will be needed, and Japan is a good place to do that, they say. They plan to grow the team to about 10 people during 2026.
What Japan Gets Right, What Still Needs to Change

Photo credit: Shibuya City
In developing their business in Japan, KanjuTech’s team has experienced both the strengths and challenges of Japan’s startup ecosystem. Their candid opinions offer valuable insights for improving Japan’s innovation environment.
First, their experience in Japan has been largely positive. Lukyanova says she’s amazed by the maturity of Japan’s environment and by the number of people inspired by robotics and this entire field. The number of people who want to work on this technology and collaborate is something they hadn’t found in any other country. At various conferences, they’ve met many scientists interested in the field, engaged in dialogue with them, and drawn new inspiration from those exchanges.
On the other hand, there’s room for improvement. Particularly, Kunin points out the need for collaboration between hardware and software companies.
In our case, we hope that Japanese companies will come to see cooperating with startups like us as a little less risky. I don’t think this applies only to us.

Because overall, in my understanding, it would really help accelerate the growth of Japanese industry as a whole, like hardware manufacturing, because everything is so mature.

At the same time, what we can see, and would like to see, is collaboration between software and hardware companies taken to the next level. (Kunin)
Kunin is confident that more cases of collaboration between software AI companies and hardware companies would bring much more value, such as truly autonomous things around us. He believes collaboration between companies should be much tighter.
Additionally, Nikitin notes that among the many Japanese venture capitalists (VCs) he’s spoken with, some lack a global overview and a global understanding of markets. They tend to focus on small businesses with domestic applications and to keep startup valuations low.
These factors may explain why many Japanese startup founders lack the motivation to take on big things: big ideas and big business applications. Sometimes, Japanese VCs seem to be investing in small and medium-sized IT companies rather than backing disruptive new businesses or defining new markets.
However, Kunin points out that this problem isn’t unique to Japan. In certain aspects, he sees no major gap between Japanese VCs, European VCs, and American VCs.
Many VCs claim to invest in pre-seed, but when you actually talk to them, the first things they ask are “Do you have revenue?” and “How much traction do you have?” This is the same in Japan, the US, and Europe. (Kunin)
While things might be a bit different at later stages like Series A and Series B, at the pre-seed phase, there’s really no big difference in VC thinking worldwide. Everyone is united in wanting to invest with low risk.
On the other hand, the value of Japan’s startup support programs is highly appreciated.
I would recommend other startups to participate in programs like Shibuya Startup Support or the accelerator program at OIST.
Because in Japan, introductions carry a lot of meaning, and if you can get into the network, it’s actually not that complicated. People will trust you and openly cooperate with you, and these things happen.
But if you don’t have cards in hand, it’s very difficult to approach this market. So if you’re doing a startup in Japan, you should have great partners like Shibuya Startup Support, or, if you’re in a specific industry, partnerships with research institutions that serve that industry and its companies. (Nikitin)
According to Kunin, people who are about to start businesses tend to be afraid of starting because they still have stable careers and salaries.
In Japan, there are many communities that provide opportunities to take several steps with low risk. You can start with networking parties, begin mentorship sessions, and share ideas about what you actually want to build. So it’s a good time to start and to try. (Kunin)
Having seen how entrepreneurs and startup communities changed in Russia, they now see many commonalities in what’s happening in Japan. They expressed confidence that entrepreneurs, young or not, can bring great value to Japan’s economy and its community, and offered their encouragement to Japanese founders.
Looking ahead, KanjuTech aims for a major fundraising round of $20-30 million USD in 2026, leveraging their track record in the Japanese market. The day when their technology is embedded in Japanese companies’ products and deployed to global markets may not be far off. Next-generation AI technology born from neuroscience, taking flight from Japan to the world—that story has just begun.
