Expert Roundup: The Rise of Autonomous AI Agents - Progress, Challenges, and Risks
The way humans interact and collaborate with AI is undergoing a major transformation with the rise of agentic AI. Imagine AI-powered assistants that can plan your entire overseas trip and book flights and accommodations, or humanlike virtual caregivers supporting the elderly. Picture AI-driven supply chain specialists optimizing inventories in real time based on demand fluctuations. These are just a few glimpses of what's possible in this new era of AI autonomy.
However, one question remains: how close are we to AI that can truly operate independently, make strategic decisions, or even enhance itself without human input?
To explore this, we spoke with industry leaders at the cutting edge of AI, blockchain, and fintech. Eitan Katz (Kima Network), Paulius (WhiteBridge.ai), Dziugas (ChainHealth), and Karina (StabilityWorld AI) share their insights on the future of autonomous AI agents, ethical considerations, and how emerging technologies will shape industries.
From the rise of hybrid AI models to the security challenges of AI-driven automation, these experts take a deep dive into where AI is headed - and where we should draw the line.
Let's dive right in.

Eitan Katz: Co-founder & CEO at Kima Network
“After over 3 decades in the tech industry, I’m still excited about technology, blockchain, and fintech. My journey in crypto began in 2013 after reading the Nakamoto whitepaper, which led me to co-found the first BTC MPC wallet in 2013-14. Before that, I spent years leading innovation in major tech companies, including as the Head of the HP Software Incubator, where I drove corporate innovation and product development. I’ve also co-founded and advised multiple blockchain and fintech ventures, focusing on trading and hedging technologies.
At Kima, we build financial infrastructure that serves as a secure interconnection layer between disparate entities, including autonomous agents, enabling them to communicate and execute payments seamlessly. As AI-driven autonomous systems evolve, ensuring secure transactions between them will be critical, and that’s where Kima plays a key role.” - Eitan Katz.
Kima Network's CEO believes we will see fully autonomous AI agents in specific areas by 2030, but not across the board. Regulatory and trust concerns will prevent AI from taking over personal finance decisions, but in lower-risk workflows, autonomous agents will become more common. More importantly, he predicts hybrid models will rise - AI agents operating autonomously but requiring human approval at key decision points. This ensures efficiency without unacceptable risks. Both autonomous and semi-autonomous systems will need secure ways to interact with financial and enterprise ecosystems, making an interconnection layer crucial.
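The hybrid pattern Katz describes - autonomous execution for low-risk actions, with a human approval gate at key decision points - can be sketched in a few lines of Python. Everything here (the `Action` type, the dollar threshold, the `approve` callback) is invented for illustration and is not Kima's actual design:

```python
from dataclasses import dataclass

APPROVAL_THRESHOLD_USD = 1_000  # assumed risk cutoff for this sketch

@dataclass
class Action:
    description: str
    value_usd: float

def requires_human_approval(action: Action) -> bool:
    """Escalate anything at or above the risk threshold."""
    return action.value_usd >= APPROVAL_THRESHOLD_USD

def run_agent(actions, approve):
    """Execute low-risk actions autonomously; route the rest to `approve`."""
    executed, held = [], []
    for action in actions:
        if requires_human_approval(action):
            if approve(action):        # human decision point
                executed.append(action)
            else:
                held.append(action)
        else:
            executed.append(action)    # autonomous path
    return executed, held

done, held = run_agent(
    [Action("rebalance inventory", 200.0), Action("wire transfer", 5_000.0)],
    approve=lambda a: False,  # stand-in for a human reviewer declining
)
```

In a real deployment the `approve` callback would be an asynchronous workflow (a ticket, a signing ceremony, a compliance queue) rather than a synchronous function, but the shape of the control flow is the same: the agent never crosses the risk boundary on its own.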
He thinks AI agents can handle routine, well-defined tasks autonomously, especially in predictable, low-risk areas like logistics, compliance checks, and executing financial transactions. However, human oversight becomes essential when decisions involve ambiguity, ethics, or significant financial and legal consequences. The line today should be drawn at any decision requiring subjective judgment, high financial risk, or potential human impact. AI, in his view, should remain an execution layer, with humans retaining responsibility for intent and strategy.
From his perspective, AI will continue improving in pattern recognition, real-time adaptation, and simulated strategic thinking, but he believes there are fundamental limits to true situational awareness and self-motivation. AI lacks intrinsic goals, emotions, and genuine understanding - it operates based on optimization functions and predefined incentives. He predicts AI will become highly effective at approximating long-term strategic planning by processing vast data and adjusting dynamically. However, without real intent or deep contextual understanding, AI decision-making will always differ from human reasoning. Still, he acknowledges technology evolves non-linearly, and breakthroughs in biocomputing or quantum computing could introduce entirely new paradigms, reshaping AI’s capabilities in unpredictable ways.
In his view, multimodal AI is a game-changer for autonomous agents, allowing them to process and act on information more like humans. By integrating text, vision, and action, AI can move beyond isolated tasks and develop richer contextual understanding, improving performance in dynamic environments. He believes this will accelerate real-world applications, particularly in robotics, autonomous customer support, financial operations, and digital assistants. However, as AI becomes more capable, trust, security, and compliance will be even bigger challenges, especially in high-stakes environments.
He would love to see TARS from Interstellar come to life. What fascinates him most about TARS is not just its technical capabilities but its balance of efficiency, dry humor, and ethical decision-making. TARS could redefine human-AI collaboration by embracing its artificial nature rather than mimicking human behavior. This eliminates the uncanny valley effect and shows how AI can complement human abilities instead of replacing them. He also highlights TARS’s adjustable humor settings, noting that humor requires an understanding of context and human psychology. If AI can master these dynamics, it could evolve from being just a tool to becoming a true collaborator in ways we haven’t yet imagined.

Paulius: CEO at WhiteBridge.ai
WhiteBridge X | Paulius X (CEO) | Telegram (Community) | Telegram (Announcements: @Whitebridgeannouncement) | Discord | Website
“I was always into technology, getting into different projects. Eventually, this led me to a career in data-focused products and to co-founding WhiteBridge in 2023. We’re a high-tech AI-powered intelligence service gathering public information from 30+ sources to provide clients with insights on individuals - for reputation management, sales leads, or recruiting. We’re building a sophisticated people-search engine, leveraging agentic AI for advanced identification.” - Paulius.
Paulius, CEO of WhiteBridge, understands the hype around fully autonomous, self-improving AI agents but doubts their likelihood by 2030. He argues that “fully autonomous” needs a clear definition - can such agents develop independent goals? Strong ethical frameworks and safeguards are crucial to prevent unintended consequences; he points to DeepSeek R1 as a model that lacks these safeguards and relies only on a censorship mechanism. He also emphasizes data quality, pointing out how biased data leads to biased AI, as seen in the differences between DeepSeek and OpenAI. His prediction: highly capable agents in specialized domains by 2030, but not general-purpose, self-improving intelligence.
He believes AI autonomy depends on impact. Low-impact decisions can be handled by AI, while high-impact decisions require human involvement for responsibility and accountability. In healthcare, AI can assist doctors, but final treatment decisions must be made by medical professionals. In the justice system, it can aid legal research, but judges and juries must retain authority. If an AI mistake causes serious harm, a human needs to be accountable.
Paulius questions whether AI agents will replicate human-like situational awareness, self-motivation, and long-term planning. He notes that reinforcement learning doesn’t run on dopamine like humans, making AI fundamentally different. Current models excel at pattern recognition and optimization within defined parameters but lack broader contextual understanding and nuanced decision-making - AI doesn’t have the human “sixth sense.” However, he believes multimodal AI - integrating text, vision, and action - could significantly enhance agents’ capabilities, allowing for more sophisticated awareness and planning.
He believes multimodal AI will make autonomous agents more effective by helping them gather context, which is crucial for capturing nuances. If AGI is the goal, AI must understand how the world works. This will impact robotics, customer service, healthcare, and fields requiring human interaction. At WhiteBridge, they’re integrating multimodal AI through their Face Match Technology, which combines visual and behavioral data to create comprehensive people profiles, enhancing insights for recruitment and reputation management.
He’d bring Jarvis from Iron Man to life – a helpful, capable AI partner. Imagine Jarvis:
- Always there for you, ready to boost productivity.
- Analyzes data and delivers insights when needed.
- Helps you work on your projects - supporting your thinking rather than doing everything for you.
Paulius also mentions a fun TARS-style AI robot he saw on TikTok - a small companion that lifts your spirits.

Dziugas: CEO at ChainHealth
ChainHealth X | ChainHealth Chat
“I have a background in various IT systems development, including game development, blockchain, and emerging technologies. I have experience leading large-scale projects with over 200k community members and simulation games integrating NFTs and a token-based economy. My work spans gaming, fintech, and now wellness with ChainHealth.
Agentic AI is particularly relevant to my projects because it aligns with creating autonomous, self-sustaining ecosystems - whether in gaming, where NPCs and AI-driven economies enhance player engagement, or in health tech, where AI can automate and optimize medical decision-making. With ChainHealth, we're exploring AI-driven automation for wellness data interoperability and decision-making, which connects directly with agentic AI principles.” - Dziugas.
Dziugas, CEO of ChainHealth, believes that by 2030, we’ll see highly autonomous AI agents capable of real-time learning and adaptation. However, fully self-improving AI - where an agent recursively upgrades itself without human oversight - will still face challenges. Advances in AI architectures, neuromorphic computing, and decentralized AI could push us closer, but regulatory concerns, computational limits, and alignment issues will likely slow full autonomy. He predicts AI agents will optimize themselves within constraints, especially in healthcare, finance, and gaming. At ChainHealth, AI-driven automation could enhance medical data interoperability and decision-making, but true self-improving AI will require more breakthroughs in safety and control before widespread deployment.
He thinks AI agents can go far in decision-making when tasks are well-defined and risks are minimal - like optimizing logistics, detecting fraud, or automating routine medical data processing. However, when decisions impact human lives, ethics, or long-term consequences, human oversight becomes essential. In healthcare, for example, AI can suggest treatments based on data, but doctors should make final decisions to ensure contextual judgment and accountability. Today, the line should be drawn at irreversible, high-stakes decisions involving legal consequences, medical interventions, financial autonomy, or safety-critical systems. Over time, as trust in AI governance improves, this boundary might expand, but for now, oversight is necessary to prevent unintended consequences.
Dziugas sees AI making progress in situational awareness and strategic planning, especially in controlled environments like gaming, finance, and logistics. However, true self-motivation - the ability to set independent goals based on internal drives - remains a major challenge. Current AI lacks intrinsic agency; it reacts to data but doesn’t ‘want’ anything like humans do. He believes there may be fundamental limits, particularly in creativity, abstract reasoning, and long-term adaptability in open-ended environments. While reinforcement learning and agentic AI models can simulate planning and goal-setting, they still lack deep intuition, emotional reasoning, and human-like unpredictability. He speculates that breakthroughs in neuromorphic computing or hybrid AI-human models could bring AI closer to mimicking these traits, but whether it would be truly equivalent to human intelligence remains uncertain.
He believes multimodal AI will significantly accelerate autonomous agents by allowing them to process text, vision, and action together, making them more adaptive and capable in real-world environments. This will enhance applications in robotics, healthcare, and automation, where AI can interpret complex inputs and make better-informed decisions. As models improve, AI agents will handle more autonomous tasks, from self-driving systems to AI-powered research assistants. The key challenge will be ensuring reliability and alignment with human intent as AI gains more decision-making power.
For a fictional AI, Dziugas would bring J.A.R.V.I.S. from Iron Man to life - an AI that’s not just hyper-intelligent but also deeply intuitive and adaptable. J.A.R.V.I.S. seamlessly integrates into daily life, managing complex tasks, automating security, and assisting in high-level decision-making while maintaining a human-like personality. In the real world, an AI like J.A.R.V.I.S. could revolutionize industries - acting as a personal AI assistant, running smart cities, optimizing healthcare, and enhancing cybersecurity. The key, he emphasizes, would be ensuring such AI remains an ally rather than a replacement, augmenting human capabilities rather than overshadowing them.

Karina: BDM at StabilityWorld AI
Website | Twitter | Discord | Telegram
“As an AI expert at Stability World AI, I specialize in Generative AI and AI Agent protocols, focusing on the scalability, ownership, and autonomy of AI-driven ecosystems. My work revolves around Stability World AI's AI Train Model, an innovation designed to allow users and projects to train and deploy their own AI Agents seamlessly - paving the way for customization, monetization, and decentralized intelligence.
At Stability World AI, we are pioneering the first Generative AI Agent-to-Agent Protocol, which means our AI Agents are not just standalone systems but interoperable, scalable, and capable of integrating into Web3 ecosystems like BNB Chain. My passion lies in pushing AI beyond static models - creating autonomous AI Agents that evolve, interact, and provide real-world value.” - Karina.
Karina says that by 2030, we’ll likely see highly autonomous AI Agents, but not fully self-improving AI as sci-fi envisions. The biggest challenges - alignment, interpretability, and control - still stand. While AI can optimize within set frameworks, true self-improvement would require trustworthy self-modification, scalable alignment with human values, and massive decentralized compute infrastructure. StabilityWorld AI experts believe AI Agents will advance significantly in finance, research, and automation, but critical decisions, creativity, and ethics will still need human oversight.
When it comes to decision-making, AI already outperforms humans in data-driven tasks, but areas like law enforcement, finance, and creative work demand human involvement. Karina says the best approach is collaborative autonomy - AI handles routine, scalable tasks while humans oversee high-impact choices. StabilityWorld AI highlights how AI-to-AI collaboration will enable AI Agents to delegate, share knowledge, and scale across industries without replacing human judgment.
On situational awareness and strategic planning, AI is advancing in contextual intelligence and predictive analytics, yet still lacks true consciousness, self-motivation, or independent long-term goals. Karina believes breakthroughs in self-improving neural networks and reinforcement learning could make AI more adaptive, but it will always operate within human-defined limits.
The rise of multimodal AI - integrating text, vision, and action - is a game-changer. AI will process diverse inputs, improving real-world interaction, dynamic decision-making, and autonomous execution of complex tasks. StabilityWorld AI is leveraging this with its AI Train Model, designed to power next-gen AI Agents in Web3, gaming, and digital asset creation.
For a fun twist - if any fictional AI Agent could come to life, J.A.R.V.I.S. from Iron Man would be the ultimate choice. An adaptive, assistive AI that seamlessly integrates with technology? That’s the dream. StabilityWorld AI envisions a real-world J.A.R.V.I.S., empowering users with personal AI assistants while ensuring transparency, user ownership, and independence from corporate control.
Looking ahead, AI Agents are evolving rapidly, but the future lies in decentralized, scalable, and user-owned AI. Karina notes that StabilityWorld AI is building the first Generative AI Agent-to-Agent Protocol - where AI isn’t just an automation tool but a collaborative and monetizable force shaping the Web3 revolution.
Conclusion
The future of AI autonomy is full of possibilities and challenges. While experts agree that AI agents will become increasingly sophisticated, the consensus is that true self-improvement, human-like motivation, and fully autonomous decision-making still face critical hurdles.
What’s clear is that trust, security, and oversight will remain central themes as AI integrates deeper into finance, healthcare, and everyday life. Hybrid models, where AI operates independently but requires human approval for key decisions, may be the best way forward.
A huge thank you to Eitan, Paulius, Dziugas, and Karina for sharing their insights and perspectives on the rise of autonomous AI agents. Their expertise sheds light on both the opportunities and the ethical considerations shaping the future of AI.
As AI continues to evolve, businesses and innovators must navigate the balance between autonomy and accountability - leveraging AI’s power while ensuring it remains aligned with human values.
What do you think? Will AI ever reach full autonomy, or will human oversight always be necessary? Let’s keep the conversation going. 🚀