Monday, March 3, 2025

The autonomy and control dilemma

By Uma Ganesh

Agentic AI is touted as the next game changer in the AI domain in 2025. While standalone AI agents have demonstrated their effectiveness in solving problems, the next level of efficiency is expected to come from multiple AI agents working together and interacting with one another to resolve complex problems. Successful applications are emerging in finance, logistics, defence, healthcare and manufacturing.

Amazon uses swarms of robots in its warehouse operations to enhance processing speed. In defence, AI-controlled drone swarms are being used for surveillance and security. Hindustan Aeronautics is developing the Combat Air Teaming System (CATS), which integrates manned fighter aircraft with a fleet of swarming drones to minimise human risk while executing complex missions. The Indian Institute of Science (IISc) is developing algorithms that enable multiple robots to collaborate in monitoring environmental factors. Indrajaal, an anti-drone system, enables real-time threat detection and neutralisation without human intervention.

Diagnosing cancer patients, discovering drugs, running clinical trials and customising medication are time-consuming and involve huge costs. AI-driven automation is delivering significant benefits through faster molecular analysis, accelerated clinical trials and better predictions of drug efficacy against targets. In power grids, AI agents optimise energy distribution, preventing blackouts without human involvement.

Although there are successful examples of multi-agent collaboration, many challenges remain. The complexities involved in coordination and interaction could lead to conflicting decision-making by individual agents. Autonomous vehicle accidents are one such example, where decisions made by agents without human involvement have raised questions about the efficacy of autonomous agents.

In financial trading, distributing control across multiple agents rather than centralising it makes it difficult to track rogue activities that could lead to market manipulation. In healthcare, a faulty diagnosis made by one agent on the basis of limited information could lead to another agent prescribing incorrect medication. Pinpointing accountability and ethical responsibility therefore becomes difficult.

Developing a multi-agent AI system requires high investment and computational power. Continuous training and ensuring seamless updates between agents involve further effort, investment and vigilance to avoid pitfalls and unexpected situations. Multiple AI agents are normally considered for deployment in high-volume or highly volatile environments. Letting loose multiple AI agents with several unknown parameters, and without human supervision, should be avoided.

To ensure that no harm is caused by their deployment, it is important to strike a balance between providing autonomy and exercising control. This requires robust verification processes and security checks, supported by human oversight at critical decision points.

The writer is chairperson, Global Talent Track.

Disclaimer: Views expressed are personal.