BIS working on standards for AI platforms, with industry partners on board

From diagnosing diseases to creating cool images, artificial intelligence (AI) has long had free rein. That may be about to end.

The Bureau of Indian Standards (BIS) is preparing a comprehensive set of standards for AI-related applications in India, two people directly involved in the process said. The bureau is consulting ministries of consumer affairs, information technology and education as well as AI industry partners to develop these standards, the people cited above said on condition of anonymity.

In addition to generative AI, which generates realistic images and text, AI is used in a wide range of fields, including healthcare, finance, transportation, education and customer service. However, the explosive growth of AI applications has brought with it growing concerns about ethics and trustworthiness, necessitating a robust regulatory framework to ensure its responsible use, the people cited above said.

The initiative by BIS, which functions under the Ministry of Consumer Affairs, aims for a structured approach to regulating AI, focusing on the entire lifecycle of AI applications from development to deployment and their ultimate impact, said one of the two people cited above.

Queries sent by email to BIS spokespersons as well as the consumer affairs and IT ministries remained unanswered by the time this article was published.

Unintended consequences are feared

“AI technologies are advancing rapidly, but the absence of clear rules and regulations could have unintended consequences, especially in areas where trust and transparency are paramount,” the second person said. “We aim to create a framework that not only guides developers but also protects users and stakeholders from potential risks,” he added.

The BIS framework will aim to ensure that all parties, regardless of their level of AI expertise, understand the processes they need to follow to enable consistent and effective stakeholder engagement. The framework would provide guidance for AI applications based on a common set of rules that include “manufacture,” “use,” and “impact” perspectives, the people cited above said. These perspectives are intended to offer a holistic view of AI applications, taking into account both functional and non-functional characteristics of AI, such as trustworthiness and risk management.

Experts have welcomed the move, saying the new BIS standards will bring much-needed consistency to the industry.

“This is a welcome move. The introduction of these rules will bring much-needed uniformity to the sector and establish consistency across the industry. Currently, in the absence of a formal framework, those in power set the rules, but this initiative will put an end to that,” said Pawan Duggal, a cybersecurity expert.

“As a result, it will benefit consumers, make the ecosystem more robust and help eliminate inconsistencies in cybersecurity practices and procedures,” Duggal said by phone.

Wide applications

The impact of AI on society has been transformative. In healthcare, AI-powered systems can diagnose diseases, analyze medical images, and develop personalized treatment plans. In finance, AI algorithms can detect fraud, predict market trends, and optimize investment portfolios. In transportation, AI-powered autonomous vehicles have the potential to improve safety and efficiency. In education, AI-powered tutoring systems can provide personalized learning experiences and adapt to each student’s needs.

However, the advancement of AI has been accompanied by controversy.

Last year, the New York Times sued ChatGPT creator OpenAI for allegedly using millions of NYT articles to train the chatbots that now compete with it. In May, AI chipmaker Nvidia was sued by three authors for impermissible use of copyrighted works to train its NeMo AI platform. In the same month, a professional model in India sent a legal notice to the Advertising Standards Council of India claiming that travel portal Yatra Online had used her facial features in an advertisement. In December, the Delhi High Court asked the Centre to respond to a public interest litigation against unregulated use of AI and deepfakes. Earlier this month, the Bombay High Court granted an injunction to singer Arijit Singh in his suit against AI platforms that he claimed violated his personality rights.

“While AI presents unparalleled opportunities for innovation, economic growth and public welfare, it also presents significant ethical, privacy and security challenges. Without well-defined laws, unchecked deployment of AI could lead to misuse, discrimination and violations of fundamental rights,” said Dr Vishal Arora, Head of Business Transformation and Operational Excellence at Artemis Hospitals, Gurugram.

“Clear regulations are essential to guard against bias in AI algorithms, protect personal data and ensure transparency in AI-based decision-making processes. As India aims to be a global leader in the AI space, creating a legal ecosystem that supports responsible use of AI is not only necessary but urgent, ensuring that the benefits of AI are maximised and potential risks mitigated,” Arora said.

The growing AI market

According to a recent report by Boston Consulting Group (BCG) and Nasscom, India’s AI market is growing at 25-35% annually and is projected to reach around $17 billion by 2027. The report also predicted that demand for AI talent in India may grow at 15% annually through 2027 as investments in AI continue to increase.

“Since 2019, global investments in AI have grown at an annual rate of 24%, reaching nearly $83 billion by 2023. The majority of this investment has gone into AI technologies such as data analytics, generative AI, and machine learning platforms,” the report noted.

Consumer experts have expressed concerns about the social impact of AI and stressed the importance of transparency and accountability in AI applications.

“AI is not just a technological innovation. It has profound implications for society. Ensuring that AI applications are trustworthy and that stakeholders are aware of their roles and responsibilities is crucial to fostering public trust in these technologies,” said Manish K. Shubhay, partner at The Precept-Law Offices and consumer rights advocate.


