Is AI the real threat to jobs and privacy? Expert sheds light on critical issues

New Delhi: AI is revolutionising sectors around the world – from healthcare to technology and creative industries – by automating tedious tasks and opening doors to new opportunities. While there are concerns about job losses, AI offers avenues for growth through upskilling and the creation of jobs that didn’t exist before.

Ethical AI governance and public-private partnerships, backed by appropriate cybersecurity infrastructure, can ensure that AI serves human interests. As AI evolves, it is transforming the global landscape; the task is to balance progress, security, and opportunity along the way.

In a recent email interview, Anand Birje, CEO of Encora and former Digital Business Head of HCL Technologies, shared his insights on the opportunities and risks posed by AI and other advanced technologies.

How does generative AI impact job creation?

AI is transforming the employment landscape, but it’s not a simple matter of replacement. We can see big changes in healthcare, technology, creative fields, and other verticals as AI expands the scope of existing roles by taking over repetitive and mundane tasks. However, while a percentage of roles built around routine tasks may be phased out, AI will also create entirely new roles, responsibilities, and positions that don’t currently exist.

For both businesses and individuals, the key to coping with these changing times is adaptation. We must focus on training people and creating a culture in which upskilling and retraining are constant. This cultural shift requires a change in individual mindset and must be an essential part of companies’ change-management strategies.

Forward-thinking companies are already helping their people understand the true scale of the change AI brings, along with both the challenges and the opportunities it presents for advancing their careers.

AI is not the existential threat to jobs that many fear, but it will force us to reinvent the nature of work and evolve as individuals in the process to harness its full potential. A parallel can be drawn with the wheel.

Humans could travel and transport goods before its invention, but the wheel allowed us to save energy and time, freed us to focus on other areas, and opened up new avenues of progress for our civilisation.

Does end-to-end encryption fail to prevent data leaks?

Trust in social media platforms is a huge issue today, affecting millions of users around the world, including all of us. Encryption helps, but it is not enough; it is just one piece of a complex puzzle. What we need is a multi-layered approach involving transparency, compliance and accountability. In recent times we have seen a shift in this direction, as companies disclose what data they collect, such as geolocation, and how they plan to use it.

As for regulation, the right balance needs to be found. We need frameworks that protect users while also enabling technological progress. These frameworks must address the unique complexities of different geographies, comply with local regulations and global standards, and protect user privacy while leaving room for innovation and creativity.

The tech industry needs to step up its efforts and adopt a “privacy by design” approach. This means building guardrails into products and services from the ground up, not as an afterthought.
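To make that concrete, here is a minimal sketch of what “privacy by design” can look like at the code level: collecting only the fields a feature actually needs and pseudonymising identifiers before anything reaches storage. The field names and salting scheme are illustrative assumptions, not a reference implementation.

    import hashlib
    import os

    # Data minimisation: only fields the feature actually needs are kept.
    ALLOWED_FIELDS = {"age_band", "country", "preferences"}

    def pseudonymise(user_id: str, salt: bytes) -> str:
        """Replace a raw identifier with a salted, one-way hash."""
        return hashlib.sha256(salt + user_id.encode()).hexdigest()

    def prepare_record(raw: dict, salt: bytes) -> dict:
        record = {k: v for k, v in raw.items() if k in ALLOWED_FIELDS}
        record["user_ref"] = pseudonymise(raw["user_id"], salt)
        return record

    salt = os.urandom(16)  # in practice, a managed per-deployment secret
    print(prepare_record(
        {"user_id": "alice@example.com", "age_band": "25-34",
         "country": "IN", "gps_trace": [(28.61, 77.21)]},  # gps_trace is dropped
        salt,
    ))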

This is truer than ever in a world where AI is being used for identity theft, disinformation and manipulation. Ultimately, building trust will require deeper collaboration between tech companies, regulators and users themselves, and this is a key factor to consider as we redesign digital channels to fit an AI world.

Should we be worried about the existential threat posed by AI?

We need to take these warnings seriously, but it is also critical to differentiate between immediate, concrete risks and long-term, speculative concerns. The real threats we face today are not science-fiction scenarios about AI dominance, but more subtle ones: issues like AI bias, privacy violations, echo chambers, and the spread of misinformation. These are real problems affecting real people right now.

To address these issues, we need collaboration; no single company or country can solve them alone. Governments, technology companies and academics must work together to ensure that standards of ethics, transparency and compliance are in place wherever AI is used. Public education about both the benefits of AI and its risks is also important to ensure safe use.

But the important thing is that, while we work through these risks, we cannot forget the benefits that AI can bring. It is a powerful tool that could help solve major global problems. We must be cautious about AI, but also hopeful about what it can achieve. It is a defining challenge for our generation, and we must rise to it.

Where is the government failing to tackle digital fraud?

Online financial fraud is a growing concern. While the government has made efforts, we are still trying to catch up. The main challenge is speed – cybercriminals move fast and our legal and regulatory frameworks often struggle to keep up. With the advent of modern technologies such as Gen AI, cybercrime continues to grow in sophistication, scale and speed.

Regulators and government agencies need to work with tech companies and bring the best tech talent to bear on fighting cybercrime. We need to think outside the box: for example, a real-time threat-sharing platform between tech companies and government agencies could help identify and stop financial cybercrime as it happens.
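As a hedged illustration of what such a platform might exchange, the sketch below builds a STIX-style fraud indicator and posts it to a shared feed. The endpoint, schema and field names are hypothetical; a real deployment would follow an agreed standard such as STIX/TAXII.

    import json
    import urllib.request
    from datetime import datetime, timezone

    # A hypothetical fraud indicator, loosely modelled on STIX threat intel.
    indicator = {
        "type": "indicator",
        "pattern": "payee_account = '1234567890' AND channel = 'UPI'",
        "labels": ["financial-fraud", "mule-account"],
        "reported_by": "bank-x",
        "created": datetime.now(timezone.utc).isoformat(),
    }

    req = urllib.request.Request(
        "https://threat-exchange.example.gov/v1/indicators",  # hypothetical endpoint
        data=json.dumps(indicator).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # urllib.request.urlopen(req)  # left commented out: the endpoint is illustrative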

We also need a more proactive approach and an updated legal framework. Conventional laws were not written with modern cybercrime in mind, and this can lead to apathy or slow responses in dealing with it.

Digital literacy is also crucial, as many frauds succeed simply because people are not aware of the risks. This is especially true in a country like India, where widespread internet penetration in rural areas, and therefore among the majority of the population, is a recent phenomenon.

In short, the risk of AI being used to commit cyber financial crimes is very real. To combat it effectively, we need better technology, smarter regulation, better education and closer collaboration across sectors.

Should governments regulate AI?

In my view, some level of government oversight of AI is not only advisable but necessary, and ideally it would be created through public-private partnerships. Such oversight is needed to ensure the safe and ethical use of AI, even as the technology rapidly becomes ubiquitous in our effort to infuse creativity and innovation into every area of work.

We need a framework that is flexible and adaptable and that focuses on transparency, accountability and equity. The regulatory approach will largely depend on local government bodies; however, it can be tiered so that the level of oversight and regulatory requirements are directly proportional to a system’s capabilities and potential impact.

For example, an AI used to help marketers make their copy more engaging does not require the same level of oversight as an AI that helps process insurance claims for the healthcare industry.
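A minimal sketch of that tiering idea follows; the tiers, examples and required controls are invented for illustration and not drawn from any actual regulation.

    # Oversight scales with potential impact; all values are illustrative.
    RISK_TIERS = {
        "low": {"example": "marketing copy assistant",
                "controls": ["self-assessment"]},
        "medium": {"example": "resume screening",
                   "controls": ["bias audit", "public disclosure"]},
        "high": {"example": "health-insurance claims processing",
                 "controls": ["external audit", "human sign-off",
                              "incident reporting"]},
    }

    def required_controls(tier: str) -> list[str]:
        return RISK_TIERS[tier]["controls"]

    print(required_controls("low"))   # ['self-assessment']
    print(required_controls("high"))  # ['external audit', 'human sign-off', 'incident reporting']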

We also need to think about the broader societal impact of AI and take active steps to address issues such as job losses and data privacy. If we keep these in mind, we can ensure that the policies developed to regulate AI are in the public interest and in line with our values and human rights.

Effective AI regulation will require ongoing dialogue between policymakers, industry leaders and the public. It is about striking the right balance between innovation and responsible development, harnessing the full potential of the technology and protecting our civilisation from its side effects.

Are AI and robotics a danger to humanity?

Look, ‘Terminator’ is great entertainment, but we’re far from that reality. For the first time, AI can make decisions: it has evolved from being a ‘tool’ to being an ‘agent’. Yet the real and immediate risk is not AI taking over the world, but humans misusing the enormous potential it offers. Today, we should be more concerned about AI being used for privacy invasions, autonomous weapons, misinformation and disinformation.

We are at a crucial point in shaping its development, just before the technology becomes ubiquitous. We need to prioritise global security and governance frameworks, create clear ethical guidelines and safety mechanisms, invest in AI literacy, and keep humans in control of critical decisions.

Prevention is about being proactive. The goal should be to use AI intelligently. We should not fear it, but we should guide it in the right direction. It is about finding the right balance between progress and responsibility.

How vulnerable are military AI systems to cyberattacks?

This is an important question. As AI becomes more tightly integrated with our existing infrastructure, there are some areas where it has the potential to cause the most harm. AI in military systems is one such area, and it requires us to tread with extreme caution.

From data poisoning to decision manipulation and adversarial attacks, to theft of sensitive data and unauthorized access, there are many ways in which AI integration can create vulnerabilities and challenges for the military and cause significant damage in the process.

For example, in an evasion attack an adversary changes the colour of a few pixels in a way that is imperceptible to the human eye, yet causes the AI model to misclassify the image, and to do so with high confidence. Such attacks can be turned against AI systems involved in face detection or target recognition, with disastrous consequences.
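A minimal sketch of this class of attack, the fast gradient sign method (FGSM), is shown below. It assumes a PyTorch image classifier; the tiny untrained network here is a stand-in purely for illustration, so the misclassification is not guaranteed the way it would be against a trained model.

    import torch
    import torch.nn as nn

    # Stand-in classifier; a real attack would target a trained model.
    model = nn.Sequential(
        nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
        nn.Flatten(), nn.Linear(8 * 32 * 32, 10),
    )
    model.eval()

    image = torch.rand(1, 3, 32, 32, requires_grad=True)  # dummy 32x32 RGB image
    true_label = torch.tensor([3])

    # Compute the loss of the correct prediction, then nudge every pixel
    # slightly in the direction that increases that loss.
    loss = nn.functional.cross_entropy(model(image), true_label)
    loss.backward()

    epsilon = 2 / 255  # small enough to be imperceptible to a human
    adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

    print("clean prediction:      ", model(image).argmax(1).item())
    print("adversarial prediction:", model(adversarial).argmax(1).item())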

How can we address this problem? We need robust, cyber-secure AI systems that can explain their decisions so that humans can verify them. Government agencies would be well advised to work closely with technology companies to deploy AI systems that can identify and resist manipulation, implement zero-trust architectures for sensitive digital infrastructure, and keep humans in the loop for critical decisions.
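To illustrate the humans-in-the-loop point, here is a minimal sketch of a confidence gate: the system acts autonomously only when its confidence is high, and escalates everything else to a human reviewer. The threshold and review queue are assumptions for illustration.

    from dataclasses import dataclass

    CONFIDENCE_FLOOR = 0.95  # illustrative threshold

    @dataclass
    class Decision:
        label: str
        confidence: float

    review_queue: list[Decision] = []

    def dispatch(decision: Decision) -> str:
        if decision.confidence >= CONFIDENCE_FLOOR:
            return f"auto-approved: {decision.label}"
        review_queue.append(decision)  # a human verifies before any action
        return "escalated to human reviewer"

    print(dispatch(Decision("routine logistics request", 0.99)))
    print(dispatch(Decision("possible target match", 0.62)))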

AI should support military decision-making, not replace human judgment.


