
Paving the way for trustworthy AI

Updated: Jul 19, 2023

Artificial intelligence has become the new rage – it is literally everywhere. With the emergence of new software, especially the popular ChatGPT (GPT stands for Generative Pre-trained Transformer) from the American firm OpenAI, AI has become the new buzzword. AI today needs no introduction, and it is interesting to note that it has been in development for decades – since the 1980s, to be more precise.


ChatGPT, and its later integration with Bing, has been giving Google tough competition. ChatGPT is great at generating new content: because it is built on a language model, it excels at write-ups and can produce essays, reports, articles and more in a matter of minutes. AI tools are also available that can produce slides from text, so the hours of work you used to put into creating a presentation just got a lot easier. News is out that Microsoft will integrate this technology into its other products, such as the Microsoft 365 suite. AI is revolutionising the world of work, and firms are already revising their recruitment practices. Thousands of jobs are going away and new jobs are being created; only those who can employ AI effectively at work will remain relevant. Image-generation software such as Midjourney can replace a professional photographer's work and produce professional photo shoots in less than half an hour.
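To make this concrete, here is a minimal sketch of driving such text generation programmatically, assuming the `openai` Python package (the 0.x series current as of mid-2023) and an API key in the environment; the model name and prompt are placeholders, not a recommendation:

```python
import os
import openai

# Assumes an OpenAI API key is available in the environment.
openai.api_key = os.environ["OPENAI_API_KEY"]

# Ask the chat model to draft a short piece of writing.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # placeholder; any chat-capable model works
    messages=[
        {"role": "system", "content": "You are a helpful writing assistant."},
        {"role": "user", "content": "Draft a 200-word executive summary on renewable energy."},
    ],
)

# The generated text is in the first choice of the response.
print(response.choices[0].message.content)
```

A few lines like these are all it takes to turn a prompt into a draft report – the hours now go into reviewing and editing the output rather than producing it.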


AI is also useful in the domain of virtual assistants. The more trustworthy AI becomes, the more it will make our lives easier. We will see more and more AI assistants being developed, such as Apple's Siri and Samsung's Bixby, and websites all over the world will use them to become more informative – FAQs, for instance, can easily be answered by a virtual assistant. We have already seen sci-fi films in which humans talk to machines that do a whole lot of work for them, including information processing and data analytics. Our future homes will be equipped with IoT devices and robots that can do household work such as cleaning and can monitor our electronic devices, such as refrigerators and TVs. Robots have already been invented that can cook a great meal for you, so in the future all you will have to do is sit back, relax and let the machines do the work…!
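As a toy illustration of how a website FAQ assistant might work, here is a minimal sketch that matches a visitor's question against a set of stored questions and returns the closest answer. The FAQ entries are invented for the example, and a real assistant would use a proper language model rather than plain string similarity:

```python
from difflib import SequenceMatcher

# Hypothetical FAQ entries a website assistant might serve.
FAQ = {
    "What are your opening hours?": "We are open 9am to 5pm, Monday to Friday.",
    "How do I reset my password?": "Use the 'Forgot password' link on the login page.",
    "Do you ship internationally?": "Yes, we ship to most countries worldwide.",
}

def answer(question: str) -> str:
    """Return the stored answer whose question best matches the input."""
    best_match = max(
        FAQ,
        key=lambda q: SequenceMatcher(None, question.lower(), q.lower()).ratio(),
    )
    return FAQ[best_match]

print(answer("When are you open?"))
# -> We are open 9am to 5pm, Monday to Friday.
```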


Well… not really! If you think about it, there will always have to be people who manage the machines and troubleshoot them when they stop functioning. More importantly, we will need people to safeguard our data and defend our systems against cyberattacks! Hacking could become commonplace, and we will need people to defend our devices against all sorts of malware. Nowadays, a secure computer system comes with a general-purpose antivirus such as McAfee, an AI-based defender (such as Sophos or Windows Defender) and a VPN (Virtual Private Network) provider such as Kaspersky VPN. These are essential items in anyone's IT toolkit. There are many ways to secure a computer system, and I myself offer consultancy services in IT security as a qualified, professional member of the British Computer Society (MBCS).


In my previous articles, I have broached the more existential problem we face with AI: what happens if AI goes out of control, creating what we call a dystopia. The more intelligent we make machines, the greater the danger of them outsmarting us – and, should that happen, of them seeking to supplant us, unless we take measures to build trustworthy AI. We have already seen, recently, how AI can be employed to adversely influence society. The European Parliament has moved to legislate by introducing draft rules and then adopting them in later sessions. This is the first instance of a major jurisdiction creating legislation specifically for AI, and it has been named the Artificial Intelligence Act. It will pave the way for making AI more trustworthy – a tool for boosting business and organisational productivity rather than undermining people and nations. Countries are also looking at placing safeguards on the development of AI software, so that we can understand its functioning and its repercussions.



Figure: AI and humans (Image Credits: Analytics Insight)


The European Parliament takes a risk-based approach to AI: it seeks to regulate AI systems according to the level of risk they present. I am reproducing certain sections of the European Parliament's announcement below, since these do the most justice to the new, much-anticipated rules.


“AI systems with an unacceptable level of risk to people’s safety would be strictly prohibited, including systems that deploy subliminal or purposefully manipulative techniques, exploit people’s vulnerabilities or are used for social scoring (classifying people based on their social behaviour, socio-economic status, personal characteristics). MEPs substantially amended the list to include bans on intrusive and discriminatory uses of AI systems such as:-


  • “Real-time” remote biometric identification systems in publicly accessible spaces;

  • “Post” remote biometric identification systems, with the only exception of law enforcement for the prosecution of serious crimes and only after judicial authorization;

  • Biometric categorisation systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation);

  • Predictive policing systems (based on profiling, location or past criminal behaviour);

  • Emotion recognition systems in law enforcement, border management, workplace, and educational institutions; and

  • Indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases (violating human rights and right to privacy).

MEPs expanded the classification of high-risk areas to include harm to people’s health, safety, fundamental rights or the environment. They also added AI systems used to influence voters in political campaigns, and recommender systems used by social media platforms (with more than 45 million users under the Digital Services Act), to the high-risk list.


MEPs included obligations for providers of foundation models - a new and fast evolving development in the field of AI - who would have to guarantee robust protection of fundamental rights, health and safety and the environment, democracy and rule of law. They would need to assess and mitigate risks, comply with design, information and environmental requirements and register in the EU database.


Generative foundation models, like GPT, would have to comply with additional transparency requirements, like disclosing that the content was generated by AI, designing the model to prevent it from generating illegal content and publishing summaries of copyrighted data used for training.


To boost AI innovation, MEPs added exemptions to these rules for research activities and AI components provided under open-source licenses. The new law promotes regulatory sandboxes, or controlled environments, established by public authorities to test AI before its deployment.


MEPs want to boost citizens’ right to file complaints about AI systems and receive explanations of decisions based on high-risk AI systems that significantly impact their rights. MEPs also reformed the role of the EU AI Office, which would be tasked with monitoring how the AI rulebook is implemented.


AI systems that negatively affect safety or fundamental rights will be considered high risk and will be divided into two categories:-


1) AI systems that are used in products falling under the EU’s product safety legislation. This includes toys, aviation, cars, medical devices and lifts.


2) AI systems falling into eight specific areas that will have to be registered in an EU database:-


· Biometric identification and categorisation of natural persons

· Management and operation of critical infrastructure

· Education and vocational training

· Employment, worker management and access to self-employment

· Access to and enjoyment of essential private services and public services and benefits

· Law enforcement

· Migration, asylum and border control management

· Assistance in legal interpretation and application of the law.


All high-risk AI systems will be assessed before being put on the market and also throughout their lifecycle.


Generative AI, like ChatGPT, would have to comply with transparency requirements:-

- Disclosing that the content was generated by AI

- Designing the model to prevent it from generating illegal content

- Publishing summaries of copyrighted data used for training


Limited risk AI systems should comply with minimal transparency requirements that would allow users to make informed decisions. After interacting with the applications, the user can then decide whether they want to continue using it. Users should be made aware when they are interacting with AI. This includes AI systems that generate or manipulate image, audio or video content, for example deepfakes.”
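To give one concrete (and purely illustrative) picture of those transparency requirements, a provider could label every piece of generated text before showing it to a user. In the sketch below, `generate` is a hypothetical stand-in for a real model call, not an actual API:

```python
AI_DISCLOSURE = "Note: the following content was generated by an AI system."

def generate(prompt: str) -> str:
    # Hypothetical stand-in for a real model call
    # (e.g. the API sketch shown earlier in this article).
    return f"[model output for: {prompt}]"

def generate_with_disclosure(prompt: str) -> str:
    """Prepend an AI-generated-content notice, in the spirit of the draft rules."""
    return f"{AI_DISCLOSURE}\n\n{generate(prompt)}"

print(generate_with_disclosure("Summarise the EU AI Act in one paragraph."))
```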


The European Parliament has adopted these new rules, which aim to regulate AI, ensure that AI systems are developed safely, and strictly prohibit systems that present "unacceptable levels of risk". We hope this legislation will pave the way for more trustworthy AI and help preclude AI's excesses.
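To illustrate the risk-based approach in code, here is a toy classifier that maps an AI system's declared application area onto the tiers described above. The area lists are abridged from the quoted text, and the function is a sketch for illustration only, not a legal tool:

```python
# Abridged, illustrative area lists drawn from the quoted draft rules.
UNACCEPTABLE_RISK = {
    "social scoring",
    "subliminal manipulation",
    "untargeted biometric scraping",
}
HIGH_RISK = {
    "biometric identification",
    "critical infrastructure",
    "education",
    "employment",
    "essential services",
    "law enforcement",
    "migration and border control",
    "legal interpretation",
}
LIMITED_RISK = {"chatbot", "deepfake generation"}

def risk_tier(application_area: str) -> str:
    """Map a declared application area onto a (toy) AI Act risk tier."""
    area = application_area.strip().lower()
    if area in UNACCEPTABLE_RISK:
        return "unacceptable risk: strictly prohibited"
    if area in HIGH_RISK:
        return "high risk: assessed before market entry and throughout the lifecycle"
    if area in LIMITED_RISK:
        return "limited risk: minimal transparency requirements apply"
    return "minimal risk: no specific obligations"

print(risk_tier("Law enforcement"))  # -> high risk: ...
print(risk_tier("Chatbot"))          # -> limited risk: ...
```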


References:



https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence


By KLP (19 Jul 2023)



Figure: Regulating AI (Image credits: ieee.org)
