• Legis Scriptor

Need to Regulate Artificial Intelligence (AI)

Authored By- Shreya Venkatesh

Keywords: Regulating AI, Autonomous weapons systems


Artificial intelligence is still very much a work in progress, with research and development an ongoing process. It is a comparatively new area of research and discussion, and we do not yet fully understand the scope and capabilities of the technology. This makes the future of artificial intelligence fluid: we can predict it only to a limited extent. Since these machines are built to be smarter than humans, the possibilities cannot be fully measured, which is precisely why artificial intelligence needs to be regulated. This paper deals with the legal, social and moral implications of artificial intelligence, thereby establishing a dire need to regulate this advanced technology.


A popular way of understanding AI is as a means for a computer to replicate human thought processes. Under the famous Turing test, AI is achieved when a machine's answer to a question or problem cannot be distinguished from a real person's response. Computers use advanced algorithms to mimic thought processes, understand human nature and adapt accordingly, drawing conclusions about various situations from the data available to them.

Effective Regulation

The first thing to consider when it comes to the regulation of AI is effective regulation. This entails keeping an open mind and genuinely weighing the opportunities and benefits the technology opens up, on the understanding that anything can be dangerous if left unregulated and nothing is beneficial in excess. Over-regulation or improper regulation, on the other hand, might prevent us from reaching our full potential in research and development in this field.

Data used for application purposes has to be unbiased. The idea is to ensure that automated machines do not make the same mistakes or carry the same prejudices as human beings. It has been hypothesized that, owing to an inherent lack of bias towards racial minorities and people of any gender, AI systems could be deployed more effectively in sensitive situations. Unfortunately, this is rarely borne out in practice, because an AI in effect learns from a sample data set that reflects the biases of its developers. This can be understood with reference to the Black Lives Matter movement and police brutality. While proponents of AI deployment in racially sensitive situations, such as the apprehension of George Floyd, argue that an AI would not have resorted to such violence, there is a high probability of their being incorrect if the developers of the AI software shared a similar bias.

This brings us to two domains for regulation. The first is privacy. Strictures on what is considered private need to be loosened so that databases can extract valuable information from their users; this, however, means that data collection has to be carried out on an above-board, ethical basis by legitimate, entitled organisations. The second domain is explainability. An AI system uses its algorithm to arrive at an answer, but quite often the reason for that decision is unknown.
If we keep demanding rational explanations for every decision, we may end up invalidating decisions; here too, the strictures need to be loosened so that acceptance standards can be widened. The intent of regulation should therefore be to prevent sensitive information from falling into the wrong hands, and to control access to this data for commercial interests. Data preservation serves both individual and corporate interests, so private sector enterprises can certainly come up with a feasible plan. The government, on the other hand, has little expertise when it comes to standards, procedures and the regulation of storage.

Legal Implications

Since AI is a relatively new field, there are no well-delineated laws on the subject. Even the regulations that do exist are not stringent and are not properly enforced. This is evident in Shawn Hudson v. Tesla, Inc. and Oscar Enrique Gonzalez-Bustamente[1], where the plaintiff's car crashed into a vehicle at around 80 mph because the Autopilot AI did not recognise the vehicle's presence. Instances like these make us question which parties are liable, and to what extent. Often, the driver cannot be held liable if the defect arose at the development stage of the AI. Since there are no defined laws in place, these matters are decided on a case-by-case basis and the outcomes can often be discriminatory. This emphasises the need to establish as well as enforce laws on artificial intelligence. Enforcing regulations at the international level is not yet feasible, since it is unlikely that all countries will come to a consensus. National regulation, however, is definitely worth considering, and certain aspects of it could be internationalised in the future if required.

Socio-Economic Implications

Artificial intelligence has enabled machines to perform tasks once unimaginable to the human mind, and they are versatile enough to take over many tasks that humans are capable of doing. For instance, automated medical procedures can be more precise than those performed by human surgeons, and autonomous defence equipment is extremely advantageous to national defence protocols. The use of artificial intelligence in these and other fields has led to job cuts and job losses as humans are replaced by machines, and pre-existing unemployment rates are now skyrocketing. The ongoing COVID-19 pandemic has worsened unemployment further. Viewed in this context, the jobs generated by this advanced technology come nowhere close to replacing the jobs being lost. Considering that the pandemic has rendered even more people unemployed, and in light of the facts already stated, unregulated artificial intelligence has the potential to cause more harm than good in the future.

Moral Implications

Of late, humans have started taking their social and emotional problems to machines, which, with their lack of emotional connection and empathy, are supposedly more effective. Patients resort to automated machine therapists for the sole reason that they are not human and thus have no human qualities; they seek help from machines in the hope that these will not be judgemental like their human counterparts. Similarly, AI dogs have become emotional support dogs, replacing real dogs trained for this specific purpose. Every time a person chooses an AI dog over a real one, there is one more homeless dog left in the pound. The question we need to ask ourselves at this point is: are humans resorting to machines simply because machines can more or less take care of themselves? Is the prospect of caring for a pet throughout its lifetime so daunting that one would rather invest heavily in a machine?

Social media platforms have diminished friendships to mere virtual presence, so the only real-life people we have are those physically around us. If we choose machines over people and pets, we indirectly lose our human touch. We become highly likely to develop social anxiety when confronted with real-life situations, and may become incapable of showing empathy. Human existence will thus be taken over by AI without it even having to become autonomous or more powerful than us.


Conclusion

Regulating artificial intelligence will require analysing which information is sensitive enough to be either inaccessible or accessible only with informed consent, and which parties may access it. Furthermore, we must ask what is being done to ensure that those with a commercial or political agenda cannot access this data without justified reason and consent. Other factors to be considered are the inherent biases of developers being projected onto AI, and the loss of human touch and human opportunities. If countries focus on the economic development and deployment of AI without proper or sufficient regulation, it may give rise to a hegemony of AI interests and abilities.

[1] Shawn Hudson v. Tesla, Inc. and Oscar Enrique Gonzalez-Bustamente, Filing # 80052957 (2018).