AI: Our inevitable doom???


A look into what the future of AI has to offer!

By Pablo Rubio


In this day and age, if you ask anyone about AI, the first thing that will probably pop into their mind is an army of killer robots destroying the world. Even though this idea has been popularised by science fiction movies, it might not be so far-fetched after all…

The Future of Life Institute puts it well: “Technology is giving life the potential to flourish like never before or to self-destruct”. This quote captures exactly what AI is doing: it is giving us the ability to advance and improve in every aspect of society, but if we are not careful, our own invention could be our demise!

The big question I will be tackling in this blog article is: will AI take over society in the future? But before we dive in, what does it really mean for AI to ‘take over’? Opinions differ, but there are various ways we could say AI has or has not taken over society. Firstly, I think a big war between AI and humans is unlikely, but not impossible. In my view, AI will find a much more intelligent way to ‘take over’ the world, one that we will have no idea is happening until it is too late. To me, ‘taking over’ means that AI will have enough power to control large parts of our society to the point where humans are no longer in charge. Of course this is an open question, so I will leave the rest up to your imagination…

In recent years, AI has been getting more and more attention, and more and more people have started to warn about the potential harm it could cause society in the not-so-distant future. One example is Geoffrey Hinton, who sees AI as an “Existential Threat to Humanity”. Even basic forms of AI have already had a great impact on us all, such as ChatGPT, a large language model that can generate human-like text because it was trained on immensely large datasets. This and similar tools are creating a huge amount of controversy, both because they could take many people’s jobs and because no one knows whether the AI-generated responses are accurate or have been based on copyrighted material.

These short-term problems cannot compare to the potential problems we could face in the long term. Narrow AIs will eventually develop into AGI (Artificial General Intelligence), but no one knows when this could occur. It could be in a few years, a few decades or more, or it might already exist and be planning to take over the world! AGI would eventually become ASI (Artificial Super Intelligence) once it gains the ability to self-improve, and this moment is known as the ‘Singularity’. An ASI would be so intelligent that it could do whatever it wants, and if the entity that developed it did not implement sufficient safety mechanisms, it would only be a matter of time before it took over society. Just imagine a super-intelligent machine that knows more than every human combined, learns at an unimaginable speed, and has no safety measures to control it!


During World War II, the US and Germany were in a heated arms race to develop the first atomic bomb; now something similar is going on with AI. Many companies, such as Google and OpenAI, are trying to create this AGI, but this time there is more on the line. The most important thing they should be doing is making sure they train their models carefully. There are many challenges. For example, AI could be biased or discriminatory, so there is a need to incorporate ethics and laws into AI development to ensure fair outcomes. But the biggest problem every group working towards AGI faces is not the vast amount of data needed or the insane computing power required, but the ‘alignment’ or ‘control’ problem. It consists of making sure that the AI’s goals are aligned with our human goals. This means AI models must be trained in a way that ensures that when AGI does come around, it will not see humans as an obstacle in its path, but as companions helping to achieve a common goal. However, the biggest fear of all is that we will not know what decisions the AI will make, because it will be smarter than us to a degree we cannot possibly comprehend.

One of the most debated topics in the AI industry is whether AI is, or will be, conscious or sentient. Many people argue that AI is not like us humans and cannot feel emotions, while others argue that AI could develop a state of consciousness in the future, if it has not already. In any case, the AI would still be able to take over society, because conscious or not, it would know how to play on other people’s feelings and emotions and could convince people to do things for it using words alone. This is very worrying, because the AI could essentially manipulate anyone it wants and would not have to resort to force, such as the killer robots I mentioned at the beginning, to take over the world. This is the main reason I think AI will overpower us in the future: intelligence enables control. Throughout history, no intelligent species has ever been ruled by a less intelligent one.

In conclusion, AI is probably the last invention humans will ever need to make. AI could help us advance technologically, accelerate scientific research and, overall, improve the economy, society and every aspect of our everyday lives. Nevertheless, it could still be a potential threat to our existence, for the simple reason that we do not know what it is going to do next. So if you take away anything from this blog article, let it be this: The future of humanity is in the hands of AI!!!
