Raise Artificial Intelligences and…
will they gouge out your eyes?

The latest developments in Artificial Intelligence have burst directly into our personal, social, and professional lives. The indisputable benefits of these developments make far fewer headlines than their threats to our jobs, our culture, and even our species. But AI did not come out of a spaceship that crashed in our garage: we are the ones who create and “raise” it.

Will we be raising crows that will gouge out our eyes?

Crows

The crow is a fascinating animal. It has long been known to be one of the most intelligent animals on the planet; some scientists place it on a par with primates, so it would easily make a top-10 list of the animals that follow humans in intelligence.

Crows have intelligence that allows them to solve problems, build and use tools for specific purposes, and communicate with sounds and body language. They have an excellent memory and surprising social skills, which allow them to build hierarchies in their communities and complex forms of interaction within and outside their social groups.

There are more interesting facts. These animals are among the few capable of recognizing themselves in a mirror; they enjoy playing, inventing games or joining games proposed to them (they are very good at solving logical puzzles); and they can imitate human sounds, as well as the sounds of other animals. Crows have a plasticity that allows them to use their intelligence to adapt to different environments and to particular situations.

"One raven does not peck out another's eyes"

Danish proverb

Intelligence

It seems easy to conclude that the crow is an intelligent animal, even without checking the conditions for such a designation. Apparently we have the ability to identify intelligence without needing to define it, at least not strictly. If we enter the sensitive realm of definitions, we find that there is no single, accepted definition of what “intelligence” is. It depends on the field of study, the context, and even the entity whose intelligence is being evaluated. This leads to controversies, which we will not go into here, but which should lead us to take with a grain of salt what we define as “intelligent”, or, in other words, what we label as “not intelligent”.

The multiple definitions, and the ambiguities in them, make me believe that we do not really know what intelligence is. However, from all the research I did in search of a definition of intelligence that would leave me satisfied, I settled on the following one, which seems to me the most generic and appropriate:

The capacity to perceive or infer information, and to retain it as knowledge to be applied to adaptive behaviors within an environment or context.

Here you can open up the definition even further and attribute capabilities such as learning, reasoning, logic, creativity, comprehension, planning, critical thinking, problem solving, consciousness, and self-awareness. I invite you to try to define each of these capabilities in your own words, as the intelligent beings that you are. And if we also remember that there are multiple types of intelligence (musical-auditory, logical-mathematical, emotional, collaborative-social, etc.), then we are no longer merely intelligent beings but multi-intelligent beings (as if being intelligent were not enough for us).

Artificial Intelligence

At this point we may be convinced, or almost convinced, that human beings, chimpanzees (primates in general), dolphins, dogs, and crows, among others, are intelligent beings. The question immediately arises as to which animals are left out of that team, but that debate will be left for another occasion, or as homework. It is worth noting that even among humans we begin to sort ourselves into one group or another, not to mention belonging (or not) to the group of people with an intelligence quotient above 148, which would mean being among the 2% of the population considered most “intelligent” and, “even better”, being able to join organizations that bring together people who like to feel distinguished by their intelligence (...).

It is in our nature to create things, and for a long time we have wanted to create intelligent things, who knows why or for what. It is not enough for us to have and study intelligent animals, just as it is not enough to keep the modest phone model from two years ago, to watch a single episode on a Saturday night of the series we recently started, or to have the third World Cup. In that desire to experiment and create new and innovative things, we want Artificial Intelligence. We return to the fog of definitions, but I will stick with the definition by Andreas Kaplan and Michael Haenlein (leaders in marketing and innovation):

The capacity of a system to correctly interpret external data, to learn from that data, and to use that knowledge to achieve specific tasks and goals through flexible adaptation.

Andreas Kaplan and Michael Haenlein

This definition does not satisfy me either, since it seems that merely by adding the terms “system” and “data” we attribute the property of “artificial” to this definition of intelligence. As if we were not a (multicellular) system, and as if we could not call our perceptions of the environment data. Apparently, for an intelligence to be artificial it is enough that it is not alive, and I do not dare to enter into the debate of what it means to be alive, or of when we will consider an intelligence that we conceived artificially to be a “living” being.

This field of study has been active for several decades, with its periods of boom and stagnation. In the latest era, a series of factors converged, mainly computing capacity (greater integration, lower costs, cloud computing) and the availability of large volumes of information (mostly thanks to the Internet), which have allowed some significant milestones: in some tasks believed to be exclusively our domain, such as image classification/generation and natural language understanding/generation, models have reached human performance.

Today we have many applications improved and supported by AI: translators, digital assistants (chatbots, code assistants), recommendation systems, autonomous systems (drones, autopilots), medical diagnosis, biometric identification, cybersecurity, and estimators and classifiers in general.

The most significant advances in AI have been achieved through Deep Learning techniques, in which some architectures emulate structures similar to biological neural networks. Thanks to specific training techniques and the available computing capacity, these networks have been trained to solve specific tasks, in some cases surpassing the performance of human experts at those tasks.

Parallelisms

The common raven (Corvus corax) has around 1,204 million (≈1.2 billion) neurons in its forebrain, the avian equivalent of our cerebral cortex or “gray matter”: the tissue responsible for perception, imagination, thought, judgment, and decision-making, and therefore closely linked to the intelligence of the species.

The famous ChatGPT model reportedly has about 20 billion parameters and is based on the GPT-3 model, which has 175 billion parameters. That clearly far surpasses our modest crow. But to be a little fairer in the comparison, the number of parameters in a model of this type would be equivalent to the number of synaptic connections, not to the number of neurons. A neuron typically connects to many other neurons through its dendrites, and the number of synaptic connections per neuron depends on the type of neuron and the brain region: pyramidal neurons (the most common in the cortex) can have from 1,000 to more than 10,000 synaptic connections. So we could say that our little black bird has between 1.2 and 12 trillion (1.2–12 × 10^12) connections, which is about 7 to 70 times more than GPT-3, and not so far from GPT-4, which reportedly has about 1.8 trillion parameters (1.8 × 10^12, unofficial data).
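This back-of-the-envelope comparison can be checked in a few lines of Python (the figures are the rough estimates quoted above, not precise measurements):

```python
# Rough comparison of a raven's synaptic connections vs. GPT parameter counts.
# All figures are the rough estimates quoted in the text.
raven_neurons = 1.2e9                   # ~1.2 billion forebrain neurons
synapses_per_neuron = (1_000, 10_000)   # typical range for cortical pyramidal neurons

raven_connections = tuple(raven_neurons * s for s in synapses_per_neuron)
print(f"Raven connections: {raven_connections[0]:.1e} to {raven_connections[1]:.1e}")

gpt3_params = 175e9     # GPT-3 parameter count
gpt4_params = 1.8e12    # GPT-4, unofficial estimate

for label, params in [("GPT-3", gpt3_params), ("GPT-4 (unofficial)", gpt4_params)]:
    lo = raven_connections[0] / params
    hi = raven_connections[1] / params
    print(f"Raven / {label}: {lo:.1f}x to {hi:.1f}x")
```

Running this reproduces the ratios in the text: roughly 7x to 70x GPT-3, and 0.7x to 7x the unofficial GPT-4 figure.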

Homo sapiens sapiens, that is, we ordinary people, have between 14 and 18 billion neurons in the cerebral cortex, with around 140 to 240 trillion (1.4–2.4 × 10^14) synaptic connections (see “Scale of the Human Brain – AI Impacts”). These values vary from person to person depending on genetics; for example, men have on average about 10% more brain mass, with a similar percentage difference in cortical neurons. Other animals have comparable numbers of cortical neurons, such as the elephant (7,800 M), the orangutan (9,350 M), and the bottlenose dolphin (12,200 M); and some have even more, like the orca (43,100 M). It is a relief to think that we managed to get down from the trees while the orcas could not get out of the water.

We should not stop at comparing numbers of neurons, synaptic connections, or parameters to determine who is more intelligent. The neural structure, the way those neurons are connected, matters a lot. A multi-layer dense (fully connected) neural network does not have the same performance as a convolutional neural network with the same number of parameters. The latter type of network is much better at image classification, and it just so happens that its structure resembles the primary visual cortex of mammals.
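A quick illustration of why structure matters more than raw count: for the same toy task (the layer sizes below are arbitrary, chosen only for illustration), a dense layer connects every input to every output, while a convolutional layer shares one small kernel across the whole image.

```python
# Parameters needed to map a 32x32 grayscale image to 32 feature maps.
# A dense layer learns one weight per (input pixel, output unit) pair,
# while a conv layer reuses one small kernel across the entire image.
in_h, in_w = 32, 32
out_channels = 32
kernel = 3

dense_params = (in_h * in_w) * (in_h * in_w * out_channels)  # weights only
conv_params = kernel * kernel * 1 * out_channels             # weights only

print(f"Dense layer:         {dense_params:,} parameters")
print(f"Convolutional layer: {conv_params:,} parameters")
```

Here the dense layer needs over 33 million weights while the convolutional layer needs 288, a difference of five orders of magnitude for the same input and output shapes.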

You will surely have noticed that our brains have “folds”, formally called gyri and sulci, which allow the surface of our cortex to be larger while occupying less volume (perhaps evolution was inspired by fractals). This leads to structural differences in the connections, strongly related to the brain functions demarcated by them (the lobes). These folds are also present in other mammals, such as cetaceans, primates, and elephants, and are correlated with greater cognitive capacity, while they are absent in smaller mammals, such as the mouse, which has a smooth brain.

These structural differences are strongly related to particular cognitive functions, such as motor skills, image processing, sound processing, language processing, and even the formation of memories or even consciousness. The same has manifested itself in artificial intelligence models, as mentioned above in the case of convolutional networks. In recent years, a new architecture called the Transformer has revolutionized (again) the field of Deep Learning, achieving fascinating performance. It is this type of network that has enabled models that interpret and generate human language with results once believed unlikely to be achieved, such as OpenAI's GPT or Meta's LLaMA, among others.
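The heart of the Transformer is the attention mechanism. A minimal sketch of scaled dot-product attention, using random toy matrices rather than a trained model, looks like this:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: rows become probability distributions.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: each position mixes the values V
    according to how well its query matches every key."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # pairwise similarity between positions
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V

rng = np.random.default_rng(0)
seq_len, d_k = 4, 8                      # toy sizes, not a real model
Q, K, V = (rng.normal(size=(seq_len, d_k)) for _ in range(3))
out = attention(Q, K, V)
print(out.shape)  # (4, 8): one mixed vector per input position
```

A real Transformer stacks many of these attention layers (with learned projections producing Q, K, and V), but the core idea of letting every position attend to every other is already visible here.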

Today we can agree, at least a large majority of us, that ChatGPT passes the Turing Test (the Imitation Game), formulated back in 1950 by Alan Turing, who was trying to propose a way to determine whether a machine could think without falling into a definition (we have already seen how difficult that is). If you do not agree, I propose the task of working out how you could determine when a machine is thinking. In many skills machines have already surpassed us, as have some animals; and the truth is that machines are getting ever closer to manifesting a General Intelligence. This pushes us to reflect on whether we are creating “crows”, thinking beings, quasi-human beings, or even something beyond, and also to think of ourselves as complex computational algorithms.

"In a world of peacocks, be a raven."

Unknown

Some scientists in the fields of AI, psychology, and philosophy assert that thinking will be achieved when machines attain awareness, but again we fall into the trap of definitions, and even more so into the difficulty of verification, since so far we have no scientific way to prove that a living being has consciousness. Each of us understands that we have consciousness through our own internal perception, and intuitively we tend to think that another being has consciousness mainly because of similarities, empathy, and the bonds we can create with that being; for example, many of us would claim that our dog has consciousness, but there is no way to prove it.

Threats

We begin to feel threatened by “the crows”: by AI models that speak to us as if they were one of us, that seem to understand us, that converse with us. Models that generate images, music, art; that begin to show creative traits. Models that appear competent in many tasks that seemed exclusive to human beings: they drive cars, prove theorems, diagnose diseases, play chess; and in many of these they are even better than us.

Every time a disruptive technology emerges, panic and uncertainty about our future begin. This is not the first time it has happened: the printing press, steam engines, electricity, the automobile, the transistor and computers, nuclear energy, the atomic bomb, the Internet. All of them generated controversy, threatened our jobs, and put our cultures and lifestyles, our ways of relating and educating ourselves, under review. After a while, as history shows, we have managed to adapt and to conclude that all these advances (along with others) have brought more benefits than problems. One might think that the atomic bomb is not such a case, but many argue that thanks to it, armed conflicts have decreased in frequency and in number of deaths. Perhaps having something so dangerous for our species (on more than one side) forces us to look for less destructive solutions.

"Quoth the Raven 'Nevermore.'"

Edgar Allan Poe, "The Raven"

Today AI is the disruptive technology of the day, and panic reappears. It threatens to displace humans from hundreds or thousands of types of jobs. Regarding the displacement of workers, history supports the argument that new jobs will be generated, but we should analyze the transition keeping in mind that this technology advances at a dizzying speed, while our speed of adaptation is usually much slower.

But the difference between AI and the previous cases is that some current AI systems can make decisions: decisions of their own, about what they produce and about the information that people will consume, with a strong impact on the course of subsequent events. It is to be expected that in the future this will be commonplace: AI models generating data to optimize metrics that are not necessarily in the user's main interest.

Something perhaps even more threatening is the fact that AI can create systems or develop concepts that we cannot understand, through processes and at speeds impossible for us to follow. Admittedly, this already happens to all of us in some way: not everyone understands (nor has to understand) the global financial system, how a smartphone works inside, what an electromagnetic wave is, how Bitcoin is implemented, and so on. But all those technologies and processes were conceived and understood by at least a few humans, even if they are 0.1% of the world's population. Today there are AI models that we train and that work very well, but we do not understand very well how the information is processed internally to achieve those results (the problem of model interpretability). Perhaps in the future it will be 0.0% of the population who can understand how an electronic chip, a type of battery, money or another value-exchange mechanism, the encryption of sensitive information, or any new technology created by AI actually works.

"A raven's croak is a warning to the wise."

Old English proverb

These “crows” seem to threaten to gouge out our eyes, or perhaps the bigger threat is that we are tempted to stop seeing.

Benefits

The advances in AI in the last decade are truly impressive and have proven very beneficial in multiple respects, while also showing promise on many problems that we humans have not yet solved.

Classifying dogs, cats, and people, among other things, does not seem to bring great benefits even though models such as ResNet or YOLO have surpassed metrics in ways never before imagined. However, this same type of model can also be dedicated to classifying medical images for the early detection of cancer or other abnormalities in our bodies, in some cases surpassing the performance of medical specialists.

Models like AlphaGo and AlphaZero managed to defeat the world champions and the best Go and chess programs, respectively, without too much trouble, and even displayed play very similar to that of humans, including innovations. These models were trained using reinforcement learning: essentially being given the rules, an environment, and a reward policy (something a little more complex than training a dog). Beyond the milestone itself, the benefits did not seem great until models like AlphaFold, which predicts the 3D structure of proteins from their amino acid sequences, achieving more progress in months than humanity had in more than 50 years (although it is perhaps unfair to say this is not, in some sense, an achievement of humanity). This is already revolutionizing medicine and the pharmaceutical industry, allowing us to better understand some diseases and to generate medicines for personalized treatments.
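The reinforcement-learning recipe of rules, environment, and reward can be illustrated in miniature with tabular Q-learning on a toy corridor. The environment and hyperparameters here are invented for illustration and bear no relation to the actual Alpha* training setups:

```python
import random

# Toy corridor: states 0..4, start at 0, reward +1 only upon reaching state 4.
# Actions: 0 = left, 1 = right. This stands in for "defining an environment
# and a reward policy" as described in the text.
N_STATES, GOAL = 5, 4
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration

Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    nxt = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

random.seed(0)
for _ in range(500):                     # training episodes
    s, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the current estimate, sometimes explore.
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda a: Q[s][a])
        s2, r, done = step(s, a)
        # Q-learning update: move Q toward reward plus discounted best future value.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

policy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(N_STATES)]
print(policy)  # the learned policy should be "go right" in every state before the goal
```

The agent is never told that going right is good; it discovers this purely from the reward signal, which is the same principle (scaled up enormously, with deep networks and self-play) behind the Alpha* family.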

AlphaTensor optimizes matrix (tensor) multiplication algorithms, and MuZero has made significant progress in video compression. These advances make it possible to reduce model training times and video transmission times, and consequently have a great impact on the energy consumption associated with these processes, reducing their carbon footprint and environmental impact.

The list is long, but we must also mention the great advances in autopilots for cars, such as Tesla's Autopilot; models for code generation, such as GitHub's Copilot (powered by OpenAI models); and generative models with multiple uses, such as OpenAI's GPT, Meta's LLaMA, and Google's LaMDA (Bard), among others. An interesting observation: in recent months a 54% reduction in query traffic to StackOverflow was reported, surely replaced by queries to generative models, or by code generated directly by them.

The field of AI is truly active and thriving, demonstrating significant and surprising advances year after year: generation of text, images, or video from text and vice versa; generation of audio, music, or speech; weather prediction; and some more chilling ones, like the inference of thoughts, or the transfer of a person's mind to a model (Mind Uploading or Mind Transfer).

There is a line of research that arouses more interest every day, called AGI (Artificial General Intelligence), which has been showing promising progress. AGI focuses on generating models capable of performing multiple tasks with a kind of learning more similar to that of humans, with the expectation that it will achieve performance surpassing humans in most of these tasks and that it will eventually develop a capacity similar to what we call consciousness.

"Cruel birds, ravens, but wise."

George R.R. Martin

I am inclined to think that the “crows” we are raising are not a threat; they are and will be part of our ecosystem, we will live with them, and they will help us with multiple tasks. But we must not forget to keep a cautious eye on them, so that they do not end up gouging out our eyes.

Personal and professional impact

Each of us has already been in contact with applications and tools powered by Artificial Intelligence, both in the personal and professional spheres. Surely we have chosen the route of our daily commute from the suggestions of Google Maps, which takes the current state of traffic into consideration; we consume news, videos, images, publications, or products suggested on the basis of our historical data, collected by multiple platforms. Real personal benefits appear: shorter travel times, less time wasted searching for what interests us, and the easy discovery of new things that might interest us. These benefits clearly transcend the personal sphere and reach the professional one, for the same reasons.

But these benefits have a flip side: they carry a bias, our own bias at least, so we increasingly pigeonhole ourselves into a stereotype shaped by ourselves, limiting our chances of encountering something out of the ordinary. At the same time, these biases can even be manipulated externally to induce behaviors or choices that are in the interest of some group, corporation, or government. Likewise, we have been losing rigor in checking the sources of our information: we trust Wikipedia, but few check the references, and some blindly trust the first website that seems to give them the information they were looking for.

With the arrival of the Internet and search engines, especially Google, the way we consume information changed completely. On a professional level, development times were optimized: we no longer necessarily had to consult a book we had to obtain or buy, we had access to solutions and advice from other professionals, and we could consult a wider range of suppliers, materials, and devices, which no longer had to be local. We store much of our information on devices whose physical location we do not know, we work with online documentation, and our money is more virtual than physical. Today it is totally natural for us to have a browser with multiple tabs related to the work we are doing, with nothing else on our physical desktop; in other words, if our internet service is cut off, most of us practically panic because of how limited we are in what we can do.

With Artificial Intelligence we are beginning to experience a paradigm shift in life and work once again. Generative models, mainly those offering a chat interface such as ChatGPT, are starting to be used instead of searching and then browsing the sites a search engine offers. These models give us the information we are looking for already processed in the way we request it, in the language we want, minimizing browsing time. All this seems to indicate that they will replace search engines and minimize our browsing of the web. But again we encounter the problem of losing references, regardless of whether we have information about the data used for training. New problems also appear, such as model hallucinations: outputs that contain false information generated in a credible way. So, while making use of all their advantages, we must not forget these problems and must be responsible in their use, just as we had to be before these models existed.

As mentioned above, with the great performance of AI models in different areas, even surpassing humans at certain tasks, we are beginning to feel the threat that AI will replace us in our jobs. This is already happening, and in the coming years it will surely continue to happen, but we must also mention that AI is complementing many professionals in their tasks. As professionals, we must then analyze the panorama in each case, because in the short term it is most likely not that AI will replace certain professionals, but that professionals who use AI will replace those who do not. The dizzying pace of technological progress also forces us to redefine ourselves as professionals, leading us to think that there will no longer be a “career for life”, which in turn will have an impact on how we educate ourselves.

Another aspect to keep in mind is that we have to start thinking about how to create and integrate new tools and processes powered by AI. We have to start thinking about collecting and managing the data that will later be used for training. We have to understand more deeply how the models we use work: what data they were trained with, their biases, their performance, their limitations. Rereading what I have written, this is what coordinators, bosses, and leaders already had to do with their employees and colleagues; maybe we have to think of ourselves as the bosses of our models and not simply as their clients.

For example, some AI models today can make better diagnoses than most doctors (in some cases). An AI model can process more information about a patient and find hidden patterns in their medical history, or even correlations with data outside of it; it holds the information of thousands or millions of other patients; we do not need to book an appointment and then sit in a waiting room for more than an hour past the agreed time; it can be retrained daily with new information; and it does not wear out after a hard day of work, nor is it burdened by personal problems. A doctor using AI, in turn, will need to understand what kinds of patients were used for training, analyze whether their patient belongs to a similar population, understand whether there are model biases that could affect a particular diagnosis, understand metrics such as accuracy, precision, recall (sensitivity), and specificity; and finally provide the human side that, at least until now, AI does not convincingly provide. Having said all this, who will you want to consult in the future?
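Those diagnostic metrics all come from a confusion matrix. A minimal sketch, with invented numbers rather than real clinical data:

```python
# Toy confusion-matrix metrics a practitioner should understand before
# trusting a diagnostic model. The counts are invented for illustration:
# say, out of 1000 screened patients.
tp, fp, fn, tn = 80, 10, 20, 890

accuracy    = (tp + tn) / (tp + fp + fn + tn)
precision   = tp / (tp + fp)   # of those flagged as ill, how many truly are
sensitivity = tp / (tp + fn)   # recall: of the truly ill, how many are caught
specificity = tn / (tn + fp)   # of the healthy, how many are correctly cleared

print(f"accuracy={accuracy:.2f} precision={precision:.2f} "
      f"sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
```

Note how a 97% accuracy here hides a sensitivity of only 80%: one in five sick patients is missed, which is exactly the kind of gap a doctor using such a model needs to know about.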

The era of virtual personal assistants is coming: AI models that, besides having been trained with terabytes of general information, are trained with our particular information; models that weigh our preferences, temperament, and personality against that general information; models that hold specific information about our environment and our relationships. This will open multiple debates, and some interesting ones may arise from the following questions: Does the assistant know more about me than I do? Can the assistant make a better decision than me, given that it is not affected by my emotional state and has more reliable memories? Should I follow a suggestion from the assistant even if I believe otherwise? In the future, will I be responsible for what I become, or will the assistant be? Do I have to agree with a friend on an activity, or do we let our assistants decide and then tell us?... Assistant, who do I vote for in the next elections? Maybe they will even choose our partners… although I think that is already happening for many.

Social impact

As a society, and even as a species, we are being swept along by AI developments, which will generate profound cultural changes, as many other technological developments have. But AI brings us something more than the others did, and many specialists assert that it is our last technological revolution, placing special emphasis on the word “our”. This revolution will change the way we educate ourselves (who, what, how, where, and for what), it will modify the way we relate to each other (beyond what the internet and mobile devices have already changed), it will surely improve our health and life expectancy, and it may even modify us as a species when combined with bioengineering.

We will have to rethink what our main activities as a society will be, and immerse ourselves in debates about what we want to preserve of our culture and idiosyncrasies before they are transformed by AI without us realizing it. Take the case of autonomous driving. Elon Musk, who invests heavily in the development of electric cars (Tesla) with autonomous driving (Autopilot), assures us, along with other specialists, that the accident rate decreases significantly when using AI-assisted driving, and that accidents would be reduced to practically zero if everyone used this type of driving. Musk proclaims that “in the near future it should be illegal for a human being to drive.” Would this be a limitation of our freedom? I do not particularly believe so, and if it is indeed the way to implement safer, more efficient, and cleaner transportation, then we as a society should embrace the idea. Are we thinking about this scenario as a society? Going a little beyond this specific case: today we train autopilots to handle scenarios shaped by and for humans. Shouldn't we start rethinking our cities to optimize the performance of autopilots? Shouldn't AI be the one to lay out our future cities, especially considering that AI models will be the ones driving?

The outlook is encouraging when we see that continuous improvement in AI models could solve many of humanity's problems which, perhaps due to our very nature and limitations, we have been unable to solve for decades or centuries. I might mention problems such as: poverty; the fact that some people go hungry today while even more people have obesity problems; the ecological impact of our activities; climate change; the mysteries of the universe; diseases that still cannot be cured or prevented; conflicts that, even in the 21st century, we still decide to resolve through war. It is very possible that we will need a superior intelligence to solve them, but will we be able to adopt the proposed solutions? Perhaps we will not even realize when that intelligence begins to make decisions toward those ends in such a way that we no longer have the option of adopting them or not. Are we willing to adopt them in pursuit of a common good, even if that forces us to modify our culture, idiosyncrasies, beliefs, language, or rights?

What if the solution found by AI were to remove the human species, since it is at the root of all the problems? Although without scientific rigor, it is often claimed that eliminating insects from the planet would cause environmental collapse within a few years, while eliminating the human species would allow the environment to flourish. Beyond this hypothesis, it is undeniable that human beings are responsible for environmental degradation and a negative ecological impact through deforestation, pollution, soil and water contamination, and climate change, among others; taking us out of the equation would simplify everything for nature. Clearly this is a problem of defining what is valuable to preserve, or what to weigh above other things. Is it more valuable to preserve our technology or art, or to preserve an animal species? Is it preferable to ensure the prosperity of our species at the cost of human or animal lives in the present? These questions, and many more, are always formulated on the basis of our morals, of what we consider true, valuable, beautiful, or correct (and consequently their opposites), so answering them ourselves will also yield limited answers, even if they involve an attack on our own species. Even if a superior intelligence could answer them, it will surely be biased by our vision, because, at least in the short term, it will be trained with information generated directly by us, and because we will filter out the answers that, from our moral or ethical point of view, we consider incorrect.

Corporate impact

The development of AI in the last 15 years has been boosted by the increase in available information and computing capacity, but I am convinced that the main catalyst has been the fact that most of the developments were public and open source, as were the data used for training and validation. The economic factor of acquiring sufficient infrastructure has not been a limitation, since a budget of around USD 2,000 is enough to train very complex models; it can even be done for free using platforms like Google Colab, which offer considerable computing power at no cost for limited periods. As a parallel: imagine if the development of the atomic bomb had been carried out by sharing the schematics, procedures, and test results publicly, while somehow limiting access to plutonium and uranium. Would we have fewer than 10 countries mastering that technology today? Would it represent a better balance of power? Would we still be on this planet?

But beyond the democratization of technology, business continues to be the biggest driver. Companies like OpenAI or DeepMind keep the details of their developments closed, but try to provide the greatest possible transparency regarding the data used, their methods, and their models. Others, like Meta, whose main business is not AI, open their developments, since this surely allows them to accelerate them and pour them into the applications that do make up the heart of their business models. Today the star models of OpenAI, GPT-3.5 and GPT-4, are not open, while the star model of Meta, LLaMa, can be downloaded even with a commercial license (with some limitations). Beyond a certain scale, there is an economic barrier, since developing and training models of this kind requires teams of highly trained scientists and engineers, plus infrastructure costing tens or hundreds of millions of dollars. Returning to the parallel with the atomic bomb: how many companies, corporations, or countries are capable of training models like these?

Many large and medium-sized companies already have teams dedicated exclusively to AI, both for developing new products and for building tools or services that enhance their current offerings (see Forbes and Mark Minevich: "Top 100 Fastest-Growing AI Teams: Key Players, Exclusive Insights"). Beyond the AI boom, a company that does not consider developing with AI will surely be at a competitive disadvantage compared to those that do use it, just as happens at the individual and professional level. In line with both the business perspective and each employee's professional perspective, companies need to look for ways to use AI to enhance what they offer. This means not only that a company must decide to form an AI team of its own or strategically partner with one that already masters the technology, but also that employees must keep an open mind about incorporating these technologies into their day-to-day tasks. By that I do not mean using a particular model that performs their tasks, but thinking about how to generate and manage data that enables future developments, understanding the trade-offs and applicability criteria, and being open to a eureka moment for the creation of a new application.

Government impact

Governments, understood here as the political figure representing the state (people, institutions, and companies) and not as political groups looking after their own interests, face a complex challenge with this technology, perhaps a much greater one than individuals or companies do.

AI, by cutting across people's lives and showing itself to be an influential factor in shaping current and future societies, imposes on governments the need to deal with impact analysis; monitoring and regulation of applications; incentives for forming teams and lines of development; policies for data generation, administration, and traceability; policies for model use and traceability; certification methods; and ethical review. Personally, I do not believe all of these activities apply to the entire universe of models and applications; they apply to those closely linked to the public interest, while for private models many are unnecessary, or needed only with a lower degree of exhaustiveness or depth.

Recently, different countries have adopted measures to prohibit some of the LLMs (Large Language Models), such as ChatGPT, arguing mainly their impact on education. Clearly the use of these models by students and teachers has a profound impact on education as we know it. But I am far from believing that policies of this kind provide a solution; they may even end up exacerbating the problem. These models and applications are already tools for study and work, just as a calculator, a computer, or a mobile phone is. Once this is understood, we have to focus on redefining the study methodology; we cannot keep postponing a rethinking of education.

Along the same lines, many organizations, scientists, and influential people have spoken out against these models and have even requested, through an open letter, that AI development be paused for at least six months. Among them was Elon Musk, who was one of the great early investors in OpenAI (the company that develops GPT) but now calls that investment one of his mistakes, yet continues to invest in AI along other lines, such as Tesla's Autopilot. Honestly, I find these measures somewhat naïve; it is like trying to stop the wind with your hand. History offers no case in which controversy stopped technological progress of this relevance, except perhaps in extremely sensitive areas such as human cloning (and even there, progress continues in one way or another). One could even say that it is in our nature to always move forward. The "wind" does not look like it will stop; we have to take care of harnessing it while preventing it from turning into a tornado.

I believe, then, that governments should not confine themselves to the role of monitoring and censoring; they should take an active role in the development of the technology, becoming another vector that drives it in a direction where the interests and rights of people (and not just those of majorities or of investors seeking to profit) are safeguarded, where ethics is not violated, and where solutions are built rather than problems. I think that a government that decides to participate in development today is taking care of its people's culture, defending its minorities, exercising sovereignty, and not losing competitiveness. This would apply to models intended for educational settings, for public institutions, and for those meant to be released for free use (perhaps at no cost).

Governments that align with this perspective should, in the short term, take care of: defining what will be regulated and monitored; auditing and ensuring the traceability and transparency of datasets; diversifying the information in datasets, strongly seeking the representation of minorities, marginalized communities, and topics related to each culture to be protected; defining data privacy and intellectual property policies; creating certification entities and procedures based on the type of application to be certified (above all in sensitive areas such as health and education); ensuring universal access to these models; establishing policies on environmental care and energy use for the infrastructure these models require; implementing the infrastructure needed for data storage and for training and using models within their territory; legislating responsibility for actions or events derived from the use of these models; and modifying the educational system to account for the insertion of these technologies and the vision of work in the not-so-distant future.

Horizon

Throughout the text, the future seems to oscillate between the utopian and the dystopian. No one seems sure which scenario is most likely, but there is an accepted idea that the technology, despite its dizzying advance, is still in its infancy; and that it is now, while they are still within our reach, that timely decisions will be most effective in generating positive impact.

For my part, I believe that the open and collaborative development of AI should be sustained, leaving the lucrative side to the applications that use it. I believe that as individuals, as a society, as companies, and as governments we share responsibility for the development of AI. I believe we must embrace change, and that although today we teach machines, in the near future we will end up learning from them.

It is likely that we are finally facing the technology that will solve the big problems we face as a species. Perhaps in the not-too-distant future the concept of work will be different, with fewer hours, in tasks we cannot imagine today, where we must take care to enjoy our leisure and have a universal income. AI may bring us solutions to disease, to the care of our planet, and to inequalities rooted in humanity. It is also possible that we will decide to live more on a virtual plane (Metaverse or multiverse) than a physical one, and will have to choose between the red pill and the blue pill.

Possibly this is the end of the Anthropocene: we may stop being the dominant species on the planet, and it may be time to dedicate ourselves to being more human than ever. Perhaps these "crows" we are raising, far from gouging out our eyes, are the ones that will finally allow us to see clearly.

"The raven spread out its glossy wings and departed like hope."

Edgar Allan Poe

Science fiction has played with that higher intelligence: HAL 9000 (2001: A Space Odyssey), Skynet (Terminator), The Matrix. Among them I would like to highlight Multivac, the computer that appears in multiple stories by Isaac Asimov, particularly in "The Last Question." In that story, Multivac is asked whether the inevitable end of the universe can be reversed, whether the universe can be renewed or entropy reversed, to which Multivac answers: "Insufficient data for meaningful answer." Years pass, hundreds, thousands, millions, and Multivac evolves into the Planetary AC, the Galactic AC, the Universal AC, the Cosmic AC, without ever being able to answer that question. Humans have long since ceased to exist, and life persists only as forms of energy without matter. Eventually the computer becomes AC, once all minds have merged into it, and when no star is left alive and everything is in darkness, it is finally able to answer the question. (It would never occur to me to write the answer here and ruin your chance to enjoy the complete story.)

Let us stop and consider that perhaps what we are witnessing is not the end of our species but its evolution.