Note: This is a duplicate copy of my own work. Original article: DOI: 10.13140/RG.2.2.26954.32960
- People would be unemployed just to be employed again.
- All this after understanding that complete AGI is not plausible.
- AI has to fail this time for people to understand the limitations of AI.
- It would be decided in this time what AI can and can't do.
- To know what AI to keep and what AI to stop.
- The steering is in our hands; we can stop the ship, we can change the direction of the sail.
Keywords: AI, future, AGI, jobs, professions
Introduction
There is a lot of disruption in the world due to AI. AI has caused job losses, and professions are being uprooted. Some praise AI, while others feel hopeless, wondering what direction to pursue now. Some feel as if they have reached a cliff, with no idea where to sail next. People are losing their jobs rapidly, and those who remain feel the pressure to compete with AI. Everyone is busy scrolling through the latest AI courses to stay updated. People are working extra hours so that AI doesn't replace their jobs; from writing to decision-making, AI is replacing human intelligence.
Everyone is following the AI hype; those who are not are being handed termination letters with compensation. Business owners think AI will take over and they won't need human employees, or will need fewer of them. But what if something goes wrong? They would have to call human employees back, yet now contractual jobs are drying up too. People are looking for anything they can work on.
But the bigger question is: "Is full AGI even possible?" Another is: "What are the limitations of AI?" Other questions we shall answer in this article are: "What kind of jobs would AI lead to?", "Can AI be dangerous?", and "Can we steer the direction AI travels, deciding what is right and what is wrong?"
In the age of AI
People would be unemployed just to be employed again.
Job losses and a lack of vacancies are mounting across all departments worldwide. People are grappling with what to learn next and which certification is needed. AI automates coding, so coding jobs are being automated. Coding was the most sought-after skill in the world; everyone was pursuing a technical degree in coding or computer science, and all of this is being automated by AI. Why, and why don't we want to stop it? Coding requires long hours; it is a nice and interesting profession, but even 5-10-year-olds are being taught to code. In many places around the world, schoolchildren were invited to coding hackathons. It was too much; childhood is not for coding. Maybe that is why AI is progressing in coding, and there is nothing wrong with it. We still need experienced coders to verify, debug, and fix major problems.
But apart from coding, there are other professions losing jobs, such as analytics, where AI creates instant reports. However, all this would go on only for some time; after that, regulations would be put in place. In the 1950s, no one predicted that coding would become such a big profession in the 2000s. Who knew? No one thought the whole world would train itself in coding to write software and earn huge money. No one predicted it! In the same way, new professions would develop. Just as textile engineering jobs ended and people found work in other sectors, new jobs would come. People would lose jobs to AI until, one day, jobs would start rising and then become constant, as shown in Figure 1 below. Note that this figure is based on human prediction, not on AI or actual data.
[Figure 1: Predicted trajectory of jobs in the age of AI — jobs fall, then rise, then level off.]
No one has seen the future, but this is how past waves of employment have played out. Let us see how the job market revives. There are other technical challenges to watch for and address in the coming days.
AI has to fail this time for people to understand its limitations.
We have to progress a bit further to show the areas in which AI is harmful, while guarding against the harm it can cause; only then can we decide which AI to keep and which to stop. We have to move ahead, we have to use it, we have to pass through this stage, taking small pains while remaining cautious, all to understand the consequences of uncontrolled AI use. We can't stop now, as China is not stopping. But one day a global consensus would be formed, because AI can be very dangerous if not controlled. At the same time, as AI advances, innocents should be protected from AI-related harm. As we grow in AI, we keep making rules to make AI safer. This would mould AI into a new form. If we look back from the future, we would see that the regulations put in place shaped the future of AI. Yes, small yearly efforts can shape the new AI, which some may still happily call AGI, as it would be intelligent but restricted. However, how would we ensure that notorious players don't bypass the rules and set up a secret AI laboratory, since all AI needs is a garage?
It would be decided at this time what AI can and can’t do.
For example, you cannot have a robotic machine with near-AGI capabilities. The reason is that robots learn from the environment: once a robot possesses an AGI-like mind and learns to navigate a new environment, no one knows how it will behave, or what world robots might make of their own. Hence, robots must be kept free of gen-AI yet possess basic conversational AI, as they need to converse with humans. Robots must not be allowed to congregate. We don't want a "Terminator"-like scenario on Earth, so we need to stop after a while, and stagnation shall set in for AI development. We can't stop alone; world AI leaders like China have to agree on this: we won't do it if you don't do it. Otherwise, if we stopped alone, Chinese robots would take over the world.
All this after understanding that AGI is not plausible.
One day, we will understand that a fully capable AGI is not plausible. Even if we tried to build a complete AGI, AI would be so heavily regulated by then that a complete AGI would not even be talked of. It would be dangerous to create a fully capable AGI, but there is nothing to be scared of: all the regulations, year by year and month by month, are preventing us from reaching a perfect AGI. And no one promised that humans need complete AGI; we can work with our model of AI and happily call it AGI, since it would be intelligent enough for our needs. So why chase AGI, leaving other doors open for mischievous acts? Why a perfect AGI? No, we are happy with what we have. Why burn so much energy to make a fully intelligent AGI that can think like the best humans who ever thought?
To know what AI to keep and what AI to stop.
In this era, we work with what is being produced: we see it, use it, and then decide whether to keep it or dispose of it. People have many options in the market for which AI to use and which AI to drop. People are using AI on their own; no one is forcing them to. The AI industry is being driven by the people, so people cannot then say that they are not players in AI, or that AI is in the hands of a few. No, AI leaders use this fact to make some AI more advanced, especially the AI that people like; people like it more, it is enhanced, and the cycle continues. New features are added; if people like them, leaders emphasize them more, otherwise they drop them and backtrack, even on a well-liked AI feature. This is what determines which AI to keep and which to stop. The decisions of big firms are based on majority rule: most people use ChatGPT's specific features, so it enhances and pays more attention to them. In markets like China, the rules are different; companies want to lure outside users to their apps and grow worldwide, without much consideration of what people like or not. So, when the current owners of AI decide what to keep and what to stop, we need not worry; they already take majority votes into consideration. However, they must also consider minority needs. Governments can ask big players for this data to verify the same, and if this is not being done, they can take steps to stop some unused or harmful features of the apps.
Conclusion and Future Work
The steering is in our hands; we can stop the ship, we can change the direction of the sail. We have to go through these tough times: so many people live in fear of losing their jobs, and so many fear never finding new ones. All areas are impacted by AI. But where is the world heading now? As I said, the steering is in our hands. The big tech and big AI companies look at us, the people, and see what we look for in AI. They develop features and invest more in what we like to do online. The big AI leaders look at what most people on Earth want to do with AI, so we are being taken into account. Beyond that, in the future, governments will take more action against harmful AI. Also, robotics with AGI can be dangerous; who can forget the "Terminator" movie series? Who says we want a perfect AGI? No, we don't want a perfect AGI! We would stop just before that step; we have to.
