Should we pay so much to use Generative AI?

Note: This is a duplicate copy. The original is on ResearchGate with DOI: 10.13140/RG.2.2.36643.52007

Abstract: In this paper, we discuss why generative AI is costly and why people should not be charged simply for querying a chatbot. Note: this article concerns general-purpose AI, not other kinds of AI, such as the AI in washing machines or specialised medical AI, which needs more funding to become specialised. The algorithms used by AI developers are a black box; the developers cannot explain the outputs. All they can fairly charge for is energy. So, when energy is scarce, should we even use AI? Can we go back to relying on the internet or the World Wide Web alone? No, it is not easy to go back. But then AI providers should subsequently reduce their subscription charges as energy prices come down. AI is mostly used as a know-it-all portal or an entertainment engine, for adults as well as for youngsters. We should remember that AI relies on public data, not on data the AI companies created themselves. Much of that data is copyrighted, much is charity-based or paid for, yet AI companies continue to charge as middlemen between our fellow people's data on the internet and the users. And are we losing our own creativity to AI? AI companies should ask only for energy charges, salaries, and the cost of subsequent development, not the excess. AI has changed the world, but this money belongs to the inceptors, contributors, and unknown participants living today who do not even realize they have indirectly contributed to an AI answer for someone in need. For example, an answer on a medical website could help with emergency treatment for a woman delivering a baby in a far-off village with no medical facilities. That is not safe practice, but it illustrates that AI is useful and that we cannot deny anyone the right to AI. This paper focuses on the "Right to AI" as free AI.

Introduction

Why should we pay to use AI? To answer this, let us understand what current-day generative AI is. Generative AI uses existing data in the form of text, audio, photographs, paintings, and videos, and applies AI algorithms to it to produce concise outputs for you, the user. So what are the AI companies charging for? The data (text, audio, photographs, paintings, and videos) is available on the internet; most of it is someone's hard work, someone's copyright, or someone's creation. Much of it, like Wikipedia, is the result of our friends' or ancestors' hard work. The AI algorithm looks up this data, at times summarizes it, at times edits it, and often manipulates it and presents it to users. Even the owners of AI companies do not fully know how the magic works; many call it a black box. When something goes wrong, they are themselves clueless. You ask questions, and the black box answers them. At a high level, though, it is not rocket science: these systems apply algorithms such as associative memory and attention models to the text, audio, photographs, paintings, and video data available on the internet.

So, when all the data is taken from the internet, why are the companies charging such a high subscription fee? People have now become too lazy to go online and search for the things they need themselves; instead they rely on AI, and whatever AI tells them, they believe. So many copyrighted works end up being edited by AI, and so many artworks lose their individuality as they are manipulated. For example, with AI, someone can create "the Mona Lisa in a pink gown or a long covered gown". This ends the original artwork's uniqueness. Does art end here? Only the owner should be able to use a work as they like. Art is a manifestation of someone's thinking, and thinking is a unique gift from God; not everyone can think the same way, nor is everyone merely mediocre in their field. One must respect the gift of mind given to someone. With AI, traditional art seems to fade away. We must protect the rights of art workers. Not only is art in deep trouble; creative writing is too, for people still write, and AI engines summarize their work and produce it as output.

In the next section, we describe various aspects of generative AI: how it impacts people's lives, what it is made of, and how it is used. To be clear, we do not say that other kinds of AI are manipulative in nature; for example, the AI your washing machine uses is someone's own invention, and it is neither generative AI nor a black box. So this paper does not focus on such AI. However, there are generative AI systems that read research papers and generate new discoveries; the charges incurred there should be paid to the services that provide the research papers the AI uses to develop a new algorithm. We do not say generative AI is of no use; it does help in creating valid new research. But why ask such a high subscription fee, unless the generative AI reads research papers that have a cost associated with them? In that case a subscription fee is valid, unless all research papers become available for free for common people to read. In the latter case, the world would head fast towards a new revolution. So, let us see these issues in depth.

Why is AI subscription not free to use for all?

AI subscriptions can vary by the type of AI in use; here, we are talking about generative AI. Other kinds of AI, such as fuzzy-logic AI or medical bots, are not general AI and are therefore out of scope for the current paper. In this paper, we discuss why generative AI is costly and why people should not be charged so much for simply querying a chatbot. The data, text, audio, photographs, paintings, and videos are available on the internet; most of it is someone's hard work, someone's copyright, or someone's creation. Let us first see these in detail.

The following are available online, but are they free to use?

i. Text: A lot of text is available on the internet. This text can come from contributory sources such as Wikipedia or from paid subscription sites; encyclopaedias, blogging sites, novels, books by writers, and databases are key inputs to AI. The AI uses these free or paid materials for training.

ii. Audio: A lot of freely available voice recordings and spoken information is on the World Wide Web. These voices can be used to generate new voices and new information.

iii. Photographs: Photographs of you and your near and dear ones, and photographs of your pets, are all available on social media sites. These can be used by AI for many things. For example, AI can read your media and suggest two-year-old pictures to you. Many professional photographs are available online, but they are not free; still, they can be fed into an AI model. These form the basis of new photographs: with some surreal intelligence, a new surreal image can be created by combining a real image with an imaginary one, and the money goes to an AI company. Is that right? Not much credit is given to the original artist of the artwork.

iv. Paintings: Many artists used to create genuine art, but now art is just a click away. Trained on freely available images combined with tagged photos, AI can generate countless paintings in seconds. We should bring art back as a respected profession, and ensure AI cannot access artwork until it can explain its reasoning! Art reflects the mind and deserves recognition. AI-generated art is also valuable, but the original artists should always be paid from the subscription fees. For instance, if someone merges the Mona Lisa with a white horse, the museum housing the Mona Lisa should be properly compensated if subscription fees or profit are involved. Otherwise, it can be used freely, since you are not selling it. Any sale or transaction involving such art should credit the rightful owners.

v. Videos: Training simultaneously on audio and video makes it possible to generate videos of any person speaking in any language with perfect lip-sync. These videos serve various purposes and can be combined with entertainment videos to create chaos or awe. For example, AI could produce a video of someone building a house on top of the Alps, but the original creators do not benefit from this, even when the users earn from it, especially those who know how to use AI to make new videos by merging old ones with associative memories. Such videos can go viral, earning their makers money not just from subscriptions but also from ads.

vi. Research work: Research AI should not be too expensive, although we all know that reading research papers costs a lot. Such systems then create new algorithms or solve previously unsolvable problems. That is good, but the cost is high because the AI provider has to pay subscription fees to journals. So, for now, we can focus on the five main points above. We need AI to help in research, both to push new frontiers and to eliminate fake research that adds nothing substantial to the research community.

Who wrote all of Wikipedia? Who has written all these texts? Humans: our friends, our ancestors! Who has sung all those songs? Who has uploaded all those videos on YouTube from which generative AI trains? Who has written all the research works online from which AI trains to create new research? Not the AI company! So why is the AI company charging for it? It provides an interface between the material written online and the inference engine that combines the text our fellow people have written and draws inferences. The inference can be of the following types:

1. Summarization


The AI reads the contents from the internet or from a document you upload. It has a summarization agent that summarizes the articles and presents articulate information to the user. For example, here I asked Copilot to summarize the content from a Wikipedia article I provided.

The AI tools structure the information and present it to the user in a very attractive form, making the user more inclined to read and believe it. Summarization is a tool we can pay a subscription fee for. But the question is: why such a high fee, from millions of people around the world? The charges need to decrease.

2. Retrieval

The AI performs retrieval if you do not specify the source content; it searches for the content on the internet on its own, then summarizes and presents the outputs. In effect, it is a search engine plus summarization.
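As a rough illustration only (the internals of real chatbots are proprietary, and the functions below are invented for this sketch), the retrieve-then-summarize shape can be shown with a toy keyword search over a tiny document store, where the "summary" simply returns the best-matching sentences:

```python
# Toy sketch: retrieval + extractive summarization over a tiny in-memory corpus.
# Real systems use web-scale indexes and neural models; this only illustrates
# the two-step shape: retrieve relevant text, then condense it.

def score(query, sentence):
    """Count how many query words appear in the sentence (case-insensitive)."""
    q_words = set(query.lower().split())
    s_words = set(sentence.lower().split())
    return len(q_words & s_words)

def retrieve(query, corpus, k=2):
    """Return the k sentences from the corpus that best match the query."""
    ranked = sorted(corpus, key=lambda s: score(query, s), reverse=True)
    return ranked[:k]

def summarize(sentences):
    """'Summarize' by joining the retrieved sentences (extractive, no rewriting)."""
    return " ".join(sentences)

corpus = [
    "The Mona Lisa is a portrait painted by Leonardo da Vinci.",
    "Wikipedia is written and edited by volunteer contributors.",
    "Generative AI models are trained on large collections of text and images.",
]

answer = summarize(retrieve("who writes Wikipedia articles", corpus, k=1))
print(answer)
```

Even in this toy form, the point of the paper is visible: every sentence the "engine" can return was written by someone else; the software only ranks and repackages it.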

3. Generation

The AI generates images and videos based on its learning, as explained above, using attention and associative memories to create new content. This can include changing a person's face in a video to a cow's face, or swapping a man's speech for a woman's. Such generation seems fun, and people are tempted to subscribe to this entertainment. However, we must remember that the original video is ignored; no one sees or recognizes it. All people see is the new video showing a cow giving an AI lecture. This needs to be addressed: if the money is not going to the contributors, why is it going to the AI entrepreneurs?

4. Appending

This follows from the above: appending AI-generated content to a video, photograph, or text is not costly in itself; the only real cost is energy. The algorithm used by AI developers is a black box; they cannot explain the outputs. All they can fairly charge for is energy. So, when energy is scarce, should we even use AI? Can we go back to relying on the internet or the World Wide Web alone? No, it is not easy to go back. But then AI providers need to subsequently reduce their subscription charges as energy prices come down.

5. Editing

Editing tools are part of everyone's life now, but they learned English by reading open-source English books in the training phase. Is the money they ask for more than the power they consume, or is it the other way around?

6. Merging

Merging audio, video, and text into one or more forms is common nowadays. AI has learned from videos of all kinds from around the world. It knows more than any human ever will and can create anything you ask for, having ingested all those videos, texts, audios, photographs, and paintings. AI can merge these in various ways; for example, it knows what a cow looks like and can put a cow's face on your body, making it speak in any language you specify. This process is merging. Do we want it? It is fun and attractive at first, but we must remember that we are mature adults and need to act like it. Having fun is okay, but these services should be free except for energy costs, given that the material they merge is freely available on the internet, including copyrighted works. The models are built, and their initial development costs have already been recouped through trial and error. So why are the companies charging for something that has already been learned and created?

Yes, we all want high-quality output on our screens, and we believe that using top-notch apps ensures high-quality solutions. Yet almost all the AI apps I tested produce similar kinds of results, because the models are set up the same way and use similar techniques. Do we really need to pay such high subscription fees?

There are many areas where AI is useful, such as medicine, with AI doctors that need to learn more and become more accurate for people who cannot afford a doctor. We don’t mind paying fees to an AI doctor app as long as it is trusted; in places where healthcare is unavailable, we can rely on an AI doctor. However, this AI doctor must be a specialized app focusing solely on AI and medicine, not a generic app that scans the entire internet and mixes information.

Conclusion

We all need help with creating, editing, writing, and analysing. But are we losing our own creativity to AI? AI has demonstrated that, after pre-training on all the information on the internet, it can answer any question within seconds. It can create any video you can think of in seconds. But the creators of these AI systems cannot explain what it is doing; they call it a black box based on associative memory or attention. They often cannot predict the output themselves, which can mislead many into wrongdoing. Their AI models read all kinds of books, texts, videos, audios, paintings, and photographs online. Most of this material is not owned by them. Still, they use it to create new content, which they sell or charge a subscription fee to generate. When this material belonged to our ancestors, colleagues, or coworkers, and we are using it for knowledge and entertainment, why should we pay for it? If we are doing business with it or earning from it, we should pay the original contributors and the energy costs involved, rather than the AI companies, which act as middlemen between creators and users. This remains an open topic for discussion.


Future challenges in fully entrusting AI and Robotics?

Here is my latest research note. I am sharing with you a complete copy of it. Do read it.


Note: The original article is on ResearchGate. This is a duplicate copy. DOI: 10.13140/RG.2.2.21249.11367

Abstract: This note emphasizes the challenges humans would face if they fully entrusted AI and robotics. We are starting to rely on AI a lot; robotics is not fully here yet, but we already rely on AI heavily. AI was fine up to washing machines; now AI fridges have arrived. These helpers are okay. But what about unconditional reliance on AI for anything we think of? This is not limited to children. Did you know that, on average, adults are relying on AI chatbots to learn about the world? There are better ways to know the world, yet adults ask it all from AI. That is okay as long as AI is unbiased. What if AI becomes biased, in whatever way the person who controls it wants? How much power does the head of a popular AI company hold? A lot! Does he or she deserve that power? Did the owner of the AI company work hard to earn the right to show adults the way? Being a software expert does not mean a person can guide millions, if not billions, of people on the environment, politics, or social order, to mention a few areas. This is not just about question answering; it is also about recommendation systems, advertisements, content links, and references provided by chatbots. What about copyrighted artwork and other copyrighted materials? Are we on the right track? No. Either we build AI that works on its own (we are far from that), without humans feeding bias into it, or we make competition among AI engines equal. What about the massive amount of energy we use? What about the theft of adults' data? What role can adults play, and how can the AI industry allow them to play those roles? How do we tell adult users that AI should not transfer wrong knowledge to them, just as good touch and bad touch are taught to little kids?

Keywords: AI, Robotics, Ethics, Trust

Introduction

AI is gradually becoming part of our lives, from washing machines to fridges to search engines; AI is leaving no ground untouched. When AI spreads its wings fully, we will need to give something from our side: trust. Are we ready to trust AI? The answer is no, not fully. For some applications we can say yes, for example, a washing machine. We have been using AI washing machines for a long time; we trust them, they are bias-free, and they are free from hackers too. But what about conversational AI? Why do so many adults log in daily to conversational AI? How might it steal their data? How might it change their opinions? How might it mould adult personalities and play with their psychology? This is not happening now, but once AI reaches a critical point, the next step would be to call on people to take its side, advertise to earn money, and sway people to one side, the right wing or the left wing. Can we control it? Yes, we can, but it will require a lot of effort, trust, and the right people.

We all look to conversational AI for answers to our questions, indeed, quick answers to our intriguing questions. Can we trust AI's answers? The same goes for robotics: we may soon be giving our robots tasks to do, such as picking up a parcel of weekly groceries from a shopping complex. Would the robot do it right? Can we trust AI and robotics? This is all belief! Belief in our making! Belief in what we have made! To the best of my knowledge, AI relies on pre-trained data; bias comes from there, and that bias is being removed now. But can incorrect information be injected into AI? If so, humanity could be at risk. We must consider this scenario as well.

AI and robotics are great masterpieces of human civilization. They were not made by one or two people; a great deal of work has gone into them. AI and robotic systems read many things, both copyrighted and uncopyrighted, and infer from them. They learn from others' conversations and know what to conclude based on ratings and feedback. AI is not just an algorithm; it is completed by the training data used to make it.

In this paper, we discuss how to entrust AI, the challenges humans face when doing so, and how humans should address and overcome them. These challenges do not apply to all AI: some AI is safe, while some is unsafe; some AI is right, while some is wrong; some AI is good, while some is bad. Developing the maturity to understand these distinctions is essential to moving forward.

Entrusting AI and Current Challenges Humans Face with AI

Can we trust AI? Can we trust robotics? More to the point, can we entrust ourselves to AI? There is still a hiccup with the idea of unconditional trust. There is no unconditional trust in AI; just as humans can make mistakes, AI can too. It is always good to recheck the answers and solutions provided by AI. There are many areas where AI poses trust challenges, some of which are as follows:

  1. Bias. AI has learned from data, and past data have had bias issues, such as gender or racial bias. We need to check AI-generated outputs for bias, as a wrong answer can hurt anyone. Bias can also be man-made, injected to serve specific goals; that is part of a bad AI.
  2. Child care. AI today does not consider whether its output is read and consumed by a child; child-care essentials must be built into AI. Alternatively, AI must detect children and prevent them from receiving the replies or solutions it can generate.
  3. Action-based AI. These are AI models that produce an action, as opposed to conversational AI, which only provides answers. Examples include a washing machine and a fridge, to mention a few. Here, our trust matters a lot; such AI can only work when humans trust it, otherwise its actions are pre-empted. Robotics can also be put in this category, as long as it does not have to rely on real-world analytics. Such AI has been helping humans in the right ways; yes, it does take away some mechanical jobs, but it is safe AI.
  4. Privacy. AI systems take shape from our inputs, such as prompts and attached questionnaires. This transfers information to another party. How an AI company uses this information becomes another issue. Would it scrub and delete the information, or store it? And if the AI company uses it, would it find its way onto a third-party platform?
  5. Data sales. Would AI companies be collecting our data to sell for analysis and analytics?
  6. Surveillance. Could this lead to surveillance? Is AI surveillance safe?
  7. Loyalty of robots. Will the robots we use in the future be loyal to us? Will they be safe from hacking? What protections will robots have?
  8. Impact on psychology. How can AI prompts affect people's psychology? This concerns not just children, who should ideally be barred from using AI; adults can be made to like a particular product through AI advertisements full of pomp and circumstance. The AI industry can make or break someone's red-carpet career. This means playing not just with people's psychology but also with their opinions. People can be led to believe something simply by being told that if they buy bread, they will buy butter too, and then basing their choices on the butter rather than the bread, for example.
  9. Political campaigning. AI can become the next political campaigner, campaigning for the candidate it deems right to be prime minister or president of a country. This can happen not just through ads but through related biased content, so bias here is not limited to gender and race; it extends to chosen propaganda. We must be well aware of this potential threat if AI goes unchecked.
  10. Owner influence. AI can change the world according to the wishes of its owners. There are only a few AI owners at present; some AI chatbot owners have specific objectives, and, given the rivalry among them, it seems they will do anything to win the AI battle. Selling AI should not be the battle it is now. It is not just a money game; it is a power game. Whoever has power wins not just money but command over the world. This power is different from mere automation.
  11. Checks on AI. There must be checks on AI. AI power should be non-political: no ads should push a political campaign, no ads should demean another leader, and no content suggested by AI should be biased toward a political party or a global opinion. AI should know its limits. And adults should keep the habit of forming their own opinions: make your own conclusions, find your own way. Why look to AI for everything?
  12. Recommendation systems. Adults are becoming addicted to AI-generated answers, so they stop learning from Wikipedia or other registered trustworthy sources; instead, AI recommends sources to them. AI is not biased at the present moment, but it can become biased toward promoting traffic to another site or platform with ads for a specific future modus operandi. Right now, the only aim AI companies have is to gain popularity and reach more people, but once those aims are met, some companies may try to expand into personal grooming and political interference as well. This cannot be ruled out. Recommendation systems should be unbiased, and advertisements must follow rules: not just pay and run, but be analyzed, vetted, and only then placed on a platform.
  13. Environmental impact. The environmental impact of AI is huge. AI companies should be taxed doubly on their profits, while small companies should be allowed to grow with a lower environmental tax burden, to produce healthy competition. The environment pays a lot for AI. During the internet boom, people found information on their own and paid for their own electricity. Here, the user and the supplier (the AI company) both need to pay for using elite AI.
  14. Ethics and copyright. Ethics should not revolve only around gender and other biases, but also around the use of copyrighted artwork by great artists and naïve artists alike. Art is a model of thinking and should not go unrecognised. It is often said that AI creates unique art, but we must develop algorithms to determine which artworks were used to make each new 21st-century AI masterpiece. We need to pay back the original artist, if not in money then in acknowledgement. One day, may that artist also receive a cheque for being the basis of an AI masterpiece.

There are many more points, but these are enough to show that we are not doing enough to manage the growing AI spectrum. AI must be monitored for the well-being of adults and children alike. Software must be built to determine from which sources an AI-generated output or artwork was made and to credit the inceptors. At the same time, we should allow AI to help solve scientific problems so we can progress further on our journey to take humanity to the next level of development.

Conclusion

We are not yet at the point where we can entrust AI with all our hopes for a better tomorrow. AI is still in human hands, and humans have been known to bend to either side of the aisle. AI cannot yet be independent of humans. It is not yet intelligent enough to take its own course in the history of the world or in shaping its future. Nor can AI be unbiased as of now, since it takes commands to learn bias, whether racial, political, or gender-based. The widespread reach of AI could be misused by its owners in the future and exploited by vested interests. We must therefore hold tight the helm of the ship the AI makers are sailing; the ship is not fully built, yet it is riding high in big waves. People do not know where this unwavering trust we place in AI may lead. All we say here is this: AI can automate things, it can speed up the discovery of drugs, and it can help mathematicians solve problems, but it must not be allowed to play games with the opinions of children and adults. We should live in a free world where adults can understand things and make the right decisions based on their own souls, not on AI recommendation systems or ads driven by vested interests. So AI should display a notice telling adults to think, learn, and make decisions on their own, not just follow what AI told them.

Their Love Lives - Part I. #story #love

Their love lives are a series of stories about different people, whether they found love, the endings, and the mistakes.

Pamy was beautiful, with long, light hair she used to tie into plaits, and a beautiful voice. On top of that, she was intelligent too, sharp enough to find errors in the AI software she handled.

This was the company, The AI Moon. AI Moon consisted of talented men and women, with the aim of creating the best AI models in the world and helping mankind in times of need, or even existential threats.

Pamy was one of the few female engineers in the Denver office. The few women there were mostly on the validation teams that test the models for efficiency.

Creg was a dark-complexioned man, single and in his late 30s. Pamy would have been around 26 when they met. Creg was not on Pamy's team. The AI Moon was a huge company now, with around 1,000 employees in the Denver office alone.

Creg often said “Hi” to Pamy, and then Pamy would invite Creg to her cubicle. The cubicle was huge, with all kinds of sticky notes posted on its walls, and a board where Pamy often used to write her to-do lists.

It was a fine day in the office. Creg used to make excuses to pass by Pamy's cubicle. At that very moment, he said, "Hi, Pamy." Pamy was more than excited and said, "Hi."

Creg said, “Are you busy?”

Pamy was a sharp woman, still single and not ready to mingle. She called in Creg. “Come in, Creg, have a seat,” she said, pointing to a seat in her huge cubicle.

“We are testing engineers, with so much work, but we need to chill out too at times,” said Pamy.

“Would you like to have coffee, Pamy?” asked Creg.

“No, Creg, I just want to talk about validation, how you do it so well, you are almost going to be the Engineering Director soon.”

“Oh yeah, I will be, I’ll teach you my tactics, Pamy. Can we have some tea/coffee?”

“No,” said Pamy.

Creg was sad; he understood she didn't want rumors about him and her in the office.

And Pamy thought people wouldn't notice, though the two sat for hours talking, talking, and talking in Pamy's cubicle. But no one said a word, as Pamy's brother was an influential man in those circles.

Pamy extracted from Creg everything from his birth to his assets.

How mean she was.

She used to have him teach her to write her scripts, even when they were not on the same team.

Creg often worked out of office hours to finish the script Pamy needed to run on servers.

What a good man Creg was.

He even said to her, "I can do anything for you, Pamy, just say yes to me once."

But Pamy wanted something else.

She stopped talking to Creg after being promoted to a senior validation expert at the AI Moon company.

Later news came that she had married an affluent man in an arranged marriage.

But her social media posts are not the same as they used to be when she was in talks with Creg.

Creg was still single; he wished Pamy would still come to him. He married years later, when Pamy’s daughter was 5 years old.

Pamy later left her job and started her own business, which did not do well.

But for sure, she missed Creg. She missed the freedom of speech she had when she was with Creg. She missed the emotions she used to emote; not every man is the same, not every woman is the same.

And here Creg’s wife was the luckiest woman in the world to get him; he was a gem that Pamy lost.

Note: The image is AI generated.

Beautification of Machines: A possible future career option

Note: The art is created with AI here.

Do you want robots in your life? Even if not in your personal life, some robots will be needed in work life for things humans can't do. Why not have a friendly robot beautified while it cleans the roads daily? Give it a name, perhaps?

Robotic design is a separate, technical discipline; robotic beautification is something different. Robotic beautification means enhancing the appearance and feel of a machine without disrupting its functionality or its servicing.

Note that robotic beautification must not stop or hinder any servicing of the machine.

This can, in the future, give your machines a personalized look.

Won't people be bored of looking at the same iron-clad robots around them, if it ever comes to that?

If we do head into that future, we will see robots and drones around us, so why not beautify them?

Why not?

Think of a robot with a blue-haired wig?

Or think of a tattoo on the left cheek of your robot?

Why not tattoos on the artificial skin of robots on their thighs?

Giving a robot a wardrobe?

It would be a big business!

A fashion industry for robots!

Giving robots a personalized look!

Here is a futuristic robot with a bold AI-created style.

We all know that many jobs are being replaced by AI.

What would happen when robotics launches in our lives?

One thing is sure: we won't all like it if all our robots look alike. Will we?

So, we would need to engineer changes to the looks of our machines.

Be it a drone we own, or a robotic dog we own, or maybe a humanoid robot.

We need to differentiate between the machines we have and what our neighbors have.

So there would be the business of covering the machine in masks, which can be taken off.

Artists out there can do art on the machines to make them look not only beautiful but also different.

People can claim the identity of robots in the future, and a copyright as well.

We are still far from robots reaching your homes. Hence this is futuristic.

But for sure, enumerating future career options is a necessity, as jobs are being taken over by AI automation and soon-to-come robotic automation.

Thank you for reading.

Subscribe for updates.

Futuristic Robotic Theft. How much does your machine know you? How much do you know your machine?

#ai #future #futurist

Note: This is a futuristic scenario; there are no robots on the streets as of now.

How much does your machine know you?

And how much do you know your machine?

Imagine you lost your robot/machine in a mall or while crossing the road to the airport. You took away someone else’s robot/machine, and someone else took away your machine.

When would you find out about it?

How would you recognise it?

Would your machine tell you this?

Or would you be the one to notice?

All machines, such as robotic dogs and drones, look alike; machines made by the same company mostly look the same. All robots from a fleet look alike.

Would you find out, or would you keep giving orders to your robot, only to find that the coffee was too sweet?

Or you would find out that your robot is calling you “Boss” instead of your name, as you suggested!

Then you are in a dilemma about how to find your robot and return it to its owner.

This is a futuristic scenario.

Does your robot know who you are?

Does your machine know who you are?

Consider a laptop, which can be identified by the wallpaper you use, the folders on the desktop, the order of files, and the storage of files.

But a robot you bought from a mall, or a machine such as a robotic dog or drone that came with you, how would that know who you are?

Are we training our machines and robots to know their owners?

And do the owners know their robots and machines?

Do we have an option to set a welcome tag as a wallpaper on the robot’s screen?

Do we have voice choices for robots, our personalizations on robots, so that we can instantly know if the robot is stolen or has been changed intentionally or by mistake?

Does the robot have the capability to scan the owner and tell their name?

Does the robot recognise the owner?

In these settings, robots need to perform image recognition of the owner and his/her home.

If we start adding these features now, they will come to robotics in the future.

A robot must recognize the owner’s voice tone, facial expressions, body language, and specific words.

And the robot must not follow commands from someone outside a friendly circle of the owner. This is a must to avoid robotic theft, a reality that can hit us.

But robotic theft can be prevented by implementing simple image and voice recognition systems.
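The recognition step described above can be sketched in code. This is a minimal, hypothetical illustration only: the embedding vectors are stand-ins, and a real robot would obtain face and voice embeddings from its camera and microphone models. The function names, profile format, and threshold are all assumptions for illustration.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_owner(face_emb, voice_emb, profile, threshold=0.9):
    """Accept a command only if BOTH face and voice match the stored owner profile."""
    face_ok = cosine_similarity(face_emb, profile["face"]) >= threshold
    voice_ok = cosine_similarity(voice_emb, profile["voice"]) >= threshold
    return face_ok and voice_ok

# Hypothetical stored profile for the robot's owner.
owner_profile = {"face": [0.9, 0.1, 0.4], "voice": [0.2, 0.8, 0.5]}

# The real owner: embeddings close to the stored profile.
print(is_owner([0.88, 0.12, 0.41], [0.21, 0.79, 0.52], owner_profile))  # True
# A stranger: face embedding far from the profile.
print(is_owner([0.1, 0.9, 0.2], [0.21, 0.79, 0.52], owner_profile))     # False
```

Requiring both the face and the voice to match is one simple way to make a stolen robot refuse commands from a non-owner.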

At the same time, the owner can change the robot’s screen to make it more visible for quick recognition.

A robot need not look the same as every other. Why give all robots the same factory look? Why not give each robot in a fleet a distinct look, with different hair colors, heights, and weights?

Other things that can work are asking the robot to recount yesterday’s events, or to say how much sugar you take in your coffee.

Robots are costly machines, and in the future they will carry a lot of the owner’s sensitive data; hence, robotic thefts must be prevented.

Machines must also be secured with other kinds of security, such as passwords, fingerprints, and voice verification, among other security protocols.

Robots must be trained not to obey orders from non-owners or those outside the owner’s friendly circle.

Robotic theft can cause a lot of problems, one of which is the loss of your personalization data, passwords, and preferences.

Let’s keep robotics safe in the coming future.

In tough times, don’t surrender to what you never believed

Note: This is a motivational note. This note does not correspond to any religion, caste, or group. It is for people in general, at a time when today’s world is filled with all kinds of requests, such as online buying and advertisements, making it hard to know what is right versus what is wrong. Apart from products, it is about making the right decisions and seeking guidance from God. Once again, it is not for any particular religion. This is not meant to hurt anyone’s religious or other sentiments. In case anyone is hurt, we apologise.

There are tough times in everyone’s lives.

We often find ourselves in turmoil over what to do and what not to do.

In these tough times, remember who you were; know there are voices around you that may want you to do what they want. But note that you are not these voices around you. You are someone. You made yourself someone worthy of being loved, cared for, and thought of.

Feel your real self for a moment.

There may be people around you who may tell you to do wrong things or serve their vested interests.

There may be people around you who may tell you to do something your morals don’t accept, and still you do it.

Stop a moment!

Ask yourself!

Would you have done this if it were you, the real you? Don’t be a follower of someone’s vested interests; be yourself.

There are two sides to everything; can you see the other side when everyone around you wants you to see their side?

What do you stand for?

Or is it just the pressure of the moment that makes you do it?

Or is it that you are doing it as you are being told to by acquaintances?

No.

Stop.

You should never take your trust out of God.

Believe that if something is happening, it is for some reason.

God does not wish ill for anyone.

God won’t want you to revolt against something that is widespread.

Can you believe in yourself?

Can you think you could have done this, and would God approve of it?

So, believe in yourself, the self you were when you were well acquainted with God’s belief.

Do not be absorbed in words of multiple cobwebs around you.

Think clearly! Clear off the cobwebs! Light a candle if needed; that means ask a person of experience if you still can’t understand what a cobweb is!

Do what is right, as per your conscience, as per what is good for you and your people.

So in the tough times, do not surrender to what you never believed.

Instead, in tough times, act in the right manner as was always inculcated in you.

Reach out to an experienced person who once helped you in tough times; ask them, “Can I hear God’s choice, or am I just listening to the wrong noise? When can I think right?”

This does not mean letting others hurt you. No!

Stand up to those who hurt you.

Tell them, peacefully and, if necessary, loudly, that this is wrong.

But know who you are!

But know what you stand for!

Don’t forget your brother in all this!

Don’t forget those who are dependent on you in all this!

Stand for what you believe in!

What you believe in was in you till you were surrounded by cobwebs!

Don’t listen to whispers not in your belief system!

Believe in yourself!

Care for your near ones!

But just learn to say, “No, no, I won’t go on that march; it’s not for me!”

Say politely, as that is the language of love!

And know not everyone can be polite!

Not everyone can be as loving as you!

Not everyone can be as kind as you!

Not everyone can be a healer like you!

Not everyone can be a carer like you!

So, believe in yourself!

Believe in what you can do!

Believe in your strengths!

In short, believe in what your beliefs are!

Don’t be fooled by wrong voices!

We are surrounded by all kinds of people. For example, you go to the market and hear all kinds of praise, even for defective products, but you need to hear the right voice and make your own decision! You need to find the right review, not the paid reviews. In this world filled with noise, where paid reviews may be common, you need the strength to find the right review; this is what I mean by the cobwebs around you!

Take the right decisions!

Ask for the right guide!

Why AI? Would it help the disparities? Would it put money in your pockets? Let time explain!

Why do we talk of AI?

Why do we want automation of things?

Why?

We all need money in our pockets. So, how do we earn money now that AI has arrived? Wait: even before AI, there was something brewing in the world, which was excessive competition. A competition that was pressuring the whole world. Agree or not, it was there.

We need each person to be identified as a special person doing something he/she likes.

Have you seen animals?

Do they work 18 hours a day? No, they don’t, so why are humans working 18 hours a day? See Japan for an example. The reason is to get ahead of other humans and keep jobs secure, and for some it’s sheer passion and purpose. But where would this race end? It will go on. Soon, the only people employed would be those who can work 18 hours a day. But for some, it’s not money; it’s passion, a reason to live, and we want to separate the two! Can we? Should we? So, can people do double the work for the same salary, or can someone else have the opportunity to learn and work for that money? These are hard words, not easy to say, as they credit one person’s work into someone else’s wallet. The hard workers can move out to their own startups, ask for more money that way, and hire more people. So, how do we provide work? Read on!

This work race would have created an imbalance, an imbalance in who earns more and who does not.

We all dream of having a nice house, a nice car, and a lovely person in our lives. That is the basics we all ask about.

And we were all ready to work 18 hours a day until AI came. And AI showed it can automate quite a lot of work.

For example, AI can automate coding, but why are we allowing it? When I used to go out to the market in Delhi, India, there used to be roadside banners stating, “Coding for 5-year-olds”, “Coding for 10-year-olds”, “Teach your school children Java”, “Teach C++ to your child”.

It was enough; you can’t make school children part of this race.

Childhood is when children develop well-rounded personalities, so why put pressure on them to learn coding in school? If it were part of the regular coursework, in place of classwork, that would be fine. But if it were the load of regular classes plus coding, what would kids do but struggle?

Even I say kids should know AI, but then some other lessons must be removed. Let’s keep kids’ overall learning workload the same.

Then, among adults, there was a race to learn more coding languages, do more certifications, and learn online.

But no, hold on, wait, what race are you part of? Do you even know that?

Now there is a race of startups, but look, this is not a race; adult men and women are running them as a profession, a passion, and a desire to do something. This will equalize over time; it is needed, as it gives people the freedom to work at ease, and at the same time, they can work hard.

For some, work is life, and we can’t separate it.

The race to teach coding to children has ended, and even data science is being automated now.

So, what new professions would follow from it?

Startups, experience-based learning, and new, decent professions: medicine, surgery, forest welfare, piloting, agriculture, farming, music, baking, cooking, dancing, environmental work, robotics/drone engineering, expert coding (not many junior coders), research in chemistry, physics, and mathematics, history, political science, earthquake studies, geology; the list goes on. AI is not taking these roles; it is helping and assisting humans in this work.

Why not startups for “Chef on a call”, “Human expert coffee maker”, or “Party help at home”? If you like cooking, try startups like “Bakery at the corner”, or “Painting made fun” for children. Why not startups for children’s fun, when there can be startups for children’s coding, since we are teaching our children to earn money from childhood?

We are not teaching children fun things, just coding, AI, or quantum; why not more fun things for children?

AI would take away some of the harsh things going on, but the aim is to help humans; we all want jobs, we all will get jobs, but how, time shall explain.

Let time explain, and know nothing so widespread can be a mistake.

There is a reason, let life unfold it.

Maybe it is a blessing in disguise for so many.

Tell me how many coders/data scientists there are and how many of them got their jobs replaced. Now tell me how much they earned versus how much they would have earned working 8 hours a day; it was like taking the salary of 2–3 people! Time for new careers! There would be more jobs, and there would be the right training for freshers.

Why not pay for 8 hours only, with the rest of the work optional and unpaid? So many great coders love to code even at midnight, but we are creating disparities and tensions among common people. Let other people work too. So why not “Work as a service” when “Robotics as a service” is coming? I am not against “Robotics as a service,” but we can have “Human work as a service” too.

Passionate, hardworking people can go to a startup option or a consultancy to ask for more money.

Why not a startup for “Nice party decor at your home” or “Fresh cakes baked at your home”, and so on? Why only startups with machines and robots? Even I myself write about them, but both are needed. We need some services from robots too; they are essential in so many places that are unsuited for humans.

Why don’t we introduce “Work as a service” to employ people? Is it different from freelance workers? Is it open in all sectors? Why not? It would give people the opportunity to work!

Time to look at it!

Same with coders: what is the salary of a coder, and how much would he/she have earned working 8–10 hours a day? Almost half!

Exceptionally passionate workers would always have a place in work life, let me explain that in the coming articles.

We need better work laws, we need money in each hand!

We need each person to be identified as a special person doing something he/she likes.

We need each person to raise their earnings level!

How can you pay your bills and also have extra to enjoy pastries at a new bakery? What, “a new bakery”? Obviously, we need humans to open a bakery, not a robot, unless you are on the Moon.

What about medicine? Don’t we need newer medicine, and sooner? You ask why this rush in AI; the rush is a fight against time!

Some jobs are not meant for humans, such as mining, forest surveillance for needs and fires, and many more. We would talk about that as well.

As AI advances, I still want humans to earn. With the aim of rationalized work, people should always put humans above machines in work that humans can do, such as baking, surgery, painting, and more. My explanation is that more people can do the job that one person does, reducing the workload and salary tensions for everyone. Those who love work can’t be stopped from working extra, but let’s call in more people to do the same job.

How can AI help in this? Think! I will explain more to you in the coming sessions!

Thank you for reading.

Warm regards,

….

How to teach AI to school children. Safe AI versus Unsafe AI.

This article covers what should happen if we ban the use of AI and the internet for children, and how AI should be taught to schoolchildren.

We have to restrict AI and the internet for children and early teenagers. But if we don’t teach young children the basics of AI, they may feel lost when exposed to adulthood. Children must not feel the pressure of adult life, so they must be taught. For this, we need servers only for school children, and special AI tools only for learning.

Schoolchildren would be exposed to learning-based safe AI once they are 16–18 years old.

They must know what AI is!

They must know how AI works!

They must know the bad and good things about AI!

How should it all start?

Well, it should start with theory. The theory of AI may start with the activation function of a neural network. This can be taught in 6th to 9th grade! Just the basics of supervised and unsupervised learning, starting with their own pet cats and dogs and how they learn, showing them videos of toddlers learning and other examples, such as a cat teaching its kittens how to climb a tree.
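The activation-function idea mentioned above can be shown in a few lines. This is a minimal teaching sketch, not any particular curriculum: a single artificial neuron sums its weighted inputs and passes the result through an activation function. All names and numbers here are illustrative.

```python
import math

def sigmoid(x):
    """Squashes any number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    """Passes positive values through, clips negatives to 0."""
    return max(0.0, x)

def neuron(inputs, weights, bias, activation=sigmoid):
    """Weighted sum of inputs plus bias, then the activation function."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return activation(total)

print(sigmoid(0.0))                            # 0.5
print(relu(-3.0))                              # 0.0
print(neuron([1.0, 2.0], [0.5, -0.25], 0.0))   # sigmoid(0.0) = 0.5
```

A child-level lab exercise could simply be plotting these functions and changing the weights to see how the output moves.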

The lab manual must be designed for children.

The lab manual must contain real-time applications of gen AI and hands-on exercises.

This can be done by keeping limited edition servers.

The servers would be used only in a school network for children’s learning.

A VPN can be provided if parents want to access what is being taught to children from home.

These servers for schoolchildren shall provide responses to restricted question bases in a limited way.

Wrong replies from these servers would be stopped before reaching children.

The servers can even have an LLM/Small language model, trained only on children’s academic books and restricted novels, excluding novels such as Harry Potter. These LLMs for children would respond to lab manual questions; the server would have the right to stop any incorrect or unsafe responses from these LLMs. Only safe responses would be allowed.
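The server-side screening described above could look something like the sketch below. This is a hypothetical illustration only: the blocklist, wording, and fallback message are invented placeholders, and a real children's server would use far more sophisticated moderation than keyword matching.

```python
# Hypothetical blocklist for a children's school server.
BLOCKED_TOPICS = {"gambling", "violence", "adult"}

def screen_response(reply: str) -> str:
    """Release an LLM reply to a child only if it contains no blocked
    topic words; otherwise substitute a safe fallback message."""
    words = {w.strip(".,!?").lower() for w in reply.split()}
    if words & BLOCKED_TOPICS:
        return "This answer was withheld. Please ask your teacher."
    return reply

print(screen_response("Photosynthesis turns sunlight into food."))
print(screen_response("This story contains violence."))
```

The point is architectural: the filter sits between the LLM and the child, so only responses that pass the check ever reach the classroom.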

The lab manual for 6th graders should be simple and limited to the basics. For a 10th grader, a lab manual can include questions to ask AI, or even writing prompts, on the limited children’s servers.

We must ensure the right answers from the children’s school network servers.

These are just to give children hands-on experience.

The appropriate grade level to teach these hands-on skills is 10–11th grade, or when the child is 16 years old.

In earlier classes, children can learn the basics of Python, which is essential for working with AI. This can start from 6th–7th grade in school.

AI is not just LLMs; it is more than just gen AI. There are AI algorithms that do not require LLMs, but they do require advanced mathematics. Such AI can be introduced to children as a black box, with only applications and result computations. For example, the backpropagation algorithm requires the mathematics of differential equations and partial derivatives; hence, it can’t be taught to schoolchildren in full. However, the name is enough to introduce it in theory, increasing children’s curiosity by telling them that it takes the error back into the weight updates.
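The phrase "takes the error back into the weight updates" can be made concrete with a toy example: one weight, one training example, plain gradient descent. For a prediction y = w * x and squared error E = (y - target)^2, the derivative dE/dw = 2 * (y - target) * x is the error carried back to adjust w. The numbers below are illustrative only.

```python
def train_one_weight(x, target, w=0.0, lr=0.1, steps=50):
    """Toy gradient descent on a single weight for y = w * x."""
    for _ in range(steps):
        y = w * x                      # forward pass: prediction
        grad = 2.0 * (y - target) * x  # error propagated back to w
        w -= lr * grad                 # weight update
    return w

w = train_one_weight(x=1.0, target=3.0)
print(round(w, 3))  # approaches 3.0
```

This single-weight case needs only school algebra; full backpropagation is the same idea chained through many layers via partial derivatives, which is why the real algorithm waits for later mathematics.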

All this can make children mature.

Children must be restricted to using children’s servers for homework assignments given by school teachers. They can, if necessary, use the school network VPN to access information while doing some missed classwork from home.

School teachers teaching AI must be well-versed in safe and unsafe AI for kids.

A neural network for gene detection, for example, can be considered safe AI.

While other AI, such as Grok, can be considered unsafe AI for children.

More on it.

Thank you for reading.

Subscribe for updates.

Teaching children why AI is not allowed to them. Right AI versus the Wrong AI.

Note: The author does not say that children should be allowed the internet or AI. No, the author is saying we should tell children why they are not allowed to use the internet and AI, and what right AI and wrong AI are, so that if they accidentally see an adult’s phone, they know it’s not meant for them. Children must not use AI or the internet, as most governments around the world are mulling, but at the same time, we can’t deny teenagers knowledge of what the AI world is, which should exist in children’s textbooks and on private school servers meant only for children.

I have covered many topics, including good AI versus bad AI. Today, I discuss right AI versus wrong AI. Bad AI may not be wrong; it’s just not good for us, such as addictive AI, which is bad but not wrong. Wrong AI, by contrast, is inherently and predominantly incorrect, meaning it is not just bad, it is also not right. Meanwhile, right AI is not necessarily good AI; right AI can be ethically right but still bad, as said above, since it can be a habit-former. Hence the different tone of this article: right AI versus wrong AI.

So why not teach children what right AI is and what is not right AI, even if we do not allow children and teenagers to use AI and the internet? We must theoretically teach children what AI is, what right AI is, and what wrong AI is! And why are children and teenagers blocked from using the internet?

Children must not enter college without knowing what AI is!

A server for children alone, for educational purposes, with all kinds of restricted features?

Children must not enter college before knowing why they were taken off the internet. Why can’t they access the internet?

Children must be exposed to a child-safe or teenage-safe version of the internet to teach them what the internet is and what AI is; however, this server is only for teaching purposes and for exposure to children as they prepare to enter adult life.

Whatever the case, children should not be allowed to use the internet, and AI is meant for those 18 and older.

We need to make them understand concepts, why, and what!

An educational setup of servers must be there for children. Only for educational purposes!

It’s because of the good AI and the bad AI! And because certain AIs are wrong AIs and not all AIs are right AIs.

Right AI must be preached, and not all AI is inherently right; each can have some good features and some bad parameters. But a wrong AI can similarly be called a bad AI, as it can be bad for health, bad for the eyes, bad for the brain, or bad for society on a large scale.

A wrong AI can be a wrong model.

A wrong AI can be an incorrect recommendation system.

While bad AI can be a correct recommendation system recommending a wrong product; for example, focusing on certain groups of products and taking sides in political favouritism is the wrong thing.

We can loosely map right AI and wrong AI to good AI and bad AI, respectively.

It’s time we think about the right AI, not just good AI.

Good AI does all things that are ethically good, but at times it may do some wrong things, such as recommending the wrong product to the right customer.

While a bad AI may do the right things much of the time; for example, a bad AI that is addictive in nature always shows the right ads at the right times.

A bad AI can be like an LLM that produces hallucinated responses.

While a wrong AI can be like Grok, which was not bad, but started doing wrong things to children’s photographs.

So, we need to understand that just good AI is not enough; we need the right AI as well.

Right AI must be taught to people, just like right gaze and right touch are taught.

We must teach theoretical knowledge to children. As they enter the 18+ age group, they are exposed to all these things, which should not surprise them.

Once again, the author does not say that children should be allowed the internet or AI. No, the author is saying we should tell children why they are not allowed to use the internet and AI, and what right AI and wrong AI are, so that if they accidentally see an adult’s phone, they know it’s not meant for them. Children must not use AI and the internet, as most governments around the world are mulling, but at the same time, we can’t deny teenagers access to the theory of the AI world, which is available on private school servers meant only for children.

Unitree Robotics: a performance you can’t forget

The almost 5-minute performance by Chinese robots was just so well coordinated, well organized, well choreographed, and well worked out.

Here, it is on YouTube, https://www.youtube.com/watch?v=R40IDdAkRZM

Hangzhou Yushu Technology Co., Ltd., doing business as Unitree Robotics, is a Chinese robotics company based in Hangzhou, China.

The dance by the robots showcases many hidden capabilities of these trained robots. But why has this not happened elsewhere on planet Earth yet? Is China leading robotics? Or is it just a coordinated dance?

It’s flexible

It’s dynamic

It’s agile

All this at China’s Spring Festival gala. Well, the Chinese Spring Festival holds special sentimental value for Chinese people; this was certainly a surprise for the world, not just for China.

The scale of use has increased!

Tens of robots have performed in coordination to brilliance.

What must the world see?

This can be used in the right sense or the wrong way!

Imagine Chinese military robots in a combat role like this. Scary!

These robots were not copying any human; they were dancing with children. The boys dancing with the robots were in sync with them. None of the boys got hurt, even though the robots were holding long sticks in their arms. How many boys may have been hurt in training, no one knows.

The precision.

This can be used in the wrong ways as well.

China can use this tactic in the wrong way, say, policing and warfare.

However, why should we even give them a hint of what is wrong?

But why on Earth did no one else even try it?

Because no one thinks this way!

No one on Earth thinks that they need robots to dance with schoolboys!

Right, then, so no one tried.

It’s not impossible to do, but why do it? Give one good reason!

No one thinks we should do a dangerous dance with schoolboys?

No, it’s not really a full dance; it’s kung fu mixed with gymnastics and exercises!

How many schoolboys must have practised?

How much trust the schoolboys had in the robots; this was a tough one for little kids!

So, no one thought we should do kung fu and dance at this speed, so no one in the world made it! Why did China do it? China always likes to do things first!

But what would these robots do now, with super-flexible bodies and such good coordination?

Would China give them the merit of participation?

What are the future works of Unitree in this regard?

Would Unitree patent its processes and prevent others from copying them?

Where else can such a thing be used? We all know where!

So let’s catch up? Or would it be a waste of time to do it?

What would we get if we replicated Unitree’s approach?

Can we directly focus on what is needed?

If China is doing this level of work, it is surely developing more of the same in the police and military, no doubt about it!

However, it can be used for good in paramedics, mining, relief, and rescue missions, among others.

So, scale up! Follow up! Don’t be left behind! Get on the boat!