Elon Musk’s controversial new device: here is why! Much more than the AI Doc’s Apocaloptimist!

#ai #future #futurist #technology

AI apocaloptimist — This device is the real danger.

It can hack humans.

What I am referring to is a technology that can hack humans, and it’s not AI. It’s not the AGI you are all fighting over.

Do you want technology to write your social media posts instead of you doing it? And no, I don’t mean AI!

Do you want technology to reply to your emails instead of you doing it? And no, I don’t mean AI!

Do you want technology to feed you fake news and make you believe things you never wanted to believe? When we see fake news, we can use our brains to detect whether it’s fake, but not if this technology takes over our brains!

Elon Musk’s new device is being called a revolution in the telecommunications industry.

This device can take orders from your brain to accomplish tasks. That brain data can be collected, patterns can be identified, and those patterns can be used in harmful ways to hack mankind. We are free humans created by God, not robots in the hands of technology.

Why allow a device to decide what you want to see, hear, or write on the internet, let alone on social media?

Have you watched #THE AI DOC: OR HOW I BECAME AN APOCALOPTIMIST? This documentary is nothing compared to the new device Elon Musk wants to spread around the world.

AI can be set right, but this can’t be.

Humans can be hacked through this way of life. We can open our car doors on our own; we don’t need our minds to direct a punch card in our pockets to open them. Are we that lazy, that we would rent out our brain waves?

We are human; we do have some spiritual content within us, and we are not robots that need to be tied to such technology.

Ok, you want free internet all over the world. They can give us the internet, but we want our own devices. We want to operate our devices with our hands, not leak our thinking patterns to a server. We don’t want to be hacked!

Except for the disabled, to whom this device can be a help. However, AI can help them too.

Why go this far?

Why rent your brain to a private company if you are not physically marginalised?

It can be understood from the video here that it would use your brain waves to take commands.

Elon Musk changes the game again with a device that works anywhere on Earth.

Do you want technology to write your social media?

Do you want technology to reply to your emails?

This device is far ahead of the #AI_DOCs Apocaloptimist scenario.

This device can provide internet access, so why not ask it to act as a hotspot for your mobile devices?

It may cost around $800.

Yes, we need connections in the mountains, in the deserts, and at sea, but we don’t want them to operate on brain waves, or without our typing or speaking, unless someone is disabled and cannot type or speak.

Give us internet connections if you can. But we don’t want an internet connection that operates through our brains!

In either case, we don’t want our thoughts to run our phones, reply to our messages, and auto-run our emails.

It’s a controversial experiment on mankind. First test it on other species. Let’s see what monkeys want from AI. Let’s test it on chimpanzees. Let’s see if a chimpanzee can get the banana inside the car by opening the door with its thoughts. Could we then control these chimpanzees with our orders? Would the animals that were tested become our pets? Would these chimpanzees follow our orders from then on?

What are the side effects on thinking then?

Would humans ever be able to innovate then?

Is it not a form of human hacking?

For the disabled, solutions are coming. Have faith and trust; you can use these devices within limits and switch them off when not needed.

Don’t allow your brain to be mapped.

Be a free human as God has created us to be.

It’s more dangerous than AI as in “THE AI DOC: OR HOW I BECAME AN APOCALOPTIMIST.”

AI can be controlled, but once your brainwaves are lost to tech like this, humans can be hacked.

It’s more than surveillance.

Once patterns of human thought collected by these devices are learned by AI, humans can be commanded.

Forget the recommendation systems that you say are collecting data; all that is a small fight compared to this.

AI apocaloptimist — AI is not that dangerous; it can be corrected, but this technology can be the real danger.

In short, give us the internet, and we will figure out how we want to open our car doors with thought or with physical movement.

However, once again, this is good for the disabled, and new solutions are coming. Just give time.

Reference

[1] Elon Musk changes the game again with a device that works anywhere on Earth.

Support me in this.

Thank you for reading.

Subscribe for updates.

Time to limit Robotics from using AGI and Advanced Conversational AI

#ai #future #futurist

Summary: We need limitations on robotics, so we need to put restrictions on how robots can use would-be “AGI”. Yes, limitations on robotics and its interaction with would-be “AGI”. If we stop, would China stop too?

We need work to be done.

At the same time, we want equality at work. Some jobs are not considered equal, and some jobs no one wants to do. Everyone wants equality at work.

Everyone wants dignity at work.

Let’s keep conversational AI separate. In subsequent write-ups, I shall discuss what, how and why conversational AI can be kept separate.

We need robots for locomotive work.

We need robots for mechanical work.

We need robots for work not meant for humans.

Robots are just machines. Machines that require extensive training to perform their scope of work.

But the scope is wide.

Still, we need limitations on robotics, so we need to put restrictions on how it can use AGI. Yes, limitations on robotics and its interaction with AGI.

Robots exist in contrast to chatbots.

Unlike a chatbot, a robot has an aim.

We need each robot to have a specific aim and a name mentioning its aim.

Say, “Grasscutting Robot Robu21” could be the name of a community garden robot.

We can live without AGI chatbots, but not without the physical work that requires displacement, which humans can’t always do. This in no way means there are no human jobs; no, it just means the right to do dignified work.

So let’s separate would-be AGI and advanced conversational AI from robotics.

Robots do need conversational AI to understand what humans want.

That means we don’t want robots for chats; we want robots for work.

Either bring groceries from the store or fix that garden roller. Or watch and inspect a meteor shower. Specialized robots for specialized tasks.

We don’t need sophisticated talking machines.

We need a robot that can understand what we say, without extra chats.

We want robots that are well-behaved.

We want robots that can do work.

And they can be box robots or humanoid robots. It depends on what people want, not what I want!

What it means is we need robots in the community, say, for cleaning roads of spring leaves and clearing snow off the highways on a blizzard day.

We need robots for so many things.

But we need to separate AGI-based AI, which some call AI God, from essential robotics.

We are in short supply of workers in many places, including Germany.

At the same time, we need rules, guardrails, and to-dos for robots.

We don’t want a robot that punches a human child or even an adult. So, wait, who produced that robot? We need nice robots! Robot manufacturers, please note this!

We want nice robots that can work and also entertain the world as they clean up the roads after fallen trees.

As we progress into the future, we will need some robots for public services. For example, if we had some robots, airports wouldn’t be seeing long queues. This does not mean we need to take jobs from humans. No, it’s just backup.

Robots are just backups.

As soon as the climate is back, your jobs are back, and the robots go back into the wooden boxes you made for them.

A climate calamity comes in, and we call in all community robots to help. And they would be taught to work this way.

Also, the conversational AI is going too far; we don’t want all aspects of it to be robotic for now. Do we?

So let’s separate the displacement work robots shall do from AGI-based AI.

However, some people would like to have both; for example, Albania appointed Diella, a robotic minister who does both, but that is just would-be AGI with a body! Yes!

This article does not mean AGI is not essential; it has its uses, so be optimistic. The people handling it will know what responsibility they hold in their hands.

If we stop, would China stop too? Did you see China’s latest marvel in robotics, the dances their robots do? Their sword dances? Can you compete with China?

Time to think!

Thank you for reading.

Subscribe for updates.

Regards,

….

Should Robots be on Facebook? Insta? Updates and Entertainment from Robots!

#ai #future #futurist

Should robots be on Facebook in the future?

Robots shall always be owned by a human owner. But do we want them to be on Facebook? If you are a robot in a factory, get to your owner if you want to read this article.

Would every entry of a robot that is guarded and marked by blockchains appear on Facebook as well?

Would these account entries be made by the robots themselves when registering on blockchains, or would their owners do it? Who would own such smart robots? Would they be a threat to mankind? That is something we all need to think about together, but for sure, robots would then start to compete for entertainment.

They say, who uses Facebook these days?

So, maybe on TikTok!

Some really funny posts from your robots?

Robots are made to do your work, which other people can’t do for various reasons!

So can your robot entertain you and the world, too?

People like to share their thoughts, and robots shall be part of their lives in the future!

Can your robot take part in TikTok competitions for the most impressive robot?

Or something like the most entertaining robot!

Wait, AI is banned for children!

So is the internet, so your children can’t see how good your robot is on Facebook!

But wait, your robot can be a genius too!

Entertaining as well as functional!

Serious at work, sincere in completing tasks, and then enormously entertaining in acts.

But who owns the FB or TikTok accounts of a robot?

Who will upload these?

Well, what would be the category of the robot on its FB profile?

Maybe a new entry in the gender category? Male/Female/Other/Robot?

But know well, robotics is not at that stage as of now!

Robotics is nascent now!

So robots, as of now, can’t handle their own FB accounts! And they should not, as of now!

So, no robot is entertaining you on FB or TikTok! None for a few years from now.

This is a futuristic view!

Yes, one day robots may be able to understand social media and operate on it!

One day, they may take your pics on their screens.

One day, they may take selfies with you.

And one fine day, the First Robot in the world may post its first post on FB!

That day, we will say, robots will entertain the world, apart from just doing their jobs!

But the question remains open! Should robots be on social media?

Would being on social media distract robots? No, they are meant to learn more and do more!

Robots can’t be distracted in the future. As they learn to do more, they are there to automate some work in places like, say, Germany, which is facing a shortage of people to work.

At the same time, entertainment would be an added advantage.

Can we make friends with other people’s robots? Well, what do you think of this: making friends with a neighbour’s robot? “No, are you luring my robot to your home? Why don’t you be happy with your own robot, unless I want to sell mine?” That would be my reply.

Can we allow robots to be friends with each other? Well, no; we don’t want a world owned by robots, we want a world owned by humans. So we should disable robots from becoming friends with other robots, as a human safety measure.

We want a world of harmony in which robots do work that humans can’t do or shouldn’t do.

We would accept entertainment from robots.

But we should not accept friendship between robots.

However, we can accept cooperation among robots to accomplish tough tasks such as climate-related measures, firefighting, or other calamities.

So, the question is still open! Should we allow robots to be on FB? Insta? TikTok?

Who would make an account for them? And who would access it? Should we, or should we not? Vote time!

Thank you for reading.

Subscribe for updates.

Regards,

….

Who are MiddleMen in AI Cycles? Can it be a future profession?

Who are MiddleMen in AI?

The one who has made something out of their intellectual or productive capabilities and is neither at the front end nor at the delivery end of AI products.

The AI cycle can be understood as the progression from the inception of a prompt to the AI’s responses.

The AI can generate a lot from the internet and other databases.

Some AI pays news companies for information, but news companies are not MiddleMen; they are companies, and they can survive through deals they make with AI companies.

But MiddleMen are like you and me, who write, create, paint, or capture photographs that go unnoticed.

AI infrastructure and processes, and the reliability required to scale the systems, are not free.

But that does not mean MiddleMen should go unnoticed and unpaid.

For example, suppose a MiddleMan devises a new term for a generation, say, its “A” characteristics.

Someone prompts it.

The AI looks it up on the internet, summarizes, edits, moderates, and paraphrases it, and provides the output to the user. More users query it. The AI collects and combines points from other MiddleMen and answers the queries.

Should the MiddleMen be paid for this?

In case it becomes popular and trends on AI apps!

If so, how?

Is money enough?

Or is recognition needed?

I think a combination of both!

Why not link the reference to the original articles?

Why not backtrack and locate the original article, which was paraphrased and reconstructed?

MiddleMen make AI too; it’s not just news companies, individuals help it as well!

But how would someone be paid to be a MiddleMan for life?

Some small amount?

Or payment each time a query hits their work?

Or advertisements?

Another example: someone created a new painting of an alien that many people like.

And this painting is used often now.

So, this person gets famous; but what about other versions?

What about other artists?

This is a point to understand.

Not everyone’s work would become so popular, and not all work would be recognised.

We need effective backtracking algorithms to assess the semantic similarity of AI-generated content to its sources. In case the content trends, a reference must be provided.
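To make the backtracking idea concrete, here is a minimal Python sketch. It is only an illustration: a simple bag-of-words cosine similarity stands in for a real embedding model, and the source IDs, texts, and the 0.3 threshold are all invented for the example.

```python
import math
import re
from collections import Counter

def vectorize(text):
    """Turn text into a bag-of-words frequency vector."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine_similarity(a, b):
    """Cosine similarity between two word-frequency vectors."""
    dot = sum(count * b[word] for word, count in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def backtrack_sources(generated, candidates, threshold=0.3):
    """Rank candidate sources by similarity to an AI output; keep only
    those similar enough to deserve a reference."""
    gen_vec = vectorize(generated)
    scored = sorted(((cosine_similarity(gen_vec, vectorize(text)), sid)
                     for sid, text in candidates.items()), reverse=True)
    return [(sid, round(score, 2)) for score, sid in scored if score >= threshold]

sources = {
    "blog-017": "A wildlife photographer captured a white swan with a pink mark on its neck.",
    "news-204": "Stock markets closed higher on Tuesday after the central bank decision.",
}
print(backtrack_sources("AI video of a swan with a pink mark dancing salsa", sources))
```

A production system would compare embeddings rather than raw word counts, but the flow is the same: score every candidate, keep the ones above a threshold, and attach them as references.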

As AI summarizes, it creates bullet points and refines the information to the point that it is unrecognizable.

Would AI buy platforms where people can log in?

Would AI read human writing techniques and write on their own, merging two or more stories? Would the original go unnoticed? Should this be ethically right?

All of this is needed, given that AI is a utility we are now used to.

Data is nothing without utilities for using it.

AI itself has to pay huge bills for its infrastructure and processes to provide you with things in such a concrete form. But that does not mean a MiddleMan should not be paid.

Can being a MiddleMan in AI be a profession of the future?

Towards a better world.

Towards a world that does justice.

AI to pay for other-party content. All internet data has to have a token/ID to end fake data and create individual earnings for intellectual works

Note: This is a duplicate copy. The original is on researchgate here, DOI: 10.13140/RG.2.2.35817.76649

Abstract: Create an AI that can give back to artisans, sculptors, photographers, painters, authors, vloggers, bloggers, question-answer sites, etc. Trace the references behind your AI’s generations, and determine whether the people whose work was used need to be paid. Not all works need to be paid for; for example, if you use someone’s research, you do not need to pay, you just need to mention it in a reference. For others, say a photographer, you need to pay for the work if you earn money from it; if you do not earn money from it, it remains entertainment material, and then nobody needs to pay for it. However, if you do earn from this entertainment, then the contributors need to be paid, if it is not entirely an original AI creation. This solves many problems, such as payment for professions, and it also ends fake news and fake videos. It’s a way to earn money from intellectual faculties, skills, writing, the arts, and other skill sets. AI can be a source of income for a lot of artisans, sculptors, photographers, painters, authors, vloggers, bloggers, question-answer sites, etc.

Introduction

We are living in an age when everything and everyone is on AI. No one questions it, no one thinks beyond it. No one asks where all this is produced from. Who explained it to AI? Whose painting is that beautiful swan in the background taken from? From where is this swan doing its salsa? Who took that photograph of the swan with a pink mark on its beautiful neck? Is no one asking? And what is the future of it? All the money keeps going to AI companies; the people who took these photographs don’t know their rights, as those rights don’t exist yet. Why? AI came so suddenly that no one has had time. People who made things out of it think it all belongs to them, and they are selling the contents as if it were all their own. Much of it is theirs, the AI companies’, but much of it belongs to artisans, sculptors, photographers, painters, authors, vloggers, bloggers, question-answer sites, etc.

Ok, let’s take the example of a swan with a pink mark on its neck. A really good wildlife photographer took this photo; a human user told AI to put a mark on the swan’s neck and make it do a salsa. Well, first comes the photograph. The photographer may have put this photo on the internet, but wait: he can earn from it if it is recognised that AI chose this picture to make the video. AI also referred to a salsa YouTube video to make the swan dance, so does AI need to pay that contributor as well? What if there is more than one beautiful white swan photograph on the internet? Since it’s a dance, we need several poses of the swan: front, side, and more. So we need to pay for various photographs. It depends: if this video was made for private entertainment, then no one needs to ask for money. But if it was posted on YouTube and subscribers liked it so much that it generated income, then it’s a moral duty to pay the wildlife photographers their share of the proceeds. If the swan was used not for entertainment but for a business presentation, say, of some apple orchards, then any deal that leads to profit must compensate the photographer.

It is evident that AI knows what it was told, so some of the money must go to the photographer, the one who recognised that this photograph is of a beautiful swan. This is just an example of how to split the profit made with AI apps. There are now AI-based photo-editing apps, AI-based painting apps, and AI-based modern art-making apps. But all of these need to pay back the painters if they edit the look and feel, or the backgrounds, of some original works they used on the way to developing their own style. Their own style can then become their copyright, but with references to whose work they used in their contribution. And, for each unit, the profit goes up the supply chain.

AI apps, which are feared to end all artistic work, can actually become a source of income in modern times if used in the right way; they can pay back artists and other professionals. Same with music: voice is not the only thing; the background musicians, the music directors, their work can resonate in the same way as we explained for painters and photographers. The only thing is identifying where your work was used to determine the reference person or reference organization, and to credit them. This is not that simple. Our strategy is outlined in the next section, which explains techniques for crediting the original contributors and registering for new credits.

Unique token/id for internet identities

When everything on the internet has a unique token/id, for images and paintings, and document identifiers for text, answers, and more, then things would be simpler, not just for earning for tradespeople but also to free the internet from fake videos and fake news. This is the future; what about today, when there is no ID? We need to plan and move in a way that creates unique tokens, such as NFTs, for photographs, paintings, and artworks; however, we also need tokens and document identifiers for all written text. This can be put on the shoulders of the media provider.

Then comes the notion of a composite ID: an AI-generated video that references other IDs gets a composite ID, and appropriate references must be provided. This shall be a win-win for all. People shall get a source of income, and AI shall thrive as well.
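The token/ID and composite-ID idea can be sketched in a few lines of Python. This is a toy illustration only: the class name, the truncated SHA-256 ID scheme, and the sample owners are assumptions, not a proposal for a real blockchain or NFT system.

```python
import hashlib

class ContentRegistry:
    """Toy registry: every piece of content gets a unique ID derived
    from its bytes; an AI-generated work registers a composite ID that
    keeps references to every source ID it drew on."""

    def __init__(self):
        self.entries = {}  # content_id -> {"owner": ..., "sources": [...]}

    def register(self, data: bytes, owner: str, sources=()):
        content_id = hashlib.sha256(data).hexdigest()[:16]
        self.entries[content_id] = {"owner": owner, "sources": list(sources)}
        return content_id

    def attribution_chain(self, content_id):
        """Backtrack through composite IDs to the original contributors."""
        owners = []
        for src in self.entries.get(content_id, {}).get("sources", []):
            owners.append(self.entries[src]["owner"])
            owners.extend(self.attribution_chain(src))
        return owners

registry = ContentRegistry()
photo = registry.register(b"<swan photo bytes>", owner="wildlife_photographer")
dance = registry.register(b"<salsa video bytes>", owner="salsa_channel")
# The AI-generated video is a composite: its entry carries both source IDs.
video = registry.register(b"<ai swan salsa video>", owner="ai_app_user",
                          sources=[photo, dance])
print(registry.attribution_chain(video))  # both original owners surface
```

The point of the sketch is the `attribution_chain` walk: once every work carries its source IDs, finding everyone who deserves credit or payment is a simple traversal.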

The methodology

Imagine you are a wildlife photographer with IDs and tokens on the internet. AI accesses your photo, completely edits it, adds more features, and creates a new token ID. You get paid as it is bought; won’t you give a share to the original contributor? If so, how much? What are the rules? It’s not simple! Initially it would start with good-faith contributions and ultimately lead to an algorithm that determines each shareholder’s share.
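One hypothetical starting point for such a share-determining algorithm is a plain proportional split. The weights and party names below are invented for illustration; real rules would be negotiated, not hard-coded.

```python
def split_royalties(total_cents, shares):
    """Split a payment among contributors in proportion to their share
    weights, handing any leftover cents to the largest shares first so
    the payouts sum exactly to the total."""
    weight_sum = sum(shares.values())
    payouts = {name: (total_cents * w) // weight_sum for name, w in shares.items()}
    remainder = total_cents - sum(payouts.values())
    for name in sorted(shares, key=shares.get, reverse=True)[:remainder]:
        payouts[name] += 1
    return payouts

# A good-faith split: the AI platform keeps half, and the rest goes to
# the contributors whose works were referenced.
print(split_royalties(1000, {"ai_platform": 50, "photographer": 30, "salsa_channel": 20}))
```

Working in integer cents and distributing the remainder explicitly avoids the rounding leaks that floating-point percentages would introduce.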

The same goes for research papers. AI is proving new things daily; AI is generating new models daily. The thing is, in research, money does not come; references come. So give references to the contributors and hold your head high in research, unless it’s a big project that needs contributions, such as generating sun-like power for all the AI you need.

The uses can include entertainment, research, business, and even analytics, in which we must trace back to find references and attach IDs to any analytics AI produces. We must be precise with the IDs used, because simply reading someone’s blog and summarizing it with AI is not the right approach. Paraphrasing and bulleting by AI do not count as uniqueness; an ID system must be established. Additionally, infringement issues must be addressed. We need to ensure that fake videos are also linked to IDs. For example, if a BVV channel photographer takes a photo of a famous person, the copyright belongs to the BVV channel. If someone uploads this image to their AI app and asks how the person would look fat, an ID is noted by the AI. Then they create a dance video, and the AI notes another ID from which the dance is copied. The AI generates the video with both IDs attached. If the user keeps it for private entertainment, that’s fine, but publishing it requires permission from the copyright holder. With the above two IDs, this stops fake news and links payments. How can it end fake news if the BVV channel agrees to allow the use of the photograph for money? Here comes ethics: we must set ethical guidelines for AI app software that decide whether such material can be given a new token/ID. If the guidelines are not met, no new token is given, no new DOI is given, and the material cannot be published online. This stops fake news and fake videos.
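The publication gate described above could look something like the following sketch. The IDs and the permission table are made up; the only point is that a composite work stays unpublishable, and gets no new token/DOI, until every referenced ID has granted permission.

```python
def can_publish(work, permissions):
    """A derived work may be published only if every source ID it
    references has granted permission; otherwise no new token/DOI is
    issued and the work stays private."""
    missing = [src for src in work["sources"] if not permissions.get(src, False)]
    if missing:
        return False, f"blocked: no permission for {missing}"
    return True, "new token/DOI may be issued"

derived_video = {"sources": ["BVV-photo-881", "dance-clip-042"]}
permissions = {"BVV-photo-881": False, "dance-clip-042": True}
print(can_publish(derived_video, permissions))  # blocked by the BVV channel
permissions["BVV-photo-881"] = True             # BVV agrees, e.g. for a fee
print(can_publish(derived_video, permissions))
```

In practice the permission table would live with the registry that issues the IDs, so the same lookup that powers attribution also powers the publish/block decision.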

Conclusion & Future Work

We need to understand that AI mostly does not own anything but the algorithm; all the data belongs to human owners. It can access paid content in many cases. AI processes this data and creates new things. We must ask, and we must have the right to ask, AI to give back to artisans, sculptors, photographers, painters, authors, vloggers, bloggers, question-and-answer sites, etc., some of the profit AI makes. The AI algorithm is based on associative memory and an attention mechanism. For an AI to create a swan dancing salsa, it needs images of swan poses from all directions and some salsa videos from YouTube. Hence, we need to ask AI companies to pay back to artisans, sculptors, photographers, painters, authors, vloggers, bloggers, question-and-answer sites, etc., some of the profits they make. The AI companies must give references; they must backtrack, if needed, to the original pictures they used and the original dance steps they trained the algorithm on. These references must be produced, and people can be called to claim their dues. In the future, we can implement a token system or an ID/document-identifier system that routes payments directly into artisans’ or creators’ accounts. This can produce a good income for many people. However, what would people do now, people such as artisans, sculptors, photographers, painters, authors, vloggers, bloggers, question-answer sites, etc., given that so much is already on the internet? Well, we still have worlds to explore, we still have telescopes to click, we still have love stories to write, and we still have hope to carry on. Towards a world that acknowledges art, intellectual abilities, and more.

References

[1] (PDF) AI to pay for other-party content. All internet data has to have a token/ID to end fake data and create earnings for intellectual works

Should we pay high to use Generative AI?

Note: This is a duplicate copy. The original is on Research Gate with DOI: 10.13140/RG.2.2.36643.52007

Abstract: In this paper, we discuss why gen AI is costly and why people should not be charged if they are just querying a chatbot. Note: this article is about general AI, not other AI, such as that in washing machines or specialised medical AI, which needs more funding to become specialised. The algorithm used by AI developers is a black box; they can’t explain the outputs. All they can charge for is energy. So, when energy is scarce, should we even use AI? Can we rely on the internet or the World Wide Web instead? No, it’s not easy to go back. But then, AI needs to subsequently reduce its subscription charges as energy prices come down. AI is mostly used as a know-it-all portal or an entertainment engine for adults as well as youngsters. We should know that AI relies on public data, not on data from AI companies. Much of the data is copyrighted, much is charity-based or paid, yet AI companies continue to charge as middlemen between our fellow people’s data on the internet and users. And are we losing our own creativity to AI? The AI companies must ask only for the energy charges, their salaries, and subsequent development costs, not the excess. AI has changed the world, but this money belongs to inceptors, contributors, and unknown participants living today who don’t even realize they’ve indirectly contributed to an AI answer for someone in need. For example, each answer on a medical website can help with an SOS treatment for a woman delivering a baby in a far-off village with no medical facilities. This is not safe, but it gives you the idea that AI is useful, and we can’t deny anyone the right to AI. This paper focuses on the “Right to AI” as free AI.

Introduction

Why should we give money to use AI? To answer this, let us understand what current-day gen AI is. Gen AI, or generative AI, uses existing data in the form of text, audio, photographs, paintings, and videos, and applies AI algorithms to it to produce concise outputs for you, the users. So what are the AI companies charging for? The data, text, audio, photographs, paintings, and videos are available on the internet; most of it is someone’s hard work, someone’s copyright, or someone’s creation. Much of it, like Wikipedia, is the result of our friends’ or ancestors’ hard work. The AI algorithm looks up this data, at times summarizes it, at times edits it, and often manipulates it and presents it to users. Even the owners of AI companies don’t know what the magic is or how it works; many call it a black box. When something goes wrong, they are themselves clueless. You ask questions, and the black box answers them. How, no one knows. Well, it’s not rocket science: they use algorithms such as associative memory and attention models on the text, audio, photograph, painting, and video data available on the internet.

So, when all the data is taken from the internet, why are they charging such a high subscription fee? Now people have become too lazy to go online and search for the things they need themselves; instead, they rely on AI. Whatever AI tells them, they believe. There are so many copyrighted works that end up being edited by AI, and so many artworks that have lost their individuality as they are manipulated. For example, with AI, someone can create a “Mona Lisa in a pink gown or a long covered gown”. This ends the original artwork’s uniqueness. Does art end here? Unless the owner uses it as they like. Art is a manifestation of someone’s thinking, and thinking is a unique gift from God; not everyone can think the same way, nor is everyone mediocre in their field. One must respect the gift of mind given to someone. With AI, traditional art seems to fade away. We must protect the rights of art workers. Not only is art in deep trouble, but creative writing is too: people still write, and AI engines summarize their writing and produce it as output.

In the next section, we describe various aspects of gen AI and how it impacts people’s lives, what it is made of, and how it is used. On top of that, we do not say that other kinds of AI are manipulative in nature; for example, the AI your washing machine uses is someone’s own inception, and it is neither gen AI nor a black box. So this paper does not focus on such AI. However, there are generative AI systems that read research papers and generate new discoveries; the charges incurred must be paid to the service that provides the research papers to the AI to develop a new algorithm. We do not say gen AI is of no use; it does help in creating valid new research. But why ask such a high subscription fee, unless the gen AI reads research papers that have a cost associated with them? In that case, a subscription fee is a valid thing, unless all research papers become available for free for common people to read. In the latter case, the world would head fast towards a new revolution. So, let us see these issues in depth.

Why is AI subscription not free to use for all?

AI subscriptions can vary by the type of AI in use; typically, we are talking about generative AI. Other kinds of AI, such as fuzzy-logic AI or medical bots, are not general AI and are therefore out of scope for the current paper. In this paper, we discuss why generative AI is costly and why people should not be charged heavily for simply querying a chatbot. The data, text, audio, photographs, paintings, and videos are available on the internet; most of it is someone’s hard work, someone’s copyright, or someone’s creation. Let’s first see these in detail.

The following are available online, but are they free to use?

i. Text: A lot of text is available on the internet. This text can come from contributory sources, such as Wikipedia or paid subscription sites; encyclopaedias, blogging sites, novels, books by writers, and databases are key inputs to AI. The AI uses these free or paid materials for training.

ii. Audio: A lot of freely available voices, text, and information is available on the World Wide Web. These voices can be used to generate new voices and new information.

iii. Photographs: Photographs of you and your near and dear ones, and photographs of your pets, are all available on social media sites. These can be used by AI to do many things. For example, AI can read your media and suggest two-year-old pictures to you. Many professional photographs are available online, but they are not free; still, they can be fed into an AI model. These form the basis of new photographs: someone’s surreal intelligence can be put to use to create a new surreal image by combining a real image with an imaginary one, and the money goes to an AI company. Is that right? Not much credit is given to the original artist of the artwork.

iv. Paintings: Many artists used to create genuine art, but now art is just a click away. Trained on freely available images and combined with tagged photos, we can generate countless paintings in seconds. We should bring art back as a respected profession, but ensure AI can’t access the artwork until it explains its reasoning! Art reflects the mind and deserves recognition. AI-generated art is also valuable, but always pay the original artists from subscription fees. For instance, if someone merges the Mona Lisa with a white horse, the museum housing the Mona Lisa should be properly compensated if subscription fees are involved or if there’s profit. Otherwise, it can be used freely since you’re not selling it. Any sale or transaction involving such art should credit the rightful owners.

v. Videos: Simultaneous training on audio and video can be used to generate videos of any person speaking in any language with perfect lip-sync. These videos serve various purposes and can be combined with entertainment videos to create chaos or awe. For example, AI could produce a video of someone building a house on top of the Alps, but the original creators do not benefit, even when the people who know how to use AI to merge old videos into new ones earn from it. Such videos can go viral, earning those creators money not just from subscriptions but also from ads.

vi. Research Work: Research AI shouldn't be too expensive; we all know that reading research papers costs a lot. AI can then create new algorithms or solve previously unsolvable problems. That's good, but the cost is high because the AI company has to pay subscription fees to journals. So, for now, we can focus on the five main points above. We need AI to help in research, both to push new frontiers and to eliminate fake research that adds nothing substantial to the research community.

Who wrote all of Wikipedia? Who has written all these texts? Humans, our friends, our ancestors! Who has sung all those songs? Who has added all those videos on YouTube from which generative AI trains? Who has written all the research work online from which AI trains to create new research? Not the AI company! So why is the AI company charging for it? It merely provides an interface between the material written online and an inference engine that combines the text our fellow people have written and draws inferences. The inference can be of the following types:

1. Summarization

The AI reads content from the internet or from a document you upload. Its summarization agent condenses the articles and presents articulate information to the user. For example, I once asked Copilot to summarize the content of a Wikipedia article I provided.

The AI tools structure the information and present it to the user in a very attractive form, making the user more inclined to read and believe it. Summarization is a tool we can pay a subscription fee for. But the question is: why such a high fee, collected from millions of people around the world? The charges need to come down.
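To make the summarization step concrete, here is a minimal sketch of what an extractive summarization agent does under the hood. This is an illustration only, using a simple word-frequency score; commercial chatbots use far larger neural models, and the `summarize` function and sample text here are hypothetical.

```python
from collections import Counter
import re

def summarize(text: str, max_sentences: int = 2) -> str:
    """Toy extractive summarizer: score sentences by average word
    frequency, then keep the top-scoring ones in original order."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'\w+', text.lower()))

    def score(s: str) -> float:
        ws = re.findall(r'\w+', s.lower())
        return sum(freq[w] for w in ws) / max(len(ws), 1)

    top = sorted(sentences, key=score, reverse=True)[:max_sentences]
    # Re-emit the chosen sentences in their original order.
    return ' '.join(s for s in sentences if s in top)

article = ("AI reads text from the internet. AI summarizes that text. "
           "Users pay subscription fees. The fees are high.")
print(summarize(article))
```

The point of the sketch is only that the "summary" is assembled entirely from sentences our fellow people already wrote; the model adds selection, not new knowledge.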

2. Retrieval

The AI performs retrieval when you don't specify the source content; it searches the internet for it on its own, then summarizes and presents the output. In effect, it is a search engine plus summarization.
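The retrieval step can be sketched in the same spirit: pick the stored document that overlaps most with the query, then hand it to a summarizer. The corpus and scoring below are made-up toys for illustration; a real system uses a full search index, not word overlap.

```python
import re

def retrieve(query: str, corpus: list) -> str:
    """Toy retrieval: return the document sharing the most words with the query."""
    q = set(re.findall(r'\w+', query.lower()))

    def overlap(doc: str) -> int:
        # Count distinct query words that also appear in the document.
        return len(q & set(re.findall(r'\w+', doc.lower())))

    return max(corpus, key=overlap)

corpus = [
    "The Alps are a mountain range in Europe.",
    "Subscription fees for chatbots are rising.",
    "Cows are often featured in AI-generated videos.",
]
print(retrieve("why are chatbot subscription fees so high", corpus))
```

Again, everything the engine returns was written by someone else; the AI company only ranks and repackages it.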

3. Generation

The AI generates images and videos based on its learning, as explained above, using attention and associative memories to create new content. This can include changing a person's face in a video to a cow's face or swapping a man's speech for a woman's. Such generation seems fun, and people are tempted to subscribe to the entertainment. However, we must remember that the original video is ignored; no one sees or recognizes it. All people see is the new video showing a cow giving an AI lecture. This needs to be addressed: if money isn't going to the contributors, why is it going to the AI entrepreneurs?

4. Appending

This follows from the above: appending AI-generated content to a video, photograph, or text is not costly; the only real cost is energy. The algorithm used by AI developers is a black box; they can't explain the outputs. All they can fairly charge for is energy. So, when energy is scarce, should we even use AI? Can we go back to relying on the plain internet or the World Wide Web? No, it's not easy to go back. But then AI subscription charges need to fall as energy prices come down.

5. Editing

Editing tools are part of everyone's life now, but they learned English by reading open-source English books in the training phase. Is the money they ask for more than the power they consume, or is it the other way around?

6. Merging

Merging audio, video, and text into one or more forms is common nowadays. AI has learned from videos of all kinds from around the world. It knows more than any human ever will and can create anything you ask for, having ingested all the videos, texts, audio, photographs, and paintings. AI can merge these in various ways; for example, it knows what a cow looks like and can put a cow's face on your body, making it speak in any language you specify. This process is merging. Do we want it? It's fun and attractive at first, but we must remember that we are all mature adults; we need to grow up. Having fun is fine, but these services should be free, apart from energy costs, given that the material being merged is freely available on the internet, including copyrighted works. The models are built, and their initial development has already been paid for through trial and error. So why are the companies charging for something that has already been learned and created?

Yes, we all want high-quality output on our screens, and we believe that using top-notch apps ensures we get high-quality solutions. Almost all AI apps I tested produce similar kinds of results because the models are set up the same way and use similar techniques. Do we really need to pay such high subscription fees?

There are many areas where AI is useful, such as medicine, with AI doctors that need to learn more and become more accurate for people who cannot afford a doctor. We don’t mind paying fees to an AI doctor app as long as it is trusted; in places where healthcare is unavailable, we can rely on an AI doctor. However, this AI doctor must be a specialized app focusing solely on AI and medicine, not a generic app that scans the entire internet and mixes information.

Conclusion

We all need help with creating, editing, writing, and analysing. But are we losing our own creativity to AI? AI has demonstrated that after pre-training on all the information on the internet, it can answer any question within seconds. It can create any video you can think of in seconds. But the creators of these AI systems can't explain what it is; they call it a black box based on associative memory or attention. They often can't predict the output themselves, which can mislead many into wrongdoing. Their AI models read all kinds of books, texts, videos, audio, paintings, and photographs online. Most of this material isn't owned by them. Still, they use it to create new content, which they sell or charge a subscription fee to generate. When this material belonged to our ancestors, colleagues, or coworkers, and we are using it for knowledge and entertainment, why should we pay for it? If we do business with it or earn from it, we should pay the original contributor and the energy costs involved, rather than the AI companies, which act as middlemen between creators and users. This remains an open topic for discussion.

Reference

[1] Should we pay high to use Gen AI? (PDF)

Future challenges in fully entrusting AI and Robotics?

Here is my latest research note; I am sharing a complete copy of it with you. Do read it.

Reference: Future challenges in fully entrusting AI and Robotics? (PDF)

Note: The original article is on ResearchGate. This is a duplicate copy. DOI: 10.13140/RG.2.2.21249.11367

Abstract: This note emphasizes the challenges humans would face if they fully entrusted AI and robotics. Robotics is not fully here yet, but we already rely on AI heavily. AI was fine as long as it stayed in washing machines; now AI fridges have arrived. These helpers are fine. But what about unconditional reliance on AI for anything we think of? This is not limited to children. Do you know that, on average, adults are relying on AI chatbots to learn about the world? There are better ways to know the world, but adults are asking AI for all of it. That is fine as long as AI is unbiased. What if AI becomes biased, in whatever way the person who controls it wants? How much power does the head of a popular AI company have? A lot! Does he or she deserve that power? Did the owner of the AI company work hard to earn the right to show adults the way? Being a software expert does not mean a person can guide millions, if not billions, of people on the environment, politics, or social order, to mention a few. This is not just about question answering; it is also about recommendation systems, advertisements, content links, and references provided by chatbots. What about copyrighted artwork and other copyrighted materials? Are we on the right track? No. Either we build AI that learns on its own (we are far from that), without humans feeding bias into it, or we make competition among AI engines equal. What about the massive amount of energy we use? What about the theft of adults' data? What role can adults play, and how can the AI industry allow them to play it? How do we tell adult users that AI should not transfer wrong knowledge to them, just as good touch and bad touch are taught to little kids?

Keywords: AI, Robotics, Ethics, Trust

Introduction

AI is gradually becoming part of our lives, from washing machines to fridges to search engines; it is leaving no ground untouched. When AI spreads its wings fully, we will need to give something in return: trust. Are we ready to trust AI? The answer is no, not fully. For some applications we can say yes, for example, a washing machine. We have been using AI washing machines for a long time; we trust them, they are bias-free, and they are safe from hackers, too. But what about conversational AI? Why do so many adults log in daily to conversational AI? How can it steal their data? How can it change their opinions? How can it mould adult personalities and play with their psychology? This is not happening now, but once AI reaches a tipping point, the next step would be to call on people to take its side, advertise to earn money, and sway people toward one side, the right wing or the left wing. Can we control it? Yes, we can, but it will require a lot of effort, trust, and the right people.

We all look to conversational AI for answers to our questions, indeed for quick answers to our intriguing questions. Can we trust AI's answers? The same goes for robotics: we may soon be giving our robots tasks, such as picking up a parcel of weekly groceries from a shopping complex. Would the robot do it right? Can we trust AI and robotics? This is all belief, belief in our own making, belief in what we have made! To the best of my knowledge, AI relies on pre-trained data; bias comes from there, and that bias is now being removed. Can incorrect information be injected into AI? If so, humanity could be at risk. We must consider this scenario as well.

AI and robotics are great masterpieces of human civilization. They were not made by one or two people; a lot of work has gone into them. AI and robotics read a great many things, both copyrighted and uncopyrighted, and infer from them. AI learns from others' conversations and knows what to conclude based on ratings and feedback. AI is not just an algorithm; it is completed by the learning data used to make it.

In this paper, we discuss how to entrust AI, the challenges humans face when doing so, and how to overcome them. These challenges do not apply to all AI: some AI is safe, while some is unsafe; some AI is right, while some is wrong; some AI is good, while some is bad. Developing the maturity to understand these distinctions is essential to moving forward.

Entrusting AI and Current Challenges Humans Face with AI

Can we trust AI? Can we trust robotics? More to the point, can we entrust ourselves to AI? There is still a hiccup with the idea of unconditional trust: there is no unconditional trust in AI; just as humans can make mistakes, AI can too. It is always good to recheck the answers and solutions provided by AI. There are many areas where AI poses trust challenges, some of which are as follows:

  1. Bias. AI has learned from data, and past data have had bias issues, such as gender or racial bias. We need to check AI-generated outputs for bias, as a wrong answer can hurt anyone. Bias can also be man-made, injected deliberately to serve specific goals; that is the mark of a bad AI.
  2. Child care. AI today does not consider whether its output is read and consumed by a child; child-safety essentials must be built into AI. Alternatively, AI must detect children and prevent them from receiving the replies or solutions that AI can generate.
  3. Action-based AI. These are AI models that perform actions, as opposed to conversational AI, which only provides answers. Examples include a washing machine and a fridge, to mention a few. Here, our trust matters a lot; such AI can only work when humans trust it, otherwise the actions it takes are pre-empted. Robotics, too, belongs in this category as long as it does not have to rely on real-world analytics. Such AI has been helping humans in the right ways; yes, it does take away some mechanical jobs, but it is safe AI.
  4. Privacy. AI systems take shape from our inputs, such as prompts and/or prefixed or suffixed questionnaires. This transfers confidential information to another party. How an AI company uses this information becomes another issue. Would it scrub and delete this information, or would it store it? And if the AI company uses it, would it end up on a third-party platform?
  5. Data resale. Would AI companies be collecting our data to be sold for analysis and analytics?
  6. Surveillance. Would this lead to surveillance? Is AI surveillance safe?
  7. Robot loyalty. Will the robots we use in the future be loyal to us? Will they be hacking-free or hacking-safe? What protections will future robots have?
  8. Impact on psychology. How can AI prompts affect the psychology of people, not just children, who should ideally be barred from using AI? Adults can be made to like a particular product through AI advertisements full of pomp and circumstance. The AI industry can make or break someone's red-carpet career. This means playing not just with people's psychology but also with their opinions. People can be led to believe something simply by being told that those who buy bread also buy butter, so that they form their opinions about butter rather than bread, for example.
  9. Political campaigning. AI can become the next political campaigner, campaigning for the candidate it considers the right one to be prime minister or president of a country. This can happen not just through ads but through related biased content, so bias here is not limited to gender and race; it extends to chosen propaganda. We must be well aware of the potential threat we face if AI goes unchecked.
  10. Owner power. AI can change the world according to the wishes of its owners. There are only a few AI owners at present; some AI chatbot owners have specific objectives, and, given the rivalry among them, it seems they will do anything to win the AI battle. Selling AI should not be a battle, as it is now. It's not just a money game; it is also a power game. Whoever holds the power wins not just money but command over the world. This kind of power is different from mere automation.
  11. Limits on AI. There must be checks on AI. AI power should be non-political: no ads should push a political campaign, no ads should demean another leader, and no content suggested by AI should be biased toward a political party or a global opinion. AI should know its limits. And adults should keep the habit of forming their own opinions: draw your own conclusions, find your own way, make up your own mind. Why turn to AI for everything?
  12. Recommendation systems. Adults are becoming addicted to AI-generated answers, so they stop learning from Wikipedia or other registered, trustworthy sources; instead, AI recommends sources to them. AI is not biased at the present moment, but it can become biased toward driving traffic to another site or platform with ads, serving a specific future agenda. Right now, the only aim AI companies have is to gain popularity and reach more people, but once these aims are met, some companies may try to expand into personal grooming and political interference as well. This can't be ruled out. Recommendation systems should be unbiased, and advertisements must follow some rule: not just pay and run, but be analyzed, vetted, and only then cast on a platform.
  13. Environmental impact. The environmental impact of AI is huge. AI companies should be taxed double on their profits, while small companies should be allowed to grow with a lower environmental tax burden, to produce healthy competition. The environment pays a lot for AI. During the internet boom, people found information on their own and paid for their own electricity. Here, both the user and the supplier (the AI company) need to pay for using elite AI.
  14. Ethics and copyright. Ethics should not revolve only around the right handling of gender and other biases, but also around the use of copyrighted artwork by some of the great artists and some of the naïve artists. Art is a model of thinking and should not go unrecognised. It is often said that AI creates unique art, but we must develop algorithms to determine which artworks were used to make each new 21st-century AI masterpiece. We need to pay the original artist back, if not in money then in acknowledgement. One day, may that artist also receive a cheque for providing the basis of an AI masterpiece.
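Point 14 calls for algorithms that trace which artwork an AI output was derived from. As a rough sketch of one possible building block, the average-hash technique below fingerprints an image so that near-duplicates hash close together; the tiny 2x2 "images" are made-up data, and real provenance systems are far more sophisticated.

```python
def average_hash(pixels):
    """Tiny perceptual hash of a grayscale image (rows of 0-255 values):
    each bit is 1 if the pixel is brighter than the image's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(h1, h2):
    """Number of differing bits between two hashes; small means similar."""
    return sum(a != b for a, b in zip(h1, h2))

original  = [[10, 200], [220, 30]]   # hypothetical source artwork
derived   = [[12, 198], [210, 25]]   # slightly edited copy
unrelated = [[200, 10], [30, 220]]   # different image

# The edited copy hashes much closer to the original than the unrelated image.
print(hamming(average_hash(original), average_hash(derived)))
print(hamming(average_hash(original), average_hash(unrelated)))
```

A registry of such fingerprints could, in principle, flag when an AI output is a near-copy of a known artwork and route credit to the artist.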

There are many more points, but these are enough to show that we are not doing enough to manage the growing AI spectrum. AI must be monitored for the well-being of adults and children alike. Software must be built to determine how an AI-generated output or artwork was made and to credit the originator. At the same time, we should allow AI to help solve scientific problems so we can progress further on our journey to take humanity to the next level of development.

Conclusion

We are not yet at a point where we can entrust AI with all our hopes for a better tomorrow. AI is still in human hands, and humans have been known to bend to either side of the aisle. AI can't be independent of humans yet. It is not yet intelligent enough to take its own course in history or in shaping the future of the world. It can't be unbiased as of now, since it takes commands to learn bias, whether racial, political, or gender-based. The owners of AI could misuse its widespread adoption in the future, and vested interests could exploit it. We must therefore hold tight to the helm of the ship the AI makers are sailing; the ship is not fully built, but it is already riding high on big waves. People don't know where this unwavering trust we place in AI can lead. All we have to say here is this: AI can automate things, it can speed up the discovery of drugs, it can help mathematicians solve problems, but it must not be allowed to play games with the opinions of children and adults. We should live in a free world where adults can understand things and make the right decisions based on their own souls, not on AI recommendation systems or ads driven by vested interests. So AI must display a notice telling adults to think, learn, and decide on their own, not just do what AI told them.

Their Love Lives- Part I. #story #love

Their love lives are a series of stories about different people, whether they found love, the endings, and the mistakes.

Pamy was beautiful, with long, light hair she used to tie into plaits, and a beautiful voice. On top of that, she was intelligent too, sharp enough to find errors in the AI software she used to handle.

This was the company, The AI Moon. AI Moon consisted of talented men and women, with the aim of creating the best AI models in the world and helping mankind in times of need, or even existential threats.

Pamy was one of the few female engineers in the Denver office; most of the women were on the validation teams that test the models for efficiency.

Creg was a dark-complexioned man, single and in his late 30s. Pamy would have been around 26 when they met. Creg was not on Pamy's team. The AI Moon was a huge company by now, with around 1,000 employees in the Denver office alone.

Creg often said “Hi” to Pamy, and then Pamy would invite Creg to her cubicle. The cubicle was huge, with all kinds of sticky notes posted on its walls, and a board where Pamy often used to write her to-do lists.

It was a fine day in the office. Creg used to make excuses to pass by Pamy's cubicle. At that very moment, he said, "Hi, Pamy." Pamy was more than excited and said, "Hi."

Creg said, “Are you busy?”

Pamy was a sharp woman, still single and not ready to mingle. She called Creg in. "Come in, Creg, have a seat," she said, pointing to a seat in her huge cubicle.

“We are testing engineers, with so much work, but we need to chill out too at times,” said Pamy.

“Would you like to have coffee, Pamy?” asked Creg.

“No, Creg, I just want to talk about validation, how you do it so well, you are almost going to be the Engineering Director soon.”

“Oh yeah, I will be, I’ll teach you my tactics, Pamy. Can we have some tea/coffee?”

“No,” said Pamy.

Creg was sad; he understood she didn't want rumors about him and her in the office.

And Pamy thought people wouldn't notice that they were sitting for hours, talking, talking, and talking in Pamy's cubicle. But no one said a word, as Pamy's brother was an influential man in these circles.

Pamy extracted from Creg everything from his birth to his assets.

How mean she was.

She used to have him teach her to write her scripts, even when they were not on the same team.

Creg often worked out of office hours to finish the script Pamy needed to run on servers.

What a good man Creg was.

He even said to her, "I can do anything for you, Pamy, just say yes to me once."

But Pamy wanted something else.

She stopped talking to Creg after being promoted to a senior validation expert at the AI Moon company.

Later news came that she had married an affluent man in an arranged marriage.

But her social media posts were never the same as they had been when she was in talks with Creg.

Creg was still single; he wished Pamy would still come to him. He married years later, when Pamy’s daughter was 5 years old.

She left her job and started her own business, which did not work well.

But for sure, she missed Creg. She missed the freedom of speech she had when she was with Creg. She missed the emotions she used to emote; not every man is the same, not every woman is the same.

And Creg's wife was the luckiest woman in the world to get him; he was a gem that Pamy lost.

Note: The image is AI generated.

Beautification of Machines - A possible future career option

Note: The art is created with AI here.

Do you want robots in your life? Even if not in your personal life, some robots will be needed in work life for things humans can't do. Why not have a friendly robot beautified while it cleans the roads daily? Give it a name, perhaps?

Robot design is a separate, technical discipline, while robotic beautification is different: it means enhancing the appearance and feel of a machine without disrupting its functionality or servicing access.

Note that robotic beautification must not stop or hinder any servicing of the machine.

This can, in the future, give your machines a personalized look.

Won't people be bored of looking at the same iron-clad robots around them, if that future ever happens?

If we do head into that future, we will see robots around us and drones around us, so why not beautify them?

Why not?

Think of a robot with a blue-haired wig?

Or think of a tattoo on the left cheek of your robot?

Why not tattoos on the artificial skin of robots on their thighs?

Giving a robot a wardrobe?

It would be a big business!

A fashion industry for robots!

Giving robots a personalized look!

Here is a futuristic robot with a bold AI-created style.

We all know that many jobs are being replaced by AI.

What would happen when robotics launches in our lives?

One thing is sure: we won't all like having robots that look alike. Will we?

So we would need to design changes to the looks of our machines.

Be it a drone we own, or a robotic dog we own, or maybe a humanoid robot.

We need to differentiate between the machines we have and what our neighbors have.

So there would be the business of covering the machine in masks, which can be taken off.

Artists out there can do art on the machines to make them look not only beautiful but also different.

In the future, people could claim a robot's identity, and a copyright on its look as well.

We are still far from robots reaching your homes. Hence this is futuristic.

But for sure, enumerating future career options is a necessity, as jobs are being taken over by AI automation and soon-to-come robotic automation.

Thank you for reading.

Subscribe for updates.

Futuristic Robotic Theft. How much does your machine know you? How much do you know your machine?

#ai #future #futurist

Note: This is a futuristic scenario; there are no robots on the streets as of now.

How much does your machine know you?

And how much do you know your machine?

Imagine you lost your robot/machine in a mall or while crossing the road to the airport. You took away someone else’s robot/machine, and someone else took away your machine.

When would you find out about it?

How would you recognise it?

Would your machine tell you this?

Or would you be the one to notice?

All machines, such as robotic dogs and drones, look alike; most made by the same company look the same. All robots from a single fleet look alike.

Would you find out, or would you keep giving orders to your robot, only to find that the coffee was too sweet?

Or would you find out when your robot calls you "Boss" instead of your name, as you had set it to?

Then you are in a dilemma about how to find your own robot and return this one to its owner.

This is a futuristic scenario.

Does your robot know who you are?

Does your machine know who you are?

Consider a laptop: it can be identified by the wallpaper you use, the folders on the desktop, the order of the files, and how they are stored.

But a robot you bought from a mall, or a machine such as a robotic dog or a drone that came home with you: how would it know who you are?

Are we training our machines and robots to know their owners?

And do the owners know their robots and machines?

Do we have an option to set a welcome tag as a wallpaper on the robot’s screen?

Do we have voice choices for robots, our personalizations on robots, so that we can instantly know if the robot is stolen or has been changed intentionally or by mistake?

Does the robot have the capability to scan the owner and tell their name?

Does the robot recognise the owner?

In these settings, robots need to perform image recognition of the owner and his or her home.

If we start adding these features now, they will come to robotics in the future.

A robot must recognize the owner’s voice tone, facial expressions, body language, and specific words.

And the robot must not follow commands from someone outside a friendly circle of the owner. This is a must to avoid robotic theft, a reality that can hit us.

But robotic theft can be prevented by implementing simple image and voice recognition systems.
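As a hedged sketch of what such an image or voice recognition check could look like, the snippet below compares a sampled biometric embedding against the enrolled owner's embedding using cosine similarity and a threshold. The vectors, the `is_owner` function, and the threshold value are all hypothetical; real systems derive embeddings from trained face or speaker models.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def is_owner(enrolled, sample, threshold=0.9):
    """Accept a command only if the sampled voice/face embedding
    is close enough to the enrolled owner's embedding."""
    return cosine(enrolled, sample) >= threshold

owner_embedding = [0.9, 0.1, 0.4]   # stored at enrollment (made-up values)
same_person     = [0.88, 0.12, 0.41]
stranger        = [0.1, 0.9, 0.2]

print(is_owner(owner_embedding, same_person))  # should accept
print(is_owner(owner_embedding, stranger))     # should reject
```

The design point is that the robot refuses commands below the similarity threshold, so a thief who walks off with it gets a machine that will not obey.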

At the same time, the owner can change the robot’s screen to make it more visible for quick recognition.

Robots need not all look the same. Why give every robot the same factory look? Why not give each robot in a fleet a distinct look, with different hair colors, heights, and weights?

Other things that can work are asking the robot to recount yesterday's events or to say how much sugar you take in your coffee.

Robots are costly machines, and in the future they will carry a lot of the owner's sensitive data; hence, robotic theft must be avoided.

Machines must also be secured with other kinds of security, such as passwords, fingerprints, and voice, among other security protocols.

Robots must be trained not to take orders from non-owners or from outside the owner's friendly circle.

Robotic theft can cause a lot of problems, one of which is the loss of your personalization data, passwords, and preferences.

Let's keep robotics safe in the future.