Online Health of Mankind – Online Health Trials: to be read by a machine, not by a human!

In this article I want to present my ideas on new software I am planning to develop, viz. "Online Health of Mankind". It is based on volunteers who would like to help predict future problems, from the extent of the next spike in a harmless disease to life-threatening ones. We on Earth have seen a virus halt all of our lives, and we have to be ready for the future, and not just for a virus: it could be any medical or even societal problem. The software I am working on aims at exactly that. For it we would need volunteers to engage in "online clinical trials". No inputs would be needed from them, just their day-to-day activity. All that data would go to a machine; neither I nor my co-researchers would ever look into your day-to-day data. The machines alone would guide us, like a weather monitoring system, on the key areas of interest of people and our kids, to keep mankind alert to future problems. Who knows what is coming, but COVID has taught us to be prepared! It is totally your wish whether to participate or not, which here means installing this software tool on your device!

We are living in a new world, and it all started a few decades back. The online world will continue to exist in our lives whether we agree with it or not, whether we like it or not. This is the new reality. The online world has become a necessity, a daily part of our lives. It is a small parallel world we live in, and I say it can rightly be called the "Parallel Online Universe". Yes, this is a full universe of its own! And yes, we all live parallel lives in our online presence on the World Wide Web! This is not completely what we are, of course; it is just that the way we access and consume things is carried out much more easily through the Internet, and now we are all used to it. Yet we never thought, before entering this world, that we would need security guards in it, and that thefts are possible in online life too! Alas, all we saw was use, use and use of being online, for the WWW gives you so much: comfort, friends, shopping, news, sports, entertainment and what not. Everything in a virtual world: here you rub your mouse to call in the Internet Genie and just tell it, "I want to watch a live music concert online." "Your wish is my command," says the Genie!

Well, well, well! There are so many uses of the Internet! And yes, there are thieves too. But today's article is focused on online data and its respect, value, authenticity, purpose and use to society, while making sure privacy is retained and maintained to the levels allowed by each particular citizen of the Online Universe! Yes, dear ladies and gentlemen, this online world where a plethora of our data is shared is new, and so is people's understanding of it: we are still figuring out the rules of what is right and what is wrong. I will explain this with an example. Corona was raging on Earth, and drugs had to be made ready to tackle it. The pharmaceutical companies, to whatever extent, wanted to heal mankind, but they had to do some testing on humans for that. You may not like to volunteer for the noble cause of testing a drug which was made very deliberately and had already been tested on animals, with all being fine for the drug's inventors. But what about human testing? Oh, you don't want it done on you! That is okay. But your "no" to a clinical trial won't mean "no" from everyone! Right? So if someone is ready to do a clinical trial of a new drug well tested by scientists on all parameters, they respect your "no"; they won't ask you, and won't even ask you for a reason. But there are folks who want to do whatever they can for the welfare of mankind, and they go in for human trials. Not for money, but for mankind.

In the same way, if you don't want to share your online data, such as the links you clicked or your eating habits, your choice would, and should, be respected. And let me tell you, all major companies do ask your permission! If someone likes to share his preferences for data analytics, it is his right to do so; your independent voice can't stop him. Online data trials are needed for a healthy Online Universe, just like clinical trials are needed! If it is your right to say "no" to online trials, some need their voice to say "yes" too. You may fear that your personal data would be leaked to a machine doing analytics. Some say they are not afraid, as they are not doing anything wrong, and some are very open about their personal data: they even post on their Twitter accounts, with public view on, "this is my after-bath pic, now I am going to a movie". So for some there is nothing to hide in personal data, as to them it hardly matters if a machine makes some algorithmic predictions based on it. No human is reading it, for sure! Guaranteed! So to whom is the data leaked? A machine! It is okay if you don't want a machine to know your preferences, but some are fine with it: it is a machine doing a job for a better working of the WWW Universe, the parallel world we all live in. And they may like to participate in online data analytics trials too. These people should have a voice too!

Now I want to explain why online data trials are needed, even for a 1% sample of the population of users, and what their uses are. We all know we are in a pandemic which is not ending, and it is already more than a year on. Have you ever thought about how it is affecting our lives? How we have changed and moulded ourselves, not only in working, but also in listening to songs and in our movie tastes, to mention a few? What if some folks want to contribute to finding out where the best creature on Earth, viz. Man, is going online? Is it not important, for scientific and medicinal purposes, to predict whether mankind is entering a depression pandemic? Are people anxious about something? Are our kids too much into horror stories, which should not happen? Are too many people in the sample reading blogs about changes in government? Are a lot of people in the online trials asking for a second marriage and not finding the right mate? These are serious problems! And what if our children start searching for alcohol online? Yes, they would surely not like to be tracked by machines, but parents would set the privacy permissions, not them! We care for the future of our children and would not like to see even teenage pregnancy or sexually transmitted diseases. These are just some of the many reasons why parents would allow at least some of their non-adult children's data to be tracked. How else can the machines find out about misuse when children click to buy a thing not meant for them? It is for parents and teachers to handle it. This contribution is for the health of mankind as a whole, at the expense of some people donating their personal data, and their children's personal data, for online trials, all to be read and processed by machines!
The machines would reveal no personal information, only a summary of the data analytics and of where the online world is going. We hope we will find some online contributors; even one family in a few hundred families can help us lead a happy online life, and hence a happy life on Earth! Think it over! If you believe in ethics, believe in mankind, believe in trust! This is the aim of my algorithm to detect the "Online Health of Mankind": purely on the basis of personal choice, as it is your personal data.

These are some of the many problems where, if even 1% of online users agree to share, it can really help mankind in several predictions. The next wave of a pandemic, a malaria outbreak, or even the overall health of a country, a society and Earth as a whole can be measured through this small sharing of personal data. All on a donation basis, not on a theft basis.

There are also thefts that happen online: so many disreputable applications and websites steal data, sell it and use it in wrong ways. Hence I simply recommend that you use authentic applications and websites, which always ask your permission for these clinical trials of online data. That is the safe and secure way to donate some data for analysis for mankind!

I don't know whether this has been implemented yet or not, but this is my vision and my mission. If someone wants to donate their personal choices, what is your problem with it? You can say no. If it were my company, I would not interfere, and you would be free to do what you like!

Happy Earth Day! Let NATURE BE NATURAL, let Earth BE MORE Natural!

Earth has been through a lot, with and without humans, since its creation. Be it the acid rains that marked the start of all the major processes, or be it the ice ages! It has gone through several turmoils and still retains the self-equilibrium of the essential atmospheric, seismic, polar, geographical and biological processes, to mention a few, at the required levels, if not the best levels.

It is not just industry emissions you have to focus on, dear leaders! What about the cost of taking raw materials from beneath the Earth's crust? Is the price I paid for the mobile I use its actual cost? What is its cost to Earth? How can we calculate that cost? Find the parameters! Else I will let you know in my upcoming articles how to find actual costs. What people see are the visible effects on dear Mother Earth, viz. rising temperatures, pollution and melting ice raising sea levels. See the invisible effects too: earthquakes, volcanoes, unnecessary cyclones, hurricanes, flash floods. They are all saying something. Yes, climate change; and what about the permanent changes we are making below the Earth's crust by over-mining? What is the limit on mining? Did you check the seismic layers before giving permission to mine? Or is it just about coal and gold? No, my dear people, look beyond that. Ecosystems! How will you measure the losses to ecosystems and extinct species? The damage is unrecoverable. I will give you metrics to measure these losses, using Mathematics and AI, in upcoming articles.


On this Earth Day, 2021, when we look at Earth and the changes to some of the factors mentioned above, we see that in the past 100 years, and more precisely the past 70 years, major changes were made in:

  1. Our way of living
  2. Our day to day needs
  3. The way we commute
  4. The way we communicate
  5. The way we gain information, be it for learning or for entertainment
  6. The way we cook
  7. The way we eat
  8. The way we carry things (yes, a reference to the plastic we use in everything today)
  9. In one phrase: "Almost Everything"

Compare this with life 70 years back: not many vehicles, no mobile phones, no networks, no plastic, no plethora of gadgets in houses, no 4G, no 2G either, no microwaves and ovens, no air conditioners, no fridges... the list goes on!

And what did we have then? Birds, lots of them, singing; plants everywhere in our surroundings; cattle nearby; fresh milk, fresh butter, fresh flour, fresh and cheap vegetables easily available in the neighbourhood. Why so many pesticides now? Why not set some standards? Why not use AI-based learning? Where have the birds gone from the neighbourhood? No trees, no green plants, and lots of cellular networks to disturb them in cities. I am not saying mobile technology is not needed; the thing is to find the right balance between human needs and the needs of birds, animals and insects too. They form an equal part of the ecosystem and give back a lot, to Earth and to humans! Why do most meditation audio recordings have nature sounds and birds chirping? Because they have healing effects! But now the real voices are replaced by recordings. Is it the same? Watching a live music concert and watching it on TV: is that the same? Where are the honey bees? There was a time when breaking a bee hive and eating fresh honey was so common. Who used to buy honey when every other tree around had bee hives? And we used to play with butterflies, and present home-grown flowers to our teachers every week. Yes, the average age of man has increased, but so have the diseases. The present situation is clear proof of what I am saying. We have to take the best from modernization while taking care not to make irreparable changes to our beloved planet!

Civilizations lived before us as well. Sewage existed then too; it is not a new alien word in our dictionary. So why are rivers and seas allowed to accept sewage, as in some parts of the world, even the well-off parts? Or are all our efforts just focused on making new cars, tech and robots, earning money and keeping the economy going, without looking at the side effects? How many cars does a human need? Why can't existing cars be reused, with attachable AI components and efficient fuels or batteries? Do you know what dismantling a car and creating a new one costs Earth? To someone it must be a few thousand dollars, but what is the cost of a new gadget on, and to, Earth?

Not just a car. I am asking you: what is the cost of the following to Earth?

  1. A plastic bottle to store water? Oh yes, wood or natural rubber is too costly; who will do the hard work of making wooden bottles, and who will grow so much rubber where concrete jungles can be built instead?
  2. A plastic boat?
  3. Plastic toys? Wooden toys take much longer to make. But do you know how much wood is simply being burnt? It could have been used constructively, with replantation performed afterwards.
  4. Untreated sewage flowing into fresh and salt waters?
  5. Gases emitted from equipment such as fridges and air conditioners?
  6. A new gadget?
  7. A new cellular tower?
  8. Excess underground MINING!

It is not just the greenhouse effect that world leaders have to reduce by 2030! Dear leaders, look at the nature preserves. Where are our ecosystems? Where are our plants, trees and flowers? Where are our animals, the cats that used to come to our houses to steal milk from our kitchens, which our kids used to play with? Or do we want our kids to live in virtual worlds even after the Corona times? Isn't that too much?

Oh, if you say these things are our bread and butter: well then, growing natural rubber, jute, silk (in a nature-friendly way) and wood, with carpentry to produce daily-use things from wood, would produce even more jobs than the current sources of employment manufacturing non-earth-friendly, non-disposable stuff. Can we calculate it? How much employment would be produced? It is just a way to change, just a step. No change is ever welcomed easily by all, as humans are complacent with what they are doing and how things are happening; that is human nature. But it is a change in the way we think, and a gradual change in the way things can happen. And more employment would be produced, for demand is always high for essential things. I am not asking anyone to remove plastic from the medicinal procedures that need it, at least until an alternative is discovered in the research labs of our chemistry departments!

We don't need to go to aromatherapy to breathe in the natural scents of flowers. Each community garden should have natural aromatherapy through the variety of flowers, plants and herbs growing there. Why don't we see hens, rabbits and kittens in gardens now? Do we need to go to far-off zoos for them? Why are injections given to make cows give more milk? Are hens given these too? And there was an article I read (I don't know whether it was authentic or not) saying that eating the eggs of such hens can affect the hormonal processes of young children; till then, switch to organic eggs? We can't avoid everything, but we can let NATURE BE NATURAL! Let NATURE BE NATURAL, let Earth BE MORE Natural! That is my message on this World Earth Day!

I can provide you metrics for these costs in upcoming blogs. It may take some time, as I am busy with my own work too. It could make a good industry project; I can guide you if you need.

AI: A Sentiment Analysis and Fuzzy Sets based Summarization Technique, a Research-based Approach (sample explanation with Python)

Here, in this article, I am proposing a novel technique to summarize text documents. I am not writing a research paper here, but giving you basic explanations of the key steps involved. It is novel in that, to the best of my knowledge, sentiments have not been used with a Fuzzy Inference Engine for text summarization before. I started working on this topic in 2016, though, given time and other constraints, I have not published the work yet. The original work was in Java, using Java-based libraries. I have rewritten the code in Python for you to understand the basic steps, which can be expanded upon. The values and knowledge base in the Fuzzy Inference System have to be edited per application. I have provided sample values for your understanding, not the values used in the original research (which shall be sent for publication and made available later).

This article can be taken as (1) a tutorial on applying a Fuzzy Inference Engine, Mamdani systems in particular; (2) a summarization technique in Python; (3) a novel research proposal for the application of sentiment analysis in Fuzzy-based summarization techniques.

Originally, this article was meant for research publication. But I have enough publications, and I want research to reach humble minds for free! Here is the proposed technique in Python.

1. Pre-processing documents and sentences

# import important NLP packages
from __future__ import unicode_literals
from collections import Counter
import spacy

nlp = spacy.load('en_core_web_sm')

from spacy.matcher import Matcher
matcher = Matcher(nlp.vocab)

sentence_to_be_ranked = 'Financially good conference situations.'
text_document_snippet = ('There is a developer beautiful and great conference happening '
                         'on 21 July 2019 in London. The conference is in area of biological sciences.')

# In practice, load the complete document from a file. This is for illustration only.
doc = nlp(text_document_snippet)
sent = nlp(sentence_to_be_ranked)

2. Get the Largest Noun Chunk

Get the longest noun chunk from the text fragment. Only the most important chunk is considered by the algorithm and returned. There are several kinds of chunking possible; chunking is a kind of shallow processing. Why chunking? Because it is useful in summarization techniques to highlight the important phrases, and the largest chunks will be highlighted in the summary across all sentences after ranking them. Other chunking techniques exist; look for more on this in my future articles. Here, just the key steps of summarization.

# inputSentence is the input of which the longest noun chunk is to be found

def getMainNounChuck(inputSentence):
    lenChunk = 0
    prevLen = -1
    mainChunk = ""
    for chunk in inputSentence.noun_chunks:
        lenChunk = len(chunk)
        print(chunk)
        if prevLen < lenChunk:
            mainChunk = chunk
            prevLen = lenChunk
    print("Main chunk is: ", mainChunk)
    return mainChunk

Sample Output:
Main largest chunk in sentence for ranking is: Financially good conference situations
Main largest chunk in given 2 sentences is: beautiful and great conference

3. Get the Sentiment Score of the Sentence to be Ranked

There are several ways to compute the sentiment of a given sentence to be ranked as part of a collection of sentences from which an extract is created. The two methods below are not exhaustive; there is a plethora of other options for computing sentiment. For research purposes, I suggest you use a supervised or unsupervised trainer to train the sentiment module on the topic of your interest, i.e. the domain in which you wish to perform the experiments: sports news articles, medical articles, movie review summarization, and so on. Such a sentiment analyzer will be more beneficial to the output and the relevance of the results. This article, however, aims to give just short snippets of the tasks. You can look at my other articles on how to learn sentiments per domain of text.

def getSentimentScore(doc):
    print("Sentiment is ", doc.sentiment * 10)
    return doc.sentiment * 10

Another way to compute sentiment:

def getsentiment2(sent):
    from textblob import TextBlob
    sentimentObject = TextBlob(sent)
    sentimentObject = sentimentObject.sentiment
    print("Sentiment", sentimentObject.polarity * 10)
    return sentimentObject.polarity * 10

Sample Output:
sentence = 'Financially good conference situations.'
Sentiment 7.0

Preferred and recommended way, which I use and suggest: a self-trained model on your corpus!
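As one sketch of that self-trained route, the scorer below uses a tiny hand-written lexicon, scaled to the same 0-10 range as the TextBlob-based score above. The lexicon words and weights here are illustrative placeholders, not learned values; a real model would induce them from a labelled corpus in your target domain.

```python
# Illustrative lexicon; a trained model would learn these weights from data.
POSITIVE = {"good": 0.7, "great": 0.8, "beautiful": 0.6}
NEGATIVE = {"bad": -0.7, "poor": -0.6}

def simple_sentiment(sentence):
    # Average the per-word weights and scale to roughly [-10, 10].
    words = sentence.lower().strip(".").split()
    if not words:
        return 0.0
    score = sum(POSITIVE.get(w, 0.0) + NEGATIVE.get(w, 0.0) for w in words)
    return score / len(words) * 10

print(simple_sentiment("Financially good conference situations."))
```

Swapping this stub for a trained classifier changes only this one function; the FIS below sees just a number in a fixed range.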

4. Similarity between the Document and the Sentence

The following small piece of Python code is enough to find basic similarity between a sentence and the text document(s) under consideration. There are several kinds of similarity measures in NLP, some very popular ones being Wu-Palmer, Leacock-Chodorow, path-based similarity, the Resnik measure and the Lin measure, to mention a few. As a researcher I suggest analysis on all these fronts before finalizing any one. Other options include lexical similarities, cosine similarities among one-hot representations, vector space representations, FastText, GloVe or other word embeddings such as Word2Vec, or simply WordNet-based similarities. A lot of options to choose from. So which one am I using in this problem, or in any other problem? The task can be made simpler still; you can mail me for details on the choice of algorithm once your basic steps are complete!

def getSimilarity(sentence1, doc1):
    return doc1.similarity(sentence1)

Recommended: more insight into the kind of similarity measure used, especially the ones most useful for the current problem. Deep insight is needed here when the choice of technique is being made.

Sample Output:
The similarity between sentence and document is 0.3219
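Among the measures listed above, cosine similarity is the easiest to demonstrate without any model at all. The sketch below computes it over plain bag-of-words counts, purely for illustration; a real pipeline would use word embeddings (Word2Vec, GloVe, FastText) in place of raw counts.

```python
import math
from collections import Counter

def cosine_bow(text1, text2):
    # Bag-of-words vectors: word -> occurrence count.
    v1, v2 = Counter(text1.lower().split()), Counter(text2.lower().split())
    # Dot product over the shared vocabulary.
    dot = sum(v1[w] * v2[w] for w in set(v1) & set(v2))
    norm1 = math.sqrt(sum(c * c for c in v1.values()))
    norm2 = math.sqrt(sum(c * c for c in v2.values()))
    return dot / (norm1 * norm2) if norm1 and norm2 else 0.0

print(cosine_bow("good conference situations", "great conference happening"))
```

Identical texts score 1.0 and texts with no shared words score 0.0, which makes the measure easy to sanity-check before moving to embeddings.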

Note that en_core_web_sm does not come with word vectors and uses surface-level, context-sensitive tensors, so its similarity scores are approximate. For an efficient toolkit you would have to use more than this! I have implemented this fragment of the work in Java using DL4J.

5. Get Part of Speech Count

Part-of-speech counts are another parameter to the Fuzzy Inference Engine, which I propose to use in a slightly different way. In typical NLP summarization techniques using Fuzzy Inference Engines, the input is usually just the NOUN count; here we propose to use the noun count, adjective count and adverb count, to mention a few. A code snippet for easy computation of the count of a particular POS tag, specified in the parameter "inputPOSTag", for the text in "inputText", can be written in Python as follows.

def getPOSCOUNT(inputText, inputPOSTag):
    tokens = nlp(inputText)
    countPOS = len(tokens)
    dictonaryInputText = Counter(token.pos_ for token in tokens)
    print(dictonaryInputText)
    return dictonaryInputText[inputPOSTag] / (countPOS + 1) * 100

6. Checking the Values: So Far, So Good

The following code checks whether the values computed by the NLP engine are fine or not. If not, change the algorithms and techniques used, as suggested in the points above. Once done, go to the next step of setting up the Fuzzy Inference Engine for summarization.

print("The similarity between sentence and document is", getSimilarity(sent, getMainNounChuck(doc)))
print("Noun count", getPOSCOUNT(text_document_snippet, "NOUN"))
print("Verb count", getPOSCOUNT(text_document_snippet, "VERB"))
print("Adj count", getPOSCOUNT(text_document_snippet, "ADJ"))

Sample output for the input above:
Adjective count 11.53846

7. Defining Fuzzy Inference System (FIS) for Ranking Sentences

The following steps set up the Fuzzy Inference System (FIS) for ranking sentences.

7.1. Defining the Antecedent and Consequent Functions

The following input parameters are used; they are all defined in detail in the NLP processing steps above. Again, this is a snippet for understanding; the actual research has much more complex components, which are not discussed in this article.

The Fuzzy Logic toolkit used for illustration is imported as follows:

import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

The input parameters in the FIS for this explanation are given below. (See skfuzzy for details on the parameters and their implications, which I am not explaining here; a considerable working knowledge of Fuzzy Logic and Fuzzy Systems is assumed in this article.)

similarity = ctrl.Antecedent(np.arange(0, 1.25, .1), 'similarity')
sentiment_score = ctrl.Antecedent(np.arange(0, 1.25, .1), 'sentiment_score')
nounCount = ctrl.Antecedent(np.arange(0, 110, 10), 'nounCount')
verbCount = ctrl.Antecedent(np.arange(0, 110, 10), 'verbCount')
adjCount = ctrl.Antecedent(np.arange(0, 110, 10), 'adjCount')

The output variable is defined with parameters as given below.

rank = ctrl.Consequent(np.arange(0, 24, 1), 'rank')

7.2. Define the Input and Output Linguistic variables in FIS

The following is a brief description of how to define the fuzzy sets involved in the process.

similarity['low'] = fuzz.trimf(similarity.universe, [0, 0.3, 0.5])
similarity['average'] = fuzz.trimf(similarity.universe, [0.3, 0.7, 1])
similarity['high'] = # define as per requirements in your problem

sentiment_score['low'] = fuzz.trimf(sentiment_score.universe, [0, 0.3, 0.5])
sentiment_score['average'] = fuzz.trimf(sentiment_score.universe, [0.3, 0.7, 1])
sentiment_score['high'] = # define as per requirements in your problem

nounCount['low'] = fuzz.trimf(nounCount.universe, [0, 30, 50])
nounCount['average'] = fuzz.trimf(nounCount.universe, [30, 70, 100])
nounCount['high'] = # define as per requirements in your problem

verbCount['low'] = # define as per requirements in your problem
verbCount['average'] = # define as per requirements in your problem
verbCount['high'] = # define as per requirements in your problem

adjCount['low'] = # define as per requirements in your problem
adjCount['average'] = # define as per requirements in your problem
adjCount['high'] = # define as per requirements in your problem

Pictorial Representation of some Fuzzy sets defined for illustration (sentiment_score):

7.3 Output Membership Functions for Linguistic Variable Rank

rank['low'] = fuzz.trimf(rank.universe, [0, 0, 10])
rank['average'] = # define as per requirements in your problem
rank['high'] = # define as per requirements in your problem

Sample fuzzy sets for "rank"

7.4 View the Fuzzy Sets
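The membership-function plots that belong in this section are omitted here. In skfuzzy each variable can be plotted directly, e.g. `similarity.view()` or `rank.view()` (matplotlib required). If you just want to sanity-check a membership value by hand, the triangle behind `fuzz.trimf` can be sketched in plain Python; this is an illustrative helper, not skfuzzy's implementation:

```python
def tri_membership(x, abc):
    """Triangular membership of x in the set [a, b, c], with peak at b."""
    a, b, c = abc
    if x == b:
        return 1.0          # peak (also covers degenerate sets like [0, 0, 10])
    if x <= a or x >= c:
        return 0.0          # outside the support
    if x < b:
        return (x - a) / (b - a)   # rising edge
    return (c - x) / (c - b)       # falling edge

# e.g. membership of similarity = 0.4 in the 'low' set [0, 0.3, 0.5] above
print(tri_membership(0.4, [0, 0.3, 0.5]))
```

This lets you verify by hand that the sample sets defined above overlap the way you intend before running the full inference.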



8. FIS Rulebase-Sample

The following are some sample rules for the FIS summarization toolkit. They are not exhaustive; they are documented here to illustrate the procedure and pave the way for your understanding.

rule1 = ctrl.Rule(similarity['low'] | sentiment_score['low'], rank['low'])
rule2 = ctrl.Rule(sentiment_score['average'], rank['average'])
rule3 = ctrl.Rule(sentiment_score['average'] | similarity['average'], rank['average'])
rule4 = # define as per requirements and expert views
rule5 = # define as per requirements and expert views
rule6 = # define as per requirements and expert views
rule7 = # define as per requirements and expert views
rule8 = # define as per requirements and expert views
rule9 = # define as per requirements and expert views
rule10 = ctrl.Rule(similarity['high'] & nounCount['high'] & sentiment_score['high'] & verbCount['high'], rank['high'])

Now define the FIS control system and perform simulations.

rankCtrl = ctrl.ControlSystem([rule1, rule2, rule3, rule4, rule5, rule6, rule7, rule8, rule9, rule10])
rankFIS = ctrl.ControlSystemSimulation(rankCtrl)
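Before wiring everything through skfuzzy, it can help to see what one Mamdani step actually computes. The sketch below is a minimal pure-Python illustration (not skfuzzy's implementation): each rule's firing strength clips its output set (min), the clipped sets are aggregated point-wise (max), and the aggregate is defuzzified by the discrete centroid.

```python
def centroid_defuzz(xs, mu):
    # Discrete centroid: sum(x * mu(x)) / sum(mu(x)).
    den = sum(mu)
    return sum(x * m for x, m in zip(xs, mu)) / den if den else 0.0

def mamdani_rank(rule_strengths, output_sets, xs):
    # rule_strengths: firing strength per rule; output_sets: each rule's
    # consequent membership sampled on xs. Clip by min, aggregate by max.
    aggregated = [max(min(s, out[i]) for s, out in zip(rule_strengths, output_sets))
                  for i in range(len(xs))]
    return centroid_defuzz(xs, aggregated)
```

With a single fully-fired rule whose output set is a symmetric triangle peaked at 5 on a 0-10 universe, the defuzzified rank comes out at 5, as expected; clipping the same set at 0.5 leaves the centroid unchanged because the plateau stays symmetric.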

9. Computing Rank (Importance) of a Sentence using the sample FIS

Here, the importance or rank of a particular sentence with respect to the other members of the text fragments is computed. The NLP properties defined in steps 2-6 above are computed and set in the FIS as below:

rankFIS.input['similarity'] = getSimilarity(sent, doc)
rankFIS.input['sentiment_score'] = getsentiment2(sentence_to_be_ranked)
rankFIS.input['nounCount'] = getPOSCOUNT(sentence_to_be_ranked, "NOUN")
rankFIS.input['verbCount'] = getPOSCOUNT(sentence_to_be_ranked, "VERB")
rankFIS.input['adjCount'] = getPOSCOUNT(sentence_to_be_ranked, "ADJ")

The following steps illustrate the computation of the rank based on the inputs just provided:

rankFIS.compute()
print("the answer is")
print(rankFIS.output['rank'])

# view the Fuzzy Inference System
rank.view(sim=rankFIS)

The output of the illustrative FIS for the input considered in this article is as follows:

Rank = 7.3040, The FIS Rank defuzzification is as follows:


10. Conclusions

This was a brief article explaining the key steps in building an FIS-based system for summarization. Once ranks for all sentences are computed, the main chunks identified in the top p% of sentences can be highlighted in the application as the output of the tool. The approach is novel in that sentiments have not been used in any work with an FIS for summarization so far; likewise, adjective counts, chunking and adverb counts had not been used prior to this proposal.
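The extraction step just described can be sketched in a few lines: keep the top p% of sentences by FIS rank (at least one), but emit them in their original document order. The ranks below are illustrative stand-ins for FIS outputs.

```python
def summarize(sentences, ranks, p=0.5):
    # Keep the k best-ranked sentences (at least one), in document order.
    k = max(1, int(len(sentences) * p))
    top = sorted(range(len(sentences)), key=lambda i: ranks[i], reverse=True)[:k]
    return [sentences[i] for i in sorted(top)]

sents = ["First point.", "Filler.", "Key claim.", "Aside."]
print(summarize(sents, [7.3, 2.0, 9.1, 4.4], p=0.5))  # ['First point.', 'Key claim.']
```

Re-sorting the selected indices is what preserves readability: the extract follows the flow of the source document rather than the rank order.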

The aim of the article is to emphasize that research can progress outside research publications too! At least research proposals can! Further, this article explains how to use an FIS in the NLP application of text extraction.

History: prior to 5000 BC, well written and kept intact even now? Why are manuscripts and histories dating back several millennia IMPORTANT?

What is older than the oldest proofs of mankind and man's existence on Earth? We are going to Mars and the Moon, but have we explored our past fully?

One example of millennia-old manuscripts are the Vedas and the Upanishads (if I am not wrong; do check the exact dates on the Internet). What are the others? Can we summarize them for the easy understanding of common minds?

Here, is an example:

Chandogya Upanishad verses 1.1.1-1.1.9, Samaveda, Sanskrit, Devanagari script, 1849 CE manuscript – Chandogya Upanishad – Wikipedia

How old are the Vedas? – Ancient Science

What are Vedas & How Old are Vedas – HindUtsav

What about manuscripts from other parts of the world?

Well, if we have hard (or soft) copies of scripts dating back to 5000 BC intact, they have much more to give us than just information: society structures, living conditions including food and clothing, and the patterns of climatic behaviour predominant in those regions and times. Not just that, but how the civilization communicated with the world outside its realm, though planes did not exist, and maybe forgotten healing and medicinal practices!

It is awesome to have an original view of such old scriptures and even be able to read them. Yes, one can start with an English translation to understand the histories, and then follow the original versions after gaining the language competencies.

Well, if manuscripts, iron embeddings and the like from around 1200 BC are available to us on Earth, well intact, then there must be history much older than 2000 BC. Why? Because if something was written on a good, preferable medium such as wooden paper, it must have been written on leaf before that, before being copied to the presently available form; and before that it must have lived in the voices of the people and in the mouths of great preachers and teachers, the kings and queens. How can I say so? Based on logic! Logic and no fiction here: this is how the modes of communication must have progressed (if I am not wrong; this is my reasoning, as I am neither a historian nor an archaeologist, just an inquisitive learner), while man eventually learned to use the printer and the computer to print modern-day solutions and discoveries.

Back then there were no computers; man started writing with birds' feathers on dried leaves and so on. The wisdom of mouth got a mode of communication through these snippets. This is (maybe others agree too, and it may have been noted before; it is common sense, that is all) one probable philosophy of the transfer of knowledge. Hence, a script written in Sanskrit dating to 1200 BC means the fundamentals were transferred through the various communication techniques developed as information processing evolved in pre-historic eras. This makes the fundamentals even older than 1500 BC, and the word-of-mouth traditions much older still, going back to around 5000 BC. Imagine the no-paper world, and how the rulers must have transferred knowledge after oral language was invented for communication! 1200 BC is just an illustration; I am myself still reading about how old the oldest scriptures on Earth are.

The Aryan language existed around 10,000 BC [as per a source on this website: How old are the Vedas? – Ancient Science]. So, are there any manuscripts dating back to 10,000 BC?

Which other civilizations on Earth have such old works preserved to date? Do comment with your knowledge of these facts. My interest in this area is recent, and I am as naïve about the subject as you. This article is my objective analysis of such old scriptures across the globe. If I am wrong in my analysis, I am open to suggestions to improve my inquisition into this highly interesting area of research [research -> re-searching].

Several websites give dates to archaeologically precious old things of study, namely humans' own writings, the scriptures. So even I am wondering about the exact dates and the techniques used to find them. Radiocarbon dating?
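For curiosity's sake, here is a minimal sketch of the idea behind radiocarbon dating: a sample's age follows from how much carbon-14 remains, using the conventional 5,730-year half-life. This is a simplification I am adding for illustration only; real dating also requires calibration against tree-ring and other records.

```python
import math

HALF_LIFE_C14 = 5730.0  # conventional half-life of carbon-14, in years


def radiocarbon_age(fraction_remaining: float) -> float:
    """Estimate a sample's age in years from the fraction of C-14 it retains."""
    if not 0.0 < fraction_remaining <= 1.0:
        raise ValueError("fraction_remaining must be in (0, 1]")
    # Exponential decay: fraction = (1/2) ** (age / half_life)
    return -HALF_LIFE_C14 * math.log(fraction_remaining) / math.log(2.0)


# A sample retaining half of its original C-14 is about one half-life old:
print(round(radiocarbon_age(0.5)))  # 5730
```

So a manuscript's organic material (leaf, wood, cloth) can be dated this way, which is one reason the medium something is written on matters so much to historians.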

As per these articles, the Vedas are scripts that survive intact, written in Devanagari. Check the links on the left panel at the top of this webpage for a view of how these old scripts look. This is just an example; I am curious about all the old articles, snippets, and manuscripts around the globe, not just in India! Yes, a lot of them from India are available on the internet, and I try to look into them myself whenever I get time. Do mail me more such scriptures from your regions!

Call me through Stars & Moons

Here is the song in my voice on youtube:

Here is the song in my voice on my google drive:

Call Me..

Call mee..Caalll me baby

Through the echoing voice

Through the woods and roads

Call Me

Through the bubbles in water

Call me

Call me baby call me

I am waiting, call me

Through the stars and moons, call me

Through the clouds and rains, call me

Call me anytime

Call me everytime

Call me all the time

call me babie

call me

oh just call me

In these dark nights call me

Through the stars and moons call me

Just ask them to call me

tell the clouds to send the rains, to call me

In this busy life

call me

In the deserted dreams call me

call me babie

just call me

caaallll me

call me baby

call me through mountains and dews

call me

call me through stars and moons

call me

call me anytime


all the time

call me in your dreams

in your arrms

in your heart

all the time



call me baby

tell the stars and moons to call me

tell the clouds and rains to call me

call me baby, call me

tell the winds to whisper in my ears

tell the bird to chirp and call me

tell the flowers to blossom and call me

call me baby call me

tell the stars and moons to call me

tell the clouds and rains to call me

call me baby call me!

call me!

Astro-Robot Nanena!

Robotics is good and very useful in several respects. In this book, Astro Robotics, you will find all the reasons to use and create robots. But well, does mankind need to be using so many robots? Oh, space and astronomy do need them, so that humans and animals can visit beyond Earth in a comfortable couch, with a hot cup of tea served by an intelligent robot.

So far so good. In this book you will meet Nanena, a robot on Mars, there for your happy stay. Well, her full family is there too, a big joint family of robots out on Mars. They will build a customized house for you, when and where you want. And they will also fly you past the poles or any crater on Mars.

And Miss Nanena, a female robot, will help you on your journey to the next planet, whichever next station (viz. planet) you choose. Lots of adventures and fun. These robots will make your life a real journey!

But here comes the question of ethics! Well yes, dear researchers and sci-fi lovers! You have to put a full stop somewhere, and here in this novel the full stop is put by Sina, a human and a female, who edits the robots' encoded software to make sure HUMANS dominate ROBOTS and not the other way around.

Science works on laws, as do governments, so why not robots? For the rest, check out the book, and updates on this blog only!

Here are the short stories of this book.

  1. Astro Robot Nanena-Short Science Fiction Story
  2. Astro Robot-Nanena- Part II- Short Fiction Story
  3. Astro Robot Nanena – Part III
Photo by Tamara Velazquez.

Is Creating Artificial Species In Lab Right or Wrong: AI Solution ?

Well, well! We all know dinosaurs prevailed on Earth long ago. This article is my response to the following article:

Scientists created a hybrid human-monkey embryo in a lab, sparking concerns others could take the experiment too far

There are ample Hollywood movies showing what all went wrong when creatures were created in labs. It is not just a brain that is being developed in labs; we all have enough brains, and even animals have enough. Do we need more? It is a new species! Right now it is not known what it will turn out to be. As the specialists say, "maybe", "maybe it may have"! So why not use AI? Let artificial intelligence develop to that level, and dear, wait till that time! Let AI answer the questions of "maybe", not just about brain capacity and organ reuse in humans [by the way, can you take out a species' organs to be used for organ therapy? I say no!].

Let an AI system give you an image of the new species one is unsure about but willing to make! Give AI time if you really need such a species! Otherwise, humans and animals, birds, insects... are good enough for Earth. Are they not? So why create artificial species in labs? We have watched enough movies from Hollywood! Still, if one is curious, wait for AI-based simulations of how a new species created in a lab will behave!

Well, you can consult me or my fellow AI researchers for proper ways and standards for developing ethical solutions to these problems. For sure, new species should not be developed without prior homework and with a "maybe" attitude. And in any case, any non-natural process is unnatural, and, as they say, unnatural things are against nature! Isn't it?

Photo by Tehmasip Khan.

Sketch II: Theme, Novel– Colored Princess -Fairy Tales

She is the colorful princess in my novel, "Colorful Princess: Fairy Tales". She is born colorful on her Colorful Planet, where the clouds are colored, and the skies at night show colored stars, colored moons, and a colored sun.

Oh, how many dreams on this colored planet! She is born colorful: colored hair, colored lips, colored eyes, and a colored blush on her skin. She is their queen, and is recognized as such early in life, as soon as she is born on the colored Planet. People's skin is colored too, but not with so many colors in one body! In this book you will find her adventures and the fictional problems she solves on this fictional planet!

Novel: Story of The Grand Park of Souls

(All characters in this poem are fictitious and do not exist. It is purely for imaginary, dreamy, fictional purposes, for pleasure, art, and entertainment only. Audience age > 21.)

About the Novel

Here is a brief introduction to the story in the form of a short poem: Poem of The Love in The Grand Park Of Souls.

Photo by Ian Beckley.

Summary of Story Book

The story is based on fictitious characters, Souls, which are born as flowers on trees in the Grand Park of Souls. The Park is located in the middle of the Grand Forest of Souls. There, two love birds, two beautiful, bright, and colorful souls, found each other. The story is about the challenges they faced in meeting each other and how they overcame them all; how their soul friends helped them run away from all the hassles and devils in the Soul Garden. They made a new home and a new garden, fostered their love kids, and led a happy life their ancestors would be proud of; how they called people from the Grand Park to their universe, and how they united it all. This is all in the upcoming novel, "Love Story: In the Grand Park of Souls". The story includes the evolution of souls on soul trees: initially the characters were monkey-faced souls, and finally human-like souls.

Photo by Pixabay.