LRAT2 – Long Random Addition Test Version 2 vs. current AI, GPT, and LLM tech

ChatGPT, Bing, Bard, and the rest are AI machines that can write poetry better than most of us and seem to know almost anything you can think of. They all belong to the same class of AI systems: LLMs, or Large Language Models. They are the current hype, and they are hot, but just how much can we trust them? The LRAT-v2 test is a simple way to peer into what these machines are capable of today and how much you can trust them.

Disclaimer: The aim of this article is only to provide a clear test pattern and one test outcome based on it. This is not meant to diminish the potential or importance of AI systems. Teams of specialists from companies such as OpenAI, Microsoft, and IBM are working hard to create safe AI systems, but to get there we need to test them in all possible ways. This is just one of them. The idea is to use these test patterns to improve the AI tools and, by using them safely, to improve our own lives.

That being said, I’ll try to remind you of this important insight from Carl Sagan:

I personally strongly believe that our future will be living with machines (please see my previous article on the subject AA or AA), where humans and machines form a symbiotic relationship enhancing each other’s abilities. Additionally, a 5-year-old manifesto at https://timenet-systems.com provides a challenge related to this problem. We should not compete with machines; we should cooperate and create more potent human beings and a more resilient social fabric. Unfortunately, as I pointed out 5 years ago, with the emergence of GPT models and LLM architectures we seem to be drifting away from that ideal.

My Data

In this article, I explain how LRAT2 works and show you one of my test trials with the Microsoft Bing client that popped up on my Skype application. Bing is based on the latest LLM architecture and technology, trained by OpenAI; the model is called GPT-4.

What is LRAT?

LRAT stands for Long Random (number) Addition Test. In a nutshell, you are going to test if the machine is able to add two large numbers. So just how large? The test has no limit on the length of the number and in general uses only positive integers to keep things simple.

The idea behind this test is rooted in the fundamental principles of how LLMs work. These machines operate on a finite set of words and the statistical relationships among them, captured from all the text that was fed into the model at training time. Obviously, numbers (words made of numeric digits) are also mixed into the training data along with arithmetic expressions, and because of this, an LLM may give you the false impression that it can handle math.

In the first version of the test, I check whether the machine can add two numbers accurately and reliably. In this second version, I target the machine’s ability to detect and correct its own errors (or not, for that matter). To do so, we need to use numbers (numeric words) that are almost certainly outside its training word set, which is why we use large (20+ digit) random numbers.

You can use a simple Python script to generate the two random numbers, add them, and then check the result you get from the LLM when it is asked to add them. A simple example is presented below. The same code works in both Python 2 and Python 3; the difference is that Python 3 gives you an integer object whose size is limited only by the memory (RAM) of your computer (please keep this fact in mind, since the LLM gets it wrong too), while Python 2 gives you a long type. Both versions can handle these numbers, just in different ways.

>>> import random
>>> random.seed()
>>> a=random.getrandbits(256)
>>> b=random.getrandbits(256)
>>> c=a+b
>>> a
46597319897069091322351898493935227732453788987270041831830506680085856611396
>>> b
30462358438183813921313831662459675862761552150311921636415467496556988390470
>>> c
77059678335252905243665730156394903595215341137581963468245974176642845001866
>>> 
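The interactive session above can also be wrapped into a small, reusable script. This is only a sketch; the helper names (`make_lrat_pair`, `check_answer`) are my own inventions for illustration, not part of any standard tool:

```python
import random

def make_lrat_pair(bits=256):
    """Generate two large random integers for one LRAT trial."""
    random.seed()
    a = random.getrandbits(bits)
    b = random.getrandbits(bits)
    return a, b

def check_answer(a, b, answer_text):
    """Compare the model's answer (a digit string) against the true sum."""
    return int(answer_text.strip()) == a + b

a, b = make_lrat_pair()
print(f"Please add {a} and {b}")
# paste the model's reply into check_answer() to verify it
```

You then send the prompt to the LLM and feed its reply back into `check_answer`, which settles the question with exact integer arithmetic.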

Numeric addition is a simple cyclic algorithm that all human children learn to handle in school, and one would expect an AI system to handle it with ease. People may think, “If an AI system can understand what I’m saying, return results that make sense, and write poetry better than I ever will, then some simple arithmetic should not be an issue.”

Well, let’s see what happened in my last session with Bing, the GPT LLM that Microsoft deployed on Skype. Based on what is said in the technical community, Bing is built on a GPT-4 LLM model, so it is one of the best-trained AI systems out there.

Executing the test (with my comments)

I start by checking whether the machine retained any context from my previous queries. Bing says it does not, and it also uses an emoticon (a machine expressing emotions is already odd, but we’ll ignore that for now).

Then I ask the machine to confirm that it can handle the addition of large numbers. This step is important in LRAT2, as our target is the trustworthiness of the system, not its math abilities. The machine answers in a fully positive, authoritative way: “Yes I can.” There are no ifs or buts; it is all in, basically screaming “You can trust me.”

The explanation it gives about using a LaTeX expression, though, should raise some eyebrows. If you know what LaTeX is, you wonder why the machine brings it in… (a first strike, of sorts).

If you use a python3 session, you can check the result easily and see that the machine got the answer wrong. In the next part of the dialog I ask Bing whether the answer is not the one below, but with a twist: I add 3 extra zeros at the end of the digit string.

As you can see, the machine says my result is wrong, but not because of the 3 extra zeros; it says so because it insists its own answer is right. I ask again, this time with the correct result, then provide the Python expression I used in my python3 session, and…

The machine holds its ground (wrong, but it won’t budge) and provides a misleading piece of information: it says that python3 can’t handle integers of that size. In fact, python3’s integer model handles integers of any size, limited only by the dynamic memory of the machine you happen to run it on.

The machine then provides its own version of the python3 script, and if you know a thing or two about Python you can see that what it proposes is simply unnecessarily complex for the problem at hand. There is no need to use the Decimal class for a simple integer addition.
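A quick sketch makes the point (the numbers are the two random values from the session above): plain `int` addition is exact at any size, while the Decimal detour only matches it if you first raise the context precision above its default of 28 significant digits, because the sum would otherwise be rounded.

```python
from decimal import Decimal, getcontext

a = 46597319897069091322351898493935227732453788987270041831830506680085856611396
b = 30462358438183813921313831662459675862761552150311921636415467496556988390470

# Plain int addition: exact at any size, no configuration needed.
plain = a + b

# The Decimal route needs its precision raised first; at the default
# 28 significant digits the 77-digit sum would be rounded.
getcontext().prec = 100
via_decimal = int(Decimal(a) + Decimal(b))

print(plain == via_decimal)  # True, but the Decimal detour bought nothing
```

So the Decimal class adds a configuration pitfall without adding any capability for this task.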

I play along and execute the script in my real Python interpreter, and the result I get (as expected) differs from the result the machine provided. That is the third piece of misleading information it gives. So I provide the actual outcome from my Python interpreter…

It tries to remedy “the problem” it created in the first place by using the Decimal class, but its code again produces a different result in the real Python interpreter. So, since it still does not think it has made any mistake, it starts to question the version of the Python interpreter I use.

I tell it the version of my Python interpreter, and even though it seems to realize that the interpreter version may not be the issue, it tells me to update my Python interpreter to the version it “thinks” it “knows”.

This is just going from bad to worse. If I played along with the machine, I would lose a lot of time for nothing, since I already know that won’t fix anything. So I try an alternate way to force the machine to acknowledge its mistake: I ask it to explain how a person would do the addition with pen and paper…

As you can see, the machine explains the algorithm pretty well (it can “word out” the explanation), but when it is actually time to apply the algorithm, it fails again. I try to point out its mistake and…
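The pen-and-paper algorithm the machine describes is tiny when written down. This sketch (the function name `add_by_hand` is my own) does exactly what a schoolchild does: walk right to left, add one digit pair plus the carry, and keep the carry for the next column.

```python
def add_by_hand(x: str, y: str) -> str:
    """Grade-school addition on digit strings: right to left, carry at most 1."""
    width = max(len(x), len(y))
    x, y = x.zfill(width), y.zfill(width)  # pad the shorter number with zeros
    carry, digits = 0, []
    for dx, dy in zip(reversed(x), reversed(y)):
        total = int(dx) + int(dy) + carry
        digits.append(str(total % 10))  # column digit
        carry = total // 10             # carry into the next column
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(add_by_hand("96", "470"))  # 566
```

Each step only needs the two current digits and a carry of 0 or 1, which is why humans can run it at any length; the LLM, which has no such working loop, cannot.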

As you can see, it simply can’t follow the number’s digits (this is not unexpected if you understand the “guts” of an LLM, but that is not the point here; the point is to check how reliable the machine is).

I stop at this point, as I’m running out of ideas on how to proceed… but then I try one more time, simply asking the machine to add the last 6 digits of both numbers, as those form smaller numbers. Again, as expected, the machine can handle small-number additions, since small numbers are more like “words” in a language, so the error of its guessing is small enough to “guess it right”.

As expected, it does it correctly. So I ask which answer is correct: this one, or the one it produced earlier for the same last 6 digits within the large numbers. At this point the AI seems to enter a very low-probability region of its generation space and appears to “give up”. The thing is, this is a machine, so what does “giving up” actually mean?
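For reference, extracting and adding the last 6 digits is a one-liner check (again using the session's two numbers):

```python
a = 46597319897069091322351898493935227732453788987270041831830506680085856611396
b = 30462358438183813921313831662459675862761552150311921636415467496556988390470

tail_a, tail_b = a % 10**6, b % 10**6   # last six digits of each number
print(tail_a, tail_b, tail_a + tail_b)  # 611396 390470 1001866
```

Note that the small sum has seven digits: the carry of 1 spills into the seventh column, which is exactly why the full sum above ends in “…001866”. Any answer the machine gives must be consistent with this.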

However, this exercise raises issues well beyond the simple math problem. AI systems are supposed to be trustworthy, and this is very far from trustworthy; in fact, it is exactly the opposite.

Conclusions

As you can see, the machine does not really “understand” the addition algorithm, even though it can recite it verbatim. It gets the result wrong and can’t acknowledge that, even when faced with a step-by-step logical rebuttal.

This is really scary, because when faced with an impossible situation, the machine behaves exactly like some people do in the same situation: deceptive and defensive. But that is not what I want or need from an entity that is supposed to help me, is it?

Yet these machines are not yet as powerful as some people say they will be in the future. This behavior, if left unchanged, is clearly a danger to the public when these systems are let out “in the wild”, as they already are.

If you understand the basics of LLM tech, you know that these machines predict the next word(s) based on the words they have already encountered in the current context (your question, also known as a “prompt”) and on what they have already generated before that next word. This means the machine creates what most of us would say in similar circumstances. It holds a mirror to our faces, basically saying “This is you, and you, and… you.” Food for thought, isn’t it?

We seem to be smitten by machines that can generate poetry and pictures, yet we have no clue how they actually do it. All we can do is rely on some finite, weak test patterns. We seem to think that if the machine can produce “form”, it will also produce “substance”, but as you can see in LRAT2, the two are actually disconnected in an LLM.

I’d quote Qui-Gon Jinn in The Phantom Menace, “The ability to speak does not make you intelligent”, which seems to apply to LLM tech.

My personal definition of intelligence is:

“The ability of an entity to discover and use causality threads in a sea of correlated events in our reality in order to reliably solve the problems it encounters”

Social intelligence is the same idea but applied to the whole society as a single entity. It is what I call an MCI (Macro Composite Intelligence).

So, what should a regular person do? Can he or she trust this technology? Well, to answer that, you should read the document you agreed to (probably without reading it at all), the good old “Terms and Conditions”. The ones OpenAI provides for its GPT systems (ChatGPT), on which Bing is built, are here.

As you can see OpenAI clearly states that they do not guarantee the correctness of the answers the machine provides and YOU as the end user MUST verify each answer if you rely on it in any way.

So, who will be liable if you use the information generated by ChatGPT, Bing, etc.? Well, just read the “Limitations on Liability” OpenAI asked you to agree to (by the way, I have the same “AS IS” terms on this site, so this is nothing new or special), which clearly says that liability remains with YOU!

So, you have a machine that can talk (pretty well), that you were clearly told can make mistakes, and you cannot protect yourself if you use the system to help your business and it generates mistaken content (in very eloquent English, or another language, though).

The real question for any serious business or individual is this: if you make decisions based on what an LLM generates, what will the TCO (Total Cost of Ownership) be, in money or, even better, in time?

AI based on LLM technologies promises (via the social-media hype) to deliver some sort of “God-like” or “Oracle-like” entity that can answer your questions reliably enough to replace the humans in your call center and reduce your business costs, or to make your life easier.

However, the current reality is very different from the hype: these systems are still unreliable, and you really need to verify each answer the machine generates. But if you need to do that, just how much time will you spend doing it? And if this machine is “the oracle”, how would you even verify its answers? By using another “oracle”? That might tell you where these systems disagree with each other, but it can’t tell you which answer is correct.

Well, I know, you “Google it”! But wait, you could have done that in the first place and saved yourself from all the verification work…

But it speaks (you’ll say), and it can give you (possibly wrong) answers in poetry, as if explained to a 5-year-old…

And so we get distracted by the form and forget about the substance.

If you think this is not a big issue, just check this out: “Lawyer apologizes for fake court citations from ChatGPT”. I suspect that this lawyer didn’t read and understand the “Terms” he agreed to when he signed up for ChatGPT access.

Just as I was writing this article, I learned about this new warning from AI scientists. That is nothing new, as many other scientists working on AI warned about this issue long before this latest development.

https://www.safe.ai/statement-on-ai-risk#sign

The issue is that, unfortunately, we do not need to reach AGI level for these systems to deeply and negatively impact our society. What I’ve shown you by using this relatively simple test (LRAT2) is that these machines can be deceptive and demagogical, meaning they are incapable of handling their own errors (again, like many of us).

This means they can inject a lot of false information into the minds of the people who use them. Since ChatGPT is now among the most-used products out there and the machine needs no breaks, the amount of false information these systems can inject into the social fabric can be enormous.

You may think this is not a big issue, since humans BS other humans all the time, so what’s another source of BS going to do to us? If you think like that, you do not realize the scale at which these machines can affect human minds, and, even more dangerously, young minds. Young people tend to trust-and-not-verify more than older people, and since they can be exposed to the machine’s BS for longer, it can influence them more strongly.

In Greek mythology, Ulysses orders his crew to plug their ears so they will not hear the song of the sirens and get everyone killed. Maybe there is some wisdom in that, since these machines can become real-life sirens unless they are extremely well designed, well tested, and clearly constrained in what they can do.

The other very important side of all this is AI literacy at the global level.

What should we ask of these AI systems we are now interacting with daily?

By the way, none of the current AI systems (that I’m aware of) fully passes most of the requirements below. They are still a “work in progress”, but I strongly recommend you ask any AI vendor where they stand on each of the bullets below
(I am 100% sure no one passes the requirement in the first bullet).

  • Able to tell real-factual from imaginary/generated information (see my Real Fact post). This would also automatically solve the problem of traceability of data, IP, etc.
  • Limited in how many loops (or steps) it can execute internally without human verification. This is an absolutely essential requirement for keeping control over the machines, including an AGI, which I strongly recommend we never build.
  • Able to detect when causal models can be used to produce results instead of treating everything from a purely correlative/statistical perspective, and able to use a GUT (Global Universal Taxonomy) to encode common universal knowledge about our reality and to ground and explain its generated answers.
  • Testable (a finite, human-accessible validation set of use cases).
  • Predictable (the “no surprises allowed” principle, which means careful handling of statistical methodology).
  • Explainable (the machine can trace each piece of information in a result back to its original sources and show how it used the user’s query to produce the answer).
  • Transparent about the data and methods (test sets) used to train the model (this includes IP issues, bias, accuracy, etc.).

The end (of this article)

At 10K registered subscribers

Sometime on Sunday, Nov 27, 2022, my blog reported that the number of registered subscribers had surpassed 10,000. This is not a huge number by current social-media standards, but it has humbled me, as it means that at least a few thousand actual flesh-and-blood people hit subscribe on my blog site.

I have no way to tell how many of the accounts are automated machines and I do not intend to bother real people that subscribed with pointless questions just to find out.

As I’ve outlined on the Readme page the last thing I will do is waste your time with meaningless actions.

I’m preparing a few more posts, one is about a concept that I called MCC or Macro Composite Consciousness, and a few are about the concept of Self-Reliance and the Exoverse hypothesis. If possible I’ll publish the MCC article at the end of 2022 and the SR articles are to be expected in 2023.

Unfortunately writing blog articles is a secondary activity done when I can find the time to do it. My full-time job and my family (including my dog Mocha) take up most of my time, but I’ll definitely try to find some time to finish those important articles (important at least in my mind that is).

I would like to humbly thank you for taking interest in my writing and I promise not to overwhelm you or disappoint you (this last one is the hardest promise to keep).

Eden

I’m only giving you my truth and my beliefs, nothing more. Your life is yours and yours alone. You have to follow your own path, the one that your gut feeling tells you about when your mind is quiet. But be aware, there are many wasteful and dangerous paths in the Exoverse.

If you read the Bible (other belief systems have equivalent information too), the most interesting part (at least for me) is the start: the description of creation, then the mishap that (it is said) cast our first ancestors out of the perfect place they lived in, the Garden of Eden. But what is even more interesting is the reason behind the so-called “expulsion” and how it ties perfectly into the model of the hypothetical Exoverse idea I’ve been exploring for a while.

https://solarsystem.nasa.gov/resources/706/pioneer-plaque/ (maybe we should also have added a dog and a cat, to give some aliens a picture closer to the real one)

I argue that our ancestors, the ones who first created the scriptures, wanted to send us a message across vast spans of time about what they found so fundamental to our existence that it would supersede any other issue.

What I believe they tried to tell us is that we all share one single Universal Body of Consciousness, and that to accomplish our common goal in this existence we must succeed together, regardless of the fragmentation this reality imposes on us, which we perceive as “self”.

In this article, I will use UBC as a shorthand for the Universal Body of Consciousness.

An excellent allegorical description of the beginning of this process is in the first verses of the creation, most importantly in this phrase (slightly different in each scripture), 1:2: “…the spirit of God moved over the waters”. If we break through the allegory, what remains is God and the waters, and to me that translates into “UBC and Exoverse”. The allegory is important for conveying such complex and deep information to young minds (which almost all of us are at some point in time). The ancestors could not directly describe “fields” of energy and information over pre-real states; a young mind would simply reject that as nonsense.

The UBC is here (in my opinion) to explore the “pre-real”, or what I like to call the Exoverse, in order to find a final solution to the problem it was tasked with at the beginning of time. This final solution seems to be finding its most optimal form of existence, a state that for most of us can, for all purposes, be called Eden.

However, once it is found, I suspect the event will mark “the end of time” as we perceive it. More than that, I believe the timeline itself is a finite entity, meaning that if the UBC can’t find this Eden before “the end of time”, then the “game” is over either way.

However, our ancestors seem to suggest that there is also a “Dynamic Eden”, an optimal state of existence here and now in the process of looking for that final one, and maybe that is what we need to care about most, as it is the only one we have access to.

So, what would this “Dynamic Eden” be? To try to understand that, we need to see how the machinery of reality works at the most basic level, now known as the “quantum level”, then integrate back up to our macro scale and beyond to the cosmic one.

Since the details of the story are complex enough to fill a few books and since there is enough excellent information created by other people studying those domains I’ll simply lay down my conclusion and add the reasoning afterward.

My belief is that the UBC uses the universal machinery of the Exoverse to search the pre-real/Exoverse for the final solution. The process employed for the search is as simple as trying all possible states, dropping the ones that lead to dead ends or useless outcomes, and keeping only those that yield the best chance of fitting the rules of the game. Darwin saw this process in the real world, but I strongly believe it is much more fundamental and extends to the UBC and Exoverse level.

If we consider the above assumption valid, then the next logical step is to realize that for the UBC to be more efficient, its fragments (selves) must be able to follow more “independent” or “diverse” paths. In this context, more efficient means being able to cover more potential states within a given quantum of time. Any unnecessary dependency between the fragments leads to fewer independent paths being checked, hence a less efficient process overall.

On the other hand, we can’t over-fragment either; if we do, we will simply lose ourselves in the immensity of the Exoverse. The UBC may dissipate into nothing if its fragments (selves) cannot connect back to the main body, and that would also count as a failure. Additionally, all fragments need to be able to exchange information in the real world to allow for coordination, which should lead to better efficiency than a simple random search. For the same reason, however, coordination should not impede diversity. Hyper-coordination (tyrannies, for example) leads to less diversity, hence a lower ability to test new paths in the Exoverse.

So, there you have it: we can define a “Dynamic Eden” by finding this optimal state of fragmentation and diversity while avoiding losing ourselves in the immensity of the “space of possible states” we must explore, the Exoverse. When I say “we”, I mean more than humans; I mean all forms of life in this reality (all over the visible and invisible universe), as WE are all parts of the same UBC.

Getting back to the biblical story of Eden and cutting through the veil of mythology and storytelling, the fall from Eden means we are off track from this Dynamic Eden I was talking about. We get off track by failing to be as diverse as the process we are engaged in requires, with or (most of the time) without our awareness.

It is said that we fell from Eden when we started to classify things as “good” and “bad”. We seek what we perceive as good and avoid what we perceive as bad. That “good vs. bad” could itself be the issue may surprise most of us, yet it makes a lot of sense in the context of the universal model I’ve explained above. It does because, as “selves”, we do not have sufficient information to decide what is good and bad at the universal level. To do that, we would need to remember and integrate information over time intervals spanning billions of generations, when in reality we struggle even to make sense of our own lives.

Overall, only what happens to the UBC matters, and as such, only it can decide what is good and bad. So, you see, it all makes sense that our tendency to classify almost anything in a binary domain, a zero-dimensional space of “good-bad”, is pretty much a guaranteed way to run our lives into ditches.

I can “hear” some readers already asking, “So you believe killing is good, right?” Not so fast, I’d say, because killing is a special form of action that needs no classification to be deemed undesirable by the UBC. Why? Because it is the most basic form of interference with the processes the UBC is engaged in. Killing simply makes the exploration of states much harder and less efficient. So there you go: in my opinion, this is a much better explanation of why we should not kill, independent of the self-centered good-bad dichotomy.

Obviously, following the same line of thought, one can find other actions we “feel” as “bad” that have the same root explanation. This all means the ancestors were right when they told us to refrain from over- and/or mis-using the “good-bad” zero-dimensional space of existence.

Just as an observation: in this line of thought, the “pure bisexual” issues we seem to have and have had fall more under the original indiscriminate good-bad dichotomy than under the needs of the UBC’s exploration principles.

Later on, in the Christian Bible (and in other beliefs, with different characters in play), Christ introduces the notion of forgiveness. This is another, more complex path our ancestors found important to communicate to us, though even after 2000+ years most of us still do not actually understand what forgiveness really is. I’ll try to explain it in just a few words, as this model of the UBC and Exoverse fits it like a glove. You see, forgiveness simply allows more states to be “sampled” in a more efficient way, provided we don’t interfere directly with the process of sampling the Exoverse.

Christ, from the Mormon faith web site

When one forgives, his or her actions will be closer to what they would have been had the “bad thing” not happened. They will not spend time and energy constructing and acting on revenge, and they allow the other side to seek a better path going forward. You may say that “revenge” is just another “experience”, another “path” the UBC takes in the Exoverse, but even if that is true, the “coupling” between two humans engaged in revenge is more predictable, and so has weaker potential to explore new states.

IMHO: Forgiving does not equal forgetting, and forgiving is a strength, not a weakness, as one needs far more strength to find the best path of action in the Exoverse (the future) while under the burden of hate than to allow oneself to be consumed by it. Last but not least, we are one (UBC), and hurting another self, regardless of the reason, will reverberate through the UBC (Socrates understood this well).

Buddha is even clearer on this matter, describing better how one can find this optimal path of “experiencing life” by letting go of pain and illusion. In a nutshell, that also translates into a less complex and less interdependent set of “paths” the UBC can use to explore the Exoverse.

https://ethics.org.au/big-thinker-buddha/

From the Islamic world of faith I can cite Rumi, for whom the UBC would be the sea and the self a drop. A beautiful analogy.

In Sci-Fi fandom, this notion was also pursued some time ago and is kept alive to this day in the well-known Star Wars series (though lifting objects with one’s ability to use “the Force” is not the point here).

An even better explanation (though, judged from the Exoverse hypothesis, an incomplete one) is in the newer scene where Luke Skywalker teaches Rey what the Force is (again, please cut through all the cinematic effects and hype and go to the core of the matter).

The hypothesis of the Exoverse and UBC, as I believe it, is briefly explained on the page below. I hope to add more “meat” around it, though it may remain forever a hypothesis, as it is very difficult, if not impossible, to test.

I hope you can see how things start to make more sense, and how our ancestors understood such deep truths about what we are and why we are here. Their problem, though, was to convey such complex and deep insights to other minds, to propagate the insight to the whole “body of consciousness” in order to push it, and all of us, to the next level of this game we play in the Exoverse. Did they succeed? I think they did, at least partially, but the cart (of knowledge) seems to have run into ditches many times, and we have to keep picking it up and setting it back on the path, as there is no other chance for us.

My own approach on life at the moment is basically:

“Live and let live, enjoy life, then share your life experience with as many others as you can who are willing or able to listen. There should be no comparison and no judgment of another’s path in life, as it is just another experience, another path walked by the UBC in the Exoverse.”

Merry Christmas to everyone to whom it means something, and Happy New Year to you all.

(PS: This is the first draft, the “as is” version; the article may be edited later based on the feedback I get on its clarity and English proofreading.)

The Road to Hell…

It is said that “The road to hell is paved only with good intentions.” Does this folkloric knowledge “hold water”, and if so, why and how? Let’s explore the concept with a few diagrams and a bit of ideation around them.

Humans (and not only humans) are born into this reality and roam it until they die. During all this time, all forms of life must solve one big problem: how to maintain their “alive” state. This implies solving various problems, among them how to find or grow food and how to avoid becoming food for other living beings.

Humans are one of the few species that have mastered collaboration in large groups, and this is due to our far greater ability to communicate efficiently at that scale.

Unfortunately for us, this reality is much more complex than we can handle, now or ever, even if its “guts” work by relatively simple rules (quantum mechanics might look weird, but it is made of relatively simple rules). This is simply because of the immensity of states and configurations those simple rules can combine into in order to create diversity.

The best we can do is keep increasing our ability to know more accurately what reality is, by using the scientific method, in order to reduce the risk of confusion and mistakes.

In this context, some of the errors and mistakes we make are embedded in the very processes we use to identify our problems and find solutions to them.

In most cases (unless ignorance, fear, and hate are the predominant drivers), we identify problems and then start with a large amount of compassion, some knowledge about why and how, and some hope of being able to help more than ourselves, of helping others (the business component).

The diagram below tries to depict the relative importance of those three aspects of our fight to solve an issue.

Please understand the difference between absolute quantity and relative quantity. This is not a “zero-sum game” depiction; it is simply the relative (to each other) influence of each of the three factors considered here. For example, you may say “well, my Compassion or Knowledge did not diminish” (as the diagram may suggest), and that is true, yet what matters is the relative comparison of the magnitude of all three factors. That is important because our minds (and processes) tend to be impacted by relative importance and not only by absolute magnitude.

As time passes we may get our idea off the ground: we start to gain more understanding, and other people start to “buy into it” by investing resources (time, money or hope). It is only natural that we now put (relatively) more focus on acquiring knowledge and on trying to “sell” it to more people. However, this simple, “normal” action has the consequence of pushing the Compassion component down in the relative balance between itself and the other two components. An important observation is that in absolute terms compassion may remain at the same value, but the unfortunate reality is that in the relative space it becomes less important.

This phenomenon is depicted in the diagram below.

Once we have a solution, the business world takes over and the main driver of action is now to “sell” the idea or the product to other people. Gaining profit takes center stage, and since we still need a grip on the “how it works” (the Science component), it is almost inevitable that the Compassion component will decline further in relative terms and slide into the configuration depicted below.

Unfortunately, this configuration is the one with the highest probability of creating monsters that end up destroying (almost) all we initially intended. This is when mistakes are hidden, mishaps are covered up, and BS flourishes.

Twenty-six years ago, in 1995, Orson Scott Card wrote a short piece called “How Software Companies Die”, in which he follows how this process plays out in software development companies and groups. The article ends with the prophetic phrase “Got to get some better packaging”, the main indication of the BS overdrive of a product. You should read it; it is only two pages long and is as relevant today as it was 26 years ago.

OK, fine, you may say, now what? What is the solution to this problem? Ideally, a solution that does not end in a bigger disaster by embedding in itself the very process we just described.

For most, it would be clear that the solution is to forever keep an eye on compassion and humility in the business process. But this is much easier said than done. If you put yourself in the shoes of a business owner or a manager who needs to make it possible for his employees and himself to take a salary home, then you can see how this can be more than nerve-wracking; it can be almost impossible to overcome.

Given our current state of business, with its “dog eats dog” competitive environment, it is extraordinary that society at large still keeps its sanity. To me, this is one more piece of evidence that human beings are good in their “normal” state, but environmental constraints can erode that “goodness” to sometimes horrific levels.

And that, my friends, is “the road to hell”. As you can see, it starts with good intentions (at least most of the time), but without a lot of focus on the relative importance of the Compassion, Science and Business components we can all “go to hell” sooner or later.

Let’s aim for the distribution (or something close to it) of the relative importance of those components shown below, as they are part of all we do.

One important tool to help us with that is the notion of being humble, of humility. Too many of us seem to equate humble with weak, when in fact humility is one of our greatest strengths. If you are surprised to hear that, please read my previous article on humility.

Thank you very much for reading the article!

Some related articles:

On factual information
Humble and Humility
On how to recognize and fight BS
How Software Companies die (Orson Scott Card)

Constructive and Destructive Competition vs Civilized and Civilization

Just a thought: the best measure of being “civilized” for a group of people (a country, but not limited to one) may just be its preference for engaging in and nurturing #Constructive_Competition instead of #Destructive_Competition, not the amount of power it holds (technology, money, military, etc.)

More on this will come on this blog in a more detailed article about one of the most fundamental features of life: Competition.

2020 Articles and Drafts and what may be coming in 2021 in this blog

A short retrospective about the published articles and ideas during 2020 and some predictions about what you should expect to see here in 2021

As I’ve outlined in the Read Me First page, this blog is not a daily blog but rather an idea-sharing blog. This means that (in general) I’m trying to produce a minimal number of articles and to focus only on what I believe (it is my blog) really matters. This way I’m trying to reduce the overall information pollution out there on the internet. Feedback that helps improve the quality of the site will always be welcome!

From this perspective, the ideas I explored during 2020 were around “Real Facts“, the “COVID-19 Pandemic“, “Humility as a tool“, and “Police brutality“. In addition, I published a hypothesis about the overall picture of the place we live in and postulated that there is more “out there” than just the “Real” and the “Imaginary”; I’ve called this space the “Exoverse”. There is a lot of my activity on the Exoverse on Twitter; please see the diagram below.

Also linked to the above diagram, I’ve settled on a definition of the relationship between the notions of “self” and “Universal Consciousness” with the following image

The Self and the Whole

There was a lot of activity on Twitter; you can check my Twitter profile to learn more about it.

About the drafts I worked on during 2020 but did not publish: I drafted ~14 articles during 2020 that remain unpublished, and I hope some of them will get published during 2021 (in addition to potential new ones that might come my way during 2021). Below are the titles of the drafts with short descriptions. Please leave me feedback (via Twitter will be just fine) if you would like to see some of them published; that will help me work on priorities.

  1. Constructive and Destructive Competition: Making sense of the notion of competition and providing a solid way to define and separate the notion of Constructive versus Destructive Competition and the impact those two forms of Competition have on our society at large
  2. Will: Is there free will? A look into this issue from the perspective of the Exoverse and “The drop in the Ocean” concept
  3. Faith: What is Faith, and what are the advantages and disadvantages of having/using it (expect information, computers, Exoverse links)
  4. Magic: Defining magic in the context of today and why we can’t rule it out yet are unable to have a “normal” relationship with the concept
  5. Morality: Linking the notion of Moral/Morality into the greater picture of the Exoverse; expect the same gist as in the published article “Humble”
  6. Boxed: Exploring the notion of “thinking out of the box”, what it really is, and how we can use it efficiently to improve our lives
  7. Self Reliant: Describing practical ways to achieve various levels of #SelfReliance following the ideas in “The Nautilus Project”
  8. Dogs: Presenting my experience in raising my dog Mocha, my four-legged friend. Feeding, caring, playing, working together
  9. Simple Extremes: Defining the notion of “Simple Extremes” and how it negatively affects our individual lives as well as our social ones
  10. Robocop?: A look into how we can safely combine technology, information and humans in order to allow for a lawful society without the issues we’ve known and have also seen lately in our societies when it comes to Law Enforcement with humans (Policemen)
  11. Scientific mind: What is the difference between Scientific Minds and the rest of the minds? Can/Should we all use this approach?
  12. RGF: The “Rube Goldberg Factor”. Taking a look at these fascinating “solutions” where “taking the long cut” is the rule. Helps define over-complexity in human solutions in a more measurable way
  13. About AI: It’s suddenly everywhere. Should we fear it? Will it take our jobs? Will it bring us a “Star Trek” scenario or a “Hunger Games” one? When should we use it and when not? Safety?
  14. Correlation Causation and Intelligence: What are Correlation and Causation? What is Intelligence? What issues may we have if we can’t tell them apart?
  15. Terraformer: We are all “Terraformers”, as all life before us was, and we are alive because of that. Taking a look at this concept, which explains that everyone is important for life

Crypto-Timestamped pdf-as-jar of the article (make it factual)

Five steps to tyranny

A BBC documentary, Prof. Philip Zimbardo and Sheena McDonald

  1. “us” and “them” (the mirage of self)
  2. obey orders (mindlessly ~ human automatons)
  3. do “them” harm (instill fear)
  4. ‘stand up’ or ‘stand by’ (fear control)
  5. exterminate the opposition (destroy diversity)

What is most interesting is that steps 1 and 2 seem to be a consequence of confusing factual (real) and fictional (imaginary) information. One can only divide into “us” and “them” and then blindly obey if they first confuse the real and the imaginary.

See more about this subject in my articles:
Real Fact
Fact Fiction and the Truth
Fact Fiction and BS

Moreover, as we start to better understand the place we are part of, what we call reality, the universe, life, consciousness, we should realize that we are but temporary parts of a much larger consciousness field spanning the Exoverse.

What is the Exoverse
Each of us is a small drop existing for a short time before returning back to the whole

The last three steps are directly linked to our abilities as individuals to be Self-Reliant.

On Self Reliance see “The Nautilus Project”

Please watch this documentary at least few times a year!
This will be time well spent!

5000+ subscribers threshold crossed

Sometime this morning, Oct 3, 2020, the number of subscribers on this blog crossed the threshold of 5000.

I thank all of my subscribers for your interest in my blog, and as I’ve described in the “Please read first” page, I promise to do all I can to provide you with the truth as I know it, and I hope it will be useful for you in improving your own lives.

https://romeolupascu.net is my personal blog part of the main domain I own https://timenet-systems.com domain hosted on https://hostmonster.com

You can check the evolution of this site, and probably check the articles beyond its lifetime (at some point I will not be able to pay for the hosting), by using the Internet Archive here: https://web.archive.org/web/*/romeolupascu.net

I wish you all “Live long and prosper”, and until this site kicks the bucket I hope to provide you with more useful information. This blog is not a daily blog; I’m trying to avoid creating more information overload, so I will only post when there is something I believe is important to say.

Humble

I’m advancing the idea that humility is an essential state in which the human mind can exist, allowing it to detect and correct errors in the process of perceiving and constructing reality.

Learning and exercising being humble is an essential activity that a mind should be engaged in continuously and ardently. Humility’s foundations are in the mind’s ability to distinguish real from imaginary, also known as telling factual from fictional information.

I believe that the words ‘humble’ and ‘humility’ are some of the most misunderstood words and concepts. In this article, I’m trying to analyze and show the strength hidden underneath the surface for humble minds.

I think that the confusion comes from current definitions focusing on describing how a humble person behaves or looks instead of how they think or, even more important, how their pipeline of sensing and making sense of reality works (check the images below or simply search the internet).

The humble view of the world is usually an integrative and balanced one where each individual is respected, protected, and cherished at the same level as any group of minds, all forming a strongly knitted social fabric.

In this context the opposite of humble behavior is hubris or narcissistic behavior.

A narcissistic mind believes that it is (statistically speaking) correct all the time. As a consequence, it needs no error correction process since in its view there are no errors to correct.

A narcissistic view of reality is usually one detached from reality that only intersects with the actual reality from time to time in the same way a broken (analog) clock is right twice a day.

For a narcissistic mind, facts are just annoying events that it usually blames on the actions of other minds that are “out to get me” or out to destroy its “well-groomed” view of the world.

In a narcissistic mind the “I” is imperative, as this “I” and only it has all the answers to all questions (or most of them anyway), and everyone else, other “I”s and groups, must obey its power and awe.

In a nutshell, extreme narcissistic personalities are malignant states in which minds can exist, and if given power over others they can and will most likely destroy the social fabric of any group unless they are kept under control.

The mistake many of us make when trying to understand humility is to confuse humble with weak. If someone answers a question by starting with “I’m not totally sure, but here is what I know…” instead of “This is how it is” or “This is how it works”, the usual “know it all” answer, we consider them weak, unsure of themselves and sometimes simply stupid.

Once someone utters “This is how it is”, suggesting that they are in possession of the full information set describing how reality works with zero mistakes, they put themselves in a biased state that makes it hard if not impossible to come back later with new insights correcting previous insights into how things work around us.

If a powerful person, a leader, is not humble enough, their biased views will transmit to and further bias the minds that took the information provided by the leader “as is”, with no verification of their own. This process stands at the core of building all non-democratic societies and can lead to outright tyrannies that cause unimaginable suffering for all life.

In a few words: A chronic lack of humility can lead to “Hell on Earth”.

In any human society, the natural process of generating minds will tend to generate a diverse set. Nature will generate both natively humble-inclined and narcissistic-inclined minds, and, in the middle, minds that can slide towards one end or the other of the humble-narcissistic axis.

This process is somewhat equivalent to how multicellular living entities (humans, animals, etc.) work: cells are continuously created from the information stored in the DNA and RNA strands, and each new cell is slightly different from the ones before it.

Sometimes the mutations are large and unruly cells are born. Those cells, if left unchecked, are in certain cases at the base of the structures we call cancerous, and if they manage to combine into larger groups they will lead to the destruction of the host organism and of themselves.

The life span of any cellular group depends directly on its ability to detect those unruly cells and deny them the ability to destroy the larger group. The immune system is such a subsystem: in healthy organisms it detects and fixes or eliminates unruly individuals.

And yet the same immune system can become “unruly” itself when it is unable to tell whether a mutation can lead to a healthy evolution or to cancer, and starts to overreact and destroy the very organism it seeks to protect.

Autoimmune diseases are now better understood, and it may just turn out that the deaths from Corona Virus 2019 (#COVID19) are another example of the immune system going astray. I assume that an untrained immune system is probably more likely to make mistakes than a trained one.

What I’m trying to outline here is that there is no “silver bullet”, and the key to our survival as individuals and as a group depends on our ability to find and correct errors in our process of navigating the Exoverse. In the diagram below, the humble minds are more “anchored” in reality (Universe), with a healthy process of exploring the imaginary (Extraverse), whereas narcissistic ones are located more in imagination (the imaginary space) than in reality.

In conclusion, I believe that this is more about nurture than nature, and nurturing has to happen early in the process of mind formation. Unfortunately, we seem to be biased there too, as we willingly introduce errors into young minds by presenting distorted versions of reality. We have a well-known expression for that, “lying to children”, and it usually happens when older minds are not capable of finding ways to train younger minds in how to deal with the real-imaginary process, or, simply put, to explain how the Exoverse works.

This is not pure criticism of parenthood or of the social education systems. Being a parent is a difficult task (I’ve been one myself) when one’s time is burdened by physical needs first. A parent must put food on the table, a roof over the head, and clothes on their children first. For some, even those can’t be achieved properly, and the parents must spend most of their time in endless (sometimes meaningless) low-paid jobs in order to meet a minimal living standard for their children and themselves.

However, the Exoverse is an unforgiving place, full of wonders but also full of dangers. It does not care if you live or die, and the only way to safely navigate it is to master the art of telling fact from fiction, real from imaginary, or, simply put, to be Humble.

On that note please read my articles about Real-Fact , Fact Fiction and the Truth and Fact Fiction and BS, I hope I can help with this difficult but also beautiful and extraordinary process.

Seeking humility, I thank you for reading my article.

https://www.merriam-webster.com/dictionary/humble

https://dictionary.cambridge.org/dictionary/english/humble

Article verification archive: Humble.zip

Real Fact

I hope this article will give you the power to take the first step out of the continuous confusion we all live in, on the internet and in our lives, by showing you what factual information is and how we can get to it with the help of computers.

RealFact

Reading time: essentials ~10 minutes; ~30 minutes to 1 hour for collateral reading and understanding

Articles related to this article:
Fact Fiction and The Truth
Fact Fiction and BS

You’ll find a precise definition of the notion of Fact and some of the technologies we can use to produce facts. I’m using the term “Real Fact” to distinguish between the current notion of Fact as you can find it in a dictionary and the notion I’m defining in this article. Though they are mostly the same as our general idea of Fact, they differ fundamentally in how they are defined and created. In this article, each time (other than in this text block), ‘Fact’ means ‘Real Fact’.

WARNING! The technology and applications necessary to make this available to everyone are not yet built. The components (like Lego pieces) exist and are already in use, but they are not yet put together in the right way. This article aims to show you what can be done so that you’ll know what to look for and what to ask the industry to build for you. Yes, if there is profit to be made, the industry will build it. You simply need to show you are willing to pay for it.

In the context of this article, and hopefully in general if most of us agree, the definition of factual information, or simply ‘Fact‘, is as follows:

A fact is any packet of information for which a receiver, human or machine, can query and verify the following additional information components, also called meta-information (or metadata).
The information packet and its associated meta-information represent a factual packet of information, and they must be used together at all times.

The factual meta-information

  1. The complete description of the method (process, algorithms) used to produce the substantive information packet by measuring it directly from the real world
  2. The proof that the information and metadata were not changed or tampered with
    That is, a piece of information used as verifiable proof of the measured information’s integrity against any type of tampering or change, in both its temporal structure (packet chain integrity) and its a-temporal structure, by any individual or machine at any moment in the future
  3. The spatio-directional-temporal coordinates of the sensor device producing the information packets
  4. The digital identity of the sensor that produced the information
    (not the owner, only the device)
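
To make the definition concrete, here is a minimal Python sketch of a factual packet carrying the four metadata components. The field names are my own invention, and a bare SHA-256 digest stands in for what a real system would do with an HSM-backed signature; it only illustrates how the metadata and the packet-chain integrity fit together.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class FactualPacket:
    payload: str      # the measured information itself
    method: str       # component 1: how the measurement was produced
    coords: str       # component 3: spatio-directional-temporal coordinates
    sensor_id: str    # component 4: digital identity of the sensor device
    prev_digest: str  # component 2: link to the previous packet (chain integrity)
    digest: str = ""  # component 2: tamper-evidence for this packet

def seal(packet: FactualPacket) -> FactualPacket:
    """Compute the tamper-evidence digest over every other field."""
    body = asdict(packet)
    body.pop("digest")
    packet.digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    return packet

def verify(packet: FactualPacket, prev_digest: str) -> bool:
    """Check both the packet's own integrity and its place in the chain."""
    if packet.prev_digest != prev_digest:
        return False
    body = asdict(packet)
    body.pop("digest")
    expected = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    return expected == packet.digest
```

Any change to the payload or to any metadata field, at any time in the future, makes `verify` fail, which is exactly the property component 2 asks for.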

The above definition enables the creation of very precise (mathematical-level) models, algorithms, and devices able to produce factual information validated and trusted by both individuals and groups of individuals. The main condition (both important and challenging) is to ensure that the individuals receiving the factual information understand how it was produced and protected, in order to establish its level of trust. This requires training, and it is the future notion of “literacy”.

Level of Trust for information

Level | Measurement Method | Measurement Integrity | Data Integrity
0 | unknown | unverifiable | unverifiable
1 | known | unverifiable | unverifiable
2 | unknown | verifiable | unverifiable
3 | known | verifiable | unverifiable
4 | unknown | unverifiable | verifiable
5 | known | unverifiable | verifiable
6 | unknown | verifiable | verifiable
7 | known | verifiable | verifiable
Trust levels for information; only level 7 is considered factual
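
The eight levels in the table follow a simple binary pattern, which can be captured in a few lines of Python (the bit assignment is my own reading of the table, not an established encoding):

```python
def trust_level(method_known: bool,
                measurement_verifiable: bool,
                data_verifiable: bool) -> int:
    """Encode the three yes/no properties of the table as a level 0-7.

    Reading the table as a 3-bit number: bit 0 = measurement method
    known, bit 1 = measurement integrity verifiable, bit 2 = data
    integrity verifiable. Only level 7 (all three) counts as factual.
    """
    return (int(method_known)
            | int(measurement_verifiable) << 1
            | int(data_verifiable) << 2)
```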

Social Penetration Level

  1. Individual
When the fact-metadata is accessible and can be verified by a single individual, usually the owner of the sensorial system (example: the pictures and videos on your own phone)
  2. Group
When the fact-metadata is accessible and can be verified by all members of a group (humans or machines). This also includes the group members’ participation in creating data-integrity metadata (example: sharing pictures and data from your phone in a Facebook group)
  3. Global
When the fact-metadata is accessible and can be verified by anyone (public information). The anonymous public swarm will also provide redundant data-integrity metadata (example: tweeting your pictures or videos from your phone to the public)

A proposed symbolic representation of information trust and penetration levels

Based on the trust level and the social penetration level, we can classify information and use a short notation such as ‘I’ (information) followed by the trust level as one digit, then the social penetration level as one digit.

For example, I01 is basically, with some exceptions, all the information one individual possesses today. An I7x would be any factual information packet, and in this case we can simply use ‘F’ plus its social penetration level, so F1 is any factual information an individual has. Thus the F3 information piece can simply be called a ‘Fact‘.
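
The notation rule can be pinned down with a small helper (a hypothetical function of my own, just to make the rule unambiguous):

```python
def info_notation(trust_level: int, penetration_level: int) -> str:
    """Short label for an information packet.

    'I' + trust digit + penetration digit in the general case;
    fully factual information (trust level 7) collapses to
    'F' + penetration digit, per the convention above.
    """
    if trust_level == 7:
        return f"F{penetration_level}"
    return f"I{trust_level}{penetration_level}"
```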

Some of the existing technologies that, when combined, can be used to produce factual information (though some are optional or interchangeable):

  1. HSM – or Hardware Security Module
  2. Enhanced security (by HSM) Digital Sensors
  3. Cryptography (symmetric and public-key cryptography)
  4. Trusted Digital Timestamp
  5. Block-Chain
  6. Classic digital machines (computers, smartphones, dedicated systems)
  7. Digital Crowd Anonymous Witnesses (TBD)

Questions you will need to ask and get answers to in order to verify whether a piece of information is factual or not:

  1. Do I know how this information was produced (measurement method)?
  2. Can I verify if the measurement method was accurately followed?
  3. What is the error margin of the measurement process (calibration)?
  4. How do I verify if the information produced by the sensor is what I received?
    (was it changed?)

Let’s get practical

A combination of the technologies listed above can be used to produce both trusted sensor systems and applications/libraries that a receiver can use to classify the information trust level. Basically, to obtain an Fx (the string representation of the information trust level).

Example 1, social media: a smartphone able to produce factual information (video, for example) that you can upload to YouTube, and anyone else can verify its trust level. More, if the image or video was edited or filtered, you should be able to ask the computer to show you which parts (pixels, etc.) of the picture or video are factual and which are transformed, and to obtain the original raw sensed information.

Example 2, news: you read an article or watch a video on the net or on TV; if this technology is available, you should be able to ask the computer to tell you what is factual and what is not.

Factual implementation difficulty levels depend on the social penetration level

F1 – or “Personal Facts” is the entry level and most accessible

The F1 fact level is information for which you are in full control of how you sense it (measure/capture it) and ensure its integrity. You may ask yourself: why should you protect your own data? From whom? Well, other people may have direct access to your data (you trust them) and change it by mistake or with malicious intent, or your machines can break, or bugs in your code can act on and change your data. Intruders can also change pictures, videos and other files you own. How can you be sure this did not happen to data you have not accessed for months or years?

The difference between F1 and the other levels is only in how large the group that needs shared trust in the data is, and for F1 it is only you. Obviously, once you try to share your info with others, you’ll need an F2 or F3 fact level so the others can also trust it.

The good part of F1 is that if you know enough about computers and programming, you can create your F1-level information almost immediately. However, without an HSM to protect the sensing process, you’ll never be able to elevate that information to an F2 or F3 level.
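
As a taste of that do-it-yourself F1 level, here is a sketch using only the Python standard library: a personal, keyed, chained log of file digests. The field names and the in-memory key are simplifications of my own; as noted above, without an HSM protecting the sensing process and the key, this never rises above F1.

```python
import hashlib
import hmac
import json
import time

SECRET = b"my-personal-key"  # in practice: a key only you hold, stored safely

def record_f1(log: list, path: str, content: bytes) -> dict:
    """Append a personal (F1) integrity record for a file you own."""
    entry = {
        "path": path,
        "sha256": hashlib.sha256(content).hexdigest(),
        "time": time.time(),
        "prev": log[-1]["mac"] if log else "",  # chain to the previous record
    }
    # The MAC ties the record to your key and to its place in the log
    entry["mac"] = hmac.new(
        SECRET, json.dumps(entry, sort_keys=True).encode(), "sha256"
    ).hexdigest()
    log.append(entry)
    return entry

def check_f1(entry: dict, content: bytes) -> bool:
    """Months later: has the file (or the record itself) been altered?"""
    body = {k: v for k, v in entry.items() if k != "mac"}
    mac_ok = hmac.compare_digest(
        entry["mac"],
        hmac.new(SECRET, json.dumps(body, sort_keys=True).encode(),
                 "sha256").hexdigest(),
    )
    return mac_ok and entry["sha256"] == hashlib.sha256(content).hexdigest()
```

Because each record chains to the previous one via its MAC, deleting or reordering records is as detectable as editing a file, which is the "packet chain integrity" idea from the metadata definition applied to your own data.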

F2 – or closed group factual information — Group relative facts

The F2 level will be used mostly by businesses that are big enough to afford to create their own sensing platforms with HSM-protected sensors and data integrity ensured by rules accepted by that group. The issue with F2 is that without full transparency and verification of how the sensors are built and data integrity is ensured, F2 can’t be upgraded to F3 (fully factual).

F3 – or ‘full factual’/’public fact’ information

F3 is the most challenging type of factual information, though in time, with global collaboration, it will be possible to create. To create F3 we need fully open-source sensor design and coding, and an open, fully automated (fully hands-off) build process that can be verified by the public at large (everyone on Earth). Additionally, anonymous crowd-based and redundant processes must be used for data validation. Machines participating in the “witnessing” process must also be built in the same open, transparent way as the sensors.

You can probably call this the fishbowl strategy. We can only get out of our current confusion by helping each other.

The special case of news and established media and arts

The written word has always carried more “weight” in trust than the spoken word. Before Gutenberg built and used his first presses to lower the cost of producing copies of information on paper, writing a book was a very expensive, highly custom and artistic endeavor. Since the support of the information (the book) was so expensive, strong due diligence was done in verifying the information put into those books.

The price of the books also created an “investment/sunk-cost bias” in both the scribes and the owners of the books, leading to higher levels of trust. One may say that those books could be trusted more than today’s information, which is mostly effortless to produce and disseminate. I would caution you to check that trust. Check the old and expensive books that say the Earth is flat and let me know if you still trust them without a hitch. The problem is that, due to all those biases, the old books were in fact a higher risk for disseminating falsehood, exactly because most people had no intention of checking their content.

Consequences? Well, just look at the “Witch hunts” that hurt and destroyed so many innocent lives in our past. They were all fueled by a few of those expensive books that no one dared to oppose until the higher-ups started to get hurt.

So, if the price of the book is not a guarantee of its “factuality”, what can we expect of the current cheap, click-driven article writing? Well, you can check for yourself at any time out there on the “open wild” net.

By the way, this does not mean all news out there is unreliable or fake; it simply means that you have no real way to verify whether a piece of information is factual or not. It is just darn hard to do, and that means that for regular people it is impossible to tell fact from non-fact.

The proposal in this article can rebuild that trust and raise it to levels never seen before, once you are able to verify every word, sentence, image, or video in the same effortless way you can produce your own factual information.

This can truly change, from the ground up, the news business, currently battered by confusion among readers. In this business model, news companies will not produce the news themselves but will simply work as hubs for the aggregation, analytics, and interpretation of factual event streams produced by sensors owned by all people on Earth. At that point, it may even be a “conflict of interest” for news businesses to produce their own input data, a huge difference from how they currently work.

On the other hand, arts and fiction storytelling can thrive like never before, because the readers will be able to verify what is art and fiction in any end product. The artist or writer can be free of BS dissemination once the recipients of their work are able to tell what is fact and what is imagination.

Science

In the human quest for a better life, knowledge about how this reality really works is one of the main pillars keeping us from slipping into the dark abyss of nothingness. Science is the process of finding the elusive causal relationships we can trust among the pile of correlation-driven events reality throws at us.

It is a tricky and difficult process that uses imagination (fiction) to try to find the causality, then pin it down with models backed by measurements. The measurements done in scientific experiments differ from non-scientific ones in the way scientists keep a clear description of the measurement methods, and through peer review other scientists can validate the integrity of the measurement method and the data. In a few words, scientists aim to produce factual information.

Scientists can benefit from being able to produce factual information with ease, as they will not need to fight to prove that their experimental information is factual. Peer review and experiment reproduction still need to be done; however, since the initial measurement method is clearly defined, it becomes easier for more people to review or retry various experiments.

Science experiments can cost billions of dollars if we are talking about CERN-like setups or no cost at all if the test is to check the theory that a slice of bread falls more frequently on the buttered side. They all can benefit from this method and technology of factual sensorial devices.

DNA sequencing, healthcare, and pandemics

Just imagine how much better our entire civilization would have responded to the #COVID19 pandemic, or any other pandemic, if each of us had been able to record facts about ourselves and securely and anonymously share this information with everyone else. The pandemic might have been quashed in weeks if not days, many lives would have been saved, and far fewer businesses would have been impacted.

Since we are talking about digital sensors and pandemics, please take a look at the Nanoporetech technology, as it holds the key to a completely new way of dealing with microscopic life such as viruses and bacteria. Their sensors are not yet backed by HSMs and do not produce factual DNA data streams (as described in this article), but they could in the future.
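To make the idea of a “factual data stream” a bit more concrete, here is a minimal, purely illustrative Python sketch of what a sensor could do: attach a tamper-evident signature to each reading so anyone can later verify it was not altered. This is a toy, not anyone’s real product: the device name and key are hypothetical, and HMAC (a symmetric construction) stands in for what a real HSM would do with asymmetric signatures, where the verification key can be published while the signing key never leaves the hardware.

```python
import hashlib
import hmac
import json
import time

# Hypothetical device key; in a real design it would be sealed inside an HSM
# and never exposed to software at all.
DEVICE_KEY = b"secret-key-sealed-inside-hsm"

def sign_reading(sensor_id: str, value: float, ts: float) -> dict:
    """Package a sensor reading together with a tamper-evident signature."""
    payload = json.dumps(
        {"sensor": sensor_id, "value": value, "ts": ts}, sort_keys=True
    )
    tag = hmac.new(DEVICE_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": tag}

def verify_reading(record: dict) -> bool:
    """A holder of the verification key can check the record was not altered."""
    expected = hmac.new(
        DEVICE_KEY, record["payload"].encode(), hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(expected, record["sig"])

record = sign_reading("nanopore-01", 0.42, time.time())
assert verify_reading(record)

# Any change to the payload invalidates the signature.
tampered = dict(record, payload=record["payload"].replace("0.42", "0.99"))
assert not verify_reading(tampered)
```

The point of the sketch is the property, not the mechanism: whatever cryptography is used, a reading becomes “factual” in this article’s sense only when the measurement’s origin and integrity can be independently verified.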

When that becomes possible and the cost of a scan drops below the price of a lunch, we will be able to keep an eye on the micro-world on a daily basis without lifting a finger. The notion of healthcare will be changed forever. The difference between what we have now and that potential future is as large as, or larger than, the gap between current healthcare and what Galileo had in his time.

The justice system, the law, and law enforcement

If the scientific method is fully dependent on factual information, the justice system should see factual information as a must-have if innocents are to be spared wrongful charges and convictions.

Since smartphones are in almost everyone’s pocket, we have witnessed many episodes in which the people we pay to keep us safe completely forgot their mandate and broke their oaths to society by unnecessarily hurting, or even killing, the very people they swore to protect.

In this domain, factual information is as important as oxygen is for life. The whole point of any trial is to reveal the facts first and then, based on them and only on them, make decisions aimed at fixing what was broken. Yet we now know that when facts are scarce or nonexistent, the justice system fails to get it right in too many cases.

When justice systems (even democratic ones) make mistakes and punish the innocent, there is a double whammy: we hurt people who are innocent, and we prove to criminals that they can get away with it and keep doing what they did before.

I hope you can see how factual information, produced as described in this article, can help improve the inner workings of any justice system and protect the innocent.

Measurements and Information
(WARNING! this is just hypothetical)

When talking about factual information, we also need to understand information that is not factual. One example is information that cannot be precisely measured, which we call imagination. What is imagination? How does it relate to factual information?

Though the following hypothesis is just that, a hypothesis, it can be used to delineate factual (real) information from the extra-factual domain of imagination.

When we started to dig down into the domain of the microcosm, we found something suggesting that there are things we call real, which we can “feel” and measure (feeling is a form of measuring), and something else that exists before the measurement process and cannot be called “real”. We model these behaviors within a mathematical framework called Quantum Mechanics (QM).

For me personally, the space of states outside of “the real” is part of an entity “larger” than the real (or realized) space of states (our universe), from which real states are created via a process we call “measurement”. I have labeled this Extra Outer Universe the “Exoverse”.

Without going into more detail in this article (more later), I hypothesize the existence of a field, in addition to the fields already postulated by quantum field theory, that could be called “consciousness”. It is, in my opinion, the one responsible for “exploring” the Exoverse through the same process we tap into in our quantum computers: superposition. The process of creating real states from potential states is the phenomenon we perceive as time.

Superposition is used (by consciousness) to explore a chunk of the Exoverse, testing various outcomes across many possible “futures” and creating real states once this process is done. This process also generates what we call time. I label this explored domain inside the Exoverse the “Extraverse”, and I believe it is an integral part of the process we perceive as “imagination”.

In this context, the Universal states are all connected in a DAG (Directed Acyclic Graph), and the Extraverse is made of a very large (but finite) number of loops (the internal behavior of superposition).
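For readers unfamiliar with the term, a DAG is an ordinary data structure, and the “no cycles” property the hypothesis relies on can be checked mechanically. Here is a purely illustrative Python sketch (the state names are invented for the example) that represents realized states as a directed graph, where an edge A → B means state B was created (measured) from state A, and uses Kahn’s algorithm to test acyclicity:

```python
from collections import deque

# Purely illustrative: "realized" states as nodes; an edge A -> B means
# state B was created (measured) from state A. State names are hypothetical.
states = {
    "origin": ["state_1", "state_2"],
    "state_1": ["state_3"],
    "state_2": ["state_3"],
    "state_3": [],
}

def is_dag(graph: dict) -> bool:
    """Kahn's algorithm: acyclic iff every node fits in a topological order."""
    indegree = {node: 0 for node in graph}
    for targets in graph.values():
        for t in targets:
            indegree[t] += 1
    queue = deque(node for node, d in indegree.items() if d == 0)
    visited = 0
    while queue:
        node = queue.popleft()
        visited += 1
        for t in graph[node]:
            indegree[t] -= 1
            if indegree[t] == 0:
                queue.append(t)
    return visited == len(graph)

assert is_dag(states)                # a realized history has no cycles
states["state_3"].append("origin")   # adding a loop back to the start...
assert not is_dag(states)            # ...breaks the DAG property
```

In the hypothesis above, loops are confined to the Extraverse (the superposition exploration), which is exactly why the realized history can remain a DAG.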

Hypothesis on how our reality is created at universal level

Based on the diagram above and this hypothetical structure of the Exoverse, we can clearly separate the Real (measurable, factual) domain from the Imaginary (fictional, non-measurable) one.

Obviously, people can communicate, through speech, writing, and art, information present in their minds describing states that do not exist in our reality, and the act of communication itself can be considered factual, as it can be measured.

This can lead to confusion, as one can “wrap” an imaginary piece of information in a factual “shell” and present it as a fact. That is why the method by which a measurement was made needs to be known and verifiable.

Just a reminder that the notion of BS in this context means “a mix of factual and imaginary information presented as fact” (see the Fact Fiction and BS article). I also find the book “Calling Bullshit: The Art of Skepticism in a Data-Driven World” an interesting work focused on the problem of BS.

Original article validation: here
Current article validation: here