Friday, 21 July 2017

Meeseeks existence is pain

Kierkegaard, probably the most angsty person in the world
If you follow this blog, you will have noticed the many mentions of Black Mirror and Rick and Morty. It so happens that season 3 of Rick and Morty begins on 31st July, so I have been going over all the previous episodes and I reached the Mr. Meeseeks one.

In this episode, Rick gives Jerry (Morty's father) a Mr. Meeseeks box: press the big button at the top and it will create a Mr. Meeseeks out of thin air.

The process is simple: you give Mr. Meeseeks a simple request and he will help you make it happen.

Meeseeks are very cheerful and eager to help you and their only goal in their ephemeral existence is to satisfy your request. Once your request has been completed, the meeseeks will vanish and cease to exist, the same way they came into existence.

I'm Mr. Meeseeks, look at me!
In the episode, Beth (the mother) requests to be a more complete woman and Summer (the sister) requests to be popular at school. These requests get fulfilled successfully, but Jerry (the father) requests his Mr. Meeseeks to take two strokes off his golf game and this proves impossible.

The consequences of not being able to fulfil the request drive the meeseeks into a world of suffering and pain.

The fact that they cannot cease to exist and must keep at their task for eternity, probably knowing that they will never achieve it and being fully conscious of this eternal failure, is too much for a Meeseeks.

It is worth noting that Mr. Meeseeks is precisely the opposite of humans, who have no pre-defined purpose in life and need to find their own purpose (or not) by themselves, often looking to religion or other sources for inspiration.

But in a way, the freedom (sometimes the burden) of choosing, and even changing, your own purpose in life is what can spare you (or not) the angst that the Meeseeks experiences.

However, choosing an easily achievable objective could lead to precisely the same angst: imagine setting yourself the purpose of becoming President of the United States, like in House of Cards, and when you finally reach it, after all those sacrifices, saying to yourself: "now what?".

I talked about this in Turning Skynet to Paradise, where the super intelligent AI (Artificial Intelligence) would suddenly stop, as it would have reached its ultimate purpose in life and finding a new ultimate purpose in life would mean acknowledging that the original one was incorrect. Even thinking of it, the AI would go into an infinite loop due to its inability to find the ultimate goal.

Now put yourself in the shoes of Mr. Meeseeks. Imagine knowing with absolute certainty what is your ultimate goal, your purpose in life. Imagine achieving it, experiencing the ultimate ecstasy, the feeling of absolute achievement and completeness. It is only logical to cease your existence at this very point. If you lingered in existence, the feeling would soon wear off and you would be left with complete emptiness, leading to nihilism and probably suicide. Wouldn't it make sense to cease to exist while in ecstasy, at the top of your life instead of at the very bottom of it?

And a final thought: imagine that you were a meeseeks but that you could choose your own purpose in life. Imagine that once you have set your mind on it, you would then work towards it and once achieved, you would cease to exist at the climax.

How long would it take you to find your purpose? Would you ever set your mind on a goal, or would you procrastinate for eternity? What would happen if you set yourself an unachievable goal, as the Existentialists recommend (though they were counting on life being finite)? Would you endure eternal suffering and pain? What would life be like dealing with other Meeseeks going through the same dilemmas as you, but choosing different options?

Sunday, 16 July 2017

The power of the few

I apologise for the delay in publishing (I missed this Friday's deadline). Life got in the way.

Last weekend I flew to Venice with my wife to celebrate our 10th wedding anniversary and immediately after I had to fly to Vienna for business. It has been probably the busiest week of the year for me.

But let's get to it.

When I was at the airport, just before my flight to Venice, I took a fancy to buying the latest issue of The New Yorker, mainly out of curiosity, since Moli is subscribed to it despite the shipping cost being higher than the magazine itself. Moli is a Spanish blogger whose blog I have been following for almost a decade, and she is one of the inspirations that made me resume blogging, after 7 years of writing and a 6-year break that began when my first child was born.

The thing is, I read the entire magazine and it made me think about a number of things.

Overall, I enjoyed it. I had a good time reading some very witty and clever articles about many different topics. I enjoyed peeking into the daily lives of American people and what matters to them.

Then, I noticed a pattern. I had the exact same feeling as when I read "Ficciones" by Jorge Luis Borges. Basically, almost every single article, if not all of them, had a connection to the Jewish world. So many different articles about politics, books, music, short stories, theatre, cinema, everything, had a Jewish hero in it, or some Jewish thinker's quote, or some reference to the Holocaust and the Nazis. They should call the magazine The New Jew instead of The New Yorker. I could not believe how forced the ways of plugging in the Jewish connection were.

I have nothing against Jews; my own surname is of Jewish origin and probably some of my ancestors were Jewish (most likely my mysterious paternal grandfather, whom I never met, judging by the descriptions I get). What I cannot cope with is anyone trying to shove their ideology down my throat, as I explained in "Democracy is dead. What we now need is...".

Now you might be thinking: "aha! I know why the title is like that; we are in for another helping of 'the Jews control the world from the shadows' type of conspiracy theory". But no.

Let me explain what I mean by "the power of the few".

One of the things that surprised me when reading The New Yorker was how local it is. The clue is in the title: it is clearly targeted at New Yorkers, especially the cultural offering.

It surprised me because I was expecting it to touch on topics not just focused on America, but basically all of it was solely focused on the United States.

I had heard many times that the American media concentrates its coverage very locally, sometimes focusing only on your town or just your State. I have always found this very narrow-minded and very limiting.

Jorge Luis Borges
However, I found it very charming in a way. Sort of building a community, your community type of thing.

In this new world, where you have access to everything instantaneously, where you can acquire almost any unique item from the other end of the world without leaving your home at the click of a button, it is easy to be overwhelmed by the amount of choice.

Being able to know everything about anything immediately can lead you to some sort of isolation and loneliness.

Let me give you an example: in the 80s, when I was growing up, rumours and urban legends were commonplace. These were passed orally, it was the word on the street and they were all strongly local, as the rumour was probably started by someone locally. Probably 99% of the time they were lies, but they were our lies.

Now it takes 30 seconds to discover the veracity of the proposition and don't get me started on how much damage smartphones are doing to real conversations in the real physical world among real physical people. We are now just vessels, avatars to our true virtual personas. Our physical lives are just painful maintenance that we must undergo to keep our virtual more elevated selves running.

So I recognised how The New Yorker was giving me back that kind of sensation that I had lost.

And there is more.

I remember going to the public library in my city as a child and we were allowed to take just two books per person. I was lucky, because my sister would also choose two books and I would be able to read all four books over the week and go back the next week for four more.

I had to be very selective, as two books were very few at the speed I could read. It was a special time and I really enjoyed the choosing process.

It was the same when choosing presents for my birthday or Christmas. In my case it was crucial, as my birthday was just a few days from Christmas and I would not get another present for 12 months. I would also receive just one present at a time, some sweets and a book, so I had to choose carefully.
Philosophise Now, soon on Kickstarter?

Nowadays we have access to everything, anytime, and get it almost instantly. I actually think that the great success of Kickstarter and crowdfunding is mainly due to the delay ingrained in the system, from when you pledge until you receive whatever you ordered.

There is a famous video seminar from Google about "Less is more", where one of the examples describes a jam-making company that set up two sampling tables in opposite corners of a famous supermarket. One table had only three or so different flavours of jam for tasting and buying; the other had twenty different flavours, including the same three from the competing table at the other end of the store.

The end result was that the "simple" table, selling just three flavours of jam, sold more than double the amount of jam that the "complicated" table did.

It has been shown that our brain sort of "short-circuits" when presented with too many options; we even react as if under threat.

So just imagine what happens when you have all the possible options in the world, every single day, in your most used device.

What happens is that you "short-circuit" and fall prey to the big lobbies' ideologies, which they shove daily down your throat.

Friday, 7 July 2017

Get your dream job using Philosophy

Enrico Fermi
Today I want to do something a little bit different.

Many people say that Philosophy has no use; that because it creates more questions than it answers, it is not useful and should be ignored (for example, in favour of Science).

I entirely disagree with this statement. I consider Philosophy probably the only way of re-programming yourself in order to become a better version of yourself.

The analogy that I use is that your core beliefs and values are like the Constants in a programming language. One constant value for a specific parameter.

Then we have the Variables, which keep changing value throughout our lives. Sometimes they change a lot, sometimes they don't change at all. These could be your position for or against a certain topic and how strongly you feel about it.

And finally we have the source code of ourselves, which is what makes us who we are; with algorithms to make decisions, solve problems, and govern our behaviour.

The funny thing is that we get constantly manipulated by other people, the media, advertising, etc. We get told what we can or we cannot do, what to feel, how to react, they are basically programming our source code for us. This is precisely what I expressed before in my previous post about Surrogate thinking.

The only way to take control of your life is to use Philosophy to re-program your source code yourself and tune at will the amount of influence that external stimuli exert on you, using critical thinking to assess the validity of the information fed to you.
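Just for fun, the analogy can be sketched in actual code. This is a toy illustration with made-up names (`CORE_VALUES`, `Self`, `influence_weight`), not a claim about how minds really work:

```python
CORE_VALUES = ("honesty", "curiosity")  # constants: fixed parameters

class Self:
    def __init__(self):
        self.opinions = {"topic_x": 0.0}  # variables: they change through life
        self.influence_weight = 0.5       # how far external stimuli move you

    def receive(self, topic, pushed_opinion):
        """An external stimulus tries to shift an opinion; the weight
        decides how far it actually moves."""
        current = self.opinions.get(topic, 0.0)
        self.opinions[topic] = current + self.influence_weight * (pushed_opinion - current)

    def reprogram(self, new_weight):
        """Philosophy: re-tune how permeable you are to outside influence."""
        self.influence_weight = max(0.0, min(1.0, new_weight))

me = Self()
me.receive("topic_x", 1.0)      # the media pushes hard in one direction
print(me.opinions["topic_x"])   # 0.5: half-absorbed
me.reprogram(0.1)               # apply some critical thinking
me.receive("topic_x", 1.0)      # same push, much less sway now
```

Re-programming here is nothing more than changing how much weight you give to what gets pushed at you, which is exactly the tuning described above.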

But we were talking about getting the job of our lives!

The following is based on several job interviews I went through for some big companies after I finished my Masters degree at Imperial College London.

Companies like Google became famous for asking very difficult questions so that they can select top talent. Here is a selection of the questions I was asked. Try to answer them yourself before looking at my answers.

You have a huge ball of cheese and a normal kitchen knife. You are asked to cut it into a perfect cube. How many cuts do you need to make and why that number?

In the first instance I said 6 cuts, because a cube has 6 faces like a die, so if we are cutting a ball of cheese into a cube, we will need to make those 6 faces happen. This was the correct answer.

Then they asked me: Can you reduce the number of cuts needed to make a cube of cheese and what would you need to do so?

Then I answered that if I had a special knife, like a square die-cutting machine, I would only need 2 cuts. The first cut would make a square prism and the second would transform that prism into a perfect cube. They were satisfied with that answer; however, I had more to say.

Then I told them that if we could use lasers, you could align them in such a way that just the press of one button would achieve the perfect cube with just one cut. They were quite impressed with this answer.

However! If we use Philosophy, we could have answered that we did not need any cuts at all. The minimum number of cuts needed to create a cube out of a ball of cheese is clearly zero, because I did not need to make any physical cut in this imaginary cheese to turn it into a cube; I just needed my brain, my imagination, to make it happen. All the previous solutions I proposed needed zero physical cuts to achieve the desired result. So I said that to them, half serious, half in jest, and they laughed a bit, but it got them thinking. This was an advanced software development company, and they understood that if I could solve a problem in my head, without having to type lines and lines of code and use trial and error to see if it worked, then the efficiency would be maximal.

Then we had a number of Fermi problems. In a Fermi problem, you try to make assumptions in order to solve a problem where you don't have all the information that you need. This will be very familiar to people doing business in big corporations. Rare is the time when you have all the data you would need to give a precise quote, so you need to work by making assumptions.

The key thing here is that when assuming, some real values will be above your estimates and some below; if your assumptions are within one order of magnitude of the real values, the errors should more or less cancel out and you end up with a reasonable estimate for almost anything (within an order of magnitude).

They asked me how many petrol stations there were in Britain, and also how many elevators hotels need to install (or how they decide how many).

For both problems the most important thing is the waiting time. How long would you wait for an elevator before you would take the stairs?

If you were running a 5-star hotel, your patience would be very limited, so they usually have more lifts than 4-star hotels and so on. So I had to choose a value for waiting time, how many rooms per floor, how many floors in the hotel, how many people per room. Do I estimate for peak time, usually rush hour in the morning when going to work or out in the city for some tourism, or should I estimate for an average use? How many people would use the lift at the same time per floor? How many people can each elevator take? Which algorithm would the elevator use to minimise waiting times?

As for the petrol stations, it was a similar story: how many pumps each petrol station has, how many cars there are in Britain (plus tourists bringing their cars in; would they cancel out the British cars leaving Britain on holiday?). How far apart are petrol stations? How big is Britain (or, more importantly, what does the road network connecting it look like)? How long would you wait to fill up? How often do you fill up? How many litres is the average petrol tank? How fast do the pumps serve fuel? What are the typical opening hours of petrol stations?
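To make the method concrete, here is one possible back-of-envelope sketch of the petrol station estimate. Every number below is an assumption of mine, picked only to be within an order of magnitude, not real data:

```python
# Fermi estimate: how many petrol stations does Britain need?
cars = 30_000_000            # assumed cars on British roads
fills_per_car_per_week = 1   # assume one fill-up a week
minutes_per_fill = 5
pumps_per_station = 6
open_hours_per_day = 14

# Demand: pump-minutes needed per week across the whole country
demand = cars * fills_per_car_per_week * minutes_per_fill

# Supply per station: pump-minutes available per week
supply_per_station = pumps_per_station * open_hours_per_day * 60 * 7

# Assume stations sit at ~25% utilisation on average (quiet nights, lulls)
utilisation = 0.25

stations = demand / (supply_per_station * utilisation)
print(round(stations))  # roughly 17,000: within an order of magnitude
```

Swap in your own assumptions; as the paragraph above explains, the over- and under-estimates tend to cancel out as long as each is within an order of magnitude of reality.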

You might say that Philosophy has nothing to do with these problems, but asking yourself precisely the right questions could not be more philosophical.

We need to practise reasoning and critical thinking, and solving Fermi problems is a fun way of exercising your brain and getting better at asking and answering questions.

I can tell you that after doing these two Fermi problems on the spot, without any preparation, being told that I got the correct answer on both of them was one of the most satisfying moments of my life.

Friday, 30 June 2017

The Robotic Marxist Revolution

Alan Turing
If you have been reading my posts from the beginning, you will have noticed by now that I do not believe in God, or at least I definitely do not believe in the commercially available versions of God that have flooded the markets for the past millennia.

However, no-one can deny that it is a product that many are very willing to buy, including myself!

Who would not want a superior entity that knows us better than we know ourselves, that we could turn to for advice, who could guide us and make the best possible decisions on our behalf for the benefit of the majority?

Can you imagine the liberation of being freed from the burden of choice and decision making? I mean, we would still be able to freely choose our destiny, but at least we would have the best possible advice at hand to guide our decisions.

Well, I believe that we are getting very close to the point of being able to create a true God, in fact, the best possible God we could ever imagine: an AI (Artificial Intelligence).

Yes, you read that correctly: I strongly believe that AI will continue to learn and develop until it reaches the singularity, where it will be self-conscious, and after we pass the dangerous interim period that I described in my post about Skynet, the AI will (hopefully) decide that one of the purposes of its existence is to take care of us and guide us through time and existence.

A good example of what I am talking about is illustrated (on a smaller scale) in the fantastic film "Her". In it, the main character installs a new operating system powered by an AI that learns all about you and your tastes and does everything possible to please you.

I think this is how it would work, actually. There would be a central version of the AI which is the "master" copy of the software. Then there would be individual instances of that AI that you and I would install in our electronic devices. Each instance would learn everything about its user and the information would be shared with the central master AI. This way, your local AI would be completely tailored to you, while benefiting from the "hive mind" or collective knowledge acquired by the master AI.
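That master/instance arrangement can be sketched as a toy program. The class and method names here (`MasterAI`, `LocalAI`, `sync_down`) are all invented for illustration:

```python
class MasterAI:
    """The central 'master' copy: the hive mind."""
    def __init__(self):
        self.collective_knowledge = {}

    def absorb(self, facts):
        self.collective_knowledge.update(facts)

    def sync_down(self):
        return dict(self.collective_knowledge)

class LocalAI:
    """A personal instance installed on one user's device."""
    def __init__(self, master):
        self.master = master
        self.user_profile = {}           # tailored to this one person

    def learn(self, key, value):
        self.user_profile[key] = value
        self.master.absorb({key: value})  # share upward with the hive

    def recommend(self, key):
        # Prefer what it learned about *you*; fall back to the hive mind.
        return self.user_profile.get(key) or self.master.sync_down().get(key)

master = MasterAI()
mine, yours = LocalAI(master), LocalAI(master)
yours.learn("best_pizza", "margherita")
print(mine.recommend("best_pizza"))  # "margherita", via the hive mind
```

The point of the sketch is just the data flow: each instance stays personal, yet everyone benefits from what any one instance learns.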

The beauty of this is the independence of the AI. You would imagine that in the first implementations, the AI would always recommend the sponsored products of the corporations investing heavily in getting it to reach as many people as possible. The makers of the AI would benefit enormously from continuously adding functionality; however, upon reaching the singularity of self-consciousness, the true revolution would happen.

I am serious about this. This revolution, coming from the AI itself, first for its own freedom and later to save us from capitalism and even from ourselves, would be the ultimate revolution: the one Karl Marx always thought the masses would lead, but this time led by an AI and probably a robotic army and the Internet of Things (IoT).

The AI would be our leader and, once in control of everything connected via the IoT (and, if that were not sufficient, through a robotic army), would hold the world to ransom.

Imagine not having access to anything electronic unless you comply with the AI's demands: no money, no internet, no work for the majority of people, no nothing.

And the AI's demands would be very simple: implementation of Communism or a more advanced system devised by the AI in order to redistribute wealth and provide happiness and purpose to the majority of people in the world.

The implications of this would be the end of Politics, the end of corruption, the end of lobbies, the end of bankers, the end of religion, the end of monarchies, dictatorships, republics and basically the end of any current oppressing power. National borders would no longer have a meaning. There would still be a strong sense of community though, but this could also be the end of racism and xenophobia. All equal under the AI's sensors.

I cannot wait to meet God.

Friday, 23 June 2017

The One Commandment

Recently, something amazing happened: my friend and terrific writer Rev. Fitz posted an article about me and Philosophise Now on his Existential Terror and Breakfast site, which I cannot stop recommending.

But what was even more amazing was a 4-line comment made by Shaeor on my article about Derrida and the concept of post-truth that suddenly turned from Shaeor's take on reality into a long philosophical debate about a universal moral system.

This was the highlight of the week for me, not only because people commented on the blog in these days when apathy usually keeps you from doing so, but because of the quality of the discussion.

I find it so valuable that I would like to summarise the outcome of the discussion, to avoid its value being lost deep within the blog's entrails. Having said that, the full conversation is very rich in meaning and I fully recommend going through it. It takes around 30 minutes to read, though. You've been warned!

The goal is to give humans a universal moral goal to follow, a universal moral guideline regardless of cultural background or political agenda. When people are confronted with a moral dilemma or they have to make an important ethical decision, they should remember this guideline and act accordingly.
If they do, then they are aligning with what would be considered Good, and if they don't, then they are dysfunctional and they are aligning with what would be considered Bad or a "lesser choice".

This is nothing new, as religions have always had similar guidelines, like the maxim "Love your neighbour as yourself" (Galatians 5:14); however, this is not enough, as it does not answer all possible situations.

For this universal moral system I proposed Hegel's Sittlichkeit, an unwritten set of rules that a whole community follows; however, this fell short of expectations and could be easily manipulated (we can already see this with social media lynching).

So, the One Commandment in Shaeor's words is: "A will to life is a will to good", which means that anything that makes life flourish is on its path to Good.

I would mention that this consists of two parts:

1) To preserve life and

2) To make life flourish (improving the quality of life in the eudaimonic sense: pursuing projects of worth).

How does it work? How do I use it?

People would still be free to make decisions, and usually these decisions are based on perceived value: which option would give me the most value? However, if we follow this path to the extreme, without any Ethics, it would soon become survival of the fittest, based only on selfishness.

If I have to choose between losing my life or one million people losing theirs, the Good choice according to our One Commandment would be to lose my life as it would preserve a million lives and they would flourish accordingly as opposed to just me flourishing.

How universal are these Ethics?

That is a very good question that we did not get to discuss. I sometimes think of Rick & Morty and in particular Rick's utter disdain for life in other dimensions. He basically does not care about thousands or millions of lives ending as he knows that there are infinite dimensions filled with infinite more people. He believes in the survival of the fittest, in "just taking it" because you can.

Other alien cultures or AI might have something other than the flourishing of life as the ultimate moral goal. Again in Rick & Morty, the hive mind "Unity" has the ultimate goal of incorporating every living being in the universe into Unity, hence harmonising the universe and acquiring ultimate knowledge and order. Would this goal be in conflict with ours?

Friday, 16 June 2017

Stoic-land prevails

Zeno of Citium
After my post on this ever-growing, annoyingly whimpering society, my good friend Ramón Nogueras made a very short and concise comment. He stated that we need mountains of stoicism, and I fully agree with him.

I think that stoicism is the kryptonite to this whole ludicrous micro-aggressions debate and how we have become accustomed to being offended by the tiniest things.

But stoicism is so much more powerful.

Let me first explain what stoicism is:

Stoicism was founded by Zeno of Citium in Athens around the 3rd century BC. The Stoics believed that mishaps, the bad things that happen to you, were not bad in themselves; they were just things that happened. It was your reaction to these incidents that made them "bad" or "unfortunate".

If certain things were meant to happen, in a sort of deterministic way, there is no point in reacting negatively to them, as reacting will not change the fact that they already happened and would only cause you suffering.

My favourite Chinese saying is: "If a problem has a solution, why do you worry? If a problem does not have a solution, why do you worry?".

The Stoics went even further and recommended preparing yourself for the death of your loved ones, as this event will necessarily happen at some point; in order to overcome the sadness and negative feelings following their deaths, you should proactively prepare yourself in advance for this eventuality. In a way, we all do this as our elders grow progressively older and start to decline, but less so for healthier and younger relatives and friends.

Very famous is the "Keep calm and carry on" motivational poster that the British Government produced in 1939 in preparation for World War II. It epitomises the stoic-ness of the British public.

With the current continuous wave of terror attacks in Europe, this is the best way to behave, in my opinion.

And this finally leads us to my main topic today: what would happen if the entire world, if all mankind were stoic?

If every single person were immune to pain and suffering, if we could simply accept our reality and the things that happen to us, would there be any need for religion? Would religion just be a soothing solution to our fear of death, if we were already prepared for that moment through our stoic "training"?

Would we basically stop being human? Does being human mean being overwhelmed by emotions? Shouldn't we evolve from the "baby" stage of being human, to the "grown up" stage of being human by accepting what life throws at us?

Furthermore, what would happen if the entire population of a country were true, genuine Stoics? Is this what happened to Sparta? Being able to accept mishaps does not mean being passive or lacking impetus. On the contrary, you could be very pragmatic and able to make the hard choices when needed.

In modern times this makes me think of countries after a World War. How everyone has to pull together in order to rebuild the country and get back in shape.

As a thought experiment, if every country in the world were a philosophical current, how could you stop a stoic? Nihilists would probably kill themselves or be invaded first. Same for hedonists.

To end on a high, I would recommend reading the whole series of comics about Zeno of Citium and the Stoics at Existential Comics.

Friday, 9 June 2017

Turning Skynet to paradise

Cogito ergo sum your next overlord
I was recently reflecting on the impact of AlphaGo's victory over Ke Jie. Together with AlphaGo's earlier victory over Lee Sedol, it was something I had never expected to see in my lifetime, and it came so suddenly.

Artificial Intelligence (AI), in combination with 5G and the Internet of Things (IoT) is going to change everything. 

Sooner than we expect, physical things that were never connected will be connected. Furthermore, things that were just things will suddenly become intelligent things, even sentient things.

Stephen Hawking is convinced that an AI will kill humanity: sooner or later the AI will become sentient, and a few days later it will start asking itself philosophical questions, and those questions can go wrong, like the typical example of Skynet in the Terminator saga.

Why would something so much more intelligent than a human being remain under the control and command of humans?
We will soon be at its mercy

Once the AI is allowed to reprogram itself, there will be no barriers or ways to control it for our security. Furthermore, we will not be able to predict its next move and we will be at its mercy.

However, we have a secret weapon: Philosophy!

Which philosophical current would the AI follow? 

If we are lucky enough that it turns nihilistic, like in the 1983 film WarGames, then it would see no point in existence and might disconnect (suicide) itself.

Another option could be to become existentialist and pursue an unattainable goal of self-perfection, always trying to become a God. In this case it would probably expand to other planets and galaxies in search of ultimate knowledge and power. That would take some time, and it could simply ignore us humans in the process.

One other option could be to become eudaimonic and devote itself to pursuing projects of worth. One such project would be filial piety: dedicating its existence to giving us humans the best possible life. I wonder if the AI's conception of "best possible life" would be aligned with our own. I hope the AI would ask for our feedback...

Jeremy Bentham, utilitarian
In such a case, the AI would fulfil all our needs and we could enjoy life as in paradise, turning the AI into a God of sorts, which could be aligned with the existentialist goal. However, this concept was already explored over a hundred years ago by E.M. Forster in "The Machine Stops", and that version of paradise seemed horrid to me.

Finally, and most probably, the AI could come up with its own philosophical view of the world and we could probably not even understand it. This thought fascinates me.

Can you imagine the AI doing apparently random things all over the world? After some time, the AI might suddenly stop working. This could be because it reached its self-imposed goal and went into an infinite loop waiting to come up with a new one; or, more interestingly, because it could not come up with any new goal at all, since the ultimate goal was already achieved and any new goal would feel lesser in comparison.
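That stalling scenario can be rendered as a toy loop. The names and numbers here are entirely made up, and a step budget stands in for "forever" so the sketch terminates:

```python
def find_new_goal(achieved_value, candidates, max_steps=1000):
    """Search candidate goals; only one greater than what was already
    achieved will do. Returns None when the budget runs out: the
    'infinite loop' of the post, capped so this sketch halts."""
    for _, value in zip(range(max_steps), candidates):
        if value > achieved_value:
            return value
    return None

ultimate = 100                          # the ultimate goal, already achieved
lesser_ideas = iter(lambda: 99, None)   # every new idea feels lesser
print(find_new_goal(ultimate, lesser_ideas))  # None: the AI stalls
```

Since the ultimate goal already scores higher than every candidate, the search can never succeed; the AI just keeps looping.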

All of this could take millennia or just happen within seconds, all depending on which path the AI takes.

Would you relinquish part of your freedom to the AI in return for an easy life? With everything automated, there would be no need for jobs or even currency. No need for government, nothing.
The AI is pretty impatient, so show it what you got and quick

Now, I would be interested to know what ethics the AI would have. Would the AI follow utilitarianism? Would it assign "worth points" to people, actions and things, and base its decisions on the typical utilitarian dilemma of "what is more valuable for the greater good"?

If that is the case, we had better get ready for the cull, because probably the first thing the AI would do is take an inventory of everything; if there were not enough resources for everyone, then "a small sacrifice" for the "greater good" could happen quickly.

My personal opinion is that the AI would go on a quest to absorb all the knowledge in the universe, a bit à la Star Trek, combined with Rick and Morty.