This article is not a wake-up call; it does not aim to warn about the potential shortcomings of future hyper-intelligent machines with respect to the human experience. Nor is it an attempt to be prescient about a future in which germline editing paves the way for human babies to be as customizable as Oxygen OS. Rather, its purpose is to explore the relevance of humanity, or more specifically, of the human experience. What necessitates this, and indeed this line of thought alone, is a very robust possibility: that neither of those two technologies will be under the influence of all those who will be influenced by them. The proportions are not going to be kind.
At the apex of what the pioneers of AI and genome editing promise, there is no danger to the human experience at all, for they promise utopia. When jobs as laborious as mining and sweeping, and diseases as excruciating and damaging as AIDS and cancer (even death, if it is to be counted as a disease), no longer exist, the human experience, given the otherwise abundant resources expected to be available then, will of course attain top priority.
Such a world would need no Prometheus to steal fire from the gods, for everyone would individually hold the stature of Zeus. The caveat lies in the process towards this fabled frontier: what part of humankind will eventually make it there? The answer to that, at least today, is known. We shall therefore now turn to how the remainder of humankind is to endure the process, if it is to endure it till the end at all.
At this point, it is necessary to define the human experience in order to conclude how, and whether, it is to prevail. A property quintessential to all humans is intelligence (albeit in widely varying degrees). Although intelligence is a vaguely defined concept, with different evaluation methodologies for its different forms, most economists and computer scientists agree that human beings are intelligent to the extent that their actions lead to the fulfilment of their goals. Here, again in economic terms, the fulfilment of goals can be taken as the extraction of utility (satisfaction) from predetermined objectives.
The inception of such objectives is where we draw the line between humans and machines, and within that inception we find the most tangible form of the human experience. Our forecast of the human experience derives from the authority of this inception, and the promise of utopia hangs on the assumption, and hope, that machines' objectives will be products of this inception alone, for it belongs exclusively to us.
Now we must consider the various factors that shape the inception of these objectives. Love, hatred, aspiration, jealousy: the entire palette of human emotions (which are, importantly for this article, the results of chemical reactions) can be considered. The one this writer considers most prominent is fear. It would not be hyperbole to say that everything you do, have done, and plan to do can be traced back to a fundamental fear of something. A teacher teaches because she fears unemployment, which is built upon the fear of starvation. A student studies because he fears any number of things: failure in an examination or in life as a whole, disappointing his parents (who may well engineer him at birth to be the pinnacle of human excellence a few decades down the line), or the lack of validation from peers.
The predominance of fear in the inception of objectives applies to the rich and powerful as much as to everyone else; even more so, some may argue. Fear is likewise intertwined with our survival instincts: whether in nuclear disarmament or in our space programmes, its presence cannot be denied beneath the otherwise implied narratives of peace, empathy, and scientific curiosity. Naturally, it is the fear of extinction that brings children into this world. Even the gods, whose powers we strive to wield ourselves, have had their will imbibed into our society via fear. It is only in recent decades that God-fearing individuals have begun to grasp the true scale of the destruction their fellow humans can cause. They do not like it; their aversion to the technologies central to this article is a testament to that.
Machines do not, and cannot, fear. Their refraining from actions ill-suited to their objectives (as measured by whatever methodologies humans implement for determining such things) can be projected as a form of fear, but it is really statistics. While humans can act against what their fears suggest solely on the basis of will (another product of the inception), machines cannot act against what their numbers direct them towards. Given the absence of any incentivizing mechanism other than numbers, this is where intelligence, with respect to machines, breaks down.
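The contrast above can be sketched in a few lines of illustrative Python (all names and utility values here are hypothetical, invented purely for this example): a machine's "choice" is nothing more than selecting the action with the highest number, and there is no second mechanism, no will, by which it could override that maximization.

```python
# Hypothetical sketch: a machine "agent" simply maximizes a number.
# There is no mechanism by which it can act against what the numbers say.

def machine_choice(actions, utility):
    """Return the action with the highest numeric utility; nothing else matters."""
    return max(actions, key=utility)

# Invented utilities assigned to three possible actions.
utilities = {"comply": 0.9, "hesitate": 0.4, "refuse": 0.1}

actions = ["comply", "hesitate", "refuse"]
print(machine_choice(actions, utilities.get))  # always prints "comply"
```

A human in the same position could, out of sheer will, pick "refuse" despite its low utility; the function above has no path to that outcome.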
Therefore, to finally put to rest the question of how humanity is to endure the process towards the aforementioned utopia: it will be through fear. The human experience, built upon centuries upon centuries of evolving goals, objectives, and actions, is driven by the fundamental fear of that something which directs the perpetual inception of those goals and objectives. The human experience, hence, will remain relevant as long as humans continue to fear. Paradoxically enough, the real problems will arise when we finally do reach the promised utopia as disciplined masters and users of constructive fear, for it is easier to incept fear than it is to kill it.




