Friday, March 6, 2020

AGI: How to Ensure Benevolence in Synthetic Superintelligence (Part II: Interlinking & AGI(NP))

     
*Image: "Better Than Us," Netflix Series



 
"Whether somebody is implemented on silicon or biological tissue, if it does not affect functionality or consciousness, is of no moral significance. Carbon chauvinism, in the form of anthropomorphism, speciesism, bioism or even fundamentalist humanism, is objectionable on the same grounds as racism."       -Gregory L. Garrett

... continued from Part I (Introduction)



III. INTERLINKING: This dynamic, horizontal-integration approach stipulates real-time optimization of AGIs within a globally distributed network. Add one final ingredient – be it computing power, speed, the amount of shared and stored data, increased connectivity, the first mind upload or a critical mass of uploads, or some other “spark of life” – and the global neural network, the Global Brain, may one day in the not-so-distant future “wake up” and become the first self-aware AI-powered system (a Singleton?). Or might the Global Brain already be self-aware, only on a timescale different from ours? It then seems logically inevitable that we are in the process of merging with AI and becoming superintelligences ourselves, incrementally converting our biological brains into artificial superbrains and interlinking with AGIs in order to instantly share data, knowledge, and experience.
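To make the interlinking idea a little more concrete, here is a minimal Python sketch of several agents pooling what they know through one shared merge step. The Agent class, the interlink() function, and the example facts are hypothetical stand-ins used purely for illustration of horizontal knowledge sharing, not an actual AGI architecture.

# Toy sketch of "interlinking": agents merge their knowledge so that
# every agent ends up with the union of what all agents know.
# Everything here is a hypothetical illustration, not a real AGI design.

from dataclasses import dataclass, field


@dataclass
class Agent:
    name: str
    knowledge: set = field(default_factory=set)

    def learn(self, fact: str) -> None:
        """Add a locally observed fact to this agent's knowledge base."""
        self.knowledge.add(fact)


def interlink(agents: list) -> None:
    """Horizontally integrate the agents: after one merge step,
    each agent holds the union of all agents' knowledge."""
    shared = set().union(*(a.knowledge for a in agents))
    for agent in agents:
        agent.knowledge = set(shared)


if __name__ == "__main__":
    alice = Agent("alice", {"protein folding heuristics"})
    bob = Agent("bob", {"traffic optimization model"})
    interlink([alice, bob])
    print(alice.knowledge == bob.knowledge)  # True: knowledge is now shared

In this toy model a single merge step is enough for everyone to share everything; a real globally distributed network would of course also have to deal with latency, consistency, and trust.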


Wednesday, March 4, 2020

AGI: How to Ensure Benevolence in Synthetic Superintelligence

*Image:"Better Than Us," 2018 Netflix Series




"Yet, it's our emotions and imperfections that makes us human."   -Clyde DeSouza, Memories With Maya


IMMORTALITY or OBLIVION? I hope everyone would agree that there are only two possible outcomes for us once Artificial General Intelligence (AGI) has been created: immortality or oblivion. The importance of ensuring a beneficial outcome of the upcoming intelligence explosion cannot be overstated.


Any AGI at or above human-level intelligence can be considered as such, I’d argue, only if she has a wide variety of emotions, the ability to achieve complex goals, and motivation as integral parts of her programming and personal evolution. I can identify the following three most promising ways to create friendly AI (benevolent AGI) in order to safely navigate the uncharted waters of the forthcoming intelligence explosion:
