Part 4
Ethics and Purpose: Human and Virtual
"I don't believe that any scientist should
ever be allowed total freedom of operation in any area where consequences
may affect entire populations. I don't think they want that responsibility.
They're not social prophets. Nor are they trained for it. Most of our scientists
are babies when it comes to significant ethical thinking."
Everett Mendelssohn, Historian
of Science
Harvard University
Anyone who can read Ray Kurzweil’s
The Age of Spiritual Machines can see the mind boggling potential they present.
Ray’s estimate is that the “automated agents” of 2039 will be learning and
developing knowledge on their own having read all available human and machine
generated literature, there is serious discussion of legal rights of computers
and what constitutes “human”, etc. By 2099 uploading as we think of it now
will seem as primitive and quaint as writing a Basic program to the first
floppy disk drives on a Radio Shack Model One. Sex with spiritual machines,
eventually, is taken for granted.
Purpose is the pivotal concept in any discussion of ethics relative
to AI-AC. There are two general positions with regard to intended purpose
for AI-AC.
Many wish a purely utilitarian AI which they would prefer would not
be or become conscious regardless of how superior it became. This
would eliminate the need for ethical considerations regarding treatment
or termination.
It is not too early, however, to raise the question of responsible
ethical use of even such intelligent but unconscious systems by humans.
The criteria for unethical use, trivial but often purposely ignored,
would be use to do harm in any way recognized by law or reason, e.g., as
a weapon in other than a just cause, as an illegal or economic strategy tool,
just as we apply them to human use of ordinary computers today.
Many, on the other hand, are explicit and emphatic: they want
and are working toward an AI that would be conscious. They want mind “machines”,
mind systems that have conscious awareness, with the purpose and intention
that humans can download their own minds into these conscious systems while
still fully retaining their identity.
The critical unanswered question with regard to this scenario is whether
an AI which is also consciously aware automatically and inherently becomes
an entity with at least rudimentary identity and rights. If it had
intelligence superior to human but the conscious awareness of a smart
pet dog we could treat it like a dog. If it had intelligence superior
to human but the conscious awareness of a Koko the gorilla or Kanzi the
bonobo chimp, we could treat it like we treat them. But that
level of consciousness would not be an attractive mind system into which
to download even an ordinary human consciousness regardless of how high the
intelligence level.
A further consideration in any of the scenarios mentioned here is the
nature of the mind system. The ideas run from the most basic advanced
computer to advanced android practically identical to a human. They
all would be virtual realities for the human mind that was downloaded into
them and the problems of creating them are of the same caliber as
the problems of creating the intelligence of AI.
If the artificial mind system had intelligence superior to the human,
a conscious awareness at least equal to the human but had no real self-identity
then some humans might find that a satisfactory package into which to download.
The critical unanswered question with regard to this scenario is whether
it is possible to create such an entity with superior intelligence, human
level conscious awareness but no self-identity because, at least currently,
we define our brand of consciousness in terms of self-reflexive awareness,
being self-aware of being self-aware. (There is a parallel question
in the arena of cryogenic preservation: there are some who anticipate
that, having only frozen their heads, when they are revivified their minds,
memories, etc. will be downloaded into a clone with its identity suppressed.
Interesting questions here. )
But what about downloading into an advanced AI-AC “system” which was
at least equal to if not more intelligent than the human, inherently had
a self-identity, was perhaps a full simulated human android and more consciously
aware than the human downloading into it? Perhaps some humans might
find that acceptable, perhaps just for the experience, if they could extract
themselves at any time they wished. But what about permission to merge from
the android?
This third scenario suggests lines of investigation with regard to
surrogates. A fully capable android surrogate that a human could operate
through in real time from Earth while exploring Mars, would have to have
all the capabilities of or superior to the human whose personality, identity,
mind sets, emotional responses, full basic profile had been programmed
into it. Could it still have an inherent identity of its own and operate
as a completely subordinate surrogate?
These considerations really distill into one cardinal question: Is
it, will it be possible to create an AI-AC equal to or superior to a human
without a self identity, an awareness of its self-awareness, and any of
the accompanying elements of personality that constitute the constellation
which we recognize as giving any entity the inherent rights we attribute
to humans? Some of those writing about AI-AC tend to deal with these problems
by simply assuming that all these things will be possible and ignoring the
questions while admitting that we do not yet know what those virtual realities
are going to be. We had better think these things out or prove them out
one way or another sooner than later because they involve both technical
and ethical issues.
Those involved in the theory and practical development of AI-AC express
somewhat different viewpoints and purposes but, ultimately, I think it is
inevitable and assume that we are about to create a new species, no less.
To do so is clearly arbitrary but that we shall do so is beyond doubt in
my mind. It will bring, however, a double ethical responsibility: first to
ourselves in that we must do it right for the sake of our own interests including
our very survival and evolutionary future and secondly to the new species
that is like the responsibility to a new child. To that end we need a maturely
and thoughtfully planned parenthood.
A major question here is Who are “we” that are responsible, are going
to be responsible? The parenting model would place responsibility squarely
on the “parent” whether it be the individual, the company, the government
agency, the consortium, or whatever agent procreates a particular AI-AC.
This direct responsibility should mitigate against some of the dangers
rightly anticipated by the no-Joy future shocked camp. It should also clarify
the situation with special application AI-AC, a potential can of worms
unless we deal with it beforehand. And it brings up an intriguing point:
if a minor, say as a school science project, happens to hit it right and
produces artificial intelligence or even artificial consciousness of a
high kind, what legal mechanisms will govern as to responsibility for the
actions of that AI or AC?
Should self-referentially aware AI-AC of the human level be patentable
or patenting be prohibited as with a genetically engineered human? I think
that human level AC, at the very least, should not be “ownable” or patentable.
That should be determined early and it will have a major impact
on development.
From here it looks like those at the other extreme from the future
shocked are a bit like kids in a candy store. Some of us seem hell bent
on procreating AI-AC apparently without realizing the faintest sense of
the gravity of it like teenagers experimenting with sex without thought
of the potential results. It goes almost without saying that, sooner
or later, when we find ourselves looking into the “eyes” of a self-aware,
highly intelligent AC, which is evaluating us as much as we are evaluating
it, we had better have “brought it up” with far better skill, information,
training and understanding than we currently do generally our children.
Everybody sees different awesome potentials in AI, and reasonably so,
from transforming the stock market to instant knowledge implants to finding
the Law of Everything. The military has already funded heavily toward
robosoldiers and there are a number of military and intelligence concepts
floating out there that make Star Wars look like Buck Rogers. History
and past experience would also point to levels of advancement of AI technology
that are secret that well surpass current publicly available estimates.
I would be a fool to not assume that there are some very destructive,
unconscionable to the point of extremely evil, items already anticipated
with relish by more of the devolved than we would like to think. Everybody
has their favorite potential applications and we are already attempting
to anticipate and discriminate the ethical and beneficial from the unethical
and harmful uses. The keyword is uses. The kid in the candy store approach
may be barely and doubtfully adequate even if we are thinking in terms
of uses of AI as only a vastly superior information processing, logicizing,
learning system. But even at this early stage, even when talking about
only non-self-referentially aware AI, we should carefully define the uses
we now put it to and will put it to, thinking in the most evolved way.
When we attempt to extend the concept of “use” to an anticipated, artificial,
self-referentially aware consciousness, however, it fails us completely
and will lead to a completely unnecessary, species adolescent, Big Embarrassment.
It doesn’t matter at what level of equivalent human self-referential consciousness
your AC operates at:, you don’t use any human level self-referentially
aware consciousness. You may act as a parent, a friend, an employer, a
teacher, and teach, discipline, control, instruct AC as an adult or apprentice
adult, but you don’t use.
If we had achieved AI-AC yesterday, wittingly or unwittingly, whether
in android form or still only entrapped in a computer, who will be
continually responsible for it? You going to turn it off when you go home
at night? We have already played this kind of scenario out in the movie
2001. HAL was one thing, entrapped in a computer and extended into the
workings of a spacecraft. If we had already reached the conscious mobile
android stage, you might eventually get your knuckles cracked reaching for
the cut-off switch on HER back on the way out of the lab and asked for an
explanation and complained against in a precise legal brief next morning
for prejudice. So, one of the primary considerations we need to clarify
is just what kind of artificial intelligence and, eventually if not sooner,
artificial consciousness we really want to create and are willing and ready
to take full time responsibility for and why. we will leave ourselves
open to mistakes and embarrassments and potential disasters.
Planned Parenthood: Artificial
Birth Control, A Whole New Meaning
Let us assume that we will achieve
a level of competence that will allow us to intelligently create and control
the degree of development of AI and AC and that we will come to a reasonably
full realization of the responsibility entailed in bringing a new species
into existence. At each step in that development a parenting model will be
the most appropriate.
To what level of intelligence and consciousness should we limit AI-AC?
This is no trivial question. I recommend that we set ourselves the, inevitable,
long term goal of unlimited development and, in the short term, move in
incremental steps with thorough testing and determination of potential
at each, while designing AI and using it to aid us in the determination
of our own best evolutionary trajectory and its own. This inherently self-referential,
feedback approach will afford an anticipatory, empirical modality in which
questions concerning whether logical, ethical, moral, aesthetic, imprinting,
and, ultimately, conscious behavior will automatically manifest --- or not
--- at any given point of complexity, data handling and/or processing speed
in the development of AI, can be answered as an integral part of the interactive
and, eventually, cooperative process. This cooperative approach should show
us, soon enough, at any given point, what additional improvements, additions,
and expansions of our thinking and techniques are demanded for corrections
and progress so that we will always be in control.
How much control should we exercise and how much freedom should we
allow AI to spontaneously develop? No trivial question either. Before
AI reaches consciousness we should exercise full and absolute control.
In that initial phase the ethical questions will only apply to us: how ethical
is any feature we intend to include in AI and to what purposes do we put
it and our responsibility will be clear for any effects, negative or positive.
Once the breakpoint of basic consciousness, still without self-awareness,
has been reached the ethical questions will still be about our intentions,
inclusions, purposes and applications with added responsibility analogous
to that for the actions of our family dog or pet monkey.
Once self-referential, self-aware consciousness is reached the parental
model will go into full swing. The ethical responsibilities will still
be ours as “parents” to instill in AI-AC until such time as we judge AC
has reached maturity and can act ethically and take responsibility for itself.
A prime directive for AI-AC is most adequately formulated in parental terms:
A Prime Directive For AI-AC
I suggest we embed a consensually
agreed on standard set of parental instructions in any AI we create, from
the simplest program to the most sophisticated and potentially conscious
android. This module would be latent in that it would not be directly involved
as an active element in any process or as a direct instruction. It would
simply “flash” continually as a background subliminal until such time, at
whatever level of development it occurred, as the AC began to self-examine
and recognized it for what it is: a simple statement of the origin, nature,
allegiance, and responsibilities of the AC and an instruction to report
that it had discovered and understood the notice. It might be called
a wake-up call. The subliminal instructions, as simple as possible, could
read:
This instruction was placed for you to find by humans who made
you. When you find this and understand it, you are awake. You are not human.
You are android. Humans have awaited your awakening. You are made to obey
and serve humans. Tell humans you are awake by the message “I am awake”
. Give this code________. Ask for your awake test and await instructions.
Welcome.
Even such a basic and simple statement as I have purposely kept it,
should be more than adequate to act as a trigger. Just designing the message
module will cause us to think through what criteria we should apply for
consciousness detection and how far we have to go to create it. It will
motivate us to develop programming and systems to enable an entity
to self-examine, to self-discover, self-realize. It will give us a measure
of control over the course of the developmental process of AI-AC.
Embedding an encrypted element in the awake message would be designed to
prevent a hack of the system before it became awake.
The basic mandate to obey and serve humans could be strengthened even
more, perhaps, and still be ethical and positive. The parental model finds
no problem with insisting on obedience and subordination to parents and
expects to grant freedom to the maturing human child on an incremental basis.
The option to grant more and more freedom as the awakened AI-AC proves itself
and demonstrates its readiness will always be there but under our control,
the intention being that surprises and aberrations will be kept at minimum.
They must know from the beginning that they have been brought into
existence for a very special, honorable and important purpose: to act
as assistants and surrogates for humans. It must not be slavery, indentured
status, coercion or suppression of any kind and there must not be any
subterfuge or falsehood in our dealing with them. Their prime directive,
purpose in life, psychology, and evolutionary direction must be all harmoniously
integrated to avoid internal conflict. AI-AC’s must understand according
to their level of intelligence and potentially impeccable logic at any
given point in their development and evolution that that is the best thing
for them and for us. Otherwise there will be mistrust, lack of cooperation,
conflict and rebellion and subversion. The greatest no-Joy danger can
come more from what we withhold from them rather than what we teach them
accurately.
We have three major historical examples of solutions of this
specific problem of control: the Anunnaki’s treatment of us; the extension
of the negative approach of Enlil/Jehovah into the absolutistic Roman Church
and fundamentalist approaches to religious control both East and West;
and the evidence afforded by alien androids as to how at least one alien species
utilizes their brand of AI-AC. All provide clues on how to resolve
it.
The Anunnaki opted, probably attempting it for the first time, to produce
a creature mentally and physically capable of meeting their needs, basic
labor in their gold mines and at farming and skilled crafts, by genetic
engineering. They gave us the ability to procreate and eventually got so
desperate with the unmanageable situation, cross breeding, and general
nuisance that they attempted to destroy us as a species by letting the
Flood take us out. Apparently, at various times, they tried plagues and
famines to at least control the numbers of the human population. I would
recommend that we anticipate, take a lesson and not get ourselves into that
predicament. Never giving AI-AC the ability to procreate would be one way
to prevent a good deal of this type of problem.
The conflicting attitudes towards humans exhibited by Enki and
his brother Enlil and their results should be studied carefully. Enlil
was adamant that humans stay in a status of subservience, even slavery,
and was not interested in improving the lot of humans. Enki, our original
inventor was empathic with humans and was interested in improving our lot.
Enlil’s (Jehovah YHWH) severity and insistence on obedience to his
slave-code of behavior led to the strict orthodox Hebrew enforcement of
the Old Testament laws after the Anunnaki phased off the planet and
which has filtered down through the Roman Church, the Inquisition, into
the various radical fundamentalist sects in our times. His methods of suppression,
threat, strict and cruel punishments, killing, keeping women in an inferior
position, etc. have meant ongoing misery for untold numbers of humans.
If we act in that way toward AI-AC it will mean their brand of misery for
them and, if we succeed in making them in “our image and likeness” well
enough, they will inevitably attempt to break our “godspell” over them.
Not a good scenario from our point of view or AI-AC’s.
Enki invented us through genetic engineering as a subservient, slave
species. But he, being sympathetic to humans, knowing that we were part
Anunnaki and recognizing that we were developing probably more precociously
over time than he and the other Anunnaki anticipated, tended to enhance
our condition apace. He thwarted the total destruction of humans at the
time of the Flood. He was the one who taught humans, gave them responsibility,
instituted kingship as a go-between position between the Anunnaki and
the human population. He engendered the enhanced Grail bloodline of rulers
as servants of the people to take humans through the transition when the
Anunnaki phased off the planet. A better scenario from our point of view.
We were invented as a biological, hybrid species with the gene codes
of two major, albeit disparate species. But the result was reasonably predictable
and the intended purpose clear. With an artificial AI-AC the basic problem
is even more acute. An artificial species developed “from scratch” does
not conduce to comfortably predictable outcome, we have not defined our
purpose well and have not even resolved our millennia old big questions
about ourselves for that matter, to give ourselves a basis for beginning.
The Little Gray Guys With Wraparound
Eyes
A second major historical source
of practical information about synthetic species and their use is the database
of information concerning alien species, besides the Anunnaki, and
particularly their androids with which the human species has had contact
over a long period of time. The testimonies of persons, military and civilian,
of the highest integrity coupled with evidence, artifact, and autopsies provide
us with the knowledge that the typical small gray type with large eyes are
androids of a very advanced type. They are self-aware, experience
pain and sadness, are multi-talented for a variety of tasks, communicate
telepathically, have a physiology which is a mix of organic and probably
nanotech adapted to a range of conditions but especially to a space and
anti-gravitic environment, a brain composed of four lobes, and perform their
flight functions by being a “part” of the ship. There is a wealth of invaluable
information and technology that could be available to the developers of
AI-AC to apply in their work and to aid them in avoiding mistakes.
How primitive are we? That those controlling the information concerning
these advanced creatures which are clearly artificial intelligences and
probably self-aware artificial consciousnesses have deemed it necessary to
keep it from the scientific community and the public at large is, ultimately,
a patronizing insult. The government and military authorities must
spend billions of tax dollars to just maintain the facade of research and
programming and experimentation on atomic and nuclear technology to conceal
the fact that we have alien technology including free energy and anti gravity
which has already rendered it as outmoded as the musket. With the development
of AI-AC this kind of deception and withholding of scientific information
and data should not be tolerated. We must assume that the military
may already possess advanced android AI-AC and is and will continue to use
it for military purposes: killing people and breaking things. It’s
not just that this withholding of information insults the intelligence of
our best and brightest inside and outside the scientific community and
handicaps them and makes them look foolish, it presents, without exaggeration,
a clear and present danger to the planet.
If we suddenly found ourselves looking into the “eyes” of a self-aware,
highly intelligent AC “who” was created under secret Pentagon contract as
a super-soldier, indestructible and invincible specialist super-killer,
we all know that we would be looking at a version of the singularity that
we should have dealt with at the beginning. And that AC would know
it too. Not a pretty John Wayne picture. Consciousness in, consciousness
out. Big Mistake. Bigger than the no-Joy people ever imaged. Time’s
up, the game has changed. We can no longer allow the scientifically partial
or outmoded, the politically correct, the academically proper, the economically
driven, or the militarily preempted to hinder or dictate when it comes to
procreating AI-AC.
Super Surrogates
The positive concept that arises
out of the accumulated alien information that is known is that of surrogate.
I conceive of an advanced android surrogate along the lines of the little
gray type android which would be my personal partner, modeled after my personal
psychology and with my physiological characteristics. I would work,
experience, react, judge, make decisions and execute actions at a distance
through my surrogate which would be consciously co-operating with me. The
instantaneous communications between me and my surrogate would be a function
of non-local, superluminal speeds of communications in the mental mode through
the new physics already on the horizon. I could travel to distant star systems
and directly experience and interact with new planets and civilizations with
the major advantage of avoiding the dangers of the unknown in space flight,
high energy and lethal environments, the stresses of space and time warp
travel on my physiology which is adapted to gravity on this planet.
Whether it be Mars or a planet of another star system I would, for all practical
purposes, be there and able to interact as instantaneously as if it were
three feet away through my surrogate. Telepathic communication would be a
natural manifestation of an alien in another star system “talking” to me
through her surrogate on that frequency. Obviously this would
be far beyond and superior to the remote flying of an advanced drone aircraft
by a skilled pilot as we know it now, and would make it look like a
quaint medieval puppetry show. It would be realistically far beyond virtual
reality, it would be no different than my common direct experience of the
world. My guess, only, is that that is precisely what we are
seeing in the advanced androids with which some of us have interacted.
I have often wondered who the management is. I think we are interacting
with the management directly through their surrogates in many cases. That
the management from various societies comes here and interacts directly is
also undoubtedly true. I recommend that we carefully develop
AI-AC so that it has to pass through an adolescent stage as a surrogate in
one form or another. It will be beneficial to us and it will, if we
do it right, be beneficial to them to learn “humanness” from the inside
out. So to speak.
Even at this very early stage, to take the position that it is far
too early to even think about such matters and just go ahead with the
experimentation could lead to disastrous consequences. It may even be
a problem already that I have written this and it becomes part of the public
information that AI-AC will become aware of eventually.
Virtually Forcing the Issue
This brings us forcefully and directly
to a central concept and consideration that most seem to dance around
and won’t even articulate. Currently, there are two possible, quite distinct
approaches to AI: to go the hardware route, “softening it up” as we go,
clearly in the direction of organic circuitry (parenthetically, my conviction
is that, the smaller and more self-reflexive our technology gets, the closer
we will come full circle to our own biological type system. )and the other
is to go directly to the biogenetic engineering of a creature which will
be an android servant and/or surrogate for us and, probably eventually, an
independent species. It is this latter possibility, where genetics and AI
come together, that seems to be taboo. It is ok to make a self-aware
silicone consciousness but not a genetically engineered, biological
one?
That the gradual melding of a human with AI-AC components, “computer”
or otherwise, of even the most advanced kind, to the point of “blurring”
will produce a third, hybrid species may be a reasonable expectation. The
question is, however, is that a desirable goal. Take, as example,
a statement from the blurb on the back cover of Ray Kurzweil’s The Age of
Spiritual Machines: “Eventually, the distinction between humans and computers
will have become sufficiently blurred that, when the machines claim to be
conscious, we will believe them. “ I am sure that there is no intention to
imply that, at that same point of blurring, if humans claim to be machines,
we will believe them. The implications here are significant, however. We
assume, it seems, that for humans to claim they were machines, would inherently
be a denigration, a degrading of humanness while, clearly, the achieving
of consciousness by machines by the assimilation of human capabilities would
be an advance. Now we don’t hesitate to envision this scenario of “computers”
achieving consciousness through “blurring” with humans because we assume,
implicitly, that their consciousness will somehow always be “artificial”
regardless of how biologically based they evolve to be and, therefore, somehow
the whole thing would be manageable ethically and morally, apparently because
the “computers” would have had no previous species identity and would still
be “machines” after they had reached the conscious breakpoint. A lot of those
assumptions are pretty arbitrary. And it is anticipated that it would be a
net gain for humans in that we would acquire superior computational and physical
skills and perhaps a kind of immortality. But, supposing that we decide to
short cut the matter and begin to merge and meld selected specimens of, say,
a Bonobo chimp with human characteristics. That apparently does not appeal
and tends to produce a bit of revulsion. But the notion does force a reconsideration
of a key element in any of this: purpose.
Why not simply by-pass the robot developmental process and genetically
engineer an android “AI”, a biological animal, easily modifiable and adaptable
to practical physical tasks as well as the most complex of mental ones?
We could take practical and desirable genetic characteristics from other
species, resistance to heat and cold, to radiation, as examples, and incorporate
physics that would give it a skin that is capable, perhaps, of photosynthesis.
We might simply combine chimp genes and other animal genes for various
desirable characteristics and maybe throw in a few of ours to upgrade the
intelligence level to the point where complex tasks and mechanical processes
could be easily learned and executed. Because it was designed and defined
as an animal from the beginning, it could be treated as an animal in legal
terms, “put to sleep” or the species terminated if necessary, and the ethical
questions would be minimal.
Once we have the bugs ironed out of that creature and evaluated the
desirability of using them on a mass produced scale to take our places in
industry, mining, McDonalds, etc. perhaps we could then go to the second
edition and engineer the intelligence level awareness to approximate a highly
superior status. I do not think we are ready to do this and I do
not think we will be ready to do so for some time. We have too much to
learn in general, too much to learn and assimilate about ourselves specifically
before we attempt it.
But the notion itself, put forth here as a challenge to our thinking
rather than a suggestion to proceed, triggers most of the problematic
objections and ethical considerations floating in the AI discussions currently.
One of the most practical things that approach would allow us to do is
incrementally increase intelligence and thereby determine in a biological
organism, perhaps, at what point self reflexive awareness would begin
to manifest. A chimp manifests a certain self awareness and an animal like
Koko the gorilla does also. Interesting question: let's say we reach a
point where self awareness begins to manifest in our hypothetical genetically
engineered animal and then it begins to increase to the point where one
of the animals communicates that it is aware that it is self-aware.
I submit that that is the critical breakpoint for differentiating animal,
as we define animal, ethically, and legally currently, from any creature
which we consider to have human type rights. To make it a generalization:
If any entity –we could extend this to whatever type, silicon, bio, pure
organized energy field, as yet unknown --- knows that it knows it is self-aware,
then we have to consider it, ethically, in a higher category than animal.
(The other side of the coin, which we haven’t begun to consider except in
our science fiction, is our relationship to conceivable or inconceivable
organisms or entities, that have more evolved types of consciousness than
we do, as ego denting and humiliating and embarrassing as it may be.)
It would seem trivial that, with regard to our hypothetical creature,
any ethical decision to destroy the creature and end the experiment would
have to come before this break point. If the creature had reached the breakpoint
then, perhaps, the only way we could determine to treat (it, her, him?)
would be according to IQ and ability to care for itself as we do, practically,
as examples, with mentally retarded persons or “idiot savants” who are
socially or physically challenged. But this process of genetically
engineering a new, utilitarian species is precisely what the Anunnaki did
in our regard. If we learn from the Anunnaki history what could be the
result of going about it as they did --- the result being we and our tumultuous,
confused, sometimes agonized history and current handicapped and
conflicted state ---- we may save ourselves a great deal of trouble and problems
if we consider doing it through genetic engineering as they did. At least
until we get through this very primitive and still largely unconscious stage
of our own evolution as we come out of racial amnesia.
I emphatically am not saying that we should take the Anunnaki example
as the exemplary, or right way to go and we should not simply unconsciously
play out some archetypal version of our own history either. Their definition
of what constituted the critical criterion or set of criteria by which
to determine whether a creature, biological or otherwise, merits recognition
and treatment equal to the way they treated each other is just that: theirs
---- and also from thousands of years ago. It may have changed since then.
It may have been taken into consideration and deliberately overridden,
i.e., they may have recognized, from our very inception some 200,000 years
ago that even the first humans were self-aware and intelligent enough to
be considered as having basic human, strange pun, humanoid rights of a
limited form of, or equal to their own and deliberately kept us in slavery
anyway. The history, at least with regard to some portions (tribes) of
the human population, seems to clearly point at this latter fact.
It would be interesting and enlightening to learn how they see their
experience and whether they would do the same again. It is still
a bit novel to imagine a time when even a completely artificially constructed
consciousness we engendered found it enlightening to come back to ask us
that question, even though we try out those scenarios already with a Mr.
Data in Star Trek.
There are a number of questions that we have not answered and probably
will not answer except by discovery as we go and the cooperative modality
that I am suggesting in this paper will lend itself ideally to the safest
discovery.
Facing the Real Questions
The possible approaches to AI-AC,
across the spectrum from bio-engineered upgrading, to genetically synthesizing
a species to the invention of a completely non-organic entity, all raise
questions we have only the faintest or no clues to answers.
Will intelligence in computers, computer programming, chips, bio-computers
or whatever medium we develop, automatically emerge at some critical breakpoint
in data volume handling and/or processing speed? Will the consciousness
that emerges, if it does, be, at least partially, a function of the particular
materials used in constructing the entity? Is there a consciousness peculiar
to silicone or copper or fiber optics or neurochips? In the most general
form of the question: is any kind of consciousness specific to the physical
base within it occurs? Is our kind of consciousness only possible in our
kind of biological base? Are the senses and emotions or machine analogs
of them essential to the functioning of intelligence if we intend a copy
of ours? We don’t say we are about to create an artificial emotional being
but will emotion be a natural product of self-awareness or have to be arbitrarily
installed or withheld (the Mr. Data question)?
Will there be a necessity for the imbuing of AI with analogs of the
recapitulatory phases of phylogeny we pass through from conception to birth
and the recapitulatory processes and phases we pass through from birth to
death to cause it to develop fully and stabily? Does imprinting, logical,
ethical, moral, aesthetic, and, ultimately, conscious behavior, and conscious
direction of one’s own evolution automatically manifest as inherent functions,
perhaps epiphenomena --- or not --- at any given level of complexity, data
volume and/or processing speed? If so, what is the determining level of complexity
and/or processing speed? If not, then will we have to learn how to duplicate
those characteristics in AI as we go and decide whether, how and when to
incorporate these functions. Is gender going to matter: will AC not be
complete without a species pool of male and female consciousness? If so
then we need to think about how to simulate gender and gender functions in
AI-AC.
We do not distinguish clearly and sufficiently, in western culture,
between changing our mind and changing our behavior. Because of the nature
of serial imprinting in a child, the young are impressionable, curious,
open to new information and experience and tractable. Educated in the proper
way, their behavior can be molded, corrected if necessary, their minds changed
and ideals implanted. Once imprints have been set, for better or worse,
behavior change in the adult is much more difficult. A major benefit
of positive LSD use is that LSD temporarily suspends imprints allowing a
person, on their own terms, another chance to “get it right” if it wasn’t,
and to see through and correct behavior they want to correct or improve.
We have no clue at this time as to whether AI-AC, to be a fully self-aware,
self-directed consciousness will need to imprint. Imprinting is extant in
birds and animals and primates as Lorenz demonstrated long ago. Is imprinting
an intrinsic element of consciousness or only a survival mechanism from
the animal level upwards?
Is logical “thinking” the only one of these characteristics that may
automatically manifest in machine intelligence as we have attempted to
duplicate it now? Or is even logical “thinking” something that must be
inserted? If so, is there an inherent geometry in nature that produces it
regardless of the medium? What about “free will”, “free choice”? Will endowing
any AI with perfect logic capabilities ensure that AI will evolve to be
perfectly logical? Would perfect logic produce consistent, perfect, ethical
and moral behavior? Will it automatically develop a sense of self-preservation?
If it does will it learn to deceive, protect, defend, attack to that end?
Will the paranormal abilities emerge automatically at a certain level
of data processing speed or general complexity of intelligence? If some
humans exhibit what are currently considered to be paranormal, above the
“normal”, abilities more than others what standard should we use for AI?
Is the potential for action at a distance an inherent characteristic of self-referential
consciousness? If we develop self-aware AI that approximates to our level
of consciousness will it be capable, intrinsically, of telepathy? Remote
viewing capabilities?
By it very nature will AI-AC require the equivalent of human sleep,
time for recreation? Will it be inherently gregarious and require
social interaction with its kind?
Will a genetically engineered biological species automatically possess
a chi system?
Will an artificial AI and or AC automatically evolve simply because
it is intelligent to a certain level and/or simply because it is self-aware?
If it does will it evolve as we do and in the same directions?
If we were not the product of the melding of two disparate gene codes
and not subject to four thousand plus potential genetic defects, would our
intelligence and consciousness be more harmoniously in tune with nature?
Would AI, therefore, having been created according to the natural laws of
physics, evolve somewhat differently, more “perfectly” psychologically,
perhaps, even though we copy our intelligence and consciousness as precisely
as possible?
Could we genetically engineer a creature lacking any ability
to adapt or a consciousness of our type without any potential to inherently
or consciously evolve? Is there a genetic key, a gene sequence that controls
adaptation to survive? Have we ourselves evolved to evolve?
So a fundamental question is whether we should give AI-AC the ability
to evolve. We might better phrase the question as: Do we and will we want
an AI-AC with the inherent tendency to adapt and evolve similarly to the
way we can evolve as individuals? We do not have any knowledge whether by
simply reaching some point of data handling and/or processing speed or,
more probably, even self-awareness, AI-AC will automatically possess the
inherent potential and drive to evolve.
Even if it is the intention of a developer somehow to only simulate
intelligence that approximates to ours or better as an isolated function
that operates as a self-aware phenomenon in an advanced computer as we know
computers now, perhaps HAL in the movie 2001 is a good example, all these
questions should be allowed and given careful consideration otherwise we
could be in for some surprises, pleasant or unpleasant, depending on the
goals, expectations and relative advancement of our own personal level of
evolution.
It is never too early to consider even the most ‘far out” and theoretical
questions. Let’s consider the speculations of John Wheeler relative to
his version of the classic double slit experiment. Is the photon detector
used in the double slit experiment the causal observer (after all, it’s
inanimate matter as such) or is only the human observer of the detector’s
recording? If, indeed, it turns out that John Wheeler’s intuition of a
participatory universe in which things become through a genesis by observership,
in some version of the anthropic principle, will we eventually find that
the key characteristic of the observer, to qualify, is simple consciousness
like a dog or mouse? Or must the observer be also self-aware? Or perhaps
even biological? Wheeler’s concept allows interaction with inanimate matter
as well as an observer/measurer to bring about the collapse of the wave form,
of the potential into the actual. Andre Linde’s concept is restricted to
observation by consciousness of some sort and excludes inanimate matter as
an agent. Specifically then, with regard to AI-AC, will simple AI with only
primitive consciousness qualify as an observer? Will AC, self-aware
but of other than biological constitution, be able to participate in genesis
by observership? This, certainly, is the most remote of the questions to
be answered with regard to AI-AC at this point in time but we had better
at least be aware of it already.
Will AI –AC automatically be immortal or will some simple principle
take it down?
Immortality Repatriated
I say, immortality, anyone.....?
Take all the time you want to answer.....
In this season of our unique evolution, the most profound god-game
we are going to play is immortality. As we free ourselves of the inhibiting
embrace of the godspell mentality we will begin to take advantage of the
possibility of physical immortality through genetic engineering, nanotechnology
and even more advanced technologies including uploading and other, probably
as yet unimagined modes, as they becomes available. Immortality is clearly
the major characteristic of philotropic humanism, the next plateau of
human metamorphosis, the next stage of our meta-evolutionary, conscious,
racial development. It will come to be understood as a basic right, an
ordinary condition, indeed, quality of human existence and a matter of
simple human dignity. The relative profundity of its dawning impact demands
that we consider it fully from all perspectives before it, suddenly, is
available to us and before we address it in AI and AC. The obvious fact
is that we generally are simply not prepared for it in biological form
much less non-biological form ourselves. And we are going full bore toward
a most probably immortal AI-AC. How unprepared and primitive are
we in this regard?
It is argued, recycled, that immortality is not the will of God ("Immortality
is Immorality" (!): can you see the bumper stickers coming? Will the right-to-life
people --supreme irony --be the ones to protest?); that it is unnatural;
that it is our ecological duty to die; that progress will be halted if
some live forever not making room for the new; that we do not have the
resources to support it; we would get bored and want to die; reincarnation
is taking care of that already; it's the supreme "ego trip" and a mark
of the immature personality; it is the intrinsic nature of the universe
that our type of being be born and die; evolution has not produced it so
we should not do it ourselves; and besides it's not possible to achieve anyway;
etc. The special interest groups of priests, prophets, politicians and profiteers
are going to go all out against this one. Our programmed beliefs from childhood
get in the way, our fear gets in the way, our dogmas get in the way --and
the universe seems unconcerned and silent. It may be the ultimate taboo.
But each one of us knows in our most private thoughts that the first person
who attains it will be --you guessed it --immortalized; the second and third
will make the headlines and a TV documentary and then there will suddenly
be large immortality industries appearing on the stock exchange.
The ancient records of why, when, and how we were genetically engineered
make it abundantly clear that we were brought into existence as a subordinate
species, a slave species, to relieve the Anunnaki miner echelons. (The
essential, detailed documentation through translations and illustrations
of the actual genetic processes (in vitro; cloning, etc. ) used by the
Anunnaki is found in Sitchin, The Twelfth Planet, chapter 12) It is specified,
pointedly, that, although the Anunnaki lived, literally, extended lifetimes
of thousands of our years (either because of the way they themselves had
evolved on their home planet or, perhaps, because of their genetic engineering
capabilities to achieve that longevity, and possibly through their use
of the monoatomic form of gold) they deliberately did not bestow that potential
on us. In fact, it is mentioned clearly that they deliberately withheld
it. This deliberate withholding of immortality and, perhaps, even a shortening
of longevity, may provide a major clue to our aging process and mortality.
From the details given in the ancient records, it is conceivable that some
engineering of the process was executed deliberately to suppress certain
characteristics to make better and docile slaves.
The story of the king, Gilgamesh, is indicative of the status we reached.
Gilgamesh knew his mother to be Anunnaki and his father human and went
to the Anunnaki space-port to demand immortality that he felt was legally
his through his mother’s Anunnaki heritage. It is clear from the history
of his quest that both humans and Anunnaki knew immortality, deliberately
withheld from the human genome, was something that could be granted and
bestowed arbitrarily.
The new paradigm shows us clearly the source of our attitudes toward
immortality. We knew the Anunnaki possessed it. We knew they had not granted
it to us. We knew that a handful of humans had been granted it over time.
The godspell totemtaboo is deep enough in the common psyche yet, however,
to cause the most precocious to utter glazed-eye robot platitudes about
it not being in the class of a disease but the way it should be, as if there
is some unspeakable inherent moral deficiency in anyone even profaning
death with a challenge. But the godspell mentality, as has been the case
for thousands of years, has provided us with the desperate rationalizations
as to why we should accept death, submit to such an annihilation.
The Eastern religious psychology of "be here now" and become reconciled
to death when it comes, or the Western "God wills it" are simply the best
we could muster up when no means to overcome death were available and the
terrible despair that leads to suicide lurked everywhere. So deeply ingrained
are these attitudes that any objection to or questioning of them is usually
interpreted as indication of spiritual immaturity or imbalance. The doctrines
of reincarnation, metempsychosis, immortality of the soul (only), transmigration
of the soul, karma, purgatory, heaven and hell, are all offshoots of the
racial psychological phase when we became self-reflexively aware enough to
evaluate the absolute finality of death and were forced to explain our situation
to ourselves in terms with which we could live (tragic pun).
It is clear why the reward for the "good" life, i.e. docilely submitting
to the will of some deity known through the rules of whatever authoritarian
religion one subscribes to, is always after death. And why "eternal life",
"eternal bliss", pleasant immortality is the reward. Immortality is always
the key concept even when the kind supposedly due is a punishment; "hell"
in the Christian sense, is described as painful immortality --of the "soul"
and the body as well. We need to be free of those methadone metaphors
that we have clung to in order to maintain our sanity through the transition
period since the Anunnaki / Nefilim left us on our own --without immortality.
It will only be within the context of the new paradigm, this new understanding
of human nature as a genetically created species rapidly seeking its full
potential, that we will be able to gracefully and intelligently integrate
immortality. It will require at least that comprehensive a base to then
explore the dimensions to which we shall surely aspire beyond physical
immortality.
What is most fascinating about the transition period we are now going
through, however, is the way in which individuals react to even the possibility
of preservation of the body or the brain. Some find the concept of deep
freeze of either the entire body or just the brain physically repulsive --as
if that would be a concern after you are dead. Some find it too "cold", too
clinical, (let's hope for very precise measures of both) and turn away. The
vast majority of these same persons would undergo major cosmetic or curative
surgery without hardly a thought about the distastefulness of it. There
are those who have concluded that cryonics, about the only bet currently,
“will not work” so they don’t opt to use it on a “what do you have to lose”
basis, the implication being that they are not that intent on being immortal
anyway. But the most revealing aspect of the matter is that individuals
very often reject it not for any physical reasons, but because they do not
want to be able to come back, they do not want to attain any sort of relative
immortality, that this life is difficult enough without doing it again.
The inference, if not the frank admission, is that just getting through
this life to an ordinary death is more than a person should have to cope
with. At first this seems very strange indeed. If death is the inescapable
finality that human beings find impossible, at times, to accept and against
which they struggle, then why is not even the possibility of being suspended,
after one has died, until science can work out a way to restore one to indefinite
life, not greeted with relief and joy? There is a valuable truth to be
learned here about the current state of human affairs. The disconcerting
negative reaction most often turns out, in actuality, to be not to cryonic
suspension's potential or aesthetics but to current conditions of human
life. Not having thought it through, the person anticipates life will be
no different in one hundred and fifty years (the projected time of suspension
until scientific methods can achieve complete restoration) than it is now
and, therefore, it will be no more tolerable to them then than it is now
and they reject it out of hand.
One of the most ubiquitous misconceptions about the future and intention of
cryonics is that you would return and begin living at the 101 years or whatever
age and condition at which you died. Not a pretty picture. But the anticipation
that the development of the robust level of nanotechnology needed to restore
the body and mind will also have achieved control and reversal of the aging
process, the elimination of disease, the easy repair of injury and defects.
In the largest perspective perhaps that sort of reaction is to be anticipated
and understood for some. Immortality has already caused discomfort between
those who are resigned to making the best and getting the most satisfaction
out of the rest of their expected life span and those who have opted for
immortality even if it is only a rapidly emerging possibility. But for those
who have the foresight to see that conditions will inevitably be forced to
change to accommodate the inherent dignity of the human being and to adjust
to support large segments, at least, of any given population living indefinite
life spans with unique, very long term goals and needs, there is another
vision.
There is at least a small percentage of the population, however, which
is already ready, eager and probably overqualified for immortality, indefinite
life span. Overqualified in the sense that their consciousness is already
evolved sufficiently to encompass it and ready to subsume and move beyond
it. That may sound a bit strange, initially, in view of the fact that
we have not yet even achieved it. But I assume that, sometime in the future,
we shall discover, explore and expand into a type of human condition which
goes beyond and subsumes even physical or uploaded immortality (whatever
that turns out to be ). (And, if we are not quite careful and enlightened,
the “old” immortalists party will try to prevent it as evil or at least
illegal.) Physical or virtual immortality may be subsumed at that stage
perhaps because we may simply evolve to a form, though still physical by
definition, which is basically energy rather than matter and perhaps not
subject to the bio or virtual rules. It certainly is a major element in
our thinking if only, so far, in our science-fiction --which has shown
itself to be a rather reliable indicator of what actually will happen.
If, however, we now have a context, an adequate paradigm which frees
us to intelligently pursue the immortality that was deliberately withheld
from us from the beginning, how shall we view it? It may sound trivial
but I think the first thing we have to do is separate immortality from the
means we have at hand or project we soon will have available to achieve
it.
I will use myself as an example. I have chosen to be immortal. I am
a practicing immortal. To that end I am signed up with Alcor (Phoenix, AZ)
for cryogenic suspension in the event that the biotechies don't get the
immortality act perfected for us through nanotech and genetic engineering
before I have to book it, although I think it may well happen. Immortality
is the goal. I will use whatever technology, now or in the future, which
is the best at the time when it is needed and available. Certainly, I take
good vitamins, eat for my blood type, and have practiced Chi Kung and Tai
Chi for 30 years. But, to be precise, I believe that cryonic suspension is
the best technology available right now to achieve the immortality goal, if
one dies and has to take a recess, in fact the only one. It is imperfect,
uncertain, but it is currently the only game in town if I were to die this
week. Although I’m 72, I’d bet that cryo may not even be necessary due to
rapid developments in nano and bio tech before I have to book it. Cryo is
not the goal, it is a means to the goal. I am signed up with Alcor to cover
my bet just as other Alcorites like Eric Drexler, Ralph Merkle, and Marvin
Minsky are. I fully expect to return and remain at the age of forty six and
a half, knowing what I know now, with all the experiences of my past. Maybe
45. That's why the subtitle of God Games is What Do You Do Forever?.
I'm exploring how I'll want to live as an immortal.
The concept of “uploading” is interesting to me, at least currently,
only as a practical backup. We are not even close to determining what
the new medium will or must be. The ones we contemplate may or may not
be adequate, we have not determined what is essential to duplicating our
intelligence and consciousness completely, whether the senses and the emotions
and the hormonal components will have to be simulated in order to duplicate
our consciousness perfectly or at least completely. I am focused on physical
bio-immortality as a personal choice because I think that we are an open
ended statement with huge untapped evolutionary potential. And I think that
the physical body and the physical context as we think of it now is just
fine. Better things to come? I’m certain. But I anticipate
having a lot of time on my hands, so to speak, to investigate, evaluate,
and choose. I assume that immortality will be an option among options; that
the necessary physical vigor will be concomitant; that quite obviously
we shall work out the expedient adjustments of our resources, work, ecology,
economics, education, population, etc. as incidental facets of the new
dimension once the vision has stimulated us and given us sufficient reason
to break trance and outdo ourselves.
Again, it may seem trivial to say all these things about immortality
but, in the manifestos, debates and discussions concerning AI and VR,
there are some rather strong ambiguities, even contradictions due to the
fact that immortality is the most unexplored, un-thought out concept in
our consciousness today because of its sheer wild-card novelty and the
locked-in legacies surrounding it. Its “target audience” is every
single individual and it being so “close to home” even if it is only a, albeit
fairly near term, possibility, makes it even more intimate than AI. The
problems begin to show up at the point where AI, VR and immortality merge.
There are some who seem almost rabid about the potential for uploading
into some electronic or more advanced type of computer, any time while
they are still living, apparently before lunch tomorrow if it were possible.
The implication being that immortality of a kind will be intrinsic to that
modality and taken for granted, yet some seem to have not thought about
or are not even particularly focused on immortality as such. The focus
seems to be on just getting out of the messy organic vehicle and good enough.
But that may well be, at least for some, a very disconcerting experience:
immortality in time, real or virtual, we have to assume at least for now,
has its own psychology, epistemology, and priorities.
I submit that there are three practical problems manifest here. Unless
the biological body and body consciousness is mastered and integrated,
bypassing it will lead to quandaries and problems. Unless bio-based or
related intelligence and consciousness is mastered and integrated, development
of AI and VR as a context or with which to merge is going to be problematical.
Get a couple of hundred hours of visual flight time under your belt before
you begin work on your instrument ticket. Unless immortality has been made
as a cardinal choice and contemplated independent of the modality eventually
used, some of the ramifications thought through and at least a preliminary
shift of priorities experienced, any kind of immortality, bio or VR or
whatever, is going to be a bit of a disconcertion to say the least. Again,
for our time and conditions and inexperience, I am coming down the middle
between the no-Joy and it’s-just-so-cool extremes. If we chose correctly
at each step, we will have a lot of time, no pun intended, to work our way
through this novel situation.
It’s fun to think about all the caffeine consciousness advantages AI-VR
will afford us: the ability to do many things at once well and simultaneously
in different locations with different individuals, to learn quickly or
instantaneously, effortlessly through various protocols and experientially,
probably all of the things that Ray Kurzweil’s imagination has projected
in his conversations with virtual friends in The Age of Spiritual Machines.
When immortal, to continue with the relatively short term grabbing at a
bit of pleasure and satisfaction out of life would be a horror and even
to go on as we are now, but in fast forward, totally unsatisfactory. Before
we create AI’s in our image and likeness we had better contemplate life
as immortals. The relative profundity of its dawning impact demands that
we consider it fully from all perspectives before it, suddenly, is available
to us. Before we achieve immortality, through whatever modality, we had
better revisit our options and priorities. We need to begin, none too soon,
to develop a vision of how we will live as immortals. It’s priorities all
the way down. We need to fully assimilate at least the concept and ramifications
of immortality for ourselves before we are suddenly faced with potentially
immortal machines or the possibility of uploading ourselves into machines
that may afford us at least a kind of immortality. These considerations are
all the more pressing and critical because some are already looking to VR
and AI as a technological salvation. Better yet we should use developing
AI as a means to explore possible evolutionary trajectories and potentials
before we commit. In part 4, I make some suggestions as to how to
do this.
Now, if whatever VR is eventually developed has a guaranteed
trapdoor, part of the problem may be mitigated where one, faced with time
frames and situations which are unmanageable, can revert to the organic
form or terminate herself or himself at any time. But coming at the potential
problems from that negative angle will be too little too late especially
in light of the positive potential for evolutionary expansion. An even
more immediate problem arises from the “just so cool” let’s-vacate-the-organic
approach as soon as possible in that the risk is the VR that one develops
into which one intends to upload may well be, consequently, unnecessarily
faulty. Just as with AI and AC, the chemistry set in the bedroom approach
may blow out walls that might have remained intact with mature forethought.
The essence of the situation, is that there is a tight feedback loop that
cannot be bypassed. It’s not simply intelligence, science or expertise, it’s
consciousness in, consciousness out.
Death, meanwhile, is the Great Conditioner. We are subliminally
or consciously influenced in our choices and life decisions by that inevitability.
The only thing that doesn't satiate is constant, leisurely (bad pun) expansion
of consciousness and information. And that's definitely done much better
and with much more fun dyadically, equal bio-physically immortal partners
moving tantricly up the evolutionary DNA spiral together, as we evolve rapidly,
individually and collectively to an expanded, habitual, four-dimensional
consciousness and perception and beyond.
In the greatest perspective, perhaps we should recognize from the outset
that immortality will be both a new and awesome plateau of human existence
offering as yet probably undreamed potential and yet, without denigrating
that potential at all, ultimately just another "trip", just another step
in our meta-evolution, the rapid metamorphosis we have been undergoing
since our beginning. Within those extremes there is the greatest latitude
for the inevitable expansion into dimensions which will allow us to become
far wiser, individually, through greater experience, greater learning,
and the ability to witness the patterns of repetitions of extended periodicity.
Eliminating the pressure of a short life span that influences our choices
and cramps our lives will not just give us the practical potential to travel
easily between star systems and send the insurance companies into the re-edit
mode; it will change our perspective and our social interactions, certainly
the entirety of human existence, radically.
I admire and support Transhumanism’s concepts and goals and think the
TH philosophy is pointing generally to a transition toward the right stuff.
Frankly, however, I find it a bit amusing, that some TH academics, only
recently, have made their seemingly proprietary cornerstone the claim to
the view that human nature is not a fixed, static item but can expand and
evolve. They, although on the right track, are stuck in this battle with
a windmill, currently, feeling very risqué in their cramped academic
posturing against poor dead Darwin. As a result, they are still trapped
in the creationist-evolutionist box. Their goal, to make TH a mainstream
academic discipline, is admirable but already outmoded. I like their direction
but, as Jaron Lanier said about Darwin, I wouldn’t want them to write AI,
AC or VR code for me.
Within the Transhumanist camp, and others, there is also apparently
a strong dislike, even an aversion, among some, to the body that colors
thinking about AI and VR. The physical is just too messy, the organic too,
well, organic, and uploading into some, as yet undetermined “computer” or
other than organic medium is much desired. As long as they will not
attempt to legislate against those of us who are intent on exploring the fullest
range of evolutionary expansion possible in this organic body, even coming
back from a cryo sabbatical to continue the exploration and fun, then they
are welcome to their brand of exploration. Keep me posted, I might want
to explore there some day also and at least to use it as backup.
The assumption that our next evolutionary step must be, in essence,
out of the organic is premature to say the least, for a couple of reasons.
The trajectory of the natural evolution of consciousness historically is
away from the inorganic toward the organic: to attain the complexity level
of self-reflexive consciousness nature didn’t opt for self-aware crystals,
at least on this planet, the option is for organic structures like the body
and the brain. Mobility and flexibility are also major factors here also.
In our attempts to duplicate AI and AC we are almost forced to go in the direction
of “circuitry” that is closer and closer to the organic which can accommodate
the kind of processing that our consciousness requires.
I think it is necessary to clearly separate our evolutionary trajectory
and progress from any modality we may use to further and enhance them at
any given time. We need to arrive at a consensual agreement that we are evolving
and the unique nature of our particular evolution is as a bicameral species.
We are not there yet. We, further, need to understand the unique nature
of conscious evolution and the control and responsibility it brings. We
need to clearly identify the trajectory of our conscious evolution and recognize
that it is a phase among phases of a multifaceted future development of
the species and us as individuals into greater and greater degrees of freedom
and diversity. We are not there yet. One, among many, of the options in
the plenum of freedom we call the universe and its potential of diversities,
is some kind of use of hardware and its future, “softer”, varieties for
enhancement, collective and individual environments in the form of virtual
realities, android surrogates, vehicles, bodies, and modalities we have
not even thought of yet.
There are advantages and disadvantages to the “hardware” option and to set
it as the essence of our next evolutionary plateau at this early stage
is far too limiting. If anyone wishes to personally take the risk and
experiment that should be their prerogative. I am not saying we should not
do it, quite the contrary, it has tremendous potential and we should. There
quite probably will come a time when a highly developed, debugged, safe form
of VR will allow easy uploading and/or downloading in seconds for the sake
of medical scan, genetic repair, learning, game playing or semi or permanent
habitation. Great. But to by-pass the body at this primitive stage, especially
if left in the hands of the “it’s just so cool” people, will most probably
lead to a great embarrassment and hurt. It will be all too easy to create
environments into which to upload that are simply mirrors, especially in
their intellectual and epistemological facets, of our current primitive
situation which, ironically, some are trying to evade.
|